[ { "msg_contents": "Hi,\n\nThe backup manifest patch added a bunch of new code to\nsrc/backend/replication/basebackup.c, and while lamenting the\ncomplexity of that source file yesterday, I suddenly realized that I'd\nunwittingly contributed to making that problem worse, and that it\nwould be quite easy to move the code added by that patch into a\nseparate file. Attached is a patch to do that.\n\nDespite the fact that we are after feature freeze, I think it would be\na good idea to commit this to PG 13. It could be saved for PG 14, but\nthat will make future back-patching substantially harder. Also, a\npatch that's just moving code is pretty low-risk.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 18 Apr 2020 08:37:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "relocating the server's backup manifest code" }, { "msg_contents": "On Sat, Apr 18, 2020 at 08:37:58AM -0400, Robert Haas wrote:\n> Despite the fact that we are after feature freeze, I think it would be\n> a good idea to commit this to PG 13. It could be saved for PG 14, but\n> that will make future back-patching substantially harder. 
Also, a\n> patch that's just moving code is pretty low-risk.\n\nMakes sense to move this code around, so that's fine by me to do it\neven after the feature freeze as it means less back-patching pain in\nthe future.\n\n+static inline bool\n+IsManifestEnabled(manifest_info *manifest)\n+{\n+ return (manifest->buffile != NULL);\n+}\nI would keep this one static and located within backup_manifest.c as\nit is only used there.\n\n+extern void InitializeManifest(manifest_info *manifest,\n+ manifest_option want_manifest,\n+ pg_checksum_type manifest_checksum_type);\n+extern void AppendStringToManifest(manifest_info *manifest, char *s);\n+extern void AddFileToManifest(manifest_info *manifest, const char *spcoid,\n+ const char *pathname, size_t size,\n+ pg_time_t mtime,\n+ pg_checksum_context *checksum_ctx);\n+extern void AddWALInfoToManifest(manifest_info *manifest, XLogRecPtr startptr,\n+ TimeLineID starttli, XLogRecPtr endptr,\n+ TimeLineID endtli);\n+extern void SendBackupManifest(manifest_info *manifest);\n\nNow the names of those routines is not really consistent if you wish\nto make one single family. Here is a suggestion:\n- BackupManifestInit()\n- BackupManifestAppendString()\n- BackupManifestAddFile()\n- BackupManifestAddWALInfo()\n- BackupManifestSend()\n\n+ * Portions Copyright (c) 2010-2020, PostgreSQL Global Development Group\nI would vote for some more consistency. 
Personally I just use that\nall the time:\n * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n * Portions Copyright (c) 1994, Regents of the University of California\n\n+typedef enum manifest_option\n+{\n[...]\n+typedef struct manifest_info\n+{\nThese ought to be named backup_manifest_info and backup_manifest_option?\n--\nMichael", "msg_date": "Sat, 18 Apr 2020 21:57:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: relocating the server's backup manifest code" }, { "msg_contents": "On Sat, Apr 18, 2020 at 8:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n> +static inline bool\n> +IsManifestEnabled(manifest_info *manifest)\n> +{\n> + return (manifest->buffile != NULL);\n> +}\n> I would keep this one static and located within backup_manifest.c as\n> it is only used there.\n\nOh, OK.\n\n> +extern void InitializeManifest(manifest_info *manifest,\n> + manifest_option want_manifest,\n> + pg_checksum_type manifest_checksum_type);\n> +extern void AppendStringToManifest(manifest_info *manifest, char *s);\n> +extern void AddFileToManifest(manifest_info *manifest, const char *spcoid,\n> + const char *pathname, size_t size,\n> + pg_time_t mtime,\n> + pg_checksum_context *checksum_ctx);\n> +extern void AddWALInfoToManifest(manifest_info *manifest, XLogRecPtr startptr,\n> + TimeLineID starttli, XLogRecPtr endptr,\n> + TimeLineID endtli);\n> +extern void SendBackupManifest(manifest_info *manifest);\n>\n> Now the names of those routines is not really consistent if you wish\n> to make one single family. Here is a suggestion:\n> - BackupManifestInit()\n> - BackupManifestAppendString()\n> - BackupManifestAddFile()\n> - BackupManifestAddWALInfo()\n> - BackupManifestSend()\n\nI'm not in favor of this renaming. 
Different people have different\npreferences, of course, but my impression is that the general project\nstyle is to choose names that follow English word ordering, i.e.\nappendStringInfo rather than stringInfoAppend; add_paths_to_joinrel\nrather than joinrel_append_paths; etc. There are many exceptions, but\nI tend to lean toward English word ordering unless it seems to create\na large amount of awkwardness in a particular case. At any rate, it\nseems a separate question from moving the code.\n\n> + * Portions Copyright (c) 2010-2020, PostgreSQL Global Development Group\n> I would vote for some more consistency. Personally I just use that\n> all the time:\n> * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> * Portions Copyright (c) 1994, Regents of the University of California\n\nSure, that's fine.\n\n> +typedef enum manifest_option\n> +{\n> [...]\n> +typedef struct manifest_info\n> +{\n> These ought to be named backup_manifest_info and backup_manifest_option?\n\nI'm OK with that. I don't think it's a big deal because \"manifest\"\nisn't a widely-used PostgreSQL term already, but it doesn't bother me\nto rename it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Apr 2020 10:43:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: relocating the server's backup manifest code" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Despite the fact that we are after feature freeze, I think it would be\n> a good idea to commit this to PG 13. It could be saved for PG 14, but\n> that will make future back-patching substantially harder. 
Also, a\n> patch that's just moving code is pretty low-risk.\n\n+1 in principle --- I didn't read the patch though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Apr 2020 11:42:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: relocating the server's backup manifest code" }, { "msg_contents": "On Sat, Apr 18, 2020 at 5:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Despite the fact that we are after feature freeze, I think it would be\n> > a good idea to commit this to PG 13. It could be saved for PG 14, but\n> > that will make future back-patching substantially harder. Also, a\n> > patch that's just moving code is pretty low-risk.\n>\n> +1 in principle --- I didn't read the patch though.\n>\n\nSame here, +1 in principle. This is not a new feature, this is \"polishing a\nfeature that was added in 13\", and doing so now will save a lot of work\ndown the road vs doing it later.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Sat, 18 Apr 2020 18:33:08 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: relocating the server's backup manifest code" }, { "msg_contents": "On Sat, Apr 18, 2020 at 10:43:52AM -0400, Robert Haas wrote:\n> I'm not in favor of this renaming. Different people have different\n> preferences, of course, but my impression is that the general project\n> style is to choose names that follow English word ordering, i.e.\n> appendStringInfo rather than stringInfoAppend; add_paths_to_joinrel\n> rather than joinrel_append_paths; etc. There are many exceptions, but\n> I tend to lean toward English word ordering unless it seems to create\n> a large amount of awkwardness in a particular case. At any rate, it\n> seems a separate question from moving the code.\n\nBoth of us have rather different views on the matter then. I still\nprefer my suggestion because that's more consistent and easier to\ngrep, but I'll be fine with your call at the end. I would suggest to\nstill use BackupManifest instead of Manifest in those functions and\nstructures though, as we cannot really know if the concept of manifest\nwould apply to other parts of the system. A recent example of API\nname conflict I have in mind is index AM vs table AM. Index AMs have\nbeen using rather generic names, causing issues when table AMs have\nbeen introduced.\n\n> I'm OK with that. 
I don't think it's a big deal because \"manifest\"\n> isn't a widely-used PostgreSQL term already, but it doesn't bother me\n> to rename it.\n\nThanks.\n--\nMichael", "msg_date": "Sun, 19 Apr 2020 09:12:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: relocating the server's backup manifest code" }, { "msg_contents": "On Sat, Apr 18, 2020 at 8:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I would suggest to\n> still use BackupManifest instead of Manifest in those functions and\n> structures though, ...\n\nDone in the attached, which also adds \"backup_\" to the type names.\n\nAfter further examination, I think the Copyright header issue is\nentirely separate. If someone wants to standardize that across the\nsource tree, cool, but this patch just duplicated the header from the\nfile out of which it was moving code.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 20 Apr 2020 14:54:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: relocating the server's backup manifest code" } ]
[ { "msg_contents": "\nI was just trying to revive lousyjack, my valgrind buildfarm animal\nwhich has been offline for 12 days, after having upgraded the machine\n(fedora 31, gcc 9.3.1, valgrind 3.15) and noticed lots of errors like this:\n\n\n2020-04-17 19:26:03.483 EDT [63741:3] pg_regress LOG:  statement: CREATE\nDATABASE \"regression\" TEMPLATE=template0\n==63717== VALGRINDERROR-BEGIN\n==63717== Use of uninitialised value of size 8\n==63717==    at 0xAC5BB5: pg_comp_crc32c_sb8 (pg_crc32c_sb8.c:82)\n==63717==    by 0x55A98B: XLogRecordAssemble (xloginsert.c:785)\n==63717==    by 0x55A268: XLogInsert (xloginsert.c:461)\n==63717==    by 0x8BC9E0: LogCurrentRunningXacts (standby.c:1005)\n==63717==    by 0x8BC8F9: LogStandbySnapshot (standby.c:961)\n==63717==    by 0x550CB3: CreateCheckPoint (xlog.c:8937)\n==63717==    by 0x82A3B2: CheckpointerMain (checkpointer.c:441)\n==63717==    by 0x56347D: AuxiliaryProcessMain (bootstrap.c:453)\n==63717==    by 0x83CA18: StartChildProcess (postmaster.c:5474)\n==63717==    by 0x83A120: reaper (postmaster.c:3045)\n==63717==    by 0x4874B1F: ??? 
(in /usr/lib64/libpthread-2.30.so)\n==63717==    by 0x5056F29: select (in /usr/lib64/libc-2.30.so)\n==63717==    by 0x8380A0: ServerLoop (postmaster.c:1691)\n==63717==    by 0x837A1F: PostmasterMain (postmaster.c:1400)\n==63717==    by 0x74A71D: main (main.c:210)\n==63717==  Uninitialised value was created by a stack allocation\n==63717==    at 0x8BC942: LogCurrentRunningXacts (standby.c:984)\n==63717==\n==63717== VALGRINDERROR-END\n{\n   <insert_a_suppression_name_here>\n   Memcheck:Value8\n   fun:pg_comp_crc32c_sb8\n   fun:XLogRecordAssemble\n   fun:XLogInsert\n   fun:LogCurrentRunningXacts\n   fun:LogStandbySnapshot\n   fun:CreateCheckPoint\n   fun:CheckpointerMain\n   fun:AuxiliaryProcessMain\n   fun:StartChildProcess\n   fun:reaper\n   obj:/usr/lib64/libpthread-2.30.so\n   fun:select\n   fun:ServerLoop\n   fun:PostmasterMain\n   fun:main\n}\n\n\nI can't see what the problem is immediately.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 18 Apr 2020 09:15:50 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "valgrind error" }, { "msg_contents": "\nOn 4/18/20 9:15 AM, Andrew Dunstan wrote:\n> I was just trying to revive lousyjack, my valgrind buildfarm animal\n> which has been offline for 12 days, after having upgraded the machine\n> (fedora 31, gcc 9.3.1, valgrind 3.15) and noticed lots of errors like this:\n>\n>\n> 2020-04-17 19:26:03.483 EDT [63741:3] pg_regress LOG:  statement: CREATE\n> DATABASE \"regression\" TEMPLATE=template0\n> ==63717== VALGRINDERROR-BEGIN\n> ==63717== Use of uninitialised value of size 8\n> ==63717==    at 0xAC5BB5: pg_comp_crc32c_sb8 (pg_crc32c_sb8.c:82)\n> ==63717==    by 0x55A98B: XLogRecordAssemble (xloginsert.c:785)\n> ==63717==    by 0x55A268: XLogInsert (xloginsert.c:461)\n> ==63717==    by 0x8BC9E0: LogCurrentRunningXacts (standby.c:1005)\n> 
==63717==    by 0x8BC8F9: LogStandbySnapshot (standby.c:961)\n> ==63717==    by 0x550CB3: CreateCheckPoint (xlog.c:8937)\n> ==63717==    by 0x82A3B2: CheckpointerMain (checkpointer.c:441)\n> ==63717==    by 0x56347D: AuxiliaryProcessMain (bootstrap.c:453)\n> ==63717==    by 0x83CA18: StartChildProcess (postmaster.c:5474)\n> ==63717==    by 0x83A120: reaper (postmaster.c:3045)\n> ==63717==    by 0x4874B1F: ??? (in /usr/lib64/libpthread-2.30.so)\n> ==63717==    by 0x5056F29: select (in /usr/lib64/libc-2.30.so)\n> ==63717==    by 0x8380A0: ServerLoop (postmaster.c:1691)\n> ==63717==    by 0x837A1F: PostmasterMain (postmaster.c:1400)\n> ==63717==    by 0x74A71D: main (main.c:210)\n> ==63717==  Uninitialised value was created by a stack allocation\n> ==63717==    at 0x8BC942: LogCurrentRunningXacts (standby.c:984)\n> ==63717==\n> ==63717== VALGRINDERROR-END\n> {\n>    <insert_a_suppression_name_here>\n>    Memcheck:Value8\n>    fun:pg_comp_crc32c_sb8\n>    fun:XLogRecordAssemble\n>    fun:XLogInsert\n>    fun:LogCurrentRunningXacts\n>    fun:LogStandbySnapshot\n>    fun:CreateCheckPoint\n>    fun:CheckpointerMain\n>    fun:AuxiliaryProcessMain\n>    fun:StartChildProcess\n>    fun:reaper\n>    obj:/usr/lib64/libpthread-2.30.so\n>    fun:select\n>    fun:ServerLoop\n>    fun:PostmasterMain\n>    fun:main\n> }\n>\n>\n\n\nAfter many hours of testing I have a culprit for this. The error appears\nwith valgrind 3.15.0  with everything else held constant. 3.14.0  does\nnot produce the problem.  
So lousyjack will be back on the air before long.\n\n\nHere are the build flags it's using:\n\n\nCFLAGS=-Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n-fwrapv -fexcess-precision=standard -Wno-format-truncation\n-Wno-stringop-truncatio\nn -g -fno-omit-frame-pointer -O0 -fPIC\nCPPFLAGS=-DUSE_VALGRIND  -DRELCACHE_FORCE_RELEASE -D_GNU_SOURCE\n-I/usr/include/libxml2\n\n\nand valgrind is invoked like this:\n\n\nvalgrind --quiet --trace-children=yes --track-origins=yes\n--read-var-info=yes --num-callers=20 --leak-check=no\n--gen-suppressions=all --error-limit=no\n--suppressions=../pgsql/src/tools/valgrind.supp\n--error-markers=VALGRINDERROR-BEGIN,VALGRINDERROR-END bin/postgres -D data-C\n\n\nDoes anyone see anything here that needs tweaking?\n\n\nNote that this is quite an old machine:\n\n\nandrew@freddo:bf (master)*$ lscpu\nArchitecture:        x86_64\nCPU op-mode(s):      32-bit, 64-bit\nByte Order:          Little Endian\nCPU(s):              2\nOn-line CPU(s) list: 0,1\nThread(s) per core:  1\nCore(s) per socket:  2\nSocket(s):           1\nNUMA node(s):        1\nVendor ID:           AuthenticAMD\nCPU family:          16\nModel:               6\nModel name:          AMD Athlon(tm) II X2 215 Processor\nStepping:            2\nCPU MHz:             2700.000\nCPU max MHz:         2700.0000\nCPU min MHz:         800.0000\nBogoMIPS:            5425.13\nVirtualization:      AMD-V\nL1d cache:           64K\nL1i cache:           64K\nL2 cache:            512K\nNUMA node0 CPU(s):   0,1\nFlags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr\npge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext\nfxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl\nnonstop_tsc cpuid extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy\nsvm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs\nskinit wdt hw_pstate 
vmmcall npt lbrv svm_lock nrip_save\n\n\nI did not manage to reproduce this anywhere else, tried on various\nphysical, Virtualbox and Docker instances.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 10 May 2020 09:29:05 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: valgrind error" }, { "msg_contents": "On Sun, May 10, 2020 at 09:29:05AM -0400, Andrew Dunstan wrote:\n> On 4/18/20 9:15 AM, Andrew Dunstan wrote:\n> > I was just trying to revive lousyjack, my valgrind buildfarm animal\n> > which has been offline for 12 days, after having upgraded the machine\n> > (fedora 31, gcc 9.3.1, valgrind 3.15) and noticed lots of errors like this:\n\n> > {\n> >    <insert_a_suppression_name_here>\n> >    Memcheck:Value8\n> >    fun:pg_comp_crc32c_sb8\n> >    fun:XLogRecordAssemble\n> >    fun:XLogInsert\n> >    fun:LogCurrentRunningXacts\n> >    fun:LogStandbySnapshot\n> >    fun:CreateCheckPoint\n> >    fun:CheckpointerMain\n> >    fun:AuxiliaryProcessMain\n> >    fun:StartChildProcess\n> >    fun:reaper\n> >    obj:/usr/lib64/libpthread-2.30.so\n> >    fun:select\n> >    fun:ServerLoop\n> >    fun:PostmasterMain\n> >    fun:main\n> > }\n\n> After many hours of testing I have a culprit for this. The error appears\n> with valgrind 3.15.0  with everything else held constant. 3.14.0  does\n> not produce the problem.\n\nI suspect 3.15.0 is just better at tracking the uninitialized data. A\nmore-remote possibility is valgrind-3.14.0 emulating sse42. That would make\npg_crc32c_sse42_available() return true, avoiding the pg_comp_crc32c_sb8().\n\n> andrew@freddo:bf (master)*$ lscpu\n...\n> Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr\n> pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext\n> fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl\n> nonstop_tsc cpuid extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy\n> svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs\n> skinit wdt hw_pstate vmmcall npt lbrv svm_lock nrip_save\n> \n> \n> I did not manage to reproduce this anywhere else, tried on various\n> physical, Virtualbox and Docker instances.\n\nI can reproduce this on a 2017-vintage CPU with ./configure\n... USE_SLICING_BY_8_CRC32C=1 and then running \"make installcheck-parallel\"\nunder valgrind-3.15.0 (as packaged by RHEL 7.8). valgrind.supp has a\nsuppression for CRC calculations, but it didn't get the memo when commit\n4f700bc renamed the function. The attached patch fixes the suppression.", "msg_date": "Fri, 5 Jun 2020 00:48:56 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: valgrind error" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> I can reproduce this on a 2017-vintage CPU with ./configure\n> ... USE_SLICING_BY_8_CRC32C=1 and then running \"make installcheck-parallel\"\n> under valgrind-3.15.0 (as packaged by RHEL 7.8). valgrind.supp has a\n> suppression for CRC calculations, but it didn't get the memo when commit\n> 4f700bc renamed the function. The attached patch fixes the suppression.\n\nI can also reproduce this, on RHEL 8.2 which likewise has valgrind-3.15.0,\nusing the same configuration to force use of that CRC function. I concur\nwith your diagnosis that this is just a missed update of the pre-existing\nsuppression rule. 
However, rather than\n\n- fun:pg_comp_crc32c\n+ fun:pg_comp_crc32c*\n\nas you have it, I'd prefer to use\n\n- fun:pg_comp_crc32c\n+ fun:pg_comp_crc32c_sb8\n\nwhich precisely matches what 4f700bc did. The other way seems like\nit's giving a free pass to problems that could lurk in unrelated CRC\nimplementations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jun 2020 12:17:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: valgrind error" }, { "msg_contents": "On Fri, Jun 05, 2020 at 12:17:54PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > I can reproduce this on a 2017-vintage CPU with ./configure\n> > ... USE_SLICING_BY_8_CRC32C=1 and then running \"make installcheck-parallel\"\n> > under valgrind-3.15.0 (as packaged by RHEL 7.8). valgrind.supp has a\n> > suppression for CRC calculations, but it didn't get the memo when commit\n> > 4f700bc renamed the function. The attached patch fixes the suppression.\n> \n> I can also reproduce this, on RHEL 8.2 which likewise has valgrind-3.15.0,\n> using the same configuration to force use of that CRC function. I concur\n> with your diagnosis that this is just a missed update of the pre-existing\n> suppression rule. However, rather than\n> \n> - fun:pg_comp_crc32c\n> + fun:pg_comp_crc32c*\n> \n> as you have it, I'd prefer to use\n> \n> - fun:pg_comp_crc32c\n> + fun:pg_comp_crc32c_sb8\n> \n> which precisely matches what 4f700bc did. The other way seems like\n> it's giving a free pass to problems that could lurk in unrelated CRC\n> implementations.\n\nThe undefined data is in the CRC input, namely the padding bytes in xl_*\nstructs. Apparently, valgrind-3.15.0 doesn't complain about undefined input\nto _mm_crc32_u* functions. 
We should not be surprised if Valgrind gains the\nfeatures necessary to complain about the other implementations.\n\nMost COMP_CRC32C callers don't have a suppression, so Valgrind still studies\neach CRC implementation via those other callers.\n\n\n", "msg_date": "Fri, 5 Jun 2020 19:57:28 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: valgrind error" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Fri, Jun 05, 2020 at 12:17:54PM -0400, Tom Lane wrote:\n>> as you have it, I'd prefer to use\n>> - fun:pg_comp_crc32c\n>> + fun:pg_comp_crc32c_sb8\n>> which precisely matches what 4f700bc did. The other way seems like\n>> it's giving a free pass to problems that could lurk in unrelated CRC\n>> implementations.\n\n> The undefined data is in the CRC input, namely the padding bytes in xl_*\n> structs.\n\nOh, I see. Objection withdrawn.\n\n> Apparently, valgrind-3.15.0 doesn't complain about undefined input\n> to _mm_crc32_u* functions. We should not be surprised if Valgrind gains the\n> features necessary to complain about the other implementations.\n\nPerhaps it already has ... I wonder if anyone's tried this on ARMv8\nlately.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jun 2020 23:03:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: valgrind error" }, { "msg_contents": "I wrote:\n> Noah Misch <noah@leadboat.com> writes:\n>> Apparently, valgrind-3.15.0 doesn't complain about undefined input\n>> to _mm_crc32_u* functions. We should not be surprised if Valgrind gains the\n>> features necessary to complain about the other implementations.\n\n> Perhaps it already has ... I wonder if anyone's tried this on ARMv8\n> lately.\n\nI installed Fedora 32/aarch64 on a Raspberry Pi 3B+, and can report that\nvalgrind 3.16.0 is just as blind to this problem in pg_comp_crc32c_armv8\nas it is in pg_comp_crc32c_sse42. 
Seems odd, but there you have it.\n\n(There are some other issues, but they seem fit for separate threads.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jun 2020 23:59:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: valgrind error" } ]
[ { "msg_contents": "The HEAPDEBUGALL define has been broken since PG12 due to tableam \nchanges. Should we just remove this? It doesn't look very useful. \nIt's been around since Postgres95.\n\nIf we opt for removing: PG12 added an analogous HEAPAMSLOTDEBUGALL \n(which still compiles correctly). Would we want to keep that?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 19 Apr 2020 14:50:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "HEAPDEBUGALL is broken" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> The HEAPDEBUGALL define has been broken since PG12 due to tableam \n> changes. Should we just remove this? It doesn't look very useful. \n> It's been around since Postgres95.\n> If we opt for removing: PG12 added an analogous HEAPAMSLOTDEBUGALL \n> (which still compiles correctly). Would we want to keep that?\n\n+1 for removing both. There are a lot of such debug \"features\"\nin the code, and few of them are worth anything IME.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 Apr 2020 09:37:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "Hello hackers,\n19.04.2020 13:37, Tom Lane wrote:\n>\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> The HEAPDEBUGALL define has been broken since PG12 due to tableam\n>> changes.  Should we just remove this?  It doesn't look very useful.\n>> It's been around since Postgres95.\n>> If we opt for removing: PG12 added an analogous HEAPAMSLOTDEBUGALL\n>> (which still compiles correctly).  Would we want to keep that?\n>\n> +1 for removing both.  
There are a lot of such debug \"features\"\n> in the code, and few of them are worth anything IME.\nTo the point, I've tried to use HAVE_ALLOCINFO on master today and it\nfailed too:\n$ CPPFLAGS=\"-DHAVE_ALLOCINFO\" ./configure --enable-tap-tests\n--enable-debug --enable-cassert  >/dev/null && make -j16 >/dev/null\ngeneration.c: In function ‘GenerationAlloc’:\ngeneration.c:191:11: error: ‘GenerationContext {aka struct\nGenerationContext}’ has no member named ‘name’\n     (_cxt)->name, (_chunk), (_chunk)->size)\n           ^\ngeneration.c:386:3: note: in expansion of macro ‘GenerationAllocInfo’\n   GenerationAllocInfo(set, chunk);\n   ^~~~~~~~~~~~~~~~~~~\ngeneration.c:191:11: error: ‘GenerationContext {aka struct\nGenerationContext}’ has no member named ‘name’\n     (_cxt)->name, (_chunk), (_chunk)->size)\n           ^\ngeneration.c:463:2: note: in expansion of macro ‘GenerationAllocInfo’\n  GenerationAllocInfo(set, chunk);\n  ^~~~~~~~~~~~~~~~~~~\n\nBest regards,\nAlexander", "msg_date": "Sun, 19 Apr 2020 23:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "On 2020-04-19 22:00, Alexander Lakhin wrote:\n> To the point, I've tried to use HAVE_ALLOCINFO on master today and it \n> failed too:\n> $ CPPFLAGS=\"-DHAVE_ALLOCINFO\" ./configure --enable-tap-tests \n> --enable-debug --enable-cassert  >/dev/null && make -j16 >/dev/null\n> generation.c: In function ‘GenerationAlloc’:\n> generation.c:191:11: error: ‘GenerationContext {aka struct \n> GenerationContext}’ has no member named ‘name’\n>      (_cxt)->name, (_chunk), 
(_chunk)->size)\n>            ^\n> generation.c:463:2: note: in expansion of macro ‘GenerationAllocInfo’\n>   GenerationAllocInfo(set, chunk);\n>   ^~~~~~~~~~~~~~~~~~~\n\nDo you have a proposed patch?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Apr 2020 20:01:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "On 2020-04-19 15:37, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> The HEAPDEBUGALL define has been broken since PG12 due to tableam\n>> changes. Should we just remove this? It doesn't look very useful.\n>> It's been around since Postgres95.\n>> If we opt for removing: PG12 added an analogous HEAPAMSLOTDEBUGALL\n>> (which still compiles correctly). Would we want to keep that?\n> \n> +1 for removing both. There are a lot of such debug \"features\"\n> in the code, and few of them are worth anything IME.\n\nremoved\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Apr 2020 20:11:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-04-19 15:37, Tom Lane wrote:\n>> +1 for removing both. 
There are a lot of such debug \"features\"\n>> in the code, and few of them are worth anything IME.\n\n> removed\n\nI don't see a commit?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Apr 2020 14:27:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "21.04.2020 21:01, Peter Eisentraut wrote:\n> On 2020-04-19 22:00, Alexander Lakhin wrote:\n>> To the point, I've tried to use HAVE_ALLOCINFO on master today and it\n>> failed too:\n>\n> Do you have a proposed patch?\n>\nAs this is broken at least since the invention of the generational\nallocator (2017-11-23, a4ccc1ce), I believe than no one uses this (and\nslab is broken too). Nonetheless, HAVE_ALLOCINFO in aset.c is still\nworking, so it could be leaved alone, though the output too chatty for\ngeneral use (`make check` produces postmaster log of size 3.8GB). I\nthink someone would still need to insert some extra conditions to use\nthat or find another way to debug memory allocations.\n\nSo I would just remove this debug macro. The proposed patch is attached.\n\nBest regards,\nAlexander", "msg_date": "Wed, 22 Apr 2020 07:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "On 2020-04-21 20:27, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-04-19 15:37, Tom Lane wrote:\n>>> +1 for removing both. 
There are a lot of such debug \"features\"\n>>> in the code, and few of them are worth anything IME.\n> \n>> removed\n> \n> I don't see a commit?\n\npushed now\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 22 Apr 2020 13:29:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-04-21 20:27, Tom Lane wrote:\n>> I don't see a commit?\n\n> pushed now\n\nLooking at this, I'm tempted to nuke ACLDEBUG as well, which\nis the only remaining undocumented symbol in pg_config_manual.h.\nThe code it controls looks equally forlorn and not-useful-as-is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Apr 2020 10:17:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 21.04.2020 21:01, Peter Eisentraut wrote:\n>> Do you have a proposed patch?\n\n> As this is broken at least since the invention of the generational\n> allocator (2017-11-23, a4ccc1ce), I believe than no one uses this (and\n> slab is broken too). Nonetheless, HAVE_ALLOCINFO in aset.c is still\n> working, so it could be leaved alone, though the output too chatty for\n> general use (`make check` produces postmaster log of size 3.8GB). I\n> think someone would still need to insert some extra conditions to use\n> that or find another way to debug memory allocations.\n\n> So I would just remove this debug macro. 
The proposed patch is attached.\n\nI didn't review this in close detail, but I think it's a good idea.\nWe have better memory-use-analysis tools these days, such as valgrind,\nso it's no surprise that nobody is using this old code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Apr 2020 10:19:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "On 2020-04-19 09:37:08 -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > The HEAPDEBUGALL define has been broken since PG12 due to tableam \n> > changes. Should we just remove this? It doesn't look very useful. \n> > It's been around since Postgres95.\n> > If we opt for removing: PG12 added an analogous HEAPAMSLOTDEBUGALL \n> > (which still compiles correctly). Would we want to keep that?\n> \n> +1 for removing both. There are a lot of such debug \"features\"\n> in the code, and few of them are worth anything IME.\n\nBelatedly: +many\n\n\n", "msg_date": "Wed, 22 Apr 2020 20:44:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "I wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n>> So I would just remove this debug macro. 
The proposed patch is attached.\n\n> I didn't review this in close detail, but I think it's a good idea.\n\nI checked this more closely and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 15:28:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "I wrote:\n> Looking at this, I'm tempted to nuke ACLDEBUG as well, which\n> is the only remaining undocumented symbol in pg_config_manual.h.\n> The code it controls looks equally forlorn and not-useful-as-is.\n\nDid that, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 15:38:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" }, { "msg_contents": "On Wed, Apr 22, 2020 at 08:44:18PM -0700, Andres Freund wrote:\n> On 2020-04-19 09:37:08 -0400, Tom Lane wrote:\n>> +1 for removing both. There are a lot of such debug \"features\"\n>> in the code, and few of them are worth anything IME.\n> \n> Belatedly: +many\n\n+1.\n--\nMichael", "msg_date": "Fri, 24 Apr 2020 16:19:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: HEAPDEBUGALL is broken" } ]
[ { "msg_contents": "Hi,\nstrlen it is one of the low fruits that can be harvested.\nWhat is your opinion?\n\nregards,\nRanier Vilela", "msg_date": "Sun, 19 Apr 2020 11:24:38 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Small optimization across postgres (remove strlen duplicate\n usage)" }, { "msg_contents": "On Sun, Apr 19, 2020 at 11:24:38AM -0300, Ranier Vilela wrote:\n>Hi,\n>strlen it is one of the low fruits that can be harvested.\n>What is your opinion?\n>\n\nThat assumes this actually affects/improves performance, without any\nmeasurements proving that. Considering large number of the places you\nmodified are related to DDL (CreateComment, ChooseIndexColumnNames, ...)\nor stuff that runs only once or infrequently (like the changes in\nPostmasterMain or libpqrcv_get_senderinfo). Likewise, it seems entirely\npointless to worry about strlen() overhead e.g. in fsync_parent_path\nwhich is probably dominated by I/O.\n\nMaybe there are places where this would help, but I don't see a reason\nto just throw away all strlen calls and replace them with something\nclearly less convenient and possibly more error-prone (I'd expect quite\na few off-by-one mistakes with this).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 19 Apr 2020 21:33:26 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" }, { "msg_contents": "Em dom., 19 de abr. 
de 2020 às 16:33, Tomas Vondra <\ntomas.vondra@2ndquadrant.com> escreveu:\n\n> On Sun, Apr 19, 2020 at 11:24:38AM -0300, Ranier Vilela wrote:\n> >Hi,\n> >strlen it is one of the low fruits that can be harvested.\n> >What is your opinion?\n> >\n>\n> That assumes this actually affects/improves performance, without any\n> measurements proving that. Considering large number of the places you\n> modified are related to DDL (CreateComment, ChooseIndexColumnNames, ...)\n> or stuff that runs only once or infrequently (like the changes in\n> PostmasterMain or libpqrcv_get_senderinfo). Likewise, it seems entirely\n> pointless to worry about strlen() overhead e.g. in fsync_parent_path\n> which is probably dominated by I/O.\n>\nWith code as interconnected as postgres, it is difficult to say that a\nfunction, which calls strlen, repeatedly, would not have any gain.\nRegarding the functions, I was just being consistent, trying to remove all\noccurrences, even where, there is very little gain.\n\n\n>\n> Maybe there are places where this would help, but I don't see a reason\n> to just throw away all strlen calls and replace them with something\n> clearly less convenient and possibly more error-prone (I'd expect quite\n> a few off-by-one mistakes with this).\n>\nYes, always, it is prone to errors, but for the most part, they are safe\nchanges.\nIt passes all 199 tests, of course it has not been tested in a real\nproduction environment.\nPerhaps the time is not the best, the end of the cycle, but, once done, I\nbelieve it would be a good harvest.\n\n Thank you for comment.\n\nregards,\nRanier Vilela\n\nEm dom., 19 de abr. de 2020 às 16:33, Tomas Vondra <tomas.vondra@2ndquadrant.com> escreveu:On Sun, Apr 19, 2020 at 11:24:38AM -0300, Ranier Vilela wrote:\n>Hi,\n>strlen it is one of the low fruits that can be harvested.\n>What is your opinion?\n>\n\nThat assumes this actually affects/improves performance, without any\nmeasurements proving that. 
Considering large number of the places you\nmodified are related to DDL (CreateComment, ChooseIndexColumnNames, ...)\nor stuff that runs only once or infrequently (like the changes in\nPostmasterMain or libpqrcv_get_senderinfo). Likewise, it seems entirely\npointless to worry about strlen() overhead e.g. in fsync_parent_path\nwhich is probably dominated by I/O.\nWith code as interconnected as postgres, it is difficult to say that a function, which calls strlen, repeatedly, would not have any gain.Regarding the functions, I was just being consistent, trying to remove all occurrences, even where, there is very little gain. \n\n\n\nMaybe there are places where this would help, but I don't see a reason\nto just throw away all strlen calls and replace them with something\nclearly less convenient and possibly more error-prone (I'd expect quite\na few off-by-one mistakes with this).Yes, always, it is prone to errors, but for the most part, they are safe changes.It passes all 199 tests, of course it has not been tested in a real production environment.Perhaps the time is not the best, the end of the cycle, but, once done, I believe it would be a good harvest. Thank you for comment.regards,Ranier Vilela", "msg_date": "Sun, 19 Apr 2020 17:29:52 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" }, { "msg_contents": "On Sun, Apr 19, 2020 at 05:29:52PM -0300, Ranier Vilela wrote:\n>Em dom., 19 de abr. de 2020 às 16:33, Tomas Vondra <\n>tomas.vondra@2ndquadrant.com> escreveu:\n>\n>> On Sun, Apr 19, 2020 at 11:24:38AM -0300, Ranier Vilela wrote:\n>> >Hi,\n>> >strlen it is one of the low fruits that can be harvested.\n>> >What is your opinion?\n>> >\n>>\n>> That assumes this actually affects/improves performance, without any\n>> measurements proving that. 
Considering large number of the places you\n>> modified are related to DDL (CreateComment, ChooseIndexColumnNames, ...)\n>> or stuff that runs only once or infrequently (like the changes in\n>> PostmasterMain or libpqrcv_get_senderinfo). Likewise, it seems entirely\n>> pointless to worry about strlen() overhead e.g. in fsync_parent_path\n>> which is probably dominated by I/O.\n>>\n>With code as interconnected as postgres, it is difficult to say that a\n>function, which calls strlen, repeatedly, would not have any gain.\n>Regarding the functions, I was just being consistent, trying to remove all\n>occurrences, even where, there is very little gain.\n>\n\nThat very much depends on the function, I think. For most places modified\nby this patch it's not that hard, I think. The DDL cases (comments and\nindexes) seem pretty clear. Similarly for the command parsing, wal\nreceiver, lockfile creation, guc, exec.c, and so on.\n\nPerhaps the only places worth changing might be xml.c and spell.c, but\nI'm not convinced even these are worth it, really.\n\n>\n>>\n>> Maybe there are places where this would help, but I don't see a reason\n>> to just throw away all strlen calls and replace them with something\n>> clearly less convenient and possibly more error-prone (I'd expect quite\n>> a few off-by-one mistakes with this).\n>>\n>Yes, always, it is prone to errors, but for the most part, they are safe\n>changes.\n>It passes all 199 tests, of course it has not been tested in a real\n>production environment.\n>Perhaps the time is not the best, the end of the cycle, but, once done, I\n>believe it would be a good harvest.\n>\n\nI wasn't really worried about bugs in this patch, but rather in future\nchanges made to this code. 
Off-by-one errors are trivial to make.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 19 Apr 2020 23:18:53 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" }, { "msg_contents": "On 4/19/20 10:29 PM, Ranier Vilela wrote:\n> Em dom., 19 de abr. de 2020 às 16:33, Tomas Vondra \n> <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> \n> escreveu:\n> \n> On Sun, Apr 19, 2020 at 11:24:38AM -0300, Ranier Vilela wrote:\n> >Hi,\n> >strlen it is one of the low fruits that can be harvested.\n> >What is your opinion?\n> >\n> \n> That assumes this actually affects/improves performance, without any\n> measurements proving that. Considering large number of the places you\n> modified are related to DDL (CreateComment, ChooseIndexColumnNames, ...)\n> or stuff that runs only once or infrequently (like the changes in\n> PostmasterMain or libpqrcv_get_senderinfo). Likewise, it seems entirely\n> pointless to worry about strlen() overhead e.g. in fsync_parent_path\n> which is probably dominated by I/O.\n> \n> With code as interconnected as postgres, it is difficult to say that a \n> function, which calls strlen, repeatedly, would not have any gain.\n> Regarding the functions, I was just being consistent, trying to remove \n> all occurrences, even where, there is very little gain.\n\nAt least gcc 9.3 optimizes \"strlen(s) == 0\" to \"s[0] == '\\0'\", even at \nlow optimization levels. 
I tried it out with https://godbolt.org/.\n\nMaybe some of the others cases are performance improvements, I have not \nchecked your patch in details, but strlen() == 0 is easily handled by \nthe compiler.\n\nC code:\n\nint f1(char *str) {\n return strlen(str) == 0;\n}\n\nint f2(char *str) {\n return str[0] == '\\0';\n}\n\nAssembly generated with default flags:\n\nf1:\n pushq %rbp\n movq %rsp, %rbp\n movq %rdi, -8(%rbp)\n movq -8(%rbp), %rax\n movzbl (%rax), %eax\n testb %al, %al\n sete %al\n movzbl %al, %eax\n popq %rbp\n ret\nf2:\n pushq %rbp\n movq %rsp, %rbp\n movq %rdi, -8(%rbp)\n movq -8(%rbp), %rax\n movzbl (%rax), %eax\n testb %al, %al\n sete %al\n movzbl %al, %eax\n popq %rbp\n ret\n\nAssembly generated with -O2.\n\nf1:\n xorl %eax, %eax\n cmpb $0, (%rdi)\n sete %al\n ret\nf2:\n xorl %eax, %eax\n cmpb $0, (%rdi)\n sete %al\n ret\n\nAndreas\n\n\n", "msg_date": "Sun, 19 Apr 2020 23:20:58 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sun, Apr 19, 2020 at 11:24:38AM -0300, Ranier Vilela wrote:\n>> strlen it is one of the low fruits that can be harvested.\n\n> Maybe there are places where this would help, but I don't see a reason\n> to just throw away all strlen calls and replace them with something\n> clearly less convenient and possibly more error-prone (I'd expect quite\n> a few off-by-one mistakes with this).\n\nI've heard it claimed that modern compilers will optimize\nstrlen('literal') to a constant at compile time. 
I'm not sure how\nmuch I believe that, but to the extent that it's true, replacing such\ncalls would provide exactly no performance benefit.\n\nI'm quite -1 on changing these to sizeof(), in any case, because\n(a) that opens up room for confusion about whether the trailing nul is\nincluded, and (b) it makes it very easy, when changing or copy/pasting\ncode, to apply sizeof to something that's not a string literal, with\ndisastrous results.\n\nThe cases where Ranier proposes to replace strlen(foo) == 0\nwith a test on foo[0] do seem like wins, though. Asking for\nthe full string length to be computed is more computation than\nnecessary, and it's less clear that the compiler could be\nexpected to save you from that. Anyway there's a coding style\nproposition that we should be doing this consistently, and\ncertainly lots of places do do this without using strlen().\n\nI can't get excited about the proposed changes to optimize away\nmultiple calls of strlen() either, unless there's performance\nmeasurements to support them individually. This again seems like\nsomething a compiler might do for you.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 Apr 2020 17:38:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" }, { "msg_contents": "On Mon, 20 Apr 2020 at 09:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The cases where Ranier proposes to replace strlen(foo) == 0\n> with a test on foo[0] do seem like wins, though. Asking for\n> the full string length to be computed is more computation than\n> necessary, and it's less clear that the compiler could be\n> expected to save you from that. 
Anyway there's a coding style\n> proposition that we should be doing this consistently, and\n> certainly lots of places do do this without using strlen().\n\nLooking at https://godbolt.org/z/6XsjbA it seems like GCC is pretty\ngood at getting rid of the strlen call even at -O0. It takes -O1 for\nclang to use it and -O2 for icc.\n\nDavid\n\n\n", "msg_date": "Mon, 20 Apr 2020 10:00:28 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" }, { "msg_contents": "Em dom., 19 de abr. de 2020 às 19:00, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Mon, 20 Apr 2020 at 09:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The cases where Ranier proposes to replace strlen(foo) == 0\n> > with a test on foo[0] do seem like wins, though. Asking for\n> > the full string length to be computed is more computation than\n> > necessary, and it's less clear that the compiler could be\n> > expected to save you from that. Anyway there's a coding style\n> > proposition that we should be doing this consistently, and\n> > certainly lots of places do do this without using strlen().\n>\n> Looking at https://godbolt.org/z/6XsjbA it seems like GCC is pretty\n> good at getting rid of the strlen call even at -O0. It takes -O1 for\n> clang to use it and -O2 for icc.\n>\nI tried: https://godbolt.org with:\n\n-O2:\n\nf1:\nint main (int argv, char **argc)\n{\n return strlen(argc[0]) == 0;\n}\n\nf1: Assembly\nmain: # @main\n mov rcx, qword ptr [rsi]\n xor eax, eax\n cmp byte ptr [rcx], 0\n sete al\n ret\n\nf2:\nint main (int argv, char **argc)\n{\n return argc[0] == '\\0';\n}\n\nf2: Assembly\n\nmain: # @main\n xor eax, eax\n cmp qword ptr [rsi], 0\n sete al\n ret\n\nFor me clearly str [0] == '\\ 0', wins.\n\nregards,\nRanier Vilela\n\nEm dom., 19 de abr. 
de 2020 às 19:00, David Rowley <dgrowleyml@gmail.com> escreveu:On Mon, 20 Apr 2020 at 09:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The cases where Ranier proposes to replace strlen(foo) == 0\n> with a test on foo[0] do seem like wins, though.  Asking for\n> the full string length to be computed is more computation than\n> necessary, and it's less clear that the compiler could be\n> expected to save you from that.  Anyway there's a coding style\n> proposition that we should be doing this consistently, and\n> certainly lots of places do do this without using strlen().\n\nLooking at https://godbolt.org/z/6XsjbA it seems like GCC is pretty\ngood at getting rid of the strlen call even at -O0. It takes -O1 for\nclang to use it and -O2 for icc.I tried: https://godbolt.org with:-O2:f1:\nint main (int argv, char **argc){    return strlen(argc[0]) == 0;}\nf1: Assembly\nmain:                                   # @main        mov     rcx, qword ptr [rsi]        xor     eax, eax        cmp     byte ptr [rcx], 0        sete    al        ret\nf2:\nint main (int argv, char **argc){    return argc[0] == '\\0';}\nf2: Assemblymain:                                   # @main        xor     eax, eax        cmp     qword ptr [rsi], 0        sete    al        retFor me clearly str [0] == '\\ 0', wins.regards,Ranier Vilela", "msg_date": "Sun, 19 Apr 2020 20:23:03 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" }, { "msg_contents": "Em dom., 19 de abr. 
de 2020 às 18:38, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Sun, Apr 19, 2020 at 11:24:38AM -0300, Ranier Vilela wrote:\n> >> strlen it is one of the low fruits that can be harvested.\n>\n> > Maybe there are places where this would help, but I don't see a reason\n> > to just throw away all strlen calls and replace them with something\n> > clearly less convenient and possibly more error-prone (I'd expect quite\n> > a few off-by-one mistakes with this).\n>\n> I've heard it claimed that modern compilers will optimize\n> strlen('literal') to a constant at compile time. I'm not sure how\n> much I believe that, but to the extent that it's true, replacing such\n> calls would provide exactly no performance benefit.\n>\nTom, I wouldn't believe it very much, they still have a lot of stupid\ncompilers out there (msvc for example).\nFurthermore, optimizations are a complicated business, often the adjacent\ncode does not allow for such optimizations.\nWhen a programmer does, there is no doubt.\n\n\n>\n> I'm quite -1 on changing these to sizeof(), in any case, because\n> (a) that opens up room for confusion about whether the trailing nul is\n> included, and (b) it makes it very easy, when changing or copy/pasting\n> code, to apply sizeof to something that's not a string literal, with\n> disastrous results.\n>\nIt may be true, but I have seen a lot of Postgres code, where sizeof is\nused extensively, even with real chances of what you said happened. So that\nrisk already exists.\n\n\n>\n> The cases where Ranier proposes to replace strlen(foo) == 0\n> with a test on foo[0] do seem like wins, though. Asking for\n> the full string length to be computed is more computation than\n> necessary, and it's less clear that the compiler could be\n> expected to save you from that. 
Anyway there's a coding style\n> proposition that we should be doing this consistently, and\n> certainly lots of places do do this without using strlen().\n>\nYes, this is the idea.\n\n\n>\n> I can't get excited about the proposed changes to optimize away\n> multiple calls of strlen() either, unless there's performance\n> measurements to support them individually. This again seems like\n> something a compiler might do for you.\n>\nAgain, the compiler will not always save us.\nI have seen many fanstatic solutions in Postgres, but I have also seen a\nlot of written code, forgive me, without caprice, without much care.\nThe idea is, little by little, to prevent carefree code, either written or\nleft in Postgres.\n\nYou can see an example in that patch.\nAfter a few calls the programmer validates the entry and leaves if it is\nbad. When it should be done before anything.\n\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 5bdc02fce2..a00cca2605 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -10699,10 +10699,15 @@ GUCArrayDelete(ArrayType *array, const char *name)\n struct config_generic *record;\n ArrayType *newarray;\n int i;\n+ int len;\n int index;\n\n Assert(name);\n\n+ /* if array is currently null, then surely nothing to delete */\n+ if (!array)\n+ return NULL;\n+\n /* test if the option is valid and we're allowed to set it */\n (void) validate_option_array_item(name, NULL, false);\n\n@@ -10711,12 +10716,9 @@ GUCArrayDelete(ArrayType *array, const char *name)\n if (record)\n name = record->name;\n\n- /* if array is currently null, then surely nothing to delete */\n- if (!array)\n- return NULL;\n-\n newarray = NULL;\n index = 1;\n+ len = strlen(name);\n\n for (i = 1; i <= ARR_DIMS(array)[0]; i++)\n {\n@@ -10735,8 +10737,8 @@ GUCArrayDelete(ArrayType *array, const char *name)\n val = TextDatumGetCString(d);\n\n /* ignore entry if it's what we want to delete */\n- if (strncmp(val, name, strlen(name)) 
== 0\n- && val[strlen(name)] == '=')\n+ if (strncmp(val, name, len) == 0\n+ && val[len] == '=')\n continue;\n\n /* else add it to the output array */\n\nregards,\nRanier Vilela\n\nEm dom., 19 de abr. de 2020 às 18:38, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sun, Apr 19, 2020 at 11:24:38AM -0300, Ranier Vilela wrote:\n>> strlen it is one of the low fruits that can be harvested.\n\n> Maybe there are places where this would help, but I don't see a reason\n> to just throw away all strlen calls and replace them with something\n> clearly less convenient and possibly more error-prone (I'd expect quite\n> a few off-by-one mistakes with this).\n\nI've heard it claimed that modern compilers will optimize\nstrlen('literal') to a constant at compile time.  I'm not sure how\nmuch I believe that, but to the extent that it's true, replacing such\ncalls would provide exactly no performance benefit.Tom, I wouldn't believe it very much, they still have a lot of stupid compilers out there (msvc for example).Furthermore, optimizations are a complicated business, often the adjacent code does not allow for such optimizations.When a programmer does, there is no doubt. \n\nI'm quite -1 on changing these to sizeof(), in any case, because\n(a) that opens up room for confusion about whether the trailing nul is\nincluded, and (b) it makes it very easy, when changing or copy/pasting\ncode, to apply sizeof to something that's not a string literal, with\ndisastrous results.It may be true, but I have seen a lot of Postgres code, where sizeof is used extensively, even with real chances of what you said happened. So that risk already exists. \n\nThe cases where Ranier proposes to replace strlen(foo) == 0\nwith a test on foo[0] do seem like wins, though.  Asking for\nthe full string length to be computed is more computation than\nnecessary, and it's less clear that the compiler could be\nexpected to save you from that.  
Anyway there's a coding style\nproposition that we should be doing this consistently, and\ncertainly lots of places do do this without using strlen().Yes, this is the idea. \n\nI can't get excited about the proposed changes to optimize away\nmultiple calls of strlen() either, unless there's performance\nmeasurements to support them individually.  This again seems like\nsomething a compiler might do for you.Again, the compiler will not always save us.I have seen many fanstatic solutions in Postgres, but I have also seen a lot of written code, forgive me, without caprice, without much care.The idea is, little by little, to prevent carefree code, either written or left in Postgres.You can see an example in that patch.After a few calls the programmer validates the entry and leaves if it is bad. When it should be done before anything.diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.cindex 5bdc02fce2..a00cca2605 100644--- a/src/backend/utils/misc/guc.c+++ b/src/backend/utils/misc/guc.c@@ -10699,10 +10699,15 @@ GUCArrayDelete(ArrayType *array, const char *name) \tstruct config_generic *record; \tArrayType  *newarray; \tint\t\t\ti;+\tint         len; \tint\t\t\tindex;  \tAssert(name); +\t/* if array is currently null, then surely nothing to delete */+\tif (!array)+\t\treturn NULL;+ \t/* test if the option is valid and we're allowed to set it */ \t(void) validate_option_array_item(name, NULL, false); @@ -10711,12 +10716,9 @@ GUCArrayDelete(ArrayType *array, const char *name) \tif (record) \t\tname = record->name; -\t/* if array is currently null, then surely nothing to delete */-\tif (!array)-\t\treturn NULL;- \tnewarray = NULL; \tindex = 1;+\tlen = strlen(name);  \tfor (i = 1; i <= ARR_DIMS(array)[0]; i++) \t{@@ -10735,8 +10737,8 @@ GUCArrayDelete(ArrayType *array, const char *name) \t\tval = TextDatumGetCString(d);  \t\t/* ignore entry if it's what we want to delete */-\t\tif (strncmp(val, name, strlen(name)) == 0-\t\t\t&& val[strlen(name)] == 
'=')+\t\tif (strncmp(val, name, len) == 0+\t\t\t&& val[len] == '=') \t\t\tcontinue;  \t\t/* else add it to the output array */ regards,Ranier Vilela", "msg_date": "Sun, 19 Apr 2020 20:36:26 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" }, { "msg_contents": "On Mon, 20 Apr 2020 at 11:24, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> I tried: https://godbolt.org with:\n>\n> -O2:\n>\n> f1:\n> int main (int argv, char **argc)\n> {\n> return strlen(argc[0]) == 0;\n> }\n>\n> f1: Assembly\n> main: # @main\n> mov rcx, qword ptr [rsi]\n> xor eax, eax\n> cmp byte ptr [rcx], 0\n> sete al\n> ret\n>\n> f2:\n> int main (int argv, char **argc)\n> {\n> return argc[0] == '\\0';\n> }\n>\n> f2: Assembly\n>\n> main: # @main\n> xor eax, eax\n> cmp qword ptr [rsi], 0\n> sete al\n> ret\n>\n> For me clearly str [0] == '\\ 0', wins.\n\nI think you'd want to use argc[0][0] == '\\0' or *argc[0] == '\\0'.\nOtherwise you appear just to be checking if the first element in the\nargc pointer array is set to NULL, which is certainly not the same as\nan empty string.\n\nDavid\n\n\n", "msg_date": "Mon, 20 Apr 2020 13:00:40 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" }, { "msg_contents": "Em dom., 19 de abr. 
de 2020 às 22:00, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Mon, 20 Apr 2020 at 11:24, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > I tried: https://godbolt.org with:\n> >\n> > -O2:\n> >\n> > f1:\n> > int main (int argv, char **argc)\n> > {\n> > return strlen(argc[0]) == 0;\n> > }\n> >\n> > f1: Assembly\n> > main: # @main\n> > mov rcx, qword ptr [rsi]\n> > xor eax, eax\n> > cmp byte ptr [rcx], 0\n> > sete al\n> > ret\n> >\n> > f2:\n> > int main (int argv, char **argc)\n> > {\n> > return argc[0] == '\\0';\n> > }\n> >\n> > f2: Assembly\n> >\n> > main: # @main\n> > xor eax, eax\n> > cmp qword ptr [rsi], 0\n> > sete al\n> > ret\n> >\n> > For me clearly str [0] == '\\ 0', wins.\n>\n> I think you'd want to use argc[0][0] == '\\0' or *argc[0] == '\\0'.\n> Otherwise you appear just to be checking if the first element in the\n> argc pointer array is set to NULL, which is certainly not the same as\n> an empty string.\n>\nI guess you're right.\n\nx86-64 clang (trunk) -O2\nf1:\nint cmp(const char * name)\n{\n return strlen(name) == 0;\n}\n\ncmp: # @cmp\n xor eax, eax\n cmp byte ptr [rdi], 0\n sete al\n ret\n\nf2:\nint cmp(const char * name)\n{\n return name[0] == '\\0';\n}\n\ncmp: # @cmp\n xor eax, eax\n cmp byte ptr [rdi], 0\n sete al\n ret\n\nIs the same result in assembly.\nWell, it doesn't matter to me, I will continue to use str[0] == '\\0'.\n\nThanks for take part.\n\nregards,\nRanier VIlela\n\nEm dom., 19 de abr. 
de 2020 às 22:00, David Rowley <dgrowleyml@gmail.com> escreveu:On Mon, 20 Apr 2020 at 11:24, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> I tried: https://godbolt.org with:\n>\n> -O2:\n>\n> f1:\n> int main (int argv, char **argc)\n> {\n>     return strlen(argc[0]) == 0;\n> }\n>\n> f1: Assembly\n> main:                                   # @main\n>         mov     rcx, qword ptr [rsi]\n>         xor     eax, eax\n>         cmp     byte ptr [rcx], 0\n>         sete    al\n>         ret\n>\n> f2:\n> int main (int argv, char **argc)\n> {\n>     return argc[0] == '\\0';\n> }\n>\n> f2: Assembly\n>\n> main:                                   # @main\n>         xor     eax, eax\n>         cmp     qword ptr [rsi], 0\n>         sete    al\n>         ret\n>\n> For me clearly str [0] == '\\ 0', wins.\n\nI think you'd want to use argc[0][0] == '\\0' or *argc[0] == '\\0'.\nOtherwise you appear just to be checking if the first element in the\nargc pointer array is set to NULL, which is certainly not the same as\nan empty string.I guess you're right.x86-64 clang (trunk) -O2f1:int cmp(const char * name){    return strlen(name) == 0;}cmp:                                    # @cmp        xor     eax, eax        cmp     byte ptr [rdi], 0        sete    al        retf2:int cmp(const char * name){    return name[0] == '\\0';}cmp:                                    # @cmp        xor     eax, eax        cmp     byte ptr [rdi], 0        sete    al        retIs the same result in assembly.Well, it doesn't matter to me, I will continue to use str[0] == '\\0'.Thanks for take part.regards,Ranier VIlela", "msg_date": "Sun, 19 Apr 2020 23:57:59 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Small optimization across postgres (remove strlen\n duplicate usage)" } ]
[ { "msg_contents": "Hi,\n\nlast week I finished pspg 3.0 https://github.com/okbob/pspg . pspg now\nsupports pipes, named pipes very well. Today the pspg can be used as pager\nfor output of \\watch command. Sure, psql needs attached patch.\n\nI propose new psql environment variable PSQL_WATCH_PAGER. When this\nvariable is not empty, then \\watch command starts specified pager, and\nredirect output to related pipe. When pipe is closed - by pager, then\n\\watch cycle is leaved.\n\nIf you want to test proposed feature, you need a pspg with\ncb4114f98318344d162a84b895a3b7f8badec241\ncommit.\n\nThen you can set your env\n\nexport PSQL_WATCH_PAGER=\"pspg --stream\"\npsql\n\nSELECT * FROM pg_stat_database;\n\\watch 1\n\nComments, notes?\n\nRegards\n\nPavel", "msg_date": "Sun, 19 Apr 2020 19:27:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - psql - use pager for \\watch command" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I propose new psql environment variable PSQL_WATCH_PAGER. When this\n> variable is not empty, then \\watch command starts specified pager, and\n> redirect output to related pipe. When pipe is closed - by pager, then\n> \\watch cycle is leaved.\n\nI dunno, this just seems really strange. With any normal pager,\nyou'd get completely unusable behavior (per the comments that you\ndidn't bother to change). Also, how would the pager know where\nthe boundaries between successive query outputs are? If it does\nnot know, seems like that's another decrement in usability.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Jul 2020 16:41:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "st 1. 7. 
2020 v 22:41 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I propose new psql environment variable PSQL_WATCH_PAGER. When this\n> > variable is not empty, then \\watch command starts specified pager, and\n> > redirect output to related pipe. When pipe is closed - by pager, then\n> > \\watch cycle is leaved.\n>\n> I dunno, this just seems really strange. With any normal pager,\n> you'd get completely unusable behavior (per the comments that you\n> didn't bother to change). Also, how would the pager know where\n> the boundaries between successive query outputs are? If it does\n> not know, seems like that's another decrement in usability.\n>\n\nThis feature is designed for specialized pagers - now only pspg can work in\nthis mode. But pspg is part of RH, Fedora, Debian, and it is available on\nalmost Unix platforms.\n\nhttps://github.com/okbob/pspg\n\nthe pspg knows the psql output format of \\watch statement.\n\nThe usability of this combination - psql \\watch and pspg is really good.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n", "msg_date": "Wed, 1 Jul 2020 23:03:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "ne 19. 4. 2020 v 19:27 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi,\n>\n> last week I finished pspg 3.0 https://github.com/okbob/pspg . pspg now\n> supports pipes, named pipes very well. Today the pspg can be used as pager\n> for output of \\watch command. Sure, psql needs attached patch.\n>\n> I propose new psql environment variable PSQL_WATCH_PAGER. When this\n> variable is not empty, then \\watch command starts specified pager, and\n> redirect output to related pipe. When pipe is closed - by pager, then\n> \\watch cycle is leaved.\n>\n> If you want to test proposed feature, you need a pspg with cb4114f98318344d162a84b895a3b7f8badec241\n> commit.\n>\n> Then you can set your env\n>\n> export PSQL_WATCH_PAGER=\"pspg --stream\"\n> psql\n>\n> SELECT * FROM pg_stat_database;\n> \\watch 1\n>\n> Comments, notes?\n>\n> Regards\n>\n> Pavel\n>\n\nrebase", "msg_date": "Fri, 8 Jan 2021 10:35:28 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Fri, Jan 8, 2021 at 10:36 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> ne 19. 4. 2020 v 19:27 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>> last week I finished pspg 3.0 https://github.com/okbob/pspg . pspg now supports pipes, named pipes very well. Today the pspg can be used as pager for output of \\watch command. 
Sure, psql needs attached patch.\n>>\n>> I propose new psql environment variable PSQL_WATCH_PAGER. When this variable is not empty, then \\watch command starts specified pager, and redirect output to related pipe. When pipe is closed - by pager, then \\watch cycle is leaved.\n>>\n>> If you want to test proposed feature, you need a pspg with cb4114f98318344d162a84b895a3b7f8badec241 commit.\n>>\n>> Then you can set your env\n>>\n>> export PSQL_WATCH_PAGER=\"pspg --stream\"\n>> psql\n>>\n>> SELECT * FROM pg_stat_database;\n>> \\watch 1\n>>\n>> Comments, notes?\n\nI tried this out with pspg 4.1 from my package manager. It seems\nreally useful, especially for demos. I like it!\n\n * Set up rendering options, in particular, disable the pager, because\n * nobody wants to be prompted while watching the output of 'watch'.\n */\n- myopt.topt.pager = 0;\n+ if (!pagerpipe)\n+ myopt.topt.pager = 0;\n\nObsolete comment.\n\n+static bool sigpipe_received = false;\n\nThis should be \"static volatile sig_atomic_t\", and I suppose our\nconvention name for that variable would be got_SIGPIPE. Would it be\npossible to ignore SIGPIPE instead, and then rely on another way of\nknowing that the pager has quit? But... hmm:\n\n- long s = Min(i, 1000L);\n+ long s = Min(i, pagerpipe ? 100L : 1000L);\n\nI haven't studied this (preexisting) polling loop, but I don't like\nit. I understand that it's there because on some systems, pg_usleep()\nwon't wake up for SIGINT (^C), but now it's being used for a secondary\npurpose, that I haven't fully understood. After I quit pspg (by\npressing q) while running \\watch 10, I have to wait until the end of a\n10 second cycle before it tries to write to the pipe again, unless I\nalso press ^C. 
I feel like it has to be possible to achieve \"precise\"\nbehaviour somehow when you quit; maybe something like waiting for\nreadiness on the pager's stderr, or something like that -- I haven't\nthought hard about this and I admit that I have no idea how this works\non Windows.\n\nSometimes I see a message like this after I quit pspg:\n\npostgres=# \\watch 10\ninput stream was closed\n\n\n", "msg_date": "Tue, 16 Feb 2021 14:49:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Hi\n\nút 16. 2. 2021 v 2:49 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Fri, Jan 8, 2021 at 10:36 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > ne 19. 4. 2020 v 19:27 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> >> last week I finished pspg 3.0 https://github.com/okbob/pspg . pspg now\n> supports pipes, named pipes very well. Today the pspg can be used as pager\n> for output of \\watch command. Sure, psql needs attached patch.\n> >>\n> >> I propose new psql environment variable PSQL_WATCH_PAGER. When this\n> variable is not empty, then \\watch command starts specified pager, and\n> redirect output to related pipe. When pipe is closed - by pager, then\n> \\watch cycle is leaved.\n> >>\n> >> If you want to test proposed feature, you need a pspg with\n> cb4114f98318344d162a84b895a3b7f8badec241 commit.\n> >>\n> >> Then you can set your env\n> >>\n> >> export PSQL_WATCH_PAGER=\"pspg --stream\"\n> >> psql\n> >>\n> >> SELECT * FROM pg_stat_database;\n> >> \\watch 1\n> >>\n> >> Comments, notes?\n>\n> I tried this out with pspg 4.1 from my package manager. It seems\n> really useful, especially for demos. 
I like it!\n>\n\nThank you :)\n\n\n> * Set up rendering options, in particular, disable the pager,\n> because\n> * nobody wants to be prompted while watching the output of\n> 'watch'.\n> */\n> - myopt.topt.pager = 0;\n> + if (!pagerpipe)\n> + myopt.topt.pager = 0;\n>\n> Obsolete comment.\n>\n> +static bool sigpipe_received = false;\n>\n> This should be \"static volatile sig_atomic_t\", and I suppose our\n> convention name for that variable would be got_SIGPIPE. Would it be\n> possible to ignore SIGPIPE instead, and then rely on another way of\n> knowing that the pager has quit? But... hmm:\n>\n> - long s = Min(i, 1000L);\n> + long s = Min(i, pagerpipe ? 100L :\n> 1000L);\n>\n> I haven't studied this (preexisting) polling loop, but I don't like\n> it. I understand that it's there because on some systems, pg_usleep()\n> won't wake up for SIGINT (^C), but now it's being used for a secondary\n> purpose, that I haven't fully understood. After I quit pspg (by\n> pressing q) while running \\watch 10, I have to wait until the end of a\n> 10 second cycle before it tries to write to the pipe again, unless I\n> also press ^C. I feel like it has to be possible to achieve \"precise\"\n> behaviour somehow when you quit; maybe something like waiting for\n> readiness on the pager's stderr, or something like that -- I haven't\n> thought hard about this and I admit that I have no idea how this works\n> on Windows.\n>\n\nI'll look there.\n\n\n> Sometimes I see a message like this after I quit pspg:\n>\n> postgres=# \\watch 10\n> input stream was closed\n>\n\nThis is a pspg's message. It's a little bit strange, because this message\ncomes from event reading, and in the end, the pspg doesn't read events. So\nit looks like the pspg issue, and I have to check it.\n\nI have one question - now, the pspg has to do complex heuristics to detect\nan start and an end of data in an stream. 
Can we, in this case (when\nPSQL_WATCH_PAGER is active), use invisible chars STX and ETX or maybe ETB?\nIt can be a special \\pset option. Surely, the detection of these chars\nshould be much more robust than current pspg's heuristics.\n\nRegards\n\nPavel", "msg_date": "Thu, 18 Feb 2021 09:16:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Hi\n\nút 16. 2. 2021 v 2:49 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Fri, Jan 8, 2021 at 10:36 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > ne 19. 4. 
2020 v 19:27 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> >> last week I finished pspg 3.0 https://github.com/okbob/pspg . pspg now\n> supports pipes, named pipes very well. Today the pspg can be used as pager\n> for output of \\watch command. Sure, psql needs attached patch.\n> >>\n> >> I propose new psql environment variable PSQL_WATCH_PAGER. When this\n> variable is not empty, then \\watch command starts specified pager, and\n> redirect output to related pipe. When pipe is closed - by pager, then\n> \\watch cycle is leaved.\n> >>\n> >> If you want to test proposed feature, you need a pspg with\n> cb4114f98318344d162a84b895a3b7f8badec241 commit.\n> >>\n> >> Then you can set your env\n> >>\n> >> export PSQL_WATCH_PAGER=\"pspg --stream\"\n> >> psql\n> >>\n> >> SELECT * FROM pg_stat_database;\n> >> \\watch 1\n> >>\n> >> Comments, notes?\n>\n> I tried this out with pspg 4.1 from my package manager. It seems\n> really useful, especially for demos. I like it!\n>\n> * Set up rendering options, in particular, disable the pager,\n> because\n> * nobody wants to be prompted while watching the output of\n> 'watch'.\n> */\n> - myopt.topt.pager = 0;\n> + if (!pagerpipe)\n> + myopt.topt.pager = 0;\n>\n> Obsolete comment.\n>\n\nfixed\n\n\n> +static bool sigpipe_received = false;\n>\n> This should be \"static volatile sig_atomic_t\", and I suppose our\n> convention name for that variable would be got_SIGPIPE. Would it be\n> possible to ignore SIGPIPE instead, and then rely on another way of\n> knowing that the pager has quit? But... hmm:\n>\n> - long s = Min(i, 1000L);\n> + long s = Min(i, pagerpipe ? 100L :\n> 1000L);\n>\n> I haven't studied this (preexisting) polling loop, but I don't like\n> it. I understand that it's there because on some systems, pg_usleep()\n> won't wake up for SIGINT (^C), but now it's being used for a secondary\n> purpose, that I haven't fully understood. 
After I quit pspg (by\n> pressing q) while running \\watch 10, I have to wait until the end of a\n> 10 second cycle before it tries to write to the pipe again, unless I\n> also press ^C. I feel like it has to be possible to achieve \"precise\"\n> behaviour somehow when you quit; maybe something like waiting for\n> readiness on the pager's stderr, or something like that -- I haven't\n> thought hard about this and I admit that I have no idea how this works\n> on Windows.\n>\n>\nI rewrote this mechanism (it was broken, because the timing of SIGPIPE is\ndifferent, then I expected). An implementation can be significantly simpler\n- just detect with waitpid any closed child and react. You proposed it.\n\nSometimes I see a message like this after I quit pspg:\n>\n> postgres=# \\watch 10\n> input stream was closed\n>\n\nI don't see this message. But I use fresh 4.3 pspg\n\nplease, see attached patch\n\nRegards\n\nPavel", "msg_date": "Wed, 3 Mar 2021 19:16:47 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Hi\n\nHere is a little bit updated patch - detection of end of any child process\ncannot be used on WIN32. I am not an expert on this platform, but from what\nI read about it, there is no easy solution. The problem is in _popen\nfunction. We lost the handle of the created process, and it is not possible\nto find it. Writing a new implementation of _popen function looks like a\nbig overkill to me. We can disable this functionality there completely (on\nwin32) or we can accept the waiting time after pager has ended until we\ndetect pipe error. 
I hope so this is acceptable, in this moment, because a)\nthere are not pspg for windows (and there was only few requests for porting\nthere in last 4 years), b) usage of psql on mswin platform is not too wide,\nc) in near future, there will be an possibility to use Unix psql on this\nplatform.\n\nRegards\n\nPavel", "msg_date": "Thu, 4 Mar 2021 07:37:04 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Hi\n\nčt 4. 3. 2021 v 7:37 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> Here is a little bit updated patch - detection of end of any child process\n> cannot be used on WIN32. I am not an expert on this platform, but from what\n> I read about it, there is no easy solution. The problem is in _popen\n> function. We lost the handle of the created process, and it is not possible\n> to find it. Writing a new implementation of _popen function looks like a\n> big overkill to me. We can disable this functionality there completely (on\n> win32) or we can accept the waiting time after pager has ended until we\n> detect pipe error. I hope so this is acceptable, in this moment, because a)\n> there are not pspg for windows (and there was only few requests for porting\n> there in last 4 years), b) usage of psql on mswin platform is not too wide,\n> c) in near future, there will be an possibility to use Unix psql on this\n> platform.\n>\n>\nsecond version - after some thinking, I think the pager for \\watch command\nshould be controlled by option \"pager\" too. 
When the pager is disabled on\npsql level, then the pager will not be used for \\watch too.\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>", "msg_date": "Thu, 4 Mar 2021 11:28:16 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Thu, Mar 4, 2021 at 11:28 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> čt 4. 3. 2021 v 7:37 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>> Here is a little bit updated patch - detection of end of any child process cannot be used on WIN32.\n\nYeah, it's OK for me if this feature only works on Unix until the\nright person for the job shows up with a patch. If there is no pspg\non Windows, how would we even know if it works?\n\n> second version - after some thinking, I think the pager for \\watch command should be controlled by option \"pager\" too. When the pager is disabled on psql level, then the pager will not be used for \\watch too.\n\nMakes sense.\n\n+ long s = Min(i, 100L);\n+\n+ pg_usleep(1000L * s);\n+\n+ /*\n+ * in this moment an pager process can be only one child of\n+ * psql process. There cannot be other processes. So we can\n+ * detect end of any child process for fast detection of\n+ * pager process.\n+ *\n+ * This simple detection doesn't work on WIN32, because we\n+ * don't know handle of process created by _popen function.\n+ * Own implementation of _popen function based on CreateProcess\n+ * looks like overkill in this moment.\n+ */\n+ if (pagerpipe)\n+ {\n+\n+ int status;\n+ pid_t pid;\n+\n+ pid = waitpid(-1, &status, WNOHANG);\n+ if (pid)\n+ break;\n+ }\n+\n+#endif\n+\n if (cancel_pressed)\n break;\n\nI thought a bit about what we're really trying to achieve here. We\nwant to go to sleep until someone presses ^C, the pager exits, or a\ncertain time is reached. Here, we're waking up 10 times per second to\ncheck for exited child processes. 
It works, but it does not spark\njoy.\n\nI thought about treating SIGCHLD the same way as we treat SIGINT: it\ncould use the siglongjmp() trick for a non-local exit from the signal\nhandler. (Hmm... I wonder why that pre-existing code bothers to check\ncancel_pressed, considering it is running with\nsigint_interrupt_enabled = true so it won't even set the flag.) It\nfeels clever, but then you'd still have the repeating short\npg_usleep() calls, for reasons described by commit 8c1a71d36f5. I do\nnot like sleep/poll loops. Think of the polar bears. I need to fix\nall of these, as a carbon emission offset for cfbot.\n\nAlthough there are probably several ways to do this efficiently, my\nfirst thought was: let's try sigtimedwait()! If you block the signals\nfirst, you have a race-free way to wait for SIGINT (^C), SIGCHLD\n(pager exited) or a timeout you can specify. I coded that up and it\nworked pretty nicely, but then I tried it on macOS too and it didn't\ncompile -- Apple didn't implement that. Blah.\n\nNext I tried sigwait(). That's already used in our tree, so it should\nbe OK. At first I thought that using SIGALRM to wake it up would be a\nbit too ugly and I was going to give up, but then I realised that an\ninterval timer (one that automatically fires every X seconds) is\nexactly what we want here, and we can set it up just once at the start\nof do_watch() and cancel it at the end of do_watch(). With the\nattached patch you get exactly one sigwait() syscall of the correct\nduration per \\watch cycle.\n\nThoughts? 
I put my changes into a separate patch for clarity, but\nthey need some more tidying up.\n\nI'll look at the documentation next.", "msg_date": "Sun, 21 Mar 2021 11:44:46 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Sun, Mar 21, 2021 at 11:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> [review]\n\nOh, just BTW, to save confusion for others who might try this: It\nseems there is something wrong with pspg --stream on macOS, at least\nwhen using MacPorts. I assumed it might be just pspg 3.1.5 being too\nold (that's what MacPorts has currently), so I didn't mention it\nbefore, but I just built pspg from your github master branch and it\nhas the same symptom. It doesn't seem to repaint the screen until you\npress a key. I can see that psql is doing its job, but pspg is\nsitting in select() reached from ncurses wgetch():\n\n * frame #0: 0x000000019b4af0e8 libsystem_kernel.dylib`__select + 8\n frame #1: 0x0000000100ca0620 libncurses.6.dylib`_nc_timed_wait + 332\n frame #2: 0x0000000100c85444 libncurses.6.dylib`_nc_wgetch + 296\n frame #3: 0x0000000100c85b24 libncurses.6.dylib`wgetch + 52\n frame #4: 0x0000000100a815e4 pspg`get_event + 624\n frame #5: 0x0000000100a7899c pspg`main + 9640\n frame #6: 0x000000019b4f9f34 libdyld.dylib`start + 4\n\nThat's using MacPorts' libncurses. I couldn't get it to build against\nApple's libcurses (some missing functions). It's the same for both\nyour V2 and the fixup I posted. 
When you press a key, it suddenly\ncatches up and repaints all the \\watch updates that were buffered.\n\nIt works fine on Linux and FreeBSD though (I tried pspg 4.1.0 from\nDebian's package manager, and pspg 4.3.1 from FreeBSD's).\n\n\n", "msg_date": "Sun, 21 Mar 2021 12:41:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "ne 21. 3. 2021 v 0:42 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Sun, Mar 21, 2021 at 11:44 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > [review]\n>\n> Oh, just BTW, to save confusion for others who might try this: It\n> seems there is something wrong with pspg --stream on macOS, at least\n> when using MacPorts. I assumed it might be just pspg 3.1.5 being too\n> old (that's what MacPorts has currently), so I didn't mention it\n> before, but I just built pspg from your github master branch and it\n> has the same symptom. It doesn't seem to repaint the screen until you\n> press a key. I can see that psql is doing its job, but pspg is\n> sitting in select() reached from ncurses wgetch():\n>\n> * frame #0: 0x000000019b4af0e8 libsystem_kernel.dylib`__select + 8\n> frame #1: 0x0000000100ca0620 libncurses.6.dylib`_nc_timed_wait + 332\n> frame #2: 0x0000000100c85444 libncurses.6.dylib`_nc_wgetch + 296\n> frame #3: 0x0000000100c85b24 libncurses.6.dylib`wgetch + 52\n> frame #4: 0x0000000100a815e4 pspg`get_event + 624\n> frame #5: 0x0000000100a7899c pspg`main + 9640\n> frame #6: 0x000000019b4f9f34 libdyld.dylib`start + 4\n>\n> That's using MacPorts' libncurses. I couldn't get it to build against\n> Apple's libcurses (some missing functions). It's the same for both\n> your V2 and the fixup I posted. When you press a key, it suddenly\n> catches up and repaints all the \\watch updates that were buffered.\n>\n\nI do not have a Mac, so I never tested these features there. 
Surelly,\nsomething is wrong, but I have no idea what.\n\n1. pspg call timeout function with value 1000. So maximal waiting time\nanywhere should be 1 sec\n\n2. For this case, the pool function should be called, and timeout is\ndetected from the result value of the pool function.\n\nSo it looks like the pool function has a little bit different behavior than\nI expect.\n\nCan somebody help me (with access on macos0 with debugging this issue?\n\nRegards\n\nPavel\n\n\n\n\n> It works fine on Linux and FreeBSD though (I tried pspg 4.1.0 from\n> Debian's package manager, and pspg 4.3.1 from FreeBSD's).\n>\n\nne 21. 3. 2021 v 0:42 odesílatel Thomas Munro <thomas.munro@gmail.com> napsal:On Sun, Mar 21, 2021 at 11:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> [review]\n\nOh, just BTW, to save confusion for others who might try this:  It\nseems there is something wrong with pspg --stream on macOS, at least\nwhen using MacPorts.  I assumed it might be just pspg 3.1.5 being too\nold (that's what MacPorts has currently), so I didn't mention it\nbefore, but I just built pspg from your github master branch and it\nhas the same symptom.  It doesn't seem to repaint the screen until you\npress a key.  I can see that psql is doing its job, but pspg is\nsitting in select() reached from ncurses wgetch():\n\n  * frame #0: 0x000000019b4af0e8 libsystem_kernel.dylib`__select + 8\n    frame #1: 0x0000000100ca0620 libncurses.6.dylib`_nc_timed_wait + 332\n    frame #2: 0x0000000100c85444 libncurses.6.dylib`_nc_wgetch + 296\n    frame #3: 0x0000000100c85b24 libncurses.6.dylib`wgetch + 52\n    frame #4: 0x0000000100a815e4 pspg`get_event + 624\n    frame #5: 0x0000000100a7899c pspg`main + 9640\n    frame #6: 0x000000019b4f9f34 libdyld.dylib`start + 4\n\nThat's using MacPorts' libncurses.  I couldn't get it to build against\nApple's libcurses (some missing functions).  It's the same for both\nyour V2 and the fixup I posted.  
When you press a key, it suddenly\ncatches up and repaints all the \\watch updates that were buffered.I  do not have a Mac, so I never tested these features there. Surelly, something is wrong, but I have no idea what.1. pspg call timeout function with value 1000. So maximal waiting time anywhere should be 1 sec2. For this case, the pool function should be called, and timeout is detected from the result value of the pool function. So it looks like the pool function has a little bit different behavior than I expect.Can somebody help me (with access on macos0 with debugging this issue?RegardsPavel \n\nIt works fine on Linux and FreeBSD though (I tried pspg 4.1.0 from\nDebian's package manager, and pspg 4.3.1 from FreeBSD's).", "msg_date": "Sun, 21 Mar 2021 10:37:36 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "so 20. 3. 2021 v 23:45 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Thu, Mar 4, 2021 at 11:28 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > čt 4. 3. 2021 v 7:37 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> >> Here is a little bit updated patch - detection of end of any child\n> process cannot be used on WIN32.\n>\n> Yeah, it's OK for me if this feature only works on Unix until the\n> right person for the job shows up with a patch. If there is no pspg\n> on Windows, how would we even know if it works?\n>\n> > second version - after some thinking, I think the pager for \\watch\n> command should be controlled by option \"pager\" too. When the pager is\n> disabled on psql level, then the pager will not be used for \\watch too.\n>\n> Makes sense.\n>\n> + long s = Min(i, 100L);\n> +\n> + pg_usleep(1000L * s);\n> +\n> + /*\n> + * in this moment an pager process can be only one child of\n> + * psql process. There cannot be other processes. 
So we can\n> + * detect end of any child process for fast detection of\n> + * pager process.\n> + *\n> + * This simple detection doesn't work on WIN32, because we\n> + * don't know handle of process created by _popen function.\n> + * Own implementation of _popen function based on\n> CreateProcess\n> + * looks like overkill in this moment.\n> + */\n> + if (pagerpipe)\n> + {\n> +\n> + int status;\n> + pid_t pid;\n> +\n> + pid = waitpid(-1, &status, WNOHANG);\n> + if (pid)\n> + break;\n> + }\n> +\n> +#endif\n> +\n> if (cancel_pressed)\n> break;\n>\n> I thought a bit about what we're really trying to achieve here. We\n> want to go to sleep until someone presses ^C, the pager exits, or a\n> certain time is reached. Here, we're waking up 10 times per second to\n> check for exited child processes. It works, but it does not spark\n> joy.\n>\n> I thought about treating SIGCHLD the same way as we treat SIGINT: it\n> could use the siglongjmp() trick for a non-local exit from the signal\n> handler. (Hmm... I wonder why that pre-existing code bothers to check\n> cancel_pressed, considering it is running with\n> sigint_interrupt_enabled = true so it won't even set the flag.) It\n> feels clever, but then you'd still have the repeating short\n> pg_usleep() calls, for reasons described by commit 8c1a71d36f5. I do\n> not like sleep/poll loops. Think of the polar bears. I need to fix\n> all of these, as a carbon emission offset for cfbot.\n>\n> Although there are probably several ways to do this efficiently, my\n> first thought was: let's try sigtimedwait()! If you block the signals\n> first, you have a race-free way to wait for SIGINT (^C), SIGCHLD\n> (pager exited) or a timeout you can specify. I coded that up and it\n> worked pretty nicely, but then I tried it on macOS too and it didn't\n> compile -- Apple didn't implement that. Blah.\n>\n> Next I tried sigwait(). That's already used in our tree, so it should\n> be OK. 
At first I thought that using SIGALRM to wake it up would be a\n> bit too ugly and I was going to give up, but then I realised that an\n> interval timer (one that automatically fires every X seconds) is\n> exactly what we want here, and we can set it up just once at the start\n> of do_watch() and cancel it at the end of do_watch(). With the\n> attached patch you get exactly one sigwait() syscall of the correct\n> duration per \\watch cycle.\n>\n> Thoughts? I put my changes into a separate patch for clarity, but\n> they need some more tidying up.\n>\n\nyes, your solution is much better.\n\nPavel\n\n\n> I'll look at the documentation next.\n>\n", "msg_date": "Sun, 21 Mar 2021 11:43:10 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Sun, Mar 21, 2021 at 10:38 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Can somebody help me (with access on macos0 with debugging this issue?\n\nI'll try to figure it out, but maybe after the code freeze. I started\nmy programming career writing curses software a million years ago on a\ncouple of extinct Unixes... I might even be able to remember how it\nworks. This is not a problem for committing the PSQL_WATCH_PAGER\npatch, I just mentioned it here because I thought that others might\ntry it out on a Mac and be confused.\n\n\n", "msg_date": "Mon, 22 Mar 2021 16:55:16 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "po 22. 3. 
2021 v 4:55 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Sun, Mar 21, 2021 at 10:38 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > Can somebody help me (with access on macos0 with debugging this issue?\n>\n> I'll try to figure it out, but maybe after the code freeze. I started\n> my programming career writing curses software a million years ago on a\n> couple of extinct Unixes... I might even be able to remember how it\n> works. This is not a problem for committing the PSQL_WATCH_PAGER\n> patch, I just mentioned it here because I thought that others might\n> try it out on a Mac and be confused.\n>\n\nThank you.\n\nThere will probably not be an issue inside ncurses - the most complex part\nof get_event is polling of input sources - the tty and some others. pspg\nshould not stop there on tty reading.\n\nRegards\n\nPavel\n\n", "msg_date": "Mon, 22 Mar 2021 05:09:43 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Mon, Mar 22, 2021 at 5:10 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> probably there will not be an issue inside ncurses - the most complex part of get_event is polling of input sources - tty and some other. The pspg should not to stop there on tty reading.\n\nThe problem is that Apple's /dev/tty device is defective, and doesn't\nwork in poll(). 
It always returns immediately with revents=POLLNVAL,\n> but pspg assumes that data is ready and tries to read the keyboard and\n> then blocks until I press a key. This seems to fix it:\n>\n> +#ifndef __APPLE__\n> + /* macOS can't use poll() on /dev/tty */\n> state.tty = fopen(\"/dev/tty\", \"r+\");\n> +#endif\n> if (!state.tty)\n> state.tty = fopen(ttyname(fileno(stdout)), \"r\");\n>\n\nit is hell.\n\nPlease, can you verify this fix?\n\n\n> A minor problem is that on macOS, _GNU_SOURCE doesn't seem to imply\n> NCURSES_WIDECHAR, so I suspect Unicode will be broken unless you\n> manually add -DNCURSES_WIDECHAR=1, though I didn't check.\n>\n\nIt is possible -\n\ncan you run \"pspg --version\"\n\n[pavel@localhost pspg-master]$ ./pspg --version\npspg-4.4.0\nwith readline (version: 0x0801)\nwith integrated menu\nncurses version: 6.2, patch: 20200222\nncurses with wide char support\nncurses widechar num: 1\nwchar_t width: 4, max: 2147483647\nwith inotify support\n\nThis is not too critical for pspg, because all commands are basic ascii\nchars. Strings are taken by readline library or by wgetnstr function\n\n", "msg_date": "Mon, 22 Mar 2021 13:52:47 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Tue, Mar 23, 2021 at 1:53 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> po 22. 3. 2021 v 13:13 odesílatel Thomas Munro <thomas.munro@gmail.com> napsal:\n>> The problem is that Apple's /dev/tty device is defective, and doesn't\n>> work in poll().  It always returns immediately with revents=POLLNVAL,\n>> but pspg assumes that data is ready and tries to read the keyboard and\n>> then blocks until I press a key.  This seems to fix it:\n>>\n>> +#ifndef __APPLE__\n>> +               /* macOS can't use poll() on /dev/tty */\n>>                 state.tty = fopen(\"/dev/tty\", \"r+\");\n>> +#endif\n>>                 if (!state.tty)\n>>                         state.tty = fopen(ttyname(fileno(stdout)), \"r\");\n>\n>\n> it is hell.\n\nHeh.  I've recently spent many, many hours trying to make AIO work on\nmacOS, and nothing surprises me anymore. 
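The trap described here - treating any "successful" poll() return as readable when revents may actually carry POLLNVAL - can be guarded against explicitly. A small illustrative checker (an assumed helper, not pspg's actual code):

```c
#include <poll.h>

/*
 * Return 1 if fd has input ready, 0 on timeout, and -1 if the fd
 * cannot be polled at all (POLLNVAL/POLLERR) -- the case Apple's
 * /dev/tty triggers, which must not be mistaken for "readable".
 */
static int
fd_input_ready(int fd, int timeout_ms)
{
	struct pollfd pfd;
	int			rc;

	pfd.fd = fd;
	pfd.events = POLLIN;
	pfd.revents = 0;

	rc = poll(&pfd, 1, timeout_ms);
	if (rc <= 0)
		return rc;				/* 0 = timeout, -1 = poll() error */
	if (pfd.revents & (POLLNVAL | POLLERR))
		return -1;				/* "ready" but unusable: do not read */
	return (pfd.revents & (POLLIN | POLLHUP)) ? 1 : 0;
}
```

A caller that reads the keyboard only when this returns 1 would block correctly on a pipe or a well-behaved tty, and would skip reading on a descriptor that reports POLLNVAL instead of hanging on it.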
BTW I found something from\nyears ago on the 'net that fits with my observation about /dev/tty:\n\nhttps://www.mail-archive.com/bug-gnulib@gnu.org/msg00296.html\n\nCurious, which other OS did you put that fallback case in for? I'm a\nlittle confused about why it works, so I'm not sure if it's the best\npossible change, but I'm not planning to dig any further now, too many\npatches, not enough time :-)\n\n> Please, can you verify this fix?\n\nIt works perfectly for me on a macOS 11.2 system with that change,\nrepainting the screen exactly when it should. I'm happy about that\nbecause (1) it means I can confirm that the proposed change to psql is\nworking correctly on the 3 Unixes I have access to, and (2) I am sure\nthat a lot of Mac users will appreciate being able to use super-duper\n\\watch mode when this ships (a high percentage of PostgreSQL users I\nknow use a Mac as their client machine).\n\n>> A minor problem is that on macOS, _GNU_SOURCE doesn't seem to imply\n>> NCURSES_WIDECHAR, so I suspect Unicode will be broken unless you\n>> manually add -DNCURSES_WIDECHAR=1, though I didn't check.\n>\n> It is possible -\n>\n> can you run \"pspg --version\"\n\nLooks like I misunderstood: it is showing \"with wide char support\",\nit's just that the \"num\" is 0 rather than 1. I'm not planning to\ninvestigate that any further now, but I checked that it can show the\noutput of SELECT 'špeĉiäl chârãçtérs' correctly.\n\n\n", "msg_date": "Tue, 23 Mar 2021 10:07:07 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Sun, Mar 21, 2021 at 11:43 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> so 20. 3. 2021 v 23:45 odesílatel Thomas Munro <thomas.munro@gmail.com> napsal:\n>> Thoughts? 
I put my changes into a separate patch for clarity, but\n>> they need some more tidying up.\n>\n> yes, your solution is much better.\n\nHmm, there was a problem with it though: it blocked ^C while running\nthe query, which is bad. I fixed that. I did some polishing of the\ncode and some editing on the documentation and comments. I disabled\nthe feature completely on Windows, because it seems unlikely that\nwe'll be able to know if it even works, in this cycle.\n\n- output = PageOutput(158, pager ? &(pset.popt.topt) : NULL);\n+ output = PageOutput(160, pager ? &(pset.popt.topt) : NULL);\n\nWhat is that change for?", "msg_date": "Tue, 23 Mar 2021 12:35:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "po 22. 3. 2021 v 22:07 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Tue, Mar 23, 2021 at 1:53 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > po 22. 3. 2021 v 13:13 odesílatel Thomas Munro <thomas.munro@gmail.com>\n> napsal:\n> >> The problem is that Apple's /dev/tty device is defective, and doesn't\n> >> work in poll(). It always returns immediately with revents=POLLNVAL,\n> >> but pspg assumes that data is ready and tries to read the keyboard and\n> >> then blocks until I press a key. This seems to fix it:\n> >>\n> >> +#ifndef __APPLE__\n> >> + /* macOS can't use poll() on /dev/tty */\n> >> state.tty = fopen(\"/dev/tty\", \"r+\");\n> >> +#endif\n> >> if (!state.tty)\n> >> state.tty = fopen(ttyname(fileno(stdout)), \"r\");\n> >\n> >\n> > it is hell.\n>\n> Heh. I've recently spent many, many hours trying to make AIO work on\n> macOS, and nothing surprises me anymore. BTW I found something from\n> years ago on the 'net that fits with my observation about /dev/tty:\n>\n> https://www.mail-archive.com/bug-gnulib@gnu.org/msg00296.html\n>\n> Curious, which other OS did you put that fallback case in for? 
I'm a\n> little confused about why it works, so I'm not sure if it's the best\n> possible change, but I'm not planning to dig any further now, too many\n> patches, not enough time :-)\n>\n\nUnfortunately, I have no exact evidence. My original implementation was\nvery primitive:\n\nif (freopen(\"/dev/tty\", \"rw\", stdin) == NULL)\n{\nfprintf(stderr, \"cannot to reopen stdin: %s\\n\", strerror(errno));\nexit(1);\n}\n\nSome people reported problems, but I don't know if these issues were related\nto the tty or to freopen.\n\nIn some discussion I found a workaround reusing stdout and stderr -\nand that works well, but I have no feedback about these fallback\ncases. And because this strategy is used by the \"less\" pager too, I expect\nthat it is a common and widely used workaround.\n\nI remember there were problems with cygwin and some unix platforms (but\nmaybe very old ones) with deeper nesting - something like\n\nscreen -> psql -> pspg.\n\nRun directly, pspg was working, but it didn't work from psql. Probably somewhere\nthe implementation of pty was not fully correct.\n\n\n\n>\n> > Please, can you verify this fix?\n>\n> It works perfectly for me on a macOS 11.2 system with that change,\n> repainting the screen exactly when it should. I'm happy about that\n> because (1) it means I can confirm that the proposed change to psql is\n> working correctly on the 3 Unixes I have access to, and (2) I am sure\n> that a lot of Mac users will appreciate being able to use super-duper\n> \\watch mode when this ships (a high percentage of PostgreSQL users I\n> know use a Mac as their client machine).\n>\n\nThank you for verification. 
I fixed it in master branch\n\n\n> >> A minor problem is that on macOS, _GNU_SOURCE doesn't seem to imply\n> >> NCURSES_WIDECHAR, so I suspect Unicode will be broken unless you\n> >> manually add -DNCURSES_WIDECHAR=1, though I didn't check.\n> >\n> > It is possible -\n> >\n> > can you run \"pspg --version\"\n>\n> Looks like I misunderstood: it is showing \"with wide char support\",\n> it's just that the \"num\" is 0 rather than 1. I'm not planning to\n> investigate that any further now, but I checked that it can show the\n> output of SELECT 'špeĉiäl chârãçtérs' correctly.\n>\n\nIt is the job of ncursesw - pspg sends data to ncurses in original format\n- it does only some game with attributes.\n\n", "msg_date": "Tue, 23 Mar 2021 05:25:57 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "út 23. 3. 2021 v 0:35 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Sun, Mar 21, 2021 at 11:43 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > so 20. 3. 2021 v 23:45 odesílatel Thomas Munro <thomas.munro@gmail.com>\n> napsal:\n> >> Thoughts? I put my changes into a separate patch for clarity, but\n> >> they need some more tidying up.\n> >\n> > yes, your solution is much better.\n>\n> Hmm, there was a problem with it though: it blocked ^C while running\n> the query, which is bad. I fixed that. I did some polishing of the\n> code and some editing on the documentation and comments. I disabled\n> the feature completely on Windows, because it seems unlikely that\n> we'll be able to know if it even works, in this cycle.\n>\n> - output = PageOutput(158, pager ? &(pset.popt.topt) : NULL);\n> + output = PageOutput(160, pager ? &(pset.popt.topt) : NULL);\n>\n> What is that change for?\n>\n\nThis is correct - this is the number of printed rows - it is used for\ndecisions about using a pager for help. There are two new rows, and the\nnumber is correctly +2\n\nPavel\n\n", "msg_date": "Tue, 23 Mar 2021 06:30:49 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "po 22. 3. 2021 v 13:13 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Mon, Mar 22, 2021 at 5:10 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > probably there will not be an issue inside ncurses - the most complex\n> part of get_event is polling of input sources - tty and some other. The\n> pspg should not to stop there on tty reading.\n>\n> The problem is that Apple's /dev/tty device is defective, and doesn't\n> work in poll(). It always returns immediately with revents=POLLNVAL,\n> but pspg assumes that data is ready and tries to read the keyboard and\n> then blocks until I press a key. 
This seems to fix it:\n>\n> +#ifndef __APPLE__\n> + /* macOS can't use poll() on /dev/tty */\n> state.tty = fopen(\"/dev/tty\", \"r+\");\n> +#endif\n> if (!state.tty)\n> state.tty = fopen(ttyname(fileno(stdout)), \"r\");\n>\n> A minor problem is that on macOS, _GNU_SOURCE doesn't seem to imply\n> NCURSES_WIDECHAR, so I suspect Unicode will be broken unless you\n> manually add -DNCURSES_WIDECHAR=1, though I didn't check.\n>\n\nFor the record, this issue is fixed in pspg 4.5.0.\n\nRegards\n\nPavel\n\n", "msg_date": "Tue, 23 Mar 2021 18:09:48 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Here's a rebase, due to a conflict with 3a513067 \"psql: Show all query\nresults by default\" which moved a few things around making it harder\nto use the pager for the right scope.  Lacking time, I came up with\nthis change to PSQLexecWatch():\n\n+ if (printQueryFout)\n+ {\n+ restoreQueryFout = pset.queryFout;\n+ pset.queryFout = printQueryFout;\n+ }\n+\n SetCancelConn(pset.db);\n res = SendQueryAndProcessResults(query, &elapsed_msec, true);\n ResetCancelConn();\n\n fflush(pset.queryFout);\n\n+ if (restoreQueryFout)\n+ pset.queryFout = restoreQueryFout;\n+\n\nIf someone has a tidier way to factor this, I'm keen to hear it.  I'd\nlike to push this today.", "msg_date": "Thu, 8 Apr 2021 11:37:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Hi\n\nčt 8. 4. 2021 v 1:38 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> Here's a rebase, due to a conflict with 3a513067 \"psql: Show all query\n> results by default\" which moved a few things around making it harder\n> to use the pager for the right scope. Lacking time, I came up with\n> this change to PSQLexecWatch():\n>\n> + if (printQueryFout)\n> + {\n> + restoreQueryFout = pset.queryFout;\n> + pset.queryFout = printQueryFout;\n> + }\n> +\n> SetCancelConn(pset.db);\n> res = SendQueryAndProcessResults(query, &elapsed_msec, true);\n> ResetCancelConn();\n>\n> fflush(pset.queryFout);\n>\n> + if (restoreQueryFout)\n> + pset.queryFout = restoreQueryFout;\n> +\n>\n> If someone has a tidier way to factor this, I'm keen to hear it. 
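For what it's worth, one conceivable way to tidy that save/restore dance is a tiny swap helper. This is only a sketch of the pattern, not psql code - a plain file-scope global stands in for pset.queryFout:

```c
#include <stdio.h>

/* Stand-in for psql's pset.queryFout global. */
static FILE *query_fout;

/*
 * Temporarily point query output at "dest" (e.g. a pager pipe) and
 * return the previous stream so the caller can restore it afterwards.
 * A NULL dest leaves the current stream in place.
 */
static FILE *
swap_query_fout(FILE *dest)
{
	FILE	   *prev = query_fout;

	if (dest != NULL)
		query_fout = dest;
	return prev;
}
```

The caller then becomes symmetric: save with one swap_query_fout(pager_pipe) before running the query, restore with swap_query_fout(prev) after, with the NULL case folding away the "is there a pager?" branch.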
I'd\n> like to push this today.\n>\n\nhere is a rebase of Thomas's implementation\n\nRegards\n\nPavel", "msg_date": "Wed, 21 Apr 2021 08:32:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Wed, Apr 21, 2021 at 6:33 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> here is an rebase of Thomas's implementation\n\nThanks.  I finished up not committing that one for 14 because I wasn't\nsure about the way to rebase it on top of 3a513067 (now reverted);\nthat \"restore\" stuff seemed a bit weird.  Let's try again in v15 CF1!\n\n\n", "msg_date": "Wed, 21 Apr 2021 18:48:52 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "st 21. 4. 2021 v 8:49 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Wed, Apr 21, 2021 at 6:33 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > here is an rebase of Thomas's implementation\n>\n> Thanks. I finished up not committing that one for 14 because I wasn't\n> sure about the way to rebase it on top of 3a513067 (now reverted);\n> that \"restore\" stuff seemed a bit weird. Let's try again in v15 CF1!\n>\n\nUnderstand. Thank you\n\nPavel\n\n", "msg_date": "Wed, 21 Apr 2021 08:52:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Hi\n\nst 21. 4. 2021 v 8:52 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> st 21. 4. 2021 v 8:49 odesílatel Thomas Munro <thomas.munro@gmail.com>\n> napsal:\n>\n>> On Wed, Apr 21, 2021 at 6:33 PM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> > here is an rebase of Thomas's implementation\n>>\n>> Thanks. I finished up not committing that one for 14 because I wasn't\n>> sure about the way to rebase it on top of 3a513067 (now reverted);\n>> that \"restore\" stuff seemed a bit weird. Let's try again in v15 CF1!\n>>\n>\n> Understand. Thank you\n>\n\nrebase\n\nRegards\n\nPavel\n\n\n> Pavel\n>\n>", "msg_date": "Wed, 12 May 2021 12:25:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "st 12. 5. 
", "msg_date": "Wed, 12 May 2021 14:14:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Wed, May 12, 2021 at 5:45 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> st 12. 5. 2021 v 12:25 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>>\n>> Hi\n>>\n>> st 21. 4. 2021 v 8:52 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>>>\n>>>\n>>>\n>>> st 21. 4. 2021 v 8:49 odesílatel Thomas Munro <thomas.munro@gmail.com> napsal:\n>>>>\n>>>> On Wed, Apr 21, 2021 at 6:33 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>>> > here is an rebase of Thomas's implementation\n>>>>\n>>>> Thanks. I finished up not committing that one for 14 because I wasn't\n>>>> sure about the way to rebase it on top of 3a513067 (now reverted);\n>>>> that \"restore\" stuff seemed a bit weird. Let's try again in v15 CF1!\n>>>\n>>>\n>>> Understand. 
Thank you\n>>\n>>\n>> rebase\n>\n>\n> looks so with your patch psql doesn't work on ms\n>\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.134648\n\nI am changing the status to \"Waiting on Author\" as Pavel's comments\nare not addressed.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 10 Jul 2021 18:48:28 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Sun, Jul 11, 2021 at 1:18 AM vignesh C <vignesh21@gmail.com> wrote:\n> On Wed, May 12, 2021 at 5:45 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > looks so with your patch psql doesn't work on ms\n\nHere's a fix for Windows. The pqsignal() calls are #ifdef'd out. I\nalso removed a few lines that were added after commit 3a513067 but\naren't needed anymore after fae65629.", "msg_date": "Mon, 12 Jul 2021 10:58:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Mon, Jul 12, 2021 at 4:29 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sun, Jul 11, 2021 at 1:18 AM vignesh C <vignesh21@gmail.com> wrote:\n> > On Wed, May 12, 2021 at 5:45 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > > looks so with your patch psql doesn't work on ms\n>\n> Here's a fix for Windows. The pqsignal() calls are #ifdef'd out. I\n> also removed a few lines that were added after commit 3a513067 but\n> aren't needed anymore after fae65629.\n\nThanks for fixing the comments, CFbot also passes for the same. I have\nchanged the status back to \"Ready for Committer\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 12 Jul 2021 21:41:51 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "po 12. 7. 
2021 v 18:12 odesílatel vignesh C <vignesh21@gmail.com> napsal:\n\n> On Mon, Jul 12, 2021 at 4:29 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >\n> > On Sun, Jul 11, 2021 at 1:18 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > On Wed, May 12, 2021 at 5:45 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > > > looks so with your patch psql doesn't work on ms\n> >\n> > Here's a fix for Windows. The pqsignal() calls are #ifdef'd out. I\n> > also removed a few lines that were added after commit 3a513067 but\n> > aren't needed anymore after fae65629.\n>\n> Thanks for fixing the comments, CFbot also passes for the same. I have\n> changed the status back to \"Ready for Committer\".\n>\n\nI tested this version with the last release and with a developing version\nof pspg, and it works very well.\n\nRegards\n\nPavel\n\n\n> Regards,\n> Vignesh\n>", "msg_date": "Mon, 12 Jul 2021 22:20:20 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Tue, Jul 13, 2021 at 8:20 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> po 12. 7. 
2021 v 18:12 odesílatel vignesh C <vignesh21@gmail.com> napsal:\n>> Thanks for fixing the comments, CFbot also passes for the same. I have\n>> changed the status back to \"Ready for Committer\".\n>\n> I tested this version with the last release and with a developing version of pspg, and it works very well.\n\nPushed, after retesting on macOS (with the fixed pspg that has by now\narrived in MacPorts), FreeBSD and Linux. Thanks! I'm using this to\nmonitor system views when demoing new features in development, it's\nnice. Of course, I don't like the default theme, it's a bit too\nMS-DOS/Norton for my taste, but the quieter themes are good :-)\n\n\n", "msg_date": "Tue, 13 Jul 2021 12:01:15 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "út 13. 7. 2021 v 2:01 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Tue, Jul 13, 2021 at 8:20 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > po 12. 7. 2021 v 18:12 odesílatel vignesh C <vignesh21@gmail.com>\n> napsal:\n> >> Thanks for fixing the comments, CFbot also passes for the same. I have\n> >> changed the status back to \"Ready for Committer\".\n> >\n> > I tested this version with the last release and with a developing\n> version of pspg, and it works very well.\n>\n> Pushed, after retesting on macOS (with the fixed pspg that has by now\n> arrived in MacPorts), FreeBSD and Linux. Thanks! I'm using this to\n> monitor system views when demoing new features in development, it's\n> nice. 
Of course, I don't like the default theme, it's a bit too\n> MS-DOS/Norton for my taste, but the quieter themes are good :-)\n>\n\nI have a different feeling - I cannot write with an editor without a blue\nbackground from my beginning with Turbo Pascal 5.5 :-), and although I\nspent a lot of hours on creating and tuning themes for the pspg, I still\nprefer the mc theme, and some themes look really nice. But I cannot work\nwith it.\n\nThank you very much for your work - I hope so this will be an interesting\nfeature of psql\n\nPavel", "msg_date": "Tue, 13 Jul 2021 06:34:33 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Pushed, after retesting on macOS (with the fixed pspg that has by now\n> arrived in MacPorts), FreeBSD and Linux. Thanks!\n\nAfter playing with this along the way to fixing the sigwait issues,\nI have a gripe/suggestion. If I hit control-C while the thing\nis waiting between queries, eg\n\nregression=# select now() \\watch\nTue Jul 13 13:44:44 2021 (every 2s)\n\n now \n-------------------------------\n 2021-07-13 13:44:44.396565-04\n(1 row)\n\nTue Jul 13 13:44:46 2021 (every 2s)\n\n now \n-------------------------------\n 2021-07-13 13:44:46.396572-04\n(1 row)\n\n^Cregression=# \n\nthen as you can see I get nothing but the \"^C\" echo before the next\npsql prompt. The problem with this is that now libreadline is\nmisinformed about the cursor position, messing up any editing I\nmight try to do on the next line of input. So I think it would\nbe a good idea to have some explicit final output when the \\watch\ncommand terminates, along the line of\n\n...\nTue Jul 13 13:44:46 2021 (every 2s)\n\n now \n-------------------------------\n 2021-07-13 13:44:46.396572-04\n(1 row)\n\n^C\\watch cancelled\nregression=# \n\nThis strikes me as a usability improvement even without the\nreadline-confusion angle.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Jul 2021 13:50:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "út 13. 7. 
2021 v 19:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Pushed, after retesting on macOS (with the fixed pspg that has by now\n> > arrived in MacPorts), FreeBSD and Linux. Thanks!\n>\n> After playing with this along the way to fixing the sigwait issues,\n> I have a gripe/suggestion. If I hit control-C while the thing\n> is waiting between queries, eg\n>\n> regression=# select now() \\watch\n> Tue Jul 13 13:44:44 2021 (every 2s)\n>\n> now\n> -------------------------------\n> 2021-07-13 13:44:44.396565-04\n> (1 row)\n>\n> Tue Jul 13 13:44:46 2021 (every 2s)\n>\n> now\n> -------------------------------\n> 2021-07-13 13:44:46.396572-04\n> (1 row)\n>\n> ^Cregression=#\n>\n> then as you can see I get nothing but the \"^C\" echo before the next\n> psql prompt. The problem with this is that now libreadline is\n> misinformed about the cursor position, messing up any editing I\n> might try to do on the next line of input. So I think it would\n> be a good idea to have some explicit final output when the \\watch\n> command terminates, along the line of\n>\n> ...\n> Tue Jul 13 13:44:46 2021 (every 2s)\n>\n> now\n> -------------------------------\n> 2021-07-13 13:44:46.396572-04\n> (1 row)\n>\n> ^C\\watch cancelled\n> regression=#\n>\n> This strikes me as a usability improvement even without the\n> readline-confusion angle.\n>\n>\nI'll look at this issue.\n\nPavel\n\n\n\n\n> regards, tom lane\n>", "msg_date": "Tue, 13 Jul 2021 20:05:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Wed, Jul 14, 2021 at 6:06 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> út 13. 7. 2021 v 19:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> After playing with this along the way to fixing the sigwait issues,\n>> I have a gripe/suggestion. 
If I hit control-C while the thing\n>> is waiting between queries, eg\n>>\n>> regression=# select now() \\watch\n>> Tue Jul 13 13:44:44 2021 (every 2s)\n>>\n>> now\n>> -------------------------------\n>> 2021-07-13 13:44:44.396565-04\n>> (1 row)\n>>\n>> Tue Jul 13 13:44:46 2021 (every 2s)\n>>\n>> now\n>> -------------------------------\n>> 2021-07-13 13:44:46.396572-04\n>> (1 row)\n>>\n>> ^Cregression=#\n>>\n>> then as you can see I get nothing but the \"^C\" echo before the next\n>> psql prompt. The problem with this is that now libreadline is\n>> misinformed about the cursor position, messing up any editing I\n>> might try to do on the next line of input. So I think it would\n>> be a good idea to have some explicit final output when the \\watch\n>> command terminates, along the line of\n>>\n>> ...\n>> Tue Jul 13 13:44:46 2021 (every 2s)\n>>\n>> now\n>> -------------------------------\n>> 2021-07-13 13:44:46.396572-04\n>> (1 row)\n>>\n>> ^C\\watch cancelled\n>> regression=#\n>>\n>> This strikes me as a usability improvement even without the\n>> readline-confusion angle.\n>\n> I'll look at this issue.\n\nHi Pavel,\n\nDo you have a patch for this?\n\n\n", "msg_date": "Thu, 24 Mar 2022 23:04:37 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "čt 24. 3. 2022 v 11:05 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Wed, Jul 14, 2021 at 6:06 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > út 13. 7. 2021 v 19:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> After playing with this along the way to fixing the sigwait issues,\n> >> I have a gripe/suggestion. 
If I hit control-C while the thing\n> >> is waiting between queries, eg\n> >>\n> >> regression=# select now() \\watch\n> >> Tue Jul 13 13:44:44 2021 (every 2s)\n> >>\n> >> now\n> >> -------------------------------\n> >> 2021-07-13 13:44:44.396565-04\n> >> (1 row)\n> >>\n> >> Tue Jul 13 13:44:46 2021 (every 2s)\n> >>\n> >> now\n> >> -------------------------------\n> >> 2021-07-13 13:44:46.396572-04\n> >> (1 row)\n> >>\n> >> ^Cregression=#\n> >>\n> >> then as you can see I get nothing but the \"^C\" echo before the next\n> >> psql prompt. The problem with this is that now libreadline is\n> >> misinformed about the cursor position, messing up any editing I\n> >> might try to do on the next line of input. So I think it would\n> >> be a good idea to have some explicit final output when the \\watch\n> >> command terminates, along the line of\n> >>\n> >> ...\n> >> Tue Jul 13 13:44:46 2021 (every 2s)\n> >>\n> >> now\n> >> -------------------------------\n> >> 2021-07-13 13:44:46.396572-04\n> >> (1 row)\n> >>\n> >> ^C\\watch cancelled\n> >> regression=#\n> >>\n> >> This strikes me as a usability improvement even without the\n> >> readline-confusion angle.\n> >\n> > I'll look at this issue.\n>\n> Hi Pavel,\n>\n> Do you have a patch for this?\n>\n\nNot yet. I forgot about this issue.\n\nRegards\n\nPavel", "msg_date": "Thu, 24 Mar 2022 11:21:04 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Hi\n\nút 13. 7. 2021 v 19:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Pushed, after retesting on macOS (with the fixed pspg that has by now\n> > arrived in MacPorts), FreeBSD and Linux. Thanks!\n>\n> After playing with this along the way to fixing the sigwait issues,\n> I have a gripe/suggestion. 
If I hit control-C while the thing\n> is waiting between queries, eg\n>\n> regression=# select now() \\watch\n> Tue Jul 13 13:44:44 2021 (every 2s)\n>\n> now\n> -------------------------------\n> 2021-07-13 13:44:44.396565-04\n> (1 row)\n>\n> Tue Jul 13 13:44:46 2021 (every 2s)\n>\n> now\n> -------------------------------\n> 2021-07-13 13:44:46.396572-04\n> (1 row)\n>\n> ^Cregression=#\n>\n> then as you can see I get nothing but the \"^C\" echo before the next\n> psql prompt. The problem with this is that now libreadline is\n> misinformed about the cursor position, messing up any editing I\n> might try to do on the next line of input. So I think it would\n> be a good idea to have some explicit final output when the \\watch\n> command terminates, along the line of\n>\n> ...\n> Tue Jul 13 13:44:46 2021 (every 2s)\n>\n> now\n> -------------------------------\n> 2021-07-13 13:44:46.396572-04\n> (1 row)\n>\n> ^C\\watch cancelled\n> regression=#\n>\n> This strikes me as a usability improvement even without the\n> readline-confusion angle.\n>\n\nhere is an patch\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>", "msg_date": "Mon, 9 May 2022 09:07:06 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Mon, May 9, 2022 at 7:07 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> út 13. 7. 2021 v 19:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> ^Cregression=#\n>>\n>> then as you can see I get nothing but the \"^C\" echo before the next\n>> psql prompt. The problem with this is that now libreadline is\n>> misinformed about the cursor position, messing up any editing I\n>> might try to do on the next line of input. 
So I think it would\n>> be a good idea to have some explicit final output when the \\watch\n>> command terminates, along the line of\n>>\n>> ...\n>> Tue Jul 13 13:44:46 2021 (every 2s)\n>>\n>> now\n>> -------------------------------\n>> 2021-07-13 13:44:46.396572-04\n>> (1 row)\n>>\n>> ^C\\watch cancelled\n>> regression=#\n>>\n>> This strikes me as a usability improvement even without the\n>> readline-confusion angle.\n>\n> here is an patch\n\nI played with this. On a libedit build (tested on my Mac), an easy\nway to see corruption is to run eg SELECT;, then \\watch 1, then ^C,\nthen up-arrow to see the previous command clobber the wrong columns.\nOn a libreadline build (tested on my Debian box), that simple test\ndoesn't fail in the same way. Though there may be some other way to\nmake it misbehave that would take me longer to find, it's enough for\nme that libedit is befuddled by what we're doing.\n\nDo we really need the extra text? What about just \\n, so you get:\n\npostgres=# \\watch 1\n...blah blah...\n^C\npostgres=#\n\nThis affects all release branches too. Should we bother to fix this\nthere? For them, I think the fix is just:\n\ndiff --git a/src/bin/psql/command.c b/src/bin/psql/command.c\nindex d1ee795cb6..3a88d5d6c4 100644\n--- a/src/bin/psql/command.c\n+++ b/src/bin/psql/command.c\n@@ -4992,6 +4992,9 @@ do_watch(PQExpBuffer query_buf, double sleep)\n sigint_interrupt_enabled = false;\n }\n\n+ fprintf(pset.queryFout, \"\\n\");\n+ fflush(pset.queryFout);\n+\n pg_free(title);\n return (res >= 0);\n }", "msg_date": "Tue, 7 Jun 2022 15:12:49 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, May 9, 2022 at 7:07 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> út 13. 7. 
2021 v 19:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>>> ^C\\watch cancelled\n>>> regression=#\n\n> Do we really need the extra text? What about just \\n, so you get:\n\n> postgres=# \\watch 1\n> ...blah blah...\n> ^C\n> postgres=#\n\nFine by me.\n\n> This affects all release branches too. Should we bother to fix this\n> there? For them, I think the fix is just:\n\nIf we're doing something as nonintrusive as just adding a newline,\nit'd probably be OK to backpatch.\n\nThe code needs a comment about why it's emitting a newline, though.\nIn particular, it had better explain why that should be conditional\non !pagerpipe, because that makes no sense to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Jun 2022 23:23:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "On Tue, Jun 7, 2022 at 3:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The code needs a comment about why it's emitting a newline, though.\n> In particular, it had better explain why that should be conditional\n> on !pagerpipe, because that makes no sense to me.\n\nYeah. OK, here's my take:\n\n+ /*\n+ * If the terminal driver echoed \"^C\",\nlibedit/libreadline might be\n+ * confused about the cursor position. Therefore,\ninject a newline\n+ * before the next prompt is displayed. We only do\nthis when not using\n+ * a pager, because pagers are expected to restore the\nscreen to a sane\n+ * state on exit.\n+ */\n\nAFAIK pagers conventionally use something like termcap ti/te[1] to\nrestore the screen, or equivalents in tinfo etc (likely via curses).\nIf we were to inject an extra newline we'd just have a blank line for\nnothing. 
I suppose there could be a hypothetical pager that doesn't\nfollow that convention, and in fact both less and pspg have a -X\noption to preserve last output, but in any case I expect that pagers\ndisable echoing, so I don't think the ^C will make it to the screen,\nand furthermore ^C isn't used for exit anyway. Rather than speculate\nabout the precise details, I just said \"... sane state on exit\".\nPavel, do you agree?\n\nHere's how it looks after I enter and then exit Pavel's streaming pager:\n\n$ PSQL_WATCH_PAGER='pspg --stream' ~/install/bin/psql postgres\npsql (15beta1)\nType \"help\" for help.\n\npostgres=# select;\n--\n(1 row)\n\npostgres=# \\watch 1\npostgres=#\n\nFWIW it's the same with PSQL_WATCH_PAGER='less'.\n\n[1] https://www.gnu.org/software/termutils/manual/termcap-1.3/html_node/termcap_39.html", "msg_date": "Tue, 7 Jun 2022 16:50:06 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - use pager for \\watch command" }, { "msg_contents": "út 7. 6. 2022 v 6:50 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Tue, Jun 7, 2022 at 3:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The code needs a comment about why it's emitting a newline, though.\n> > In particular, it had better explain why that should be conditional\n> > on !pagerpipe, because that makes no sense to me.\n>\n> Yeah. OK, here's my take:\n>\n> + /*\n> + * If the terminal driver echoed \"^C\",\n> libedit/libreadline might be\n> + * confused about the cursor position. Therefore,\n> inject a newline\n> + * before the next prompt is displayed. We only do\n> this when not using\n> + * a pager, because pagers are expected to restore the\n> screen to a sane\n> + * state on exit.\n> + */\n>\n> AFAIK pagers conventionally use something like termcap ti/te[1] to\n> restore the screen, or equivalents in tinfo etc (likely via curses).\n> If we were to inject an extra newline we'd just have a blank line for\n> nothing. 
I suppose there could be a hypothetical pager that doesn't\n> follow that convention, and in fact both less and pspg have a -X\n> option to preserve last output, but in any case I expect that pagers\n> disable echoing, so I don't think the ^C will make it to the screen,\n> and furthermore ^C isn't used for exit anyway. Rather than speculate\n> about the precise details, I just said \"... sane state on exit\".\n> Pavel, do you agree?\n>\n\nApplications designed to be used as pager are usually careful about the\nfinal cursor position. Without it, there can be no wanted artefacts. pspg\nshould work in pgcli, which is a more sensitive environment than psql.\n\nI think modern pagers like less or pspg will work in all modes correctly.\nThere can be some legacy pagers like \"pg\" or very old implementations of\n\"more\". But we don't consider it probably (more just in comment).\n\nRegards\n\nPavel\n\n\n\n> Here's how it looks after I enter and then exit Pavel's streaming pager:\n>\n> $ PSQL_WATCH_PAGER='pspg --stream' ~/install/bin/psql postgres\n> psql (15beta1)\n> Type \"help\" for help.\n>\n> postgres=# select;\n> --\n> (1 row)\n>\n> postgres=# \\watch 1\n> postgres=#\n>\n> FWIW it's the same with PSQL_WATCH_PAGER='less'.\n>\n> [1]\n> https://www.gnu.org/software/termutils/manual/termcap-1.3/html_node/termcap_39.html\n>", "msg_date": "Tue, 7 Jun 2022 07:26:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - use pager for \\watch command" } ]
[ { "msg_contents": "There was a previous thread[1], but I think it needs some wider\ndiscussion.\n\nI brought up an issue where GCC in combination with FORTIFY_SOURCE[2]\ncauses a perf regression for logical tapes after introducing\nLogicalTapeSetExtend()[3]. Unfortunately, FORTIFY_SOURCE is used by\ndefault on ubuntu. I have not observed the problem with clang.\n\nThere is no reason why the change should trigger the regression, but it\ndoes. The slowdown is due to GCC switching to an inlined version of\nmemcpy() for LogicalTapeWrite() at logtape.c:768. The change[3] seems\nto have little if anything to do with that.\n\nGCC's Object Size Checking[4] doc says:\n\n \"There are built-in functions added for many common\n string operation functions, e.g., for memcpy \n __builtin___memcpy_chk built-in is provided. This\n built-in has an additional last argument, which is\n the number of bytes remaining in the object the dest\n argument points to or (size_t) -1 if the size is not\n known. The built-in functions are optimized into the\n normal string functions like memcpy if the last\n argument is (size_t) -1 or if it is known at compile\n time that the destination object will not be\n overflowed...\"\n\nIn other words, if GCC knows the size of the object it tries to either\nverify at compile time that it will never overflow, or it inserts a\nruntime check. But if it doesn't know the size of the object, there's\nnothing it can do so it just uses memcpy() like normal.\n\nKnowing the destination buffer size at compile time would be impossible\n(before or after my change) because palloc() doesn't have the\nalloc_size attribute[5] specified. Even if it is specified (which I\ntried), and if the compiler was smart enough (which it's not), it could\nstill only come up with a maximum size because the offset changes at\nruntime. 
Regardless, I tried printing out the results of:\n __builtin_object_size (lt->buffer + lt->pos, 0)\nand the result is always -1 (unknown).\n\nI have attached a workaround patch which restores the performance, and\nit's isolated to logtape.c, but it's ugly (and not a little bit).\n\nThe questions are:\n\n1. Is my analysis correct?\n2. What is the scale of this problem? What about other platforms or\ncompilers? Are there other cases in PostgreSQL that might suffer from\nthe use of FORTIFY_SOURCE?\n3. Even if this is the compiler's fault, should we still fix it?\n4. Does the attached fix have any dangers of regressing on other\ncompilers/platforms?\n5. Does anyone have a suggestion for a better fix?\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://postgr.es/m/91ca648cfd1f99bf07981487a7d81a1ec926caad.camel@j-davis.com\n[2] \nhttps://fedoraproject.org/wiki/Security_Features?rd=Security/Features#Compile_Time_Buffer_Checks_.28FORTIFY_SOURCE.29\n[3] \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=24d85952\n[4] https://gcc.gnu.org/onlinedocs/gcc/Object-Size-Checking.html\n[5] \nhttps://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-alloc_005fsize-function-attribute", "msg_date": "Sun, 19 Apr 2020 15:07:22 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "On Sun, Apr 19, 2020 at 3:07 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> 1. Is my analysis correct?\n> 2. What is the scale of this problem? What about other platforms or\n> compilers? Are there other cases in PostgreSQL that might suffer from\n> the use of FORTIFY_SOURCE?\n> 3. 
Even if this is the compiler's fault, should we still fix it?\n\nThe precedent set by MemSetAligned() is that sometimes we see the code\ngenerated by very common standard library functions as a problem for\nus to fix, or to paper over.\n\nIs it possible that the issue has something to do with what the\ncompiler knows about the alignment of the tapes back when they were a\nflexible array vs. now, where it's a separate allocation? Perhaps I'm\nover reaching, but it occurs to me that MemSetAligned() is itself\nconcerned about the alignment of data returned from palloc(). Could be\na similar issue here, too.\n\nSome guy on the internet says that microarchitectural issues can make\n__memcpy_avx_unaligned() a lot faster than the \"rep movsq\" instruction\n(which you mentioned was a factor on the other thread) in some cases\n[1]. 
This explanation sounds kind of plausible.\n> \n> [1] https://news.ycombinator.com/item?id=12050579\n\nThat raises another consideration: perhaps this is not uniformly a\nregression, but actually faster in some situations? If so, what\nsituations?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sun, 19 Apr 2020 19:48:14 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "Speaking with my RMT hat on, I'm concerned that this item is not moving\nforward at all. ISTM we first and foremost need to decide whether this\nis a problem worth worrying about, or not.\n\nIf it is something worth worrying about, let's discuss what's a good\nfix for it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 4 Jun 2020 16:35:53 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "On Sun, 2020-04-19 at 16:19 -0700, Peter Geoghegan wrote:\n> Is it possible that the issue has something to do with what the\n> compiler knows about the alignment of the tapes back when they were a\n> flexible array vs. now, where it's a separate allocation? Perhaps I'm\n> over reaching, but it occurs to me that MemSetAligned() is itself\n> concerned about the alignment of data returned from palloc(). 
Could\n> be\n> a similar issue here, too.\n\nPerhaps, but if so, what remedy would that suggest?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 04 Jun 2020 17:58:23 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "On Thu, 2020-06-04 at 16:35 -0400, Alvaro Herrera wrote:\n> If it is something worth worrying about, let's discuss what's a good\n> fix for it.\n\nI did post a fix for it, but it's not a very clean fix. I'm slightly\ninclined to proceed with that fix, but I was hoping someone else would\nhave a better suggestion.\n\nHow about if I wait another week, and if we still don't have a better\nfix, I will commit this one.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 04 Jun 2020 18:09:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Thu, 2020-06-04 at 16:35 -0400, Alvaro Herrera wrote:\n>> If it is something worth worrying about, let's discuss what's a good\n>> fix for it.\n\n> I did post a fix for it, but it's not a very clean fix. I'm slightly\n> inclined to proceed with that fix, but I was hoping someone else would\n> have a better suggestion.\n> How about if I wait another week, and if we still don't have a better\n> fix, I will commit this one.\n\nTBH, I don't think we should do this, at least not on the strength of the\nevidence you posted so far. It looks to me like you are micro-optimizing\nfor one compiler on one platform. Moreover, you're basically trying to\nwork around a compiler codegen bug that might not be there next year.\n\nI think what'd make more sense is to file this as a gcc bug (\"why doesn't\nit remove the useless object size check?\") and see what they say about\nthat. 
If the answer is that this isn't a gcc bug for whatever reason,\nthen we could think about whether we should work around it on the\nsource-code level. Even then, I'd want more evidence than has been\npresented about this not causing a regression on other toolchains/CPUs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jun 2020 21:41:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "Hi,\n\nOn 2020-04-19 15:07:22 -0700, Jeff Davis wrote:\n> I brought up an issue where GCC in combination with FORTIFY_SOURCE[2]\n> causes a perf regression for logical tapes after introducing\n> LogicalTapeSetExtend()[3]. Unfortunately, FORTIFY_SOURCE is used by\n> default on ubuntu. I have not observed the problem with clang.\n>\n> There is no reason why the change should trigger the regression, but it\n> does. The slowdown is due to GCC switching to an inlined version of\n> memcpy() for LogicalTapeWrite() at logtape.c:768. The change[3] seems\n> to have little if anything to do with that.\n\nFWIW, with gcc 10 and glibc 2.30 I don't see such a switch. Taking a\nprofile shows me:\n\n │ nthistime = TapeBlockPayloadSize - lt->pos;\n │ if (nthistime > size)\n 3.01 │1 b0: cmp %rdx,%r12\n 1.09 │ cmovbe %r12,%rdx\n │ memcpy():\n │\n │ __fortify_function void *\n │ __NTH (memcpy (void *__restrict __dest, const void *__restrict __src,\n │ size_t __len))\n │ {\n │ return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));\n 2.44 │ mov %r13,%rsi\n │ LogicalTapeWrite():\n │ nthistime = size;\n │ Assert(nthistime > 0);\n │\n │ memcpy(lt->buffer + lt->pos, ptr, nthistime);\n 2.49 │ add 0x28(%rbx),%rdi\n 0.28 │ mov %rdx,%r15\n │ memcpy():\n 4.65 │ → callq memcpy@plt\n │ LogicalTapeWrite():\n\nI.e. 
normal memcpy is getting called.\n\nThat's with -D_FORTIFY_SOURCE=2\n\nWith which compiler / libc versions did you encounter this?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Jun 2020 14:49:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "On Thu, 2020-06-04 at 21:41 -0400, Tom Lane wrote:\n> I think what'd make more sense is to file this as a gcc bug (\"why\n> doesn't\n> it remove the useless object size check?\") \n\nFiled:\n\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=95556\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 05 Jun 2020 18:30:17 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "On Fri, 2020-06-05 at 14:49 -0700, Andres Freund wrote:\n> FWIW, with gcc 10 and glibc 2.30 I don't see such a switch. Taking a\n> profile shows me:\n\n...\n\n> 4.65 │ → callq memcpy@plt\n> │ LogicalTapeWrite():\n> \n> I.e. 
normal memcpy is getting called.\n> \n> That's with -D_FORTIFY_SOURCE=2\n\nThat's good news, although people will be using ubuntu 18.04 for a\nwhile.\n\nJust to confirm, would you mind trying the example programs in the GCC\nbug report?\n\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=95556\n\n> With which compiler / libc versions did you encounter this?\n\ngcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\ngcc-9 (Ubuntu 9.2.1-19ubuntu1~18.04.york0) 9.2.1 20191109\nlibc-dev-bin/bionic,now 2.27-3ubuntu1 amd64 [installed]\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 05 Jun 2020 18:39:28 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "On Thu, 2020-06-04 at 16:35 -0400, Alvaro Herrera wrote:\n> If it is something worth worrying about, let's discuss what's a good\n> fix for it.\n\nWhile making a minimal test case for the GCC bug report, I found\nanother surprisingly-small workaround. Patch attached.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 05 Jun 2020 18:46:13 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Thu, 2020-06-04 at 16:35 -0400, Alvaro Herrera wrote:\n>> If it is something worth worrying about, let's discuss what's a good\n>> fix for it.\n\n> While making a minimal test case for the GCC bug report, I found\n> another surprisingly-small workaround. Patch attached.\n\nUgh :-( ... but perhaps you could get the same result like this:\n\n-#define TapeBlockPayloadSize (BLCKSZ - sizeof(TapeBlockTrailer))\n+#define TapeBlockPayloadSize (BLCKSZ - (int) sizeof(TapeBlockTrailer))\n\nOr possibly casting the whole thing to int or unsigned int would be\nbetter. 
Point being that I bet it's int vs long that is making the\ndifference.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jun 2020 21:50:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "On Fri, 2020-06-05 at 21:50 -0400, Tom Lane wrote:\n> Or possibly casting the whole thing to int or unsigned int would be\n> better. Point being that I bet it's int vs long that is making the\n> difference.\n\nThat did it, and it's much more tolerable as a workaround. Thank you.\n\nI haven't tested end-to-end that it solves the problem, but I'm pretty\nsure it will.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 05 Jun 2020 19:06:50 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" }, { "msg_contents": "Hi,\n\nOn 2020-06-05 18:39:28 -0700, Jeff Davis wrote:\n> On Fri, 2020-06-05 at 14:49 -0700, Andres Freund wrote:\n> > FWIW, with gcc 10 and glibc 2.30 I don't see such a switch. Taking a\n> > profile shows me:\n> \n> ...\n> \n> > 4.65 │ → callq memcpy@plt\n> > │ LogicalTapeWrite():\n> > \n> > I.e. normal memcpy is getting called.\n> > \n> > That's with -D_FORTIFY_SOURCE=2\n> \n> That's good news, although people will be using ubuntu 18.04 for a\n> while.\n> \n> Just to confirm, would you mind trying the example programs in the GCC\n> bug report?\n> \n> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95556\n\nI get \"call memcpy@PLT\" for both files. With various debian versions of\ngcc (7,8,9,10). 
But, very curiously, I do see the difference when\ncompiling with gcc-snapshot (which is a debian package wrapping a recent\nsnapshot from upstream gcc).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Jun 2020 19:45:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: v13: Performance regression related to FORTIFY_SOURCE" } ]
[ { "msg_contents": "Hackers,\n\nI have been talking with Robert about table corruption that occurs from time to time. The page checksum feature seems sufficient to detect most random corruption problems, but it can't detect \"logical\" corruption, where the page is valid but inconsistent with the rest of the database cluster. This can happen due to faulty or ill-conceived backup and restore tools, or bad storage, or user error, or bugs in the server itself. (Also, not everyone enables checksums.)\n\nThe attached module provides the means to scan a relation and sanity check it. Currently, it checks xmin and xmax values against relfrozenxid and relminmxid, and also validates TOAST pointers. If people like this, it could be expanded to perform additional checks.\n\nThere was a prior v1 patch, discussed offlist with Robert, not posted. Here is v2:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 20 Apr 2020 10:59:28 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "new heapcheck contrib module" }, { "msg_contents": "On Mon, Apr 20, 2020 at 10:59 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The attached module provides the means to scan a relation and sanity check it. Currently, it checks xmin and xmax values against relfrozenxid and relminmxid, and also validates TOAST pointers. If people like this, it could be expanded to perform additional checks.\n\nCool. 
Why not make it part of contrib/amcheck?\n\nWe talked about the kinds of checks that we'd like to have for a tool\nlike this before:\n\nhttps://postgr.es/m/20161017014605.GA1220186@tornado.leadboat.com\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 20 Apr 2020 11:09:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Apr 20, 2020 at 2:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Cool. Why not make it part of contrib/amcheck?\n\nI wondered if people would suggest that. Didn't take long.\n\nThe documentation would need some updating, but that's doable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 Apr 2020 14:19:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Apr 20, 2020 at 11:19 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I wondered if people would suggest that. Didn't take long.\n\nYou were the one that pointed out that my first version of\ncontrib/amcheck, which was called \"contrib/btreecheck\", should have\na more general name. And rightly so!\n\nThe basic interface used for the heap checker functions seem very\nsimilar to what amcheck already offers for B-Tree indexes, so it seems\nvery natural to distribute them together.\n\nIMV, the problem that we have with amcheck is that it's too hard to\nuse in a top down kind of way. Perhaps there is an opportunity to\nprovide a more top-down interface to an expanded version of amcheck\nthat does heap checking. Something with a high level practical focus,\nin addition to the low level functions. 
I'm not saying that Mark\nshould be required to solve that problem, but it certainly seems worth\nconsidering now.\n\n> The documentation would need some updating, but that's doable.\n\nIt would also probably need a bit of renaming, so that analogous\nfunction names are used.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 20 Apr 2020 11:31:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Apr 20, 2020, at 11:31 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> IMV, the problem that we have with amcheck is that it's too hard to\n> use in a top down kind of way. Perhaps there is an opportunity to\n> provide a more top-down interface to an expanded version of amcheck\n> that does heap checking. Something with a high level practical focus,\n> in addition to the low level functions. I'm not saying that Mark\n> should be required to solve that problem, but it certainly seems worth\n> considering now.\n\nThanks for your quick response and interest in this submission!\n\nCan you elaborate on \"top-down\"? I'm not sure what that means in this context.\n\nI don't mind going further with this project if I understand what you are suggesting.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 20 Apr 2020 11:34:59 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "I mean an interface that's friendly to DBAs, that verifies an entire\ndatabase. No custom sql query required. Something that provides a\nreasonable mix of verification options based on high level directives. All\nverification methods can be combined in a granular, possibly randomized\nfashion. 
Maybe we can make this run in parallel.\n\nFor example, maybe your heap checker code sometimes does index probes for a\nsubset of indexes and heap tuples. It's not hard to combine it with the\nrootdescend stuff from amcheck. It should be composable.\n\nThe interface you've chosen is a good starting point. But let's not miss an\nopportunity to make everything work together.\n\nPeter Geoghegan\n(Sent from my phone)", "msg_date": "Mon, 20 Apr 2020 12:37:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Apr 20, 2020, at 12:37 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I mean an interface that's friendly to DBAs, that verifies an entire database. No custom sql query required. Something that provides a reasonable mix of verification options based on high level directives. All verification methods can be combined in a granular, possibly randomized fashion. Maybe we can make this run in parallel. \n> \n> For example, maybe your heap checker code sometimes does index probes for a subset of indexes and heap tuples. It's not hard to combine it with the rootdescend stuff from amcheck. It should be composable. \n> \n> The interface you've chosen is a good starting point. 
But let's not miss an opportunity to make everything work together. \n\nOk, I'll work in that direction and repost when I have something along those lines.\n\nThanks again for your input.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 20 Apr 2020 12:40:28 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Hi,\n\nOn 2020-04-20 10:59:28 -0700, Mark Dilger wrote:\n> I have been talking with Robert about table corruption that occurs\n> from time to time. The page checksum feature seems sufficient to\n> detect most random corruption problems, but it can't detect \"logical\"\n> corruption, where the page is valid but inconsistent with the rest of\n> the database cluster. This can happen due to faulty or ill-conceived\n> backup and restore tools, or bad storage, or user error, or bugs in\n> the server itself. (Also, not everyone enables checksums.)\n\nThis is something we really really really need. I'm very excited to see\nprogress!\n\n\n> From 2a1bc0bb9fa94bd929adc1a408900cb925ebcdd5 Mon Sep 17 00:00:00 2001\n> From: Mark Dilger <mark.dilger@enterprisedb.com>\n> Date: Mon, 20 Apr 2020 08:05:58 -0700\n> Subject: [PATCH v2] Adding heapcheck contrib module.\n> \n> The heapcheck module introduces a new function for checking a heap\n> relation and associated toast relation, if any, for corruption.\n\nWhy not add it to amcheck?\n\n\nI wonder if a mode where heapcheck optionally would only check\nnon-frozen (perhaps also non-all-visible) regions of a table would be a\ngood idea? Would make it a lot more viable to run this regularly on\nbigger databases. Even if there's a window to not check some data\n(because it's frozen before the next heapcheck run).\n\n\n> The attached module provides the means to scan a relation and sanity\n> check it. 
Currently, it checks xmin and xmax values against\n> relfrozenxid and relminmxid, and also validates TOAST pointers. If\n> people like this, it could be expanded to perform additional checks.\n\n\n> The postgres backend already defends against certain forms of\n> corruption, by checking the page header of each page before allowing\n> it into the page cache, and by checking the page checksum, if enabled.\n> Experience shows that broken or ill-conceived backup and restore\n> mechanisms can result in a page, or an entire file, being overwritten\n> with an earlier version of itself, restored from backup. Pages thus\n> overwritten will appear to have valid page headers and checksums,\n> while potentially containing xmin, xmax, and toast pointers that are\n> invalid.\n\nWe also had a *lot* of bugs that we'd have found a lot earlier, possibly\neven during development, if we had a way to easily perform these checks.\n\n\n> contrib/heapcheck introduces a function, heapcheck_relation, that\n> takes a regclass argument, scans the given heap relation, and returns\n> rows containing information about corruption found within the table.\n> The main focus of the scan is to find invalid xmin, xmax, and toast\n> pointer values. 
It also checks for structural corruption within the\n> page (such as invalid t_hoff values) that could lead to the backend\n> aborting should the function blindly trust the data as it finds it.\n\n\n> +typedef struct CorruptionInfo\n> +{\n> +\tBlockNumber blkno;\n> +\tOffsetNumber offnum;\n> +\tint16\t\tlp_off;\n> +\tint16\t\tlp_flags;\n> +\tint16\t\tlp_len;\n> +\tint32\t\tattnum;\n> +\tint32\t\tchunk;\n> +\tchar\t *msg;\n> +}\t\t\tCorruptionInfo;\n\nAdding a short comment explaining what this is for would be good.\n\n\n> +/* Internal implementation */\n> +void\t\trecord_corruption(HeapCheckContext * ctx, char *msg);\n> +TupleDesc\theapcheck_relation_tupdesc(void);\n> +\n> +void\t\tbeginRelBlockIteration(HeapCheckContext * ctx);\n> +bool\t\trelBlockIteration_next(HeapCheckContext * ctx);\n> +void\t\tendRelBlockIteration(HeapCheckContext * ctx);\n> +\n> +void\t\tbeginPageTupleIteration(HeapCheckContext * ctx);\n> +bool\t\tpageTupleIteration_next(HeapCheckContext * ctx);\n> +void\t\tendPageTupleIteration(HeapCheckContext * ctx);\n> +\n> +void\t\tbeginTupleAttributeIteration(HeapCheckContext * ctx);\n> +bool\t\ttupleAttributeIteration_next(HeapCheckContext * ctx);\n> +void\t\tendTupleAttributeIteration(HeapCheckContext * ctx);\n> +\n> +void\t\tbeginToastTupleIteration(HeapCheckContext * ctx,\n> +\t\t\t\t\t\t\t\t\t struct varatt_external *toast_pointer);\n> +void\t\tendToastTupleIteration(HeapCheckContext * ctx);\n> +bool\t\ttoastTupleIteration_next(HeapCheckContext * ctx);\n> +\n> +bool\t\tTransactionIdStillValid(TransactionId xid, FullTransactionId *fxid);\n> +bool\t\tHeapTupleIsVisible(HeapTupleHeader tuphdr, HeapCheckContext * ctx);\n> +void\t\tcheck_toast_tuple(HeapCheckContext * ctx);\n> +bool\t\tcheck_tuple_attribute(HeapCheckContext * ctx);\n> +void\t\tcheck_tuple(HeapCheckContext * ctx);\n> +\n> +List\t *check_relation(Oid relid);\n> +void\t\tcheck_relation_relkind(Relation rel);\n\nWhy aren't these static?\n\n\n> +/*\n> + * record_corruption\n> + *\n> + * 
Record a message about corruption, including information\n> + * about where in the relation the corruption was found.\n> + */\n> +void\n> +record_corruption(HeapCheckContext * ctx, char *msg)\n> +{\n\nGiven that you went through the trouble of adding prototypes for all of\nthese, I'd start with the most important functions, not the unimportant\ndetails.\n\n\n> +/*\n> + * Helper function to construct the TupleDesc needed by heapcheck_relation.\n> + */\n> +TupleDesc\n> +heapcheck_relation_tupdesc()\n\nMissing (void) (it's our style, even though you could theoretically not\nhave it as long as you have a prototype).\n\n\n> +{\n> +\tTupleDesc\ttupdesc;\n> +\tAttrNumber\tmaxattr = 8;\n\nThis 8 is in multiple places, I'd add a define for it.\n\n> +\tAttrNumber\ta = 0;\n> +\n> +\ttupdesc = CreateTemplateTupleDesc(maxattr);\n> +\tTupleDescInitEntry(tupdesc, ++a, \"blkno\", INT8OID, -1, 0);\n> +\tTupleDescInitEntry(tupdesc, ++a, \"offnum\", INT4OID, -1, 0);\n> +\tTupleDescInitEntry(tupdesc, ++a, \"lp_off\", INT2OID, -1, 0);\n> +\tTupleDescInitEntry(tupdesc, ++a, \"lp_flags\", INT2OID, -1, 0);\n> +\tTupleDescInitEntry(tupdesc, ++a, \"lp_len\", INT2OID, -1, 0);\n> +\tTupleDescInitEntry(tupdesc, ++a, \"attnum\", INT4OID, -1, 0);\n> +\tTupleDescInitEntry(tupdesc, ++a, \"chunk\", INT4OID, -1, 0);\n> +\tTupleDescInitEntry(tupdesc, ++a, \"msg\", TEXTOID, -1, 0);\n> +\tAssert(a == maxattr);\n> +\n> +\treturn BlessTupleDesc(tupdesc);\n> +}\n\n\n> +/*\n> + * heapcheck_relation\n> + *\n> + * Scan and report corruption in heap pages or in associated toast relation.\n> + */\n> +Datum\n> +heapcheck_relation(PG_FUNCTION_ARGS)\n> +{\n> +\tFuncCallContext *funcctx;\n> +\tCheckRelCtx *ctx;\n> +\n> +\tif (SRF_IS_FIRSTCALL())\n> +\t{\n\nI think it'd be good to have a version that just returned a boolean. For\none, in many cases that's all we care about when scripting things. 
But\nalso, on a large relation, there could be a lot of errors.\n\n\n> +\t\tOid\t\t\trelid = PG_GETARG_OID(0);\n> +\t\tMemoryContext oldcontext;\n> +\n> +\t\t/*\n> +\t\t * Scan the entire relation, building up a list of corruption found in\n> +\t\t * ctx->corruption, for returning later. The scan must be performed\n> +\t\t * in a memory context that will survive until after all rows are\n> +\t\t * returned.\n> +\t\t */\n> +\t\tfuncctx = SRF_FIRSTCALL_INIT();\n> +\t\toldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\n> +\t\tfuncctx->tuple_desc = heapcheck_relation_tupdesc();\n> +\t\tctx = (CheckRelCtx *) palloc0(sizeof(CheckRelCtx));\n> +\t\tctx->corruption = check_relation(relid);\n> +\t\tctx->idx = 0;\t\t\t/* start the iterator at the beginning */\n> +\t\tfuncctx->user_fctx = (void *) ctx;\n> +\t\tMemoryContextSwitchTo(oldcontext);\n\nHm. This builds up all the errors in memory. Is that a good idea? I mean\nfor a large relation having one returned value for each tuple could be a\nheck of a lot of data.\n\nI think it'd be better to use the spilling SRF protocol here. 
It's not\nlike you're benefitting from deferring the tuple construction to the\nreturn currently.\n\n\n> +/*\n> + * beginRelBlockIteration\n> + *\n> + * For the given heap relation being checked, as recorded in ctx, sets up\n> + * variables for iterating over the heap's pages.\n> + *\n> + * The caller should have already opened the heap relation, ctx->rel\n> + */\n> +void\n> +beginRelBlockIteration(HeapCheckContext * ctx)\n> +{\n> +\tctx->nblocks = RelationGetNumberOfBlocks(ctx->rel);\n> +\tctx->blkno = InvalidBlockNumber;\n> +\tctx->bstrategy = GetAccessStrategy(BAS_BULKREAD);\n> +\tctx->buffer = InvalidBuffer;\n> +\tctx->page = NULL;\n> +}\n> +\n> +/*\n> + * endRelBlockIteration\n> + *\n> + * Releases resources that were reserved by either beginRelBlockIteration or\n> + * relBlockIteration_next.\n> + */\n> +void\n> +endRelBlockIteration(HeapCheckContext * ctx)\n> +{\n> +\t/*\n> +\t * Clean up. If the caller iterated to the end, the final call to\n> +\t * relBlockIteration_next will already have released the buffer, but if\n> +\t * the caller is bailing out early, we have to release it ourselves.\n> +\t */\n> +\tif (InvalidBuffer != ctx->buffer)\n> +\t\tUnlockReleaseBuffer(ctx->buffer);\n> +}\n\nThese seem mighty granular and generically named to me.\n\n\n> + * pageTupleIteration_next\n> + *\n> + * Advances the state tracked in ctx to the next tuple on the page.\n> + *\n> + * Caller should have already set up the iteration via\n> + * beginPageTupleIteration, and should stop calling when this function\n> + * returns false.\n> + */\n> +bool\n> +pageTupleIteration_next(HeapCheckContext * ctx)\n\nI don't think this is a naming scheme we use anywhere in postgres. I\ndon't think it's a good idea to add yet more of those.\n\n\n> +{\n> +\t/*\n> +\t * Iterate to the next interesting line pointer, if any. 
Unused, dead and\n> +\t * redirect line pointers are of no interest.\n> +\t */\n> +\tdo\n> +\t{\n> +\t\tctx->offnum = OffsetNumberNext(ctx->offnum);\n> +\t\tif (ctx->offnum > ctx->maxoff)\n> +\t\t\treturn false;\n> +\t\tctx->itemid = PageGetItemId(ctx->page, ctx->offnum);\n> +\t} while (!ItemIdIsUsed(ctx->itemid) ||\n> +\t\t\t ItemIdIsDead(ctx->itemid) ||\n> +\t\t\t ItemIdIsRedirected(ctx->itemid));\n\nThis is an odd loop. Part of the test is in the body, part of it in the\nloop header.\n\n\n> +/*\n> + * Given a TransactionId, attempt to interpret it as a valid\n> + * FullTransactionId, neither in the future nor overlong in\n> + * the past. Stores the inferred FullTransactionId in *fxid.\n> + *\n> + * Returns whether the xid is newer than the oldest clog xid.\n> + */\n> +bool\n> +TransactionIdStillValid(TransactionId xid, FullTransactionId *fxid)\n\nI don't at all like the naming of this function. This isn't a reliable\ncheck. As before, it obviously also should be static.\n\n\n> +{\n> +\tFullTransactionId fnow;\n> +\tuint32\t\tepoch;\n> +\n> +\t/* Initialize fxid; we'll overwrite this later if needed */\n> +\t*fxid = FullTransactionIdFromEpochAndXid(0, xid);\n\n> +\t/* Special xids can quickly be turned into invalid fxids */\n> +\tif (!TransactionIdIsValid(xid))\n> +\t\treturn false;\n> +\tif (!TransactionIdIsNormal(xid))\n> +\t\treturn true;\n> +\n> +\t/*\n> +\t * Charitably infer the full transaction id as being within one epoch ago\n> +\t */\n> +\tfnow = ReadNextFullTransactionId();\n> +\tepoch = EpochFromFullTransactionId(fnow);\n> +\t*fxid = FullTransactionIdFromEpochAndXid(epoch, xid);\n\nSo now you're overwriting the fxid value from above unconditionally?\n\n\n> +\tif (!FullTransactionIdPrecedes(*fxid, fnow))\n> +\t\t*fxid = FullTransactionIdFromEpochAndXid(epoch - 1, xid);\n\n\nI think it'd be better to do the conversion the following way:\n\n *fxid = FullTransactionIdFromU64(U64FromFullTransactionId(fnow)\n + (int32) (XidFromFullTransactionId(fnow) - 
xid));\n\n\n> +\tif (!FullTransactionIdPrecedes(*fxid, fnow))\n> +\t\treturn false;\n> +\t/* The oldestClogXid is protected by CLogTruncationLock */\n> +\tAssert(LWLockHeldByMe(CLogTruncationLock));\n> +\tif (TransactionIdPrecedes(xid, ShmemVariableCache->oldestClogXid))\n> +\t\treturn false;\n> +\treturn true;\n> +}\n\nWhy is this testing oldestClogXid instead of oldestXid?\n\n\n> +/*\n> + * HeapTupleIsVisible\n> + *\n> + *\tDetermine whether tuples are visible for heapcheck. Similar to\n> + * HeapTupleSatisfiesVacuum, but with critical differences.\n> + *\n> + * 1) Does not touch hint bits. It seems imprudent to write hint bits\n> + * to a table during a corruption check.\n> + * 2) Gracefully handles xids that are too old by calling\n> + * TransactionIdStillValid before TransactionLogFetch, thus avoiding\n> + * a backend abort.\n\nI think it'd be better to protect against this by avoiding checks for\nxids that are older than relfrozenxid. And ones that are newer than\nReadNextTransactionId(). But all of those cases should be errors\nanyway, so it doesn't seem like that should be handled within the\nvisibility routine.\n\n\n> + * 3) Only makes a boolean determination of whether heapcheck should\n> + * see the tuple, rather than doing extra work for vacuum-related\n> + * categorization.\n> + */\n> +bool\n> +HeapTupleIsVisible(HeapTupleHeader tuphdr, HeapCheckContext * ctx)\n> +{\n\n> +\tFullTransactionId fxmin,\n> +\t\t\t\tfxmax;\n> +\tuint16\t\tinfomask = tuphdr->t_infomask;\n> +\tTransactionId xmin = HeapTupleHeaderGetXmin(tuphdr);\n> +\n> +\tif (!HeapTupleHeaderXminCommitted(tuphdr))\n> +\t{\n\nHm. 
I wonder if it'd be good to crosscheck the xid committed hint bits\nwith clog?\n\n\n> +\t\telse if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuphdr)))\n> +\t\t{\n> +\t\t\tLWLockRelease(CLogTruncationLock);\n> +\t\t\treturn false;\t\t/* HEAPTUPLE_DEAD */\n> +\t\t}\n\nNote that this actually can error out, if xmin is a subtransaction xid,\nbecause pg_subtrans is truncated a lot more aggressively than anything\nelse. I think you'd need to filter against subtransactions older than\nRecentXmin before here, and treat that as an error.\n\n\n> +\tif (!(infomask & HEAP_XMAX_INVALID) && !HEAP_XMAX_IS_LOCKED_ONLY(infomask))\n> +\t{\n> +\t\tif (infomask & HEAP_XMAX_IS_MULTI)\n> +\t\t{\n> +\t\t\tTransactionId xmax = HeapTupleGetUpdateXid(tuphdr);\n> +\n> +\t\t\t/* not LOCKED_ONLY, so it has to have an xmax */\n> +\t\t\tif (!TransactionIdIsValid(xmax))\n> +\t\t\t{\n> +\t\t\t\trecord_corruption(ctx, _(\"heap tuple with XMAX_IS_MULTI is \"\n> +\t\t\t\t\t\t\t\t\t\t \"neither LOCKED_ONLY nor has a \"\n> +\t\t\t\t\t\t\t\t\t\t \"valid xmax\"));\n> +\t\t\t\treturn false;\n> +\t\t\t}\n\nI think it's bad to have code like this in a routine that's named like a\ngeneric visibility check routine.\n\n\n> +\t\t\tif (TransactionIdIsInProgress(xmax))\n> +\t\t\t\treturn false;\t/* HEAPTUPLE_DELETE_IN_PROGRESS */\n> +\n> +\t\t\tLWLockAcquire(CLogTruncationLock, LW_SHARED);\n> +\t\t\tif (!TransactionIdStillValid(xmax, &fxmax))\n> +\t\t\t{\n> +\t\t\t\tLWLockRelease(CLogTruncationLock);\n> +\t\t\t\trecord_corruption(ctx, psprintf(\"tuple xmax = %u (interpreted \"\n> +\t\t\t\t\t\t\t\t\t\t\t\t\"as \" UINT64_FORMAT\n> +\t\t\t\t\t\t\t\t\t\t\t\t\") not or no longer valid\",\n> +\t\t\t\t\t\t\t\t\t\t\t\txmax, fxmax.value));\n> +\t\t\t\treturn false;\n> +\t\t\t}\n> +\t\t\telse if (TransactionIdDidCommit(xmax))\n> +\t\t\t{\n> +\t\t\t\tLWLockRelease(CLogTruncationLock);\n> +\t\t\t\treturn false;\t/* HEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD */\n> +\t\t\t}\n> 
+\t\t\tLWLockRelease(CLogTruncationLock);\n> +\t\t\t/* Ok, the tuple is live */\n\nI don't think random interspersed uses of CLogTruncationLock are a good\nidea. If you move to only checking visibility after tuple fits into\n[relfrozenxid, nextXid), then you don't need to take any locks here, as\nlong as a lock against vacuum is taken (which I think this should do\nanyway).\n\n\n> +/*\n> + * check_tuple\n> + *\n> + * Checks the current tuple as tracked in ctx for corruption. Records any\n> + * corruption found in ctx->corruption.\n> + *\n> + * The caller should have iterated to a tuple via pageTupleIteration_next.\n> + */\n> +void\n> +check_tuple(HeapCheckContext * ctx)\n> +{\n> +\tbool\t\tfatal = false;\n\nWait, aren't some checks here duplicate with ones in\nHeapTupleIsVisible()?\n\n\n> +\t/* Check relminmxid against mxid, if any */\n> +\tif (ctx->infomask & HEAP_XMAX_IS_MULTI &&\n> +\t\tMultiXactIdPrecedes(ctx->xmax, ctx->relminmxid))\n> +\t{\n> +\t\trecord_corruption(ctx, psprintf(\"tuple xmax = %u precedes relation \"\n> +\t\t\t\t\t\t\t\t\t\t\"relminmxid = %u\",\n> +\t\t\t\t\t\t\t\t\t\tctx->xmax, ctx->relminmxid));\n> +\t}\n\nIt's pretty weird that the routines here access xmin/xmax/... 
via\nHeapCheckContext, but HeapTupleIsVisible() doesn't.\n\n\n> +\t/* Check xmin against relfrozenxid */\n> +\tif (TransactionIdIsNormal(ctx->relfrozenxid) &&\n> +\t\tTransactionIdIsNormal(ctx->xmin) &&\n> +\t\tTransactionIdPrecedes(ctx->xmin, ctx->relfrozenxid))\n> +\t{\n> +\t\trecord_corruption(ctx, psprintf(\"tuple xmin = %u precedes relation \"\n> +\t\t\t\t\t\t\t\t\t\t\"relfrozenxid = %u\",\n> +\t\t\t\t\t\t\t\t\t\tctx->xmin, ctx->relfrozenxid));\n> +\t}\n> +\n> +\t/* Check xmax against relfrozenxid */\n> +\tif (TransactionIdIsNormal(ctx->relfrozenxid) &&\n> +\t\tTransactionIdIsNormal(ctx->xmax) &&\n> +\t\tTransactionIdPrecedes(ctx->xmax, ctx->relfrozenxid))\n> +\t{\n> +\t\trecord_corruption(ctx, psprintf(\"tuple xmax = %u precedes relation \"\n> +\t\t\t\t\t\t\t\t\t\t\"relfrozenxid = %u\",\n> +\t\t\t\t\t\t\t\t\t\tctx->xmax, ctx->relfrozenxid));\n> +\t}\n\nthese all should be fatal. You definitely cannot just continue\nafterwards given the justification below:\n\n\n> +\t/*\n> +\t * Iterate over the attributes looking for broken toast values. 
This\n> +\t * roughly follows the logic of heap_deform_tuple, except that it doesn't\n> +\t * bother building up isnull[] and values[] arrays, since nobody wants\n> +\t * them, and it unrolls anything that might trip over an Assert when\n> +\t * processing corrupt data.\n> +\t */\n> +\tbeginTupleAttributeIteration(ctx);\n> +\twhile (tupleAttributeIteration_next(ctx) &&\n> +\t\t check_tuple_attribute(ctx))\n> +\t\t;\n> +\tendTupleAttributeIteration(ctx);\n> +}\n\nI really don't find these helpers helpful.\n\n\n> +/*\n> + * check_relation\n> + *\n> + * Checks the relation given by relid for corruption, returning a list of all\n> + * it finds.\n> + *\n> + * The caller should set up the memory context as desired before calling.\n> + * The returned list belongs to the caller.\n> + */\n> +List *\n> +check_relation(Oid relid)\n> +{\n> +\tHeapCheckContext ctx;\n> +\n> +\tmemset(&ctx, 0, sizeof(HeapCheckContext));\n> +\n> +\t/* Open the relation */\n> +\tctx.relid = relid;\n> +\tctx.corruption = NIL;\n> +\tctx.rel = relation_open(relid, AccessShareLock);\n\nI think you need to protect at least against concurrent schema changes\ngiven some of your checks. But I think it'd be better to also conflict\nwith vacuum here.\n\n\n> +\tcheck_relation_relkind(ctx.rel);\n\nI think you also need to ensure that the table is actually using heap\nAM, not another tableam. Oh - you're doing that inside the check. 
But\nthat's confusing, because that's not 'relkind'.\n\n\n> +\tctx.relDesc = RelationGetDescr(ctx.rel);\n> +\tctx.rel_natts = RelationGetDescr(ctx.rel)->natts;\n> +\tctx.relfrozenxid = ctx.rel->rd_rel->relfrozenxid;\n> +\tctx.relminmxid = ctx.rel->rd_rel->relminmxid;\n\nthree naming schemes in three lines...\n\n\n\n> +\t/* check all blocks of the relation */\n> +\tbeginRelBlockIteration(&ctx);\n> +\twhile (relBlockIteration_next(&ctx))\n> +\t{\n> +\t\t/* Perform tuple checks */\n> +\t\tbeginPageTupleIteration(&ctx);\n> +\t\twhile (pageTupleIteration_next(&ctx))\n> +\t\t\tcheck_tuple(&ctx);\n> +\t\tendPageTupleIteration(&ctx);\n> +\t}\n> +\tendRelBlockIteration(&ctx);\n\nI again do not find this helper stuff helpful.\n\n\n> +\t/* Close the associated toast table and indexes, if any. */\n> +\tif (ctx.has_toastrel)\n> +\t{\n> +\t\ttoast_close_indexes(ctx.toast_indexes, ctx.num_toast_indexes,\n> +\t\t\t\t\t\t\tAccessShareLock);\n> +\t\ttable_close(ctx.toastrel, AccessShareLock);\n> +\t}\n> +\n> +\t/* Close the main relation */\n> +\trelation_close(ctx.rel, AccessShareLock);\n\nWhy the closing here?\n\n\n\n> +# This regression test demonstrates that the heapcheck_relation() function\n> +# supplied with this contrib module correctly identifies specific kinds of\n> +# corruption within pages. To test this, we need a mechanism to create corrupt\n> +# pages with predictable, repeatable corruption. The postgres backend cannot be\n> +# expected to help us with this, as its design is not consistent with the goal\n> +# of intentionally corrupting pages.\n> +#\n> +# Instead, we create a table to corrupt, and with careful consideration of how\n> +# postgresql lays out heap pages, we seek to offsets within the page and\n> +# overwrite deliberately chosen bytes with specific values calculated to\n> +# corrupt the page in expected ways. We then verify that heapcheck_relation\n> +# reports the corruption, and that it runs without crashing. 
Note that the\n> +# backend cannot simply be started to run queries against the corrupt table, as\n> +# the backend will crash, at least for some of the corruption types we\n> +# generate.\n> +#\n> +# Autovacuum potentially touching the table in the background makes the exact\n> +# behavior of this test harder to reason about. We turn it off to keep things\n> +# simpler. We use a \"belt and suspenders\" approach, turning it off for the\n> +# system generally in postgresql.conf, and turning it off specifically for the\n> +# test table.\n> +#\n> +# This test depends on the table being written to the heap file exactly as we\n> +# expect it to be, so we take care to arrange the columns of the table, and\n> +# insert rows of the table, that give predictable sizes and locations within\n> +# the table page.\n\nI have a hard time believing this is going to be really\nreliable. E.g. the alignment requirements will vary between platforms,\nleading to different layouts. In particular, MAXALIGN differs between\nplatforms.\n\nAlso, it's supported to compile postgres with a different pagesize.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Apr 2020 12:42:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "[ retrying from the email address I intended to use ]\n\nOn Mon, Apr 20, 2020 at 3:42 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think random interspersed uses of CLogTruncationLock are a good\n> idea. If you move to only checking visibility after tuple fits into\n> [relfrozenxid, nextXid), then you don't need to take any locks here, as\n> long as a lock against vacuum is taken (which I think this should do\n> anyway).\n\nI think it would be *really* good to avoid ShareUpdateExclusiveLock\nhere. Running with only AccessShareLock would be a big advantage. 
I\nagree that any use of CLogTruncationLock should not be \"random\", but I\ndon't see why the same method we use to make txid_status() safe to\nexpose to SQL shouldn't also be used here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 Apr 2020 16:03:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Hi,\n\nOn 2020-04-20 15:59:49 -0400, Robert Haas wrote:\n> On Mon, Apr 20, 2020 at 3:42 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think random interspersed uses of CLogTruncationLock are a good\n> > idea. If you move to only checking visibility after tuple fits into\n> > [relfrozenxid, nextXid), then you don't need to take any locks here, as\n> > long as a lock against vacuum is taken (which I think this should do\n> > anyway).\n> \n> I think it would be *really* good to avoid ShareUpdateExclusiveLock\n> here. Running with only AccessShareLock would be a big advantage. I\n> agree that any use of CLogTruncationLock should not be \"random\", but I\n> don't see why the same method we use to make txid_status() safe to\n> expose to SQL shouldn't also be used here.\n\nA few billion CLogTruncationLock acquisitions in short order will likely\nhave at least as big an impact as ShareUpdateExclusiveLock held for the\nduration of the check. That's not really a relevant concern for\ntxid_status(). Per-tuple lock acquisitions aren't great.\n\nI think it might be doable to not need either. E.g. we could set the\nchecking backend's xmin to relfrozenxid, and set something like\nPROC_IN_VACUUM.
That should, I think, prevent clog from being truncated\nin a problematic way (clog truncations look at PROC_IN_VACUUM backends),\nwhile not blocking vacuum.\n\nThe similar concern for ReadNewTransactionId() can probably more easily\nbe addressed, by only calling ReadNewTransactionId() when encountering\nan xid that's newer than the last value read.\n\n\nI think it'd be good to set PROC_IN_VACUUM (or maybe a separate version\nof it) while checking anyway. Reading the full relation can take quite a\nwhile, and we shouldn't prevent hot pruning while doing so.\n\n\nThere's some things we'd need to figure out to be able to use\nPROC_IN_VACUUM, as that's really only safe in some\ncircumstances. Possibly it'd be easiest to address that if we'd make the\ncheck a procedure...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Apr 2020 13:30:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Apr 20, 2020 at 12:42 PM Andres Freund <andres@anarazel.de> wrote:\n> This is something we really really really need. I'm very excited to see\n> progress!\n\n+1\n\nMy experience with amcheck was that the requirement that we document\nand verify pretty much every invariant (the details of which differ\nslightly based on the B-Tree version in use) has had intangible\nbenefits. It helped me come up with a simpler, better design in the\nfirst place. Also, many of the benchmarks that I perform get to be a\nstress-test of the feature itself. It saves quite a lot of testing\nwork in the long run.\n\n> I wonder if a mode where heapcheck optionally would only checks\n> non-frozen (perhaps also non-all-visible) regions of a table would be a\n> good idea? Would make it a lot more viable to run this regularly on\n> bigger databases. Even if there's a window to not check some data\n> (because it's frozen before the next heapcheck run).\n\nThat's a great idea. 
It could also make it practical to use the\nrootdescend verification option to verify indexes selectively -- if\nyou don't have too many blocks to check on average, the overhead is\ntolerable. This is the kind of thing that naturally belongs in the\nhigher level interface that I sketched already.\n\n> We also had a *lot* of bugs that we'd have found a lot earlier, possibly\n> even during development, if we had a way to easily perform these checks.\n\nI can think of a case where it was quite unclear what the invariants\nfor the heap even were, at least temporarily. And this was in the\ncontext of fixing a bug that was really quite nasty. Formally defining\nthe invariants in one place, and taking a position on exactly what\ncorrect looks like seems like a very valuable exercise. Even without\nthe tool catching a single bug.\n\n> I have a hard time believing this is going to be really\n> reliable. E.g. the alignment requirements will vary between platforms,\n> leading to different layouts. In particular, MAXALIGN differs between\n> platforms.\n\nOver on another thread, I suggested that Mark might want to have a\ncorruption test framework that exposes some of the bufpage.c routines.\nThe idea is that you can destructively manipulate a page using the\nlogical page interface. Something that works one level below the\naccess method, but one level above the raw page image. It probably\nwouldn't test everything that Mark wants to test, but it would test\nsome things in a way that seems maintainable to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 20 Apr 2020 13:37:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Apr 20, 2020 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> A few billion CLogTruncationLock acquisitions in short order will likely\n> have at least as big an impact as ShareUpdateExclusiveLock held for the\n> duration of the check. 
That's not really a relevant concern for\n> txid_status(). Per-tuple lock acquisitions aren't great.\n\nYeah, that's true. Doing it for every tuple is going to be too much, I\nthink. I was hoping we could avoid that.\n\n> I think it might be doable to not need either. E.g. we could set the\n> checking backend's xmin to relfrozenxid, and set something like\n> PROC_IN_VACUUM. That should, I think, prevent clog from being truncated\n> in a problematic way (clog truncations look at PROC_IN_VACUUM backends),\n> while not blocking vacuum.\n\nHmm, OK, I don't know if that would be OK or not.\n\n> The similar concern for ReadNewTransactionId() can probably more easily\n> be addressed, by only calling ReadNewTransactionId() when encountering\n> an xid that's newer than the last value read.\n\nYeah, if we can cache some things to avoid repetitive calls, that would be good.\n\n> I think it'd be good to set PROC_IN_VACUUM (or maybe a separate version\n> of it) while checking anyway. Reading the full relation can take quite a\n> while, and we shouldn't prevent hot pruning while doing so.\n>\n> There's some things we'd need to figure out to be able to use\n> PROC_IN_VACUUM, as that's really only safe in some\n> circumstances. Possibly it'd be easiest to address that if we'd make the\n> check a procedure...\n\nI think we sure want to set things up so that we do this check without\nholding a snapshot, if we can.
Not sure exactly how to get there.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 Apr 2020 16:40:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Apr 20, 2020 at 12:40 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Ok, I'll work in that direction and repost when I have something along those lines.\n\nGreat, thanks!\n\nIt also occurs to me that the B-Tree checks that amcheck already has\nhave one remaining blindspot: While the heapallindexed verification\noption has the ability to detect the absence of an index tuple that\nthe dummy CREATE INDEX that we perform under the hood says should be\nin the index, it cannot do the opposite: It cannot detect the presence\nof a malformed tuple that shouldn't be there at all, unless the index\ntuple itself is corrupt. That could miss an inconsistent page image\nwhen a few tuples have been VACUUMed away, but still appear in the\nindex.\n\nIn order to do that, we'd have to have something a bit like the\nvalidate_index() heap scan that CREATE INDEX CONCURRENTLY uses. We'd\nhave to get a list of heap TIDs that any index tuple might be pointing\nto, and then make sure that there were no TIDs in the index that were\nnot in that list -- tuples that were pointing to nothing in the heap\nat all. This could use the index_bulk_delete() interface. This is the\nkind of verification option that I might work on for debugging\npurposes, but not the kind of thing I could really recommend to\nordinary users outside of exceptional cases. This is the kind of thing\nthat argues for more or less providing all of the verification\nfunctionality we have through both high level and low level\ninterfaces. This isn't likely to be all that valuable most of the\ntime, and users shouldn't have to figure that out for themselves the\nhard way. 
(BTW, I think that this could be implemented in an\nindex-AM-agnostic way, I think, so perhaps you can consider adding it\ntoo, if you have time.)\n\nOne last thing for now: take a look at amcheck's\nbt_tuple_present_callback() function. It has comments about HOT chain\ncorruption that you may find interesting. Note that this check played\na role in the \"freeze the dead\" corruption bug [1] -- it detected that\nour initial fix for that was broken. It seems like it would be a good\nidea to go back through the reproducers we've seen for some of the\nmore memorable corruption bugs, and actually make sure that your tool\ndetects them where that isn't clear. History doesn't repeat itself,\nbut it often rhymes.\n\n[1] https://postgr.es/m/CAH2-Wznm4rCrhFAiwKPWTpEw2bXDtgROZK7jWWGucXeH3D1fmA@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 20 Apr 2020 14:14:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Apr 20, 2020 at 1:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Apr 20, 2020 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > A few billion CLogTruncationLock acquisitions in short order will likely\n> > have at least as big an impact as ShareUpdateExclusiveLock held for the\n> > duration of the check. That's not really a relevant concern or\n> > txid_status(). Per-tuple lock acquisitions aren't great.\n>\n> Yeah, that's true. Doing it for every tuple is going to be too much, I\n> think. I was hoping we could avoid that.\n\nWhat about the visibility map? It would be nice if pg_visibility was\nmerged into amcheck, since it mostly provides integrity checking for\nthe visibility map. Maybe we could just merge the functions that\nperform verification, and leave other functions (like\npg_truncate_visibility_map()) where they are. 
We could keep the\ncurrent interface for functions like pg_check_visible(), but also\nallow the same verification to occur in passing, as part of a higher\nlevel check.\n\nIt wouldn't be so bad if pg_visibility was an expert-only tool. But\nISTM that the verification performed by code like\ncollect_corrupt_items() could easily take place at the same time as\nthe new checks that Mark proposes. Possibly only some of the time. It\ncan work in a totally additive way. (Though like Andres I don't really\nlike the current \"helper\" functions used to iterate through a heap\nrelation; they seem like they'd make this harder.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 20 Apr 2020 14:45:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Apr 20, 2020, at 12:42 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2020-04-20 10:59:28 -0700, Mark Dilger wrote:\n>> I have been talking with Robert about table corruption that occurs\n>> from time to time. The page checksum feature seems sufficient to\n>> detect most random corruption problems, but it can't detect \"logical\"\n>> corruption, where the page is valid but inconsistent with the rest of\n>> the database cluster. This can happen due to faulty or ill-conceived\n>> backup and restore tools, or bad storage, or user error, or bugs in\n>> the server itself. (Also, not everyone enables checksums.)\n> \n> This is something we really really really need. 
I'm very excited to see\n> progress!\n\nThanks for the review!\n\n>> From 2a1bc0bb9fa94bd929adc1a408900cb925ebcdd5 Mon Sep 17 00:00:00 2001\n>> From: Mark Dilger <mark.dilger@enterprisedb.com>\n>> Date: Mon, 20 Apr 2020 08:05:58 -0700\n>> Subject: [PATCH v2] Adding heapcheck contrib module.\n>> \n>> The heapcheck module introduces a new function for checking a heap\n>> relation and associated toast relation, if any, for corruption.\n> \n> Why not add it to amcheck?\n\nThat seems to be the general consensus. The functionality has been moved there, renamed as \"verify_heapam\", as that seems more in line with the \"verify_nbtree\" name already present in that module. The docs have also been moved there, although not very gracefully. It seems premature to polish the documentation given that the interface will likely change at least one more time, to incorporate more of Peter's suggestions. There are still design differences between the two implementations that need to be harmonized. The verify_heapam function returns rows detailing the corruption found, which is inconsistent with how verify_nbtree does things.\n\n> I wonder if a mode where heapcheck optionally would only checks\n> non-frozen (perhaps also non-all-visible) regions of a table would be a\n> good idea? Would make it a lot more viable to run this regularly on\n> bigger databases. Even if there's a window to not check some data\n> (because it's frozen before the next heapcheck run).\n\nPerhaps we should come back to that. Version 3 of this patch addresses concerns about the v2 patch without adding too many new features.\n\n>> The attached module provides the means to scan a relation and sanity\n>> check it. Currently, it checks xmin and xmax values against\n>> relfrozenxid and relminmxid, and also validates TOAST pointers.
If\n>> people like this, it could be expanded to perform additional checks.\n> \n> \n>> The postgres backend already defends against certain forms of\n>> corruption, by checking the page header of each page before allowing\n>> it into the page cache, and by checking the page checksum, if enabled.\n>> Experience shows that broken or ill-conceived backup and restore\n>> mechanisms can result in a page, or an entire file, being overwritten\n>> with an earlier version of itself, restored from backup. Pages thus\n>> overwritten will appear to have valid page headers and checksums,\n>> while potentially containing xmin, xmax, and toast pointers that are\n>> invalid.\n> \n> We also had a *lot* of bugs that we'd have found a lot earlier, possibly\n> even during development, if we had a way to easily perform these checks.\n\nI certainly hope this is useful for testing.\n\n>> contrib/heapcheck introduces a function, heapcheck_relation, that\n>> takes a regclass argument, scans the given heap relation, and returns\n>> rows containing information about corruption found within the table.\n>> The main focus of the scan is to find invalid xmin, xmax, and toast\n>> pointer values. 
It also checks for structural corruption within the\n>> page (such as invalid t_hoff values) that could lead to the backend\n>> aborting should the function blindly trust the data as it finds it.\n> \n> \n>> +typedef struct CorruptionInfo\n>> +{\n>> +\tBlockNumber blkno;\n>> +\tOffsetNumber offnum;\n>> +\tint16\t\tlp_off;\n>> +\tint16\t\tlp_flags;\n>> +\tint16\t\tlp_len;\n>> +\tint32\t\tattnum;\n>> +\tint32\t\tchunk;\n>> +\tchar\t *msg;\n>> +}\t\t\tCorruptionInfo;\n> \n> Adding a short comment explaining what this is for would be good.\n\nThis struct has been removed.\n\n>> +/* Internal implementation */\n>> +void\t\trecord_corruption(HeapCheckContext * ctx, char *msg);\n>> +TupleDesc\theapcheck_relation_tupdesc(void);\n>> +\n>> +void\t\tbeginRelBlockIteration(HeapCheckContext * ctx);\n>> +bool\t\trelBlockIteration_next(HeapCheckContext * ctx);\n>> +void\t\tendRelBlockIteration(HeapCheckContext * ctx);\n>> +\n>> +void\t\tbeginPageTupleIteration(HeapCheckContext * ctx);\n>> +bool\t\tpageTupleIteration_next(HeapCheckContext * ctx);\n>> +void\t\tendPageTupleIteration(HeapCheckContext * ctx);\n>> +\n>> +void\t\tbeginTupleAttributeIteration(HeapCheckContext * ctx);\n>> +bool\t\ttupleAttributeIteration_next(HeapCheckContext * ctx);\n>> +void\t\tendTupleAttributeIteration(HeapCheckContext * ctx);\n>> +\n>> +void\t\tbeginToastTupleIteration(HeapCheckContext * ctx,\n>> +\t\t\t\t\t\t\t\t\t struct varatt_external *toast_pointer);\n>> +void\t\tendToastTupleIteration(HeapCheckContext * ctx);\n>> +bool\t\ttoastTupleIteration_next(HeapCheckContext * ctx);\n>> +\n>> +bool\t\tTransactionIdStillValid(TransactionId xid, FullTransactionId *fxid);\n>> +bool\t\tHeapTupleIsVisible(HeapTupleHeader tuphdr, HeapCheckContext * ctx);\n>> +void\t\tcheck_toast_tuple(HeapCheckContext * ctx);\n>> +bool\t\tcheck_tuple_attribute(HeapCheckContext * ctx);\n>> +void\t\tcheck_tuple(HeapCheckContext * ctx);\n>> +\n>> +List\t *check_relation(Oid relid);\n>> +void\t\tcheck_relation_relkind(Relation 
rel);\n> \n> Why aren't these static?\n\nThey are now, except for the iterator style functions, which are gone.\n\n>> +/*\n>> + * record_corruption\n>> + *\n>> + * Record a message about corruption, including information\n>> + * about where in the relation the corruption was found.\n>> + */\n>> +void\n>> +record_corruption(HeapCheckContext * ctx, char *msg)\n>> +{\n> \n> Given that you went through the trouble of adding prototypes for all of\n> these, I'd start with the most important functions, not the unimportant\n> details.\n\nYeah, good idea. The most important functions are now at the top.\n\n>> +/*\n>> + * Helper function to construct the TupleDesc needed by heapcheck_relation.\n>> + */\n>> +TupleDesc\n>> +heapcheck_relation_tupdesc()\n> \n> Missing (void) (it's our style, even though you could theoretically not\n> have it as long as you have a prototype).\n\nThat was unintentional, and is now fixed.\n\n>> +{\n>> +\tTupleDesc\ttupdesc;\n>> +\tAttrNumber\tmaxattr = 8;\n> \n> This 8 is in multiple places, I'd add a define for it.\n\nDone.\n\n>> +\tAttrNumber\ta = 0;\n>> +\n>> +\ttupdesc = CreateTemplateTupleDesc(maxattr);\n>> +\tTupleDescInitEntry(tupdesc, ++a, \"blkno\", INT8OID, -1, 0);\n>> +\tTupleDescInitEntry(tupdesc, ++a, \"offnum\", INT4OID, -1, 0);\n>> +\tTupleDescInitEntry(tupdesc, ++a, \"lp_off\", INT2OID, -1, 0);\n>> +\tTupleDescInitEntry(tupdesc, ++a, \"lp_flags\", INT2OID, -1, 0);\n>> +\tTupleDescInitEntry(tupdesc, ++a, \"lp_len\", INT2OID, -1, 0);\n>> +\tTupleDescInitEntry(tupdesc, ++a, \"attnum\", INT4OID, -1, 0);\n>> +\tTupleDescInitEntry(tupdesc, ++a, \"chunk\", INT4OID, -1, 0);\n>> +\tTupleDescInitEntry(tupdesc, ++a, \"msg\", TEXTOID, -1, 0);\n>> +\tAssert(a == maxattr);\n>> +\n>> +\treturn BlessTupleDesc(tupdesc);\n>> +}\n> \n> \n>> +/*\n>> + * heapcheck_relation\n>> + *\n>> + * Scan and report corruption in heap pages or in associated toast relation.\n>> + */\n>> +Datum\n>> +heapcheck_relation(PG_FUNCTION_ARGS)\n>> +{\n>> +\tFuncCallContext 
*funcctx;\n>> +\tCheckRelCtx *ctx;\n>> +\n>> +\tif (SRF_IS_FIRSTCALL())\n>> +\t{\n> \n> I think it'd be good to have a version that just returned a boolean. For\n> one, in many cases that's all we care about when scripting things. But\n> also, on a large relation, there could be a lot of errors.\n\nThere is now a second parameter to the function, \"stop_on_error\". The function performs exactly the same checks, but returns after the first page that contains corruption.\n\n>> +\t\tOid\t\t\trelid = PG_GETARG_OID(0);\n>> +\t\tMemoryContext oldcontext;\n>> +\n>> +\t\t/*\n>> +\t\t * Scan the entire relation, building up a list of corruption found in\n>> +\t\t * ctx->corruption, for returning later. The scan must be performed\n>> +\t\t * in a memory context that will survive until after all rows are\n>> +\t\t * returned.\n>> +\t\t */\n>> +\t\tfuncctx = SRF_FIRSTCALL_INIT();\n>> +\t\toldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\n>> +\t\tfuncctx->tuple_desc = heapcheck_relation_tupdesc();\n>> +\t\tctx = (CheckRelCtx *) palloc0(sizeof(CheckRelCtx));\n>> +\t\tctx->corruption = check_relation(relid);\n>> +\t\tctx->idx = 0;\t\t\t/* start the iterator at the beginning */\n>> +\t\tfuncctx->user_fctx = (void *) ctx;\n>> +\t\tMemoryContextSwitchTo(oldcontext);\n> \n> Hm. This builds up all the errors in memory. Is that a good idea? I mean\n> for a large relation having one returned value for each tuple could be a\n> heck of a lot of data.\n> \n> I think it'd be better to use the spilling SRF protocol here. 
It's not\n> like you're benefitting from deferring the tuple construction to the\n> return currently.\n\nDone.\n\n>> +/*\n>> + * beginRelBlockIteration\n>> + *\n>> + * For the given heap relation being checked, as recorded in ctx, sets up\n>> + * variables for iterating over the heap's pages.\n>> + *\n>> + * The caller should have already opened the heap relation, ctx->rel\n>> + */\n>> +void\n>> +beginRelBlockIteration(HeapCheckContext * ctx)\n>> +{\n>> +\tctx->nblocks = RelationGetNumberOfBlocks(ctx->rel);\n>> +\tctx->blkno = InvalidBlockNumber;\n>> +\tctx->bstrategy = GetAccessStrategy(BAS_BULKREAD);\n>> +\tctx->buffer = InvalidBuffer;\n>> +\tctx->page = NULL;\n>> +}\n>> +\n>> +/*\n>> + * endRelBlockIteration\n>> + *\n>> + * Releases resources that were reserved by either beginRelBlockIteration or\n>> + * relBlockIteration_next.\n>> + */\n>> +void\n>> +endRelBlockIteration(HeapCheckContext * ctx)\n>> +{\n>> +\t/*\n>> +\t * Clean up. If the caller iterated to the end, the final call to\n>> +\t * relBlockIteration_next will already have released the buffer, but if\n>> +\t * the caller is bailing out early, we have to release it ourselves.\n>> +\t */\n>> +\tif (InvalidBuffer != ctx->buffer)\n>> +\t\tUnlockReleaseBuffer(ctx->buffer);\n>> +}\n> \n> These seem mighty granular and generically named to me.\n\nRemoved.\n\n>> + * pageTupleIteration_next\n>> + *\n>> + * Advances the state tracked in ctx to the next tuple on the page.\n>> + *\n>> + * Caller should have already set up the iteration via\n>> + * beginPageTupleIteration, and should stop calling when this function\n>> + * returns false.\n>> + */\n>> +bool\n>> +pageTupleIteration_next(HeapCheckContext * ctx)\n> \n> I don't think this is a naming scheme we use anywhere in postgres. I\n> don't think it's a good idea to add yet more of those.\n\nRemoved.\n\n>> +{\n>> +\t/*\n>> +\t * Iterate to the next interesting line pointer, if any. 
Unused, dead and\n>> +\t * redirect line pointers are of no interest.\n>> +\t */\n>> +\tdo\n>> +\t{\n>> +\t\tctx->offnum = OffsetNumberNext(ctx->offnum);\n>> +\t\tif (ctx->offnum > ctx->maxoff)\n>> +\t\t\treturn false;\n>> +\t\tctx->itemid = PageGetItemId(ctx->page, ctx->offnum);\n>> +\t} while (!ItemIdIsUsed(ctx->itemid) ||\n>> +\t\t\t ItemIdIsDead(ctx->itemid) ||\n>> +\t\t\t ItemIdIsRedirected(ctx->itemid));\n> \n> This is an odd loop. Part of the test is in the body, part of in the\n> loop header.\n\nRefactored.\n\n>> +/*\n>> + * Given a TransactionId, attempt to interpret it as a valid\n>> + * FullTransactionId, neither in the future nor overlong in\n>> + * the past. Stores the inferred FullTransactionId in *fxid.\n>> + *\n>> + * Returns whether the xid is newer than the oldest clog xid.\n>> + */\n>> +bool\n>> +TransactionIdStillValid(TransactionId xid, FullTransactionId *fxid)\n> \n> I don't at all like the naming of this function. This isn't a reliable\n> check. As before, it obviously also shouldn't be static.\n\nRenamed and refactored.\n\n>> +{\n>> +\tFullTransactionId fnow;\n>> +\tuint32\t\tepoch;\n>> +\n>> +\t/* Initialize fxid; we'll overwrite this later if needed */\n>> +\t*fxid = FullTransactionIdFromEpochAndXid(0, xid);\n> \n>> +\t/* Special xids can quickly be turned into invalid fxids */\n>> +\tif (!TransactionIdIsValid(xid))\n>> +\t\treturn false;\n>> +\tif (!TransactionIdIsNormal(xid))\n>> +\t\treturn true;\n>> +\n>> +\t/*\n>> +\t * Charitably infer the full transaction id as being within one epoch ago\n>> +\t */\n>> +\tfnow = ReadNextFullTransactionId();\n>> +\tepoch = EpochFromFullTransactionId(fnow);\n>> +\t*fxid = FullTransactionIdFromEpochAndXid(epoch, xid);\n> \n> So now you're overwriting the fxid value from above unconditionally?\n> \n> \n>> +\tif (!FullTransactionIdPrecedes(*fxid, fnow))\n>> +\t\t*fxid = FullTransactionIdFromEpochAndXid(epoch - 1, xid);\n> \n> \n> I think it'd be better to do the conversion the following way:\n> \n> *fxid 
= FullTransactionIdFromU64(U64FromFullTransactionId(fnow)\n> + (int32) (XidFromFullTransactionId(fnow) - xid));\n\nThis has been refactored to the point that these review comments cannot be directly replied to.\n\n>> +\tif (!FullTransactionIdPrecedes(*fxid, fnow))\n>> +\t\treturn false;\n>> +\t/* The oldestClogXid is protected by CLogTruncationLock */\n>> +\tAssert(LWLockHeldByMe(CLogTruncationLock));\n>> +\tif (TransactionIdPrecedes(xid, ShmemVariableCache->oldestClogXid))\n>> +\t\treturn false;\n>> +\treturn true;\n>> +}\n> \n> Why is this testing oldestClogXid instead of oldestXid?\n\nReferences to clog have been refactored out of this module.\n\n>> +/*\n>> + * HeapTupleIsVisible\n>> + *\n>> + *\tDetermine whether tuples are visible for heapcheck. Similar to\n>> + * HeapTupleSatisfiesVacuum, but with critical differences.\n>> + *\n>> + * 1) Does not touch hint bits. It seems imprudent to write hint bits\n>> + * to a table during a corruption check.\n>> + * 2) Gracefully handles xids that are too old by calling\n>> + * TransactionIdStillValid before TransactionLogFetch, thus avoiding\n>> + * a backend abort.\n> \n> I think it'd be better to protect against this by avoiding checks for\n> xids that are older than relfrozenxid. And ones that are newer than\n> ReadNextTransactionId(). But all of those cases should be errors\n> anyway, so it doesn't seem like that should be handled within the\n> visibility routine.\n\nThe new implementation caches a range of expected xids. With the relation locked against concurrent vacuum runs, it can trust that the old end of the range won't move during the course of the scan. 
The newest end may move, but it only has to check for that when it encounters a newer than expected xid, and it updates the cache with the new maximum.\n\n> \n>> + * 3) Only makes a boolean determination of whether heapcheck should\n>> + * see the tuple, rather than doing extra work for vacuum-related\n>> + * categorization.\n>> + */\n>> +bool\n>> +HeapTupleIsVisible(HeapTupleHeader tuphdr, HeapCheckContext * ctx)\n>> +{\n> \n>> +\tFullTransactionId fxmin,\n>> +\t\t\t\tfxmax;\n>> +\tuint16\t\tinfomask = tuphdr->t_infomask;\n>> +\tTransactionId xmin = HeapTupleHeaderGetXmin(tuphdr);\n>> +\n>> +\tif (!HeapTupleHeaderXminCommitted(tuphdr))\n>> +\t{\n> \n> Hm. I wonder if it'd be good to crosscheck the xid committed hint bits\n> with clog?\n\nThis is not done in v3, as it no longer checks clog.\n\n>> +\t\telse if (!TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuphdr)))\n>> +\t\t{\n>> +\t\t\tLWLockRelease(CLogTruncationLock);\n>> +\t\t\treturn false;\t\t/* HEAPTUPLE_DEAD */\n>> +\t\t}\n> \n> Note that this actually can error out, if xmin is a subtransaction xid,\n> because pg_subtrans is truncated a lot more aggressively than anything\n> else. 
I think you'd need to filter against subtransactions older than\n> RecentXmin before here, and treat that as an error.\n\nCalls to TransactionIdDidCommit are now preceded by checks that the xid argument is not too old.\n\n>> +\tif (!(infomask & HEAP_XMAX_INVALID) && !HEAP_XMAX_IS_LOCKED_ONLY(infomask))\n>> +\t{\n>> +\t\tif (infomask & HEAP_XMAX_IS_MULTI)\n>> +\t\t{\n>> +\t\t\tTransactionId xmax = HeapTupleGetUpdateXid(tuphdr);\n>> +\n>> +\t\t\t/* not LOCKED_ONLY, so it has to have an xmax */\n>> +\t\t\tif (!TransactionIdIsValid(xmax))\n>> +\t\t\t{\n>> +\t\t\t\trecord_corruption(ctx, _(\"heap tuple with XMAX_IS_MULTI is \"\n>> +\t\t\t\t\t\t\t\t\t\t \"neither LOCKED_ONLY nor has a \"\n>> +\t\t\t\t\t\t\t\t\t\t \"valid xmax\"));\n>> +\t\t\t\treturn false;\n>> +\t\t\t}\n> \n> I think it's bad to have code like this in a routine that's named like a\n> generic visibility check routine.\n\nRenamed.\n\n>> +\t\t\tif (TransactionIdIsInProgress(xmax))\n>> +\t\t\t\treturn false;\t/* HEAPTUPLE_DELETE_IN_PROGRESS */\n>> +\n>> +\t\t\tLWLockAcquire(CLogTruncationLock, LW_SHARED);\n>> +\t\t\tif (!TransactionIdStillValid(xmax, &fxmax))\n>> +\t\t\t{\n>> +\t\t\t\tLWLockRelease(CLogTruncationLock);\n>> +\t\t\t\trecord_corruption(ctx, psprintf(\"tuple xmax = %u (interpreted \"\n>> +\t\t\t\t\t\t\t\t\t\t\t\t\"as \" UINT64_FORMAT\n>> +\t\t\t\t\t\t\t\t\t\t\t\t\") not or no longer valid\",\n>> +\t\t\t\t\t\t\t\t\t\t\t\txmax, fxmax.value));\n>> +\t\t\t\treturn false;\n>> +\t\t\t}\n>> +\t\t\telse if (TransactionIdDidCommit(xmax))\n>> +\t\t\t{\n>> +\t\t\t\tLWLockRelease(CLogTruncationLock);\n>> +\t\t\t\treturn false;\t/* HEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD */\n>> +\t\t\t}\n>> +\t\t\tLWLockRelease(CLogTruncationLock);\n>> +\t\t\t/* Ok, the tuple is live */\n> \n> I don't think random interspersed uses of CLogTruncationLock are a good\n> idea. 
If you move to only checking visibility after tuple fits into\n> [relfrozenxid, nextXid), then you don't need to take any locks here, as\n> long as a lock against vacuum is taken (which I think this should do\n> anyway).\n\nDone.\n\n>> +/*\n>> + * check_tuple\n>> + *\n>> + * Checks the current tuple as tracked in ctx for corruption. Records any\n>> + * corruption found in ctx->corruption.\n>> + *\n>> + * The caller should have iterated to a tuple via pageTupleIteration_next.\n>> + */\n>> +void\n>> +check_tuple(HeapCheckContext * ctx)\n>> +{\n>> +\tbool\t\tfatal = false;\n> \n> Wait, aren't some checks here duplicate with ones in\n> HeapTupleIsVisible()?\n\nYeah, there was some overlap. That should be better now.\n\n>> +\t/* Check relminmxid against mxid, if any */\n>> +\tif (ctx->infomask & HEAP_XMAX_IS_MULTI &&\n>> +\t\tMultiXactIdPrecedes(ctx->xmax, ctx->relminmxid))\n>> +\t{\n>> +\t\trecord_corruption(ctx, psprintf(\"tuple xmax = %u precedes relation \"\n>> +\t\t\t\t\t\t\t\t\t\t\"relminmxid = %u\",\n>> +\t\t\t\t\t\t\t\t\t\tctx->xmax, ctx->relminmxid));\n>> +\t}\n> \n> It's pretty weird that the routines here access xmin/xmax/... via\n> HeapCheckContext, but HeapTupleIsVisible() doesn't.\n\nFair point. 
HeapCheckContext no longer has fields for xmin/xmax after the refactoring.\n\n>> +\t/* Check xmin against relfrozenxid */\n>> +\tif (TransactionIdIsNormal(ctx->relfrozenxid) &&\n>> +\t\tTransactionIdIsNormal(ctx->xmin) &&\n>> +\t\tTransactionIdPrecedes(ctx->xmin, ctx->relfrozenxid))\n>> +\t{\n>> +\t\trecord_corruption(ctx, psprintf(\"tuple xmin = %u precedes relation \"\n>> +\t\t\t\t\t\t\t\t\t\t\"relfrozenxid = %u\",\n>> +\t\t\t\t\t\t\t\t\t\tctx->xmin, ctx->relfrozenxid));\n>> +\t}\n>> +\n>> +\t/* Check xmax against relfrozenxid */\n>> +\tif (TransactionIdIsNormal(ctx->relfrozenxid) &&\n>> +\t\tTransactionIdIsNormal(ctx->xmax) &&\n>> +\t\tTransactionIdPrecedes(ctx->xmax, ctx->relfrozenxid))\n>> +\t{\n>> +\t\trecord_corruption(ctx, psprintf(\"tuple xmax = %u precedes relation \"\n>> +\t\t\t\t\t\t\t\t\t\t\"relfrozenxid = %u\",\n>> +\t\t\t\t\t\t\t\t\t\tctx->xmax, ctx->relfrozenxid));\n>> +\t}\n> \n> these all should be fatal. You definitely cannot just continue\n> afterwards given the justification below:\n\nThey are now fatal.\n\n>> +\t/*\n>> +\t * Iterate over the attributes looking for broken toast values. 
This\n>> +\t * roughly follows the logic of heap_deform_tuple, except that it doesn't\n>> +\t * bother building up isnull[] and values[] arrays, since nobody wants\n>> +\t * them, and it unrolls anything that might trip over an Assert when\n>> +\t * processing corrupt data.\n>> +\t */\n>> +\tbeginTupleAttributeIteration(ctx);\n>> +\twhile (tupleAttributeIteration_next(ctx) &&\n>> +\t\t check_tuple_attribute(ctx))\n>> +\t\t;\n>> +\tendTupleAttributeIteration(ctx);\n>> +}\n> \n> I really don't find these helpers helpful.\n\nRemoved.\n\n>> +/*\n>> + * check_relation\n>> + *\n>> + * Checks the relation given by relid for corruption, returning a list of all\n>> + * it finds.\n>> + *\n>> + * The caller should set up the memory context as desired before calling.\n>> + * The returned list belongs to the caller.\n>> + */\n>> +List *\n>> +check_relation(Oid relid)\n>> +{\n>> +\tHeapCheckContext ctx;\n>> +\n>> +\tmemset(&ctx, 0, sizeof(HeapCheckContext));\n>> +\n>> +\t/* Open the relation */\n>> +\tctx.relid = relid;\n>> +\tctx.corruption = NIL;\n>> +\tctx.rel = relation_open(relid, AccessShareLock);\n> \n> I think you need to protect at least against concurrent schema changes\n> given some of your checks. But I think it'd be better to also conflict\n> with vacuum here.\n\nThe relation is now opened with ShareUpdateExclusiveLock.\n\n> \n>> +\tcheck_relation_relkind(ctx.rel);\n> \n> I think you also need to ensure that the table is actually using heap\n> AM, not another tableam. Oh - you're doing that inside the check. But\n> that's confusing, because that's not 'relkind'.\n\nIt is checking both relkind and relam. 
The function has been renamed to reflect that.\n\n>> +\tctx.relDesc = RelationGetDescr(ctx.rel);\n>> +\tctx.rel_natts = RelationGetDescr(ctx.rel)->natts;\n>> +\tctx.relfrozenxid = ctx.rel->rd_rel->relfrozenxid;\n>> +\tctx.relminmxid = ctx.rel->rd_rel->relminmxid;\n> \n> three naming schemes in three lines...\n\nFixed.\n\n>> +\t/* check all blocks of the relation */\n>> +\tbeginRelBlockIteration(&ctx);\n>> +\twhile (relBlockIteration_next(&ctx))\n>> +\t{\n>> +\t\t/* Perform tuple checks */\n>> +\t\tbeginPageTupleIteration(&ctx);\n>> +\t\twhile (pageTupleIteration_next(&ctx))\n>> +\t\t\tcheck_tuple(&ctx);\n>> +\t\tendPageTupleIteration(&ctx);\n>> +\t}\n>> +\tendRelBlockIteration(&ctx);\n> \n> I again do not find this helper stuff helpful.\n\nRemoved.\n\n>> +\t/* Close the associated toast table and indexes, if any. */\n>> +\tif (ctx.has_toastrel)\n>> +\t{\n>> +\t\ttoast_close_indexes(ctx.toast_indexes, ctx.num_toast_indexes,\n>> +\t\t\t\t\t\t\tAccessShareLock);\n>> +\t\ttable_close(ctx.toastrel, AccessShareLock);\n>> +\t}\n>> +\n>> +\t/* Close the main relation */\n>> +\trelation_close(ctx.rel, AccessShareLock);\n> \n> Why the closing here?\n\nAs opposed to where...? It seems fairly standard to close the relation in the function where it was opened. Do you prefer that the relation not be closed? Or that it be closed but the lock retained?\n\n>> +# This regression test demonstrates that the heapcheck_relation() function\n>> +# supplied with this contrib module correctly identifies specific kinds of\n>> +# corruption within pages. To test this, we need a mechanism to create corrupt\n>> +# pages with predictable, repeatable corruption. 
The postgres backend cannot be\n>> +# expected to help us with this, as its design is not consistent with the goal\n>> +# of intentionally corrupting pages.\n>> +#\n>> +# Instead, we create a table to corrupt, and with careful consideration of how\n>> +# postgresql lays out heap pages, we seek to offsets within the page and\n>> +# overwrite deliberately chosen bytes with specific values calculated to\n>> +# corrupt the page in expected ways. We then verify that heapcheck_relation\n>> +# reports the corruption, and that it runs without crashing. Note that the\n>> +# backend cannot simply be started to run queries against the corrupt table, as\n>> +# the backend will crash, at least for some of the corruption types we\n>> +# generate.\n>> +#\n>> +# Autovacuum potentially touching the table in the background makes the exact\n>> +# behavior of this test harder to reason about. We turn it off to keep things\n>> +# simpler. We use a \"belt and suspenders\" approach, turning it off for the\n>> +# system generally in postgresql.conf, and turning it off specifically for the\n>> +# test table.\n>> +#\n>> +# This test depends on the table being written to the heap file exactly as we\n>> +# expect it to be, so we take care to arrange the columns of the table, and\n>> +# insert rows of the table, that give predictable sizes and locations within\n>> +# the table page.\n> \n> I have a hard time believing this is going to be really\n> reliable. E.g. the alignment requirements will vary between platforms,\n> leading to different layouts. In particular, MAXALIGN differs between\n> platforms.\n> \n> Also, it's supported to compile postgres with a different pagesize.\n\nIt's simple enough to extend the tap test a little to check for those things. In v3, the tap test skips tests if the page size is not 8k, and also if the tuples do not fall on the page where expected (which would happen due to alignment issues, gremlins, or whatever.). There are other approaches, though. 
The HeapFile/HeapPage/HeapTuple perl modules recently submitted on another thread *could* be used here, but only if those modules are likely to be committed. This test *could* be extended to autodetect the page size and alignment issues and calculate at runtime where tuples will be on the page, but only if folks don't mind the test having that extra complexity in it. (There is a school of thought that regression tests should avoid excess complexity.). Do you have a recommendation about which way to go with this?\n\nHere is the work thus far:\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 22 Apr 2020 19:43:38 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": ">> I wonder if a mode where heapcheck optionally would only checks\n>> non-frozen (perhaps also non-all-visible) regions of a table would be a\n>> good idea?\n\nVersion 4 of this patch now includes boolean options skip_all_frozen and skip_all_visible.\n\n>> Would make it a lot more viable to run this regularly on\n>> bigger databases. Even if there's a window to not check some data\n>> (because it's frozen before the next heapcheck run).\n\nDo you think it would make sense to have the amcheck contrib module have, in addition to the SQL queriable functions, a bgworker based mode that periodically checks your database? 
The work along those lines is not included in v4, but if it were part of v5, would you have specific design preferences?\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 29 Apr 2020 09:30:28 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Apr 29, 2020 at 12:30 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Do you think it would make sense to have the amcheck contrib module have, in addition to the SQL queriable functions, a bgworker based mode that periodically checks your database? The work along those lines is not included in v4, but if it were part of v5, would you have specific design preferences?\n\n-1 on that idea from me. That sounds like it's basically building\n\"cron\" into PostgreSQL, but in a way that can only be used by amcheck.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Apr 2020 13:28:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Apr 22, 2020 at 10:43 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> It's simple enough to extend the tap test a little to check for those things. In v3, the tap test skips tests if the page size is not 8k, and also if the tuples do not fall on the page where expected (which would happen due to alignment issues, gremlins, or whatever.).\n\nSkipping the test if the tuple isn't in the expected location sounds\nreally bad. That will just lead to the tests passing without actually\ndoing anything. If the tuple isn't in the expected location, the tests\nshould fail.\n\n> There are other approaches, though. 
The HeapFile/HeapPage/HeapTuple perl modules recently submitted on another thread *could* be used here, but only if those modules are likely to be committed.\n\nYeah, I don't know if we want that stuff or not.\n\n> This test *could* be extended to autodetect the page size and alignment issues and calculate at runtime where tuples will be on the page, but only if folks don't mind the test having that extra complexity in it. (There is a school of thought that regression tests should avoid excess complexity.). Do you have a recommendation about which way to go with this?\n\nHow much extra complexity are we talking about? It feels to me like\nfor a heap page, the only things that are going to affect the position\nof the tuples on the page -- supposing we know the tuple size -- are\nthe page size and, I think, MAXALIGN, and that doesn't sound too bad.\nAnother possibility is to use pageinspect's heap_page_items() to\ndetermine the position within the page (lp_off), which seems like it\nmight simplify things considerably. Then, we're entirely relying on\nthe backend to tell us where the tuples are, and we only need to worry\nabout the offsets relative to the start of the tuple.\n\nI kind of like that approach, because it doesn't involve having Perl\ncode that knows how heap pages are laid out; we rely entirely on the C\ncode for that. 
I'm not sure if it'd be a problem to have a TAP test\nfor one contrib module that uses another contrib module, but maybe\nthere's some way to figure that problem out.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Apr 2020 14:41:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Apr 29, 2020 at 12:30 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Version 4 of this patch now includes boolean options skip_all_frozen and skip_all_visible.\n\nI'm not sure sure, but maybe there should just be one argument with\nthree possible values, because skip_all_frozen = true and\nskip_all_visible = false seems nonsensical. On the other hand, if we\nused a text argument with three possible values, I'm not sure what\nwe'd call the argument or what strings we'd use as the values.\n\nAlso, what do people -- either those who have already responded, or\nothers -- think about the idea of putting a command-line tool around\nthis? I know that there were some rumblings about this in respect to\npg_verifybackup, but I think a pg_amcheck binary would be\nwell-received. It could do some interesting things, too. For instance,\nit could query pg_class for a list of relations that amcheck would\nknow how to check, and then issue a separate query for each relation,\nwhich would avoid holding a snapshot or heavyweight locks across the\nwhole operation. 
It could do parallelism across relations by opening\nmultiple connections, or even within a single relation if -- as I\nthink would be a good idea -- we extended heapcheck to take a range of\nblock numbers after the style of pg_prewarm.\n\nApart from allowing for client-driven parallelism, accepting block\nnumber ranges would have the advantage -- IMHO pretty significant --\nof making it far easier to use this on a relation where some blocks\nare entirely unreadable. You could specify ranges to check out the\nremaining blocks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Apr 2020 14:56:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Apr 29, 2020, at 11:41 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Apr 22, 2020 at 10:43 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> It's simple enough to extend the tap test a little to check for those things. In v3, the tap test skips tests if the page size is not 8k, and also if the tuples do not fall on the page where expected (which would happen due to alignment issues, gremlins, or whatever.).\n> \n> Skipping the test if the tuple isn't in the expected location sounds\n> really bad. That will just lead to the tests passing without actually\n> doing anything. If the tuple isn't in the expected location, the tests\n> should fail.\n> \n>> There are other approaches, though. The HeapFile/HeapPage/HeapTuple perl modules recently submitted on another thread *could* be used here, but only if those modules are likely to be committed.\n> \n> Yeah, I don't know if we want that stuff or not.\n> \n>> This test *could* be extended to autodetect the page size and alignment issues and calculate at runtime where tuples will be on the page, but only if folks don't mind the test having that extra complexity in it. 
(There is a school of thought that regression tests should avoid excess complexity.). Do you have a recommendation about which way to go with this?\n> \n> How much extra complexity are we talking about?\n\nThe page size is easy to query, and the test already does so, skipping if the answer isn't 8k. The test could recalculate offsets based on the pagesize rather than skipping the test easily enough, but the MAXALIGN stuff is a little harder. I don't know (perhaps someone would share?) how to easily query that from within a perl test. So the test could guess all possible alignments that occur in the real world, read from the page at the offset that alignment would create, and check if the expected datum is there. The test would have to be careful to avoid false positives, by placing data before and after the datum being checked with bit patterns that cannot be misinterpreted as a match. That level of complexity seems unappealing, at least to me. It's not hard to write, but maintaining stuff like that is an unwelcome burden.\n\n> It feels to me like\n> for a heap page, the only things that are going to affect the position\n> of the tuples on the page -- supposing we know the tuple size -- are\n> the page size and, I think, MAXALIGN, and that doesn't sound too bad.\n> Another possibility is to use pageinspect's heap_page_items() to\n> determine the position within the page (lp_off), which seems like it\n> might simplify things considerably. Then, we're entirely relying on\n> the backend to tell us where the tuples are, and we only need to worry\n> about the offsets relative to the start of the tuple.\n> \n> I kind of like that approach, because it doesn't involve having Perl\n> code that knows how heap pages are laid out; we rely entirely on the C\n> code for that. 
I'm not sure if it'd be a problem to have a TAP test\n> for one contrib module that uses another contrib module, but maybe\n> there's some way to figure that problem out.\n\nYeah, I'll give this a try.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 29 Apr 2020 12:06:54 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Here is v5 of the patch. Major changes in this version include:\n\n1) A new module, pg_amcheck, which includes a command line client for checking a database or subset of a database. Internally it functions by querying the database for a list of tables which are appropriate given the command line switches, and then calls amcheck's functions to validate each table and/or index. The options for selecting/excluding tables and schemas is patterned on pg_dump, on the assumption that interface is already familiar to users.\n\n2) amcheck's btree checking functions have been refactored to be able to operate in two modes; the original mode in which all errors are reported via ereport, and a new mode for returning errors as rows from a set returning function. The new mode is used by a new function verify_btreeam(), analogous to verify_heapam(), both of which are used by the pg_amcheck command line tool.\n\n3) The regression test which generates corruption within a table uses the pageinspect module to determine the location of each tuple on disk for corrupting. This was suggested upthread.\n\nTesting on the command line shows that the pre-existing btree checking code could use some hardening, as it currently crashes the backend on certain corruptions. When I corrupt relation files for tables and indexes in the backend and then use pg_amcheck to check all objects in the database, I keep getting assertions from the btree checking code. 
I think I need to harden this code, but wanted to post an updated patch and solicit opinions before doing so. Here are some example problems I'm seeing. Note the stack trace when calling from the command line tool includes the new verify_btreeam function, but you can get the same crashes using the old interface via psql:\n\nFrom psql, first error:\n\ntest=# select bt_index_parent_check('corrupted_idx', true, true);\nTRAP: FailedAssertion(\"_bt_check_natts(rel, key->heapkeyspace, page, offnum)\", File: \"nbtsearch.c\", Line: 663)\n0 postgres 0x0000000106872977 ExceptionalCondition + 103\n1 postgres 0x00000001063a33e2 _bt_compare + 1090\n2 amcheck.so 0x0000000106d62921 bt_target_page_check + 6033\n3 amcheck.so 0x0000000106d5fd2f bt_index_check_internal + 2847\n4 amcheck.so 0x0000000106d60433 bt_index_parent_check + 67\n5 postgres 0x00000001064d6762 ExecInterpExpr + 1634\n6 postgres 0x000000010650d071 ExecResult + 321\n7 postgres 0x00000001064ddc3d standard_ExecutorRun + 301\n8 postgres 0x00000001066600c5 PortalRunSelect + 389\n9 postgres 0x000000010665fc7f PortalRun + 527\n10 postgres 0x000000010665ed59 exec_simple_query + 1641\n11 postgres 0x000000010665c99d PostgresMain + 3661\n12 postgres 0x00000001065d6a8a BackendRun + 410\n13 postgres 0x00000001065d61c4 ServerLoop + 3044\n14 postgres 0x00000001065d2fe9 PostmasterMain + 3769\n15 postgres 0x000000010652e3b0 help + 0\n16 libdyld.dylib 0x00007fff6725fcc9 start + 1\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. 
Attempting reset: 2020-05-11 10:11:47.394 PDT [41091] LOG: server process (PID 41309) was terminated by signal 6: Abort trap: 6\n\n\n\nFrom commandline, second error:\n\npgtest % pg_amcheck -i test \n(relname=corrupted,blkno=0,offnum=16,lp_off=7680,lp_flags=1,lp_len=31,attnum=,chunk=)\ntuple xmin = 3289393 is in the future\n(relname=corrupted,blkno=0,offnum=17,lp_off=7648,lp_flags=1,lp_len=31,attnum=,chunk=)\ntuple xmax = 0 precedes relation relminmxid = 1\n(relname=corrupted,blkno=0,offnum=17,lp_off=7648,lp_flags=1,lp_len=31,attnum=,chunk=)\ntuple xmin = 12593 is in the future\n(relname=corrupted,blkno=0,offnum=17,lp_off=7648,lp_flags=1,lp_len=31,attnum=,chunk=)\n\n<snip>\n\n(relname=corrupted,blkno=107,offnum=20,lp_off=7392,lp_flags=1,lp_len=34,attnum=,chunk=)\ntuple xmin = 306 precedes relation relfrozenxid = 487\n(relname=corrupted,blkno=107,offnum=22,lp_off=7312,lp_flags=1,lp_len=34,attnum=,chunk=)\ntuple xmax = 0 precedes relation relminmxid = 1\n(relname=corrupted,blkno=107,offnum=22,lp_off=7312,lp_flags=1,lp_len=34,attnum=,chunk=)\ntuple xmin = 305 precedes relation relfrozenxid = 487\n(relname=corrupted,blkno=107,offnum=22,lp_off=7312,lp_flags=1,lp_len=34,attnum=,chunk=)\nt_hoff > lp_len (54 > 34)\n(relname=corrupted,blkno=107,offnum=22,lp_off=7312,lp_flags=1,lp_len=34,attnum=,chunk=)\nt_hoff not max-aligned (54)\nTRAP: FailedAssertion(\"TransactionIdIsValid(xmax)\", File: \"heapam_visibility.c\", Line: 1319)\n0 postgres 0x0000000105b22977 ExceptionalCondition + 103\n1 postgres 0x0000000105636e86 HeapTupleSatisfiesVacuum + 1158\n2 postgres 0x0000000105634aa1 heapam_index_build_range_scan + 1089\n3 amcheck.so 0x00000001060100f3 bt_index_check_internal + 3811\n4 amcheck.so 0x000000010601057c verify_btreeam + 316\n5 postgres 0x0000000105796266 ExecMakeTableFunctionResult + 422\n6 postgres 0x00000001057a8c35 FunctionNext + 101\n7 postgres 0x00000001057bbf3e ExecNestLoop + 478\n8 postgres 0x000000010578dc3d standard_ExecutorRun + 301\n9 postgres 
0x00000001059100c5 PortalRunSelect + 389\n10 postgres 0x000000010590fc7f PortalRun + 527\n11 postgres 0x000000010590ed59 exec_simple_query + 1641\n12 postgres 0x000000010590c99d PostgresMain + 3661\n13 postgres 0x0000000105886a8a BackendRun + 410\n14 postgres 0x00000001058861c4 ServerLoop + 3044\n15 postgres 0x0000000105882fe9 PostmasterMain + 3769\n16 postgres 0x00000001057de3b0 help + 0\n17 libdyld.dylib 0x00007fff6725fcc9 start + 1\npg_amcheck: error: query failed: server closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 11 May 2020 10:21:22 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, May 11, 2020 at 10:21 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> 2) amcheck's btree checking functions have been refactored to be able to operate in two modes; the original mode in which all errors are reported via ereport, and a new mode for returning errors as rows from a set returning function.\n\nSomebody suggested that I make amcheck work in this way during its\ninitial development. I rejected that idea at the time, though. It\nseems hard to make it work because the B-Tree index scan is a logical\norder index scan. It's quite possible that a corrupt index will have\ncircular sibling links, and things like that. Making everything an\nerror removes that concern. There are clearly some failures that we\ncould just soldier on from, but the distinction gets rather blurred.\n\nI understand why you want to do it this way. It makes sense that the\nheap stuff would report all inconsistencies together, at the end. I\ndon't think that that's really workable (or even desirable) in the\ncase of B-Tree indexes, though. 
When an index is corrupt, the solution\nis always to do root cause analysis, to make sure that the issue does\nnot recur, and then to REINDEX. There isn't really a question about\ndoing data recovery of the index structure.\n\nWould it be possible to log the first B-Tree inconsistency, and then\nmove on to the next high-level phase of verification? You don't have\nto throw an error, but it seems like a good idea for amcheck to still\ngive up on further verification of the index.\n\nThe assertion failure that you reported happens because of a generic\nassertion made from _bt_compare(). It doesn't have anything to do with\namcheck (you'll see the same thing from regular index scans), really.\nI think that removing that assertion would be the opposite of\nhardening. Even if you removed it, the backend will still crash once\nyou come up with a slightly more evil index tuple. Maybe *that* could\nbe mostly avoided with widespread hardening; we could in principle\nperform cross-checks of varlena headers against the tuple or page\nlayout at any point reachable from _bt_compare(). That seems like\nsomething that would have unacceptable overhead, because the cost\nwould be imposed on everything. And even then you've only ameliorated\nthe problem.\n\nCode like amcheck's PageGetItemIdCareful() goes further than the\nequivalent backend macro (PageGetItemId()) to avoid assertion failures\nand crashes with corrupt data. I doubt that it is practical to take it\nmuch further than that, though. It's subject to diminishing returns.\nIn general, _bt_compare() calls user-defined code that is usually\nwritten in C. This C code could in principle feel entitled to do any\nnumber of scary things when you corrupt the input data. 
The amcheck\nmodule's dependency on user-defined operator code is totally\nunavoidable -- it is the single source of truth for the nbtree checks.\n\nIt boils down to this: I think that regression tests that run on the\nbuildfarm and actually corrupt data are not practical, at least in the\ncase of the index checks -- though probably in all cases. Look at the\npageinspect \"btree.out\" test output file -- it's very limited, because\nwe have to work around a bunch of implementation details. It's no\naccident that the bt_page_items() test shows a palindrome value in the\ndata column (the value is \"01 00 00 00 00 00 00 01\"). That's an\nendianness workaround.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 12 May 2020 17:34:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On May 12, 2020, at 5:34 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Mon, May 11, 2020 at 10:21 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> 2) amcheck's btree checking functions have been refactored to be able to operate in two modes; the original mode in which all errors are reported via ereport, and a new mode for returning errors as rows from a set returning function.\n\nThank you yet again for reviewing. I really appreciate the feedback!\n\n> Somebody suggested that I make amcheck work in this way during its\n> initial development. I rejected that idea at the time, though. It\n> seems hard to make it work because the B-Tree index scan is a logical\n> order index scan. It's quite possible that a corrupt index will have\n> circular sibling links, and things like that. Making everything an\n> error removes that concern. There are clearly some failures that we\n> could just soldier on from, but the distinction gets rather blurred.\n\nOk, I take your point that the code cannot soldier on after the first error is returned. 
I'll change that for v6 of the patch, moving on to the next relation after hitting the first corruption in any particular index. Do you mind that I refactored the code to return the error rather than ereporting? If it offends your sensibilities, I could rip that back out, at the expense of having to use try/catch logic in some other places. I prefer to avoid the try/catch stuff, but I'm not going to put up a huge fuss.\n\n> I understand why you want to do it this way. It makes sense that the\n> heap stuff would report all inconsistencies together, at the end. I\n> don't think that that's really workable (or even desirable) in the\n> case of B-Tree indexes, though. When an index is corrupt, the solution\n> is always to do root cause analysis, to make sure that the issue does\n> not recur, and then to REINDEX. There isn't really a question about\n> doing data recovery of the index structure.\n\nYes, I agree that reindexing is the most sensible remedy. I certainly have no plans to implement some pg_fsck_index type tool. Even for tables, I'm not interested in creating such a tool. I just want a good tool for finding out what the nature of the corruption is, as that might make it easier to debug what went wrong. It's not just for debugging production systems, but also for chasing down problems in half-baked code prior to release.\n\n> Would it be possible to log the first B-Tree inconsistency, and then\n> move on to the next high-level phase of verification? You don't have\n> to throw an error, but it seems like a good idea for amcheck to still\n> give up on further verification of the index.\n\nOk, good, it sounds like we're converging on the same idea. I'm happy to do so.\n\n> The assertion failure that you reported happens because of a generic\n> assertion made from _bt_compare(). It doesn't have anything to do with\n> amcheck (you'll see the same thing from regular index scans), really.\n\nOh, I know that already. I could see that easily enough in the backtrace. 
But if you look at the way I implemented verify_heapam, you might notice this:\n\n/*\n * check_tuphdr_xids\n *\n * Determine whether tuples are visible for verification. Similar to\n * HeapTupleSatisfiesVacuum, but with critical differences.\n *\n * 1) Does not touch hint bits. It seems imprudent to write hint bits\n * to a table during a corruption check.\n * 2) Only makes a boolean determination of whether verification should\n * see the tuple, rather than doing extra work for vacuum-related\n * categorization.\n *\n * The caller should already have checked that xmin and xmax are not out of\n * bounds for the relation.\n */\n\nThe point is that when checking the table for corruption I avoid calling anything that might assert (or segfault, or whatever). I was talking about refactoring the btree checking code to be similarly careful.\n\n> I think that removing that assertion would be the opposite of\n> hardening. Even if you removed it, the backend will still crash once\n> you come up with a slightly more evil index tuple. Maybe *that* could\n> be mostly avoided with widespread hardening; we could in principle\n> perform cross-checks of varlena headers against the tuple or page\n> layout at any point reachable from _bt_compare(). That seems like\n> something that would have unacceptable overhead, because the cost\n> would be imposed on everything. And even then you've only ameliorated\n> the problem.\n\nI think we may have different mental models of how this all works in practice. I am (or was) envisioning that the backend, during regular table and index scans, cannot afford to check for corruption at all steps along the way, and therefore does not, but that a corruption checking tool has a fundamentally different purpose, and can and should choose to operate in a way that won't blow up when checking a corrupt relation. It's the difference between a car designed to drive down the highway at high speed vs. 
a military vehicle designed to drive over a minefield with a guy on the front bumper scanning for landmines, the whole while going half a mile an hour.\n\nI'm starting to infer from your comments that you see the landmine detection vehicle as also driving at high speed, detecting landmines on occasion by seeing them first, but frequently by failing to see them and just blowing up.\n\n> Code like amcheck's PageGetItemIdCareful() goes further than the\n> equivalent backend macro (PageGetItemId()) to avoid assertion failures\n> and crashes with corrupt data. I doubt that it is practical to take it\n> much further than that, though. It's subject to diminishing returns.\n\nOk.\n\n> In general, _bt_compare() calls user-defined code that is usually\n> written in C. This C code could in principle feel entitled to do any\n> number of scary things when you corrupt the input data. The amcheck\n> module's dependency on user-defined operator code is totally\n> unavoidable -- it is the single source of truth for the nbtree checks.\n\nI don't really understand this argument, since users with buggy user defined operators are not the target audience, but I also don't think there is any point in arguing it, since I'm already resolved to take your advice about not hardening the btree stuff any further.\n\n> It boils down to this: I think that regression tests that run on the\n> buildfarm and actually corrupt data are not practical, at least in the\n> case of the index checks -- though probably in all cases. Look at the\n> pageinspect \"btree.out\" test output file -- it's very limited, because\n> we have to work around a bunch of implementation details. It's no\n> accident that the bt_page_items() test shows a palindrome value in the\n> data column (the value is \"01 00 00 00 00 00 00 01\"). 
That's an\n> endianness workaround.\n\nOne of the delays in submitting the most recent version of the patch is that I was having trouble creating a reliable, portable btree corrupting regression test. Ultimately, I submitted v5 without any btree corrupting regression test, as it proved pretty difficult to write one good enough for submission, and I had already put a couple more days into developing v5 than I had intended. So I can't argue too much with your point here.\n\nI did however address (some?) issues that you and others mentioned about the table corrupting regression test. Perhaps there are remaining issues that will show up on machines with different endianness than I have thus far tested, but I don't see that they will be insurmountable. Are you fundamentally opposed to that test framework? If you're going to vote against committing the patch with that test, I'll back down and just remove it from the patch, but it doesn't seem like a bad regression test to me.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 12 May 2020 19:07:46 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, May 12, 2020 at 7:07 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Thank you yet again for reviewing. I really appreciate the feedback!\n\nHappy to help. It's important work.\n\n> Ok, I take your point that the code cannot soldier on after the first error is returned. I'll change that for v6 of the patch, moving on to the next relation after hitting the first corruption in any particular index. Do you mind that I refactored the code to return the error rather than ereporting?\n\ntry/catch seems like the way to do it. Not all amcheck errors come\nfrom amcheck -- some are things that the backend code does, that are\nknown to appear in amcheck from time to time. 
I'm thinking in\nparticular of the\ntable_index_build_scan()/heapam_index_build_range_scan() errors, as\nwell as the errors from _bt_checkpage().\n\n> Yes, I agree that reindexing is the most sensible remedy. I certainly have no plans to implement some pg_fsck_index type tool. Even for tables, I'm not interested in creating such a tool. I just want a good tool for finding out what the nature of the corruption is, as that might make it easier to debug what went wrong. It's not just for debugging production systems, but also for chasing down problems in half-baked code prior to release.\n\nAll good goals.\n\n> * check_tuphdr_xids\n\n> The point is that when checking the table for corruption I avoid calling anything that might assert (or segfault, or whatever).\n\nI don't think that you can expect to avoid assertion failures in\ngeneral. I'll stick with your example. You're calling\nTransactionIdDidCommit() from check_tuphdr_xids(), which will\ninterrogate the commit log and pg_subtrans. It's just not under your\ncontrol. I'm sure that you could get an assertion failure somewhere in\nthere, and even if you couldn't that could change at any time.\n\nYou've quasi-duplicated some sensitive code to do that much, which\nseems excessive. But it's also not enough.\n\n> I'm starting to infer from your comments that you see the landmine detection vehicle as also driving at high speed, detecting landmines on occasion by seeing them first, but frequently by failing to see them and just blowing up.\n\nThat's not it. I would certainly prefer if the landmine detector\ndidn't blow up. Not having that happen is certainly a goal I share --\nthat's why PageGetItemIdCareful() exists. But not at any cost,\nespecially not when \"blow up\" means an assertion failure that users\nwon't actually see in production. Avoiding assertion failures like the\none you showed is likely to have a high cost (removing defensive\nasserts in low level access method code) for a low benefit. 
Any\nattempt to avoid having the checker itself blow up rather than throw\nan error message needs to be assessed pragmatically, on a case-by-case\nbasis.\n\n> One of the delays in submitting the most recent version of the patch is that I was having trouble creating a reliable, portable btree corrupting regression test.\n\nTo be clear, I think that corrupting data is very helpful with ad-hoc\ntesting during development.\n\n> I did however address (some?) issues that you and others mentioned about the table corrupting regression test. Perhaps there are remaining issues that will show up on machines with different endianness than I have thus far tested, but I don't see that they will be insurmountable. Are you fundamentally opposed to that test framework?\n\nI haven't thought about it enough just yet, but I am certainly suspicious of it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 12 May 2020 20:05:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, May 12, 2020 at 11:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> try/catch seems like the way to do it. Not all amcheck errors come\n> from amcheck -- some are things that the backend code does, that are\n> known to appear in amcheck from time to time. I'm thinking in\n> particular of the\n> table_index_build_scan()/heapam_index_build_range_scan() errors, as\n> well as the errors from _bt_checkpage().\n\nThat would require the use of a subtransaction.\n\n> You've quasi-duplicated some sensitive code to do that much, which\n> seems excessive. But it's also not enough.\n\nI think this is a good summary of the problems in this area. On the\none hand, I think it's hideous that we sanity check user input to\ndeath, but blindly trust the bytes on disk to the point of seg\nfaulting if they're wrong. 
The idea that int4 + int4 has to have\noverflow checking because otherwise a user might be sad when they get\na negative result from adding two negative numbers, while at the same\ntime supposing that the same user will be unwilling to accept the\nperformance hit to avoid crashing if they have a bad tuple, is quite\nsuspect in my mind. The overflow checking is also expensive, but we do\nit because it's the right thing to do, and then we try to minimize the\noverhead. It is unclear to me why we shouldn't also take that approach\nwith bytes that come from disk. In particular, using Assert() checks\nfor such things instead of elog() is basically Assert(there is no such\nthing as a corrupted database).\n\nOn the other hand, that problem is clearly way above this patch's pay\ngrade. There's a lot of stuff all over the code base that would have\nto be changed to fix it. It can't be done as an incidental thing as\npart of this patch or any other. It's a massive effort unto itself. We\nneed to somehow draw a clean line between what this patch does and\nwhat it does not do, such that the scope of this patch remains\nsomething achievable. Otherwise, we'll end up with nothing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 13 May 2020 15:21:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, May 13, 2020 at 12:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think this is a good summary of the problems in this area. On the\n> one hand, I think it's hideous that we sanity check user input to\n> death, but blindly trust the bytes on disk to the point of seg\n> faulting if they're wrong. 
The idea that int4 + int4 has to have\n> overflow checking because otherwise a user might be sad when they get\n> a negative result from adding two negative numbers, while at the same\n> time supposing that the same user will be unwilling to accept the\n> performance hit to avoid crashing if they have a bad tuple, is quite\n> suspect in my mind. The overflow checking is also expensive, but we do\n> it because it's the right thing to do, and then we try to minimize the\n> overhead. It is unclear to me why we shouldn't also take that approach\n> with bytes that come from disk. In particular, using Assert() checks\n> for such things instead of elog() is basically Assert(there is no such\n> thing as a corrupted database).\n\nI think that it depends. It's nice to be able to add an Assert()\nwithout really having to worry about the overhead at all. I sometimes\ncall relatively expensive functions in assertions. For example, there\nis an assert that calls _bt_compare() within _bt_check_unique() that I\nadded at one point -- it caught a real bug a few weeks later. You\ncould always be doing more.\n\nIn general we don't exactly trust the bytes blindly. I've found that\ncorrupting tuples in a creative way with pg_hexedit doesn't usually\nresult in a segfault. Sometimes we'll do things like display NULL\nvalues when heap line pointers are corrupt, which isn't as good as an\nerror but is still okay. We ought to protect against Murphy, not\nMachiavelli. ISTM that access method code naturally evolves towards\navoiding the most disruptive errors in the event of real world\ncorruption, in particular avoiding segfaulting. It's very hard to\nprove that, though.\n\nDo you recall seeing corruption resulting in segfaults in production?\nI personally don't recall seeing that. If it happened, the segfaults\nthemselves probably wouldn't be the main concern.\n\n> On the other hand, that problem is clearly way above this patch's pay\n> grade. 
There's a lot of stuff all over the code base that would have\n> to be changed to fix it. It can't be done as an incidental thing as\n> part of this patch or any other. It's a massive effort unto itself. We\n> need to somehow draw a clean line between what this patch does and\n> what it does not do, such that the scope of this patch remains\n> something achievable. Otherwise, we'll end up with nothing.\n\nI can easily come up with an adversarial input that will segfault a\nbackend, even amcheck, but it'll be somewhat contrived. It's hard to\nfool amcheck currently because it doesn't exactly trust line pointers.\nBut I'm sure I could get the backend to segfault amcheck if I tried.\nI'd probably try to play around with varlena headers. It would require\na certain amount of craftiness.\n\nIt's not exactly clear where you draw the line here. And I don't think\nthat the line will be very clearly defined, in the end. It'll be\nsomething that is subject to change over time, as new information\ncomes to light. I think that it's necessary to accept a certain amount\nof ambiguity here.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 May 2020 14:33:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On 2020-May-12, Peter Geoghegan wrote:\n\n> > The point is that when checking the table for corruption I avoid\n> > calling anything that might assert (or segfault, or whatever).\n> \n> I don't think that you can expect to avoid assertion failures in\n> general.\n\nHmm. I think we should (try to?) write code that avoids all crashes\nwith production builds, but not extend that to assertion failures.\nSticking again with the provided example,\n\n> I'll stick with your example. You're calling\n> TransactionIdDidCommit() from check_tuphdr_xids(), which will\n> interrogate the commit log and pg_subtrans. 
It's just not under your\ncontrol.\n\nin a production build this would just fail with an error that the\npg_xact file cannot be found, which is fine -- if this happens in a\nproduction system, you're not disturbing any other sessions. Or maybe\nthe file is there and the byte can be read, in which case you would get\nthe correct response; but that's fine too.\n\nI don't know to what extent this is possible.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 13 May 2020 18:10:51 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, May 13, 2020 at 3:10 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Hmm. I think we should (try to?) write code that avoids all crashes\n> with production builds, but not extend that to assertion failures.\n\nAssertions are only a problem at all because Mark would like to write\ntests that involve a selection of truly corrupt data. That's a new\nrequirement, and one that I have my doubts about.\n\n> > I'll stick with your example. You're calling\n> > TransactionIdDidCommit() from check_tuphdr_xids(), which will\n> > interrogate the commit log and pg_subtrans. It's just not under your\n> > control.\n>\n> in a production build this would just fail with an error that the\n> pg_xact file cannot be found, which is fine -- if this happens in a\n> production system, you're not disturbing any other sessions. Or maybe\n> the file is there and the byte can be read, in which case you would get\n> the correct response; but that's fine too.\n\nI think that this is fine, too, since I don't consider assertion\nfailures with corrupt data all that important. 
I'd make some effort to\navoid it, but not too much, and not at the expense of a useful general\npurpose assertion that could catch bugs in many different contexts.\n\nI would be willing to make a larger effort to avoid crashing a\nbackend, since that affects production. I might go to some effort to\nnot crash with downright adversarial inputs, for example. But it seems\ninappropriate to take extreme measures just to avoid a crash with\nextremely contrived inputs that will probably never occur. My sense is\nthat this is subject to sharply diminishing returns. Completely\nnailing down hard crashes from corrupt data seems like the wrong\npriority, at the very least. Pursuing that objective over other\nobjectives sounds like zero-risk bias.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 May 2020 15:29:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On 2020-May-13, Peter Geoghegan wrote:\n\n> On Wed, May 13, 2020 at 3:10 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Hmm. I think we should (try to?) write code that avoids all crashes\n> > with production builds, but not extend that to assertion failures.\n> \n> Assertions are only a problem at all because Mark would like to write\n> tests that involve a selection of truly corrupt data. That's a new\n> requirement, and one that I have my doubts about.\n\nI agree that this (a test tool that exercises our code against\narbitrarily corrupted data pages) is not going to work as a test that\nall buildfarm members run -- it seems something for specialized\nbuildfarm members to run, or even something that's run outside of the\nbuildfarm, like sqlsmith. Obviously such a tool would not be able to\nrun against an assertion-enabled build, and we shouldn't even try.\n\n> I would be willing to make a larger effort to avoid crashing a\n> backend, since that affects production. 
I might go to some effort to\n> not crash with downright adversarial inputs, for example. But it seems\n> inappropriate to take extreme measures just to avoid a crash with\n> extremely contrived inputs that will probably never occur. My sense is\n> that this is subject to sharply diminishing returns. Completely\n> nailing down hard crashes from corrupt data seems like the wrong\n> priority, at the very least. Pursuing that objective over other\n> objectives sounds like zero-risk bias.\n\nI think my initial approach for this would be to use a fuzzing tool that\ngenerates data blocks semi-randomly, then uses them as Postgres data\npages somehow, and see what happens -- examine any resulting crashes and\nmake individual judgement calls about the fix(es) necessary to prevent\neach of them. I expect that many such pages would be rejected as\ncorrupt by page header checks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 13 May 2020 19:32:54 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, May 13, 2020 at 4:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I think my initial approach for this would be to use a fuzzing tool that\n> generates data blocks semi-randomly, then uses them as Postgres data\n> pages somehow, and see what happens -- examine any resulting crashes and\n> make individual judgement calls about the fix(es) necessary to prevent\n> each of them. I expect that many such pages would be rejected as\n> corrupt by page header checks.\n\nAs I mentioned in my response to Robert earlier, that's more or less\nbeen my experience with adversarial corruption generated using\npg_hexedit. Within nbtree, as well as heapam. 
I put a lot of work into\nthat tool, and have used it to simulate all kinds of weird scenarios.\nI've done things like corrupt individual tuple header fields, swap\nline pointers, create circular sibling links in indexes, corrupt\nvarlena headers, and corrupt line pointer flags/status bits. Postgres\nitself rarely segfaults, and amcheck will only segfault with a truly\ncontrived input.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 May 2020 17:01:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On May 13, 2020, at 3:29 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Wed, May 13, 2020 at 3:10 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> Hmm. I think we should (try to?) write code that avoids all crashes\n>> with production builds, but not extend that to assertion failures.\n> \n> Assertions are only a problem at all because Mark would like to write\n> tests that involve a selection of truly corrupt data. That's a new\n> requirement, and one that I have my doubts about.\n> \n>>> I'll stick with your example. You're calling\n>>> TransactionIdDidCommit() from check_tuphdr_xids(), which will\n>>> interrogate the commit log and pg_subtrans. It's just not under your\n>>> control.\n>> \n>> in a production build this would just fail with an error that the\n>> pg_xact file cannot be found, which is fine -- if this happens in a\n>> production system, you're not disturbing any other sessions. Or maybe\n>> the file is there and the byte can be read, in which case you would get\n>> the correct response; but that's fine too.\n> \n> I think that this is fine, too, since I don't consider assertion\n> failures with corrupt data all that important. I'd make some effort to\n> avoid it, but not too much, and not at the expense of a useful general\n> purpose assertion that could catch bugs in many different contexts.\n\nI am not removing any assertions. 
I do not propose to remove any assertions. When I talk about \"hardening against assertions\", that is not in any way a proposal to remove assertions from the code. What I'm talking about is writing the amcheck contrib module code in such a way that it only calls a function that could assert on bad data after checking that the data is not bad.\n\nI don't know that hardening against assertions in this manner is worth doing, but this is none the less what I'm talking about. You have made decent arguments that it probably isn't worth doing for the btree checking code. And in any event, it is probably something that could be addressed in a future patch after getting this patch committed.\n\nThere is a separate but related question in the offing about whether the backend code, independently of any amcheck contrib stuff, should be more paranoid in how it processes tuples to check for corruption. The heap deform tuple code in question is on a pretty hot code path, and I don't know that folks would accept the performance hit of more checks being done in that part of the system, but that's pretty far from relevant to this patch. That should be hashed out, or not, at some other time on some other thread.\n\n> I would be willing to make a larger effort to avoid crashing a\n> backend, since that affects production. I might go to some effort to\n> not crash with downright adversarial inputs, for example. But it seems\n> inappropriate to take extreme measures just to avoid a crash with\n> extremely contrived inputs that will probably never occur.\n\nI think this is a misrepresentation of the tests that I've been running. There are two kinds of tests that I have done:\n\nFirst, there is the regression tests, t/004_verify_heapam.pl, which is obviously contrived. 
That was included in the regression test suite because it needed to be something other developers could read, verify, \"yeah, I can see why that would be corruption, and would give an error message of the sort the test expects\", and then could be run to verify that indeed that expected error message was generated.\n\nThe second kind of corruption test I have been running is nothing more than writing random nonsense into randomly chosen locations within heap files and then running verify_heapam against those heap relations. It's much more Murphy than Machiavelli when it's just generated by calling random(). When I initially did this kind of testing, the heapam checking code had lots of problems. Now it doesn't. There's very little contrived about that which I can see. It's the kind of corruption you'd expect from any number of faulty storage systems. The one \"contrived\" aspect of my testing in this regard is that the script I use to write random nonsense to random locations in heap files is smart enough not to write random junk to the page headers. This is because if I corrupt the page headers, the backend never even gets as far as running the verify_heapam functions, as the page cache rejects loading the page.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 13 May 2020 17:18:40 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, May 13, 2020 at 5:18 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I am not removing any assertions. I do not propose to remove any assertions. When I talk about \"hardening against assertions\", that is not in any way a proposal to remove assertions from the code.\n\nI'm sorry if I seemed to suggest that you wanted to remove assertions,\nrather than test more things earlier. 
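The "write random nonsense at random offsets, but skip the page headers" procedure Mark describes above could be sketched roughly as follows. This is a hypothetical illustration only, not the actual script used for the testing discussed in this thread; the function name is made up, and the 24-byte header size is an assumption based on the standard PageHeaderData layout:

```python
import os
import random

BLCKSZ = 8192          # standard PostgreSQL block size
PAGE_HEADER_SIZE = 24  # assumed sizeof(PageHeaderData); not taken from the patch

def corrupt_heap_file(path, n_writes=100, seed=None):
    """Write random junk bytes at random offsets in a relation file,
    skipping the first PAGE_HEADER_SIZE bytes of every page so that the
    page header sanity checks still accept the page when it is read."""
    rng = random.Random(seed)
    n_pages = os.path.getsize(path) // BLCKSZ
    with open(path, 'r+b') as f:
        for _ in range(n_writes):
            page = rng.randrange(n_pages)
            # pick an offset strictly past the page header
            off = page * BLCKSZ + rng.randrange(PAGE_HEADER_SIZE, BLCKSZ)
            f.seek(off)
            f.write(bytes([rng.randrange(256)]))
```

With the server stopped (or the relation safely out of shared buffers), something like this would be aimed at a table's file under the data directory, after which verify_heapam is run against that table.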
I recognize that that could be a\nuseful thing to do, both in general, and maybe even in the specific\nexample you gave -- on general robustness grounds. At the same time,\nit's something that can only be taken so far. It's probably not going\nto make it practical to corrupt data in a regression test or tap test.\n\n> There is a separate but related question in the offing about whether the backend code, independently of any amcheck contrib stuff, should be more paranoid in how it processes tuples to check for corruption.\n\nI bet that there is something that we could do to be a bit more\ndefensive. Of course, we do a certain amount of that on general\nrobustness grounds already. A systematic review of that could be quite\nuseful. But as you point out, it's not really in scope here.\n\n> > I would be willing to make a larger effort to avoid crashing a\n> > backend, since that affects production. I might go to some effort to\n> > not crash with downright adversarial inputs, for example. But it seems\n> > inappropriate to take extreme measures just to avoid a crash with\n> > extremely contrived inputs that will probably never occur.\n>\n> I think this is a misrepresentation of the tests that I've been running.\n\nI didn't actually mean it that way, but I can see how my words could\nreasonably be interpreted that way. I apologize.\n\n> There are two kinds of tests that I have done:\n>\n> First, there is the regression tests, t/004_verify_heapam.pl, which is obviously contrived. That was included in the regression test suite because it needed to be something other developers could read, verify, \"yeah, I can see why that would be corruption, and would give an error message of the sort the test expects\", and then could be run to verify that indeed that expected error message was generated.\n\nI still don't think that this is necessary. 
It could work for one type\nof corruption, that happens to not have any of the problems, but just\ntesting that one type of corruption seems rather arbitrary to me.\n\n> The second kind of corruption test I have been running is nothing more than writing random nonsense into randomly chosen locations within heap files and then running verify_heapam against those heap relations. It's much more Murphy than Machiavelli when it's just generated by calling random().\n\nThat sounds like a good initial test case, to guide your intuitions\nabout how to make the feature robust.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 May 2020 17:36:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On May 13, 2020, at 5:36 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Wed, May 13, 2020 at 5:18 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I am not removing any assertions. I do not propose to remove any assertions. When I talk about \"hardening against assertions\", that is not in any way a proposal to remove assertions from the code.\n> \n> I'm sorry if I seemed to suggest that you wanted to remove assertions\n\nNot a problem at all. As always, I appreciate your involvement in this code and design review. \n\n\n>> I think this is a misrepresentation of the tests that I've been running.\n> \n> I didn't actually mean it that way, but I can see how my words could\n> reasonably be interpreted that way. I apologize.\n\nAgain, no worries. \n\n>> There are two kinds of tests that I have done:\n>> \n>> First, there is the regression tests, t/004_verify_heapam.pl, which is obviously contrived. 
That was included in the regression test suite because it needed to be something other developers could read, verify, \"yeah, I can see why that would be corruption, and would give an error message of the sort the test expects\", and then could be run to verify that indeed that expected error message was generated.\n> \n> I still don't think that this is necessary. It could work for one type\n> of corruption, that happens to not have any of the problems, but just\n> testing that one type of corruption seems rather arbitrary to me.\n\nAs discussed with Robert off list, this probably doesn't matter. The patch can be committed with or without this particular TAP test.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 14 May 2020 08:35:03 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, May 13, 2020 at 5:33 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Do you recall seeing corruption resulting in segfaults in production?\n\nI have seen that, I believe. I think it's more common to fail with\nerrors about not being able to palloc>1GB, not being able to look up\nan xid or mxid, etc. but I am pretty sure I've seen multiple cases\ninvolving seg faults, too. Unfortunately for my credibility, I can't\nremember the details right now.\n\n> I personally don't recall seeing that. If it happened, the segfaults\n> themselves probably wouldn't be the main concern.\n\nI don't really agree. Hypothetically speaking, suppose you corrupt\nyour only copy of a critical table in such a way that every time you\nselect from it, the system seg faults. A user in this situation might\nask questions like:\n\n1. How did my table get corrupted?\n2. Why do I only have one copy of it?\n3. 
How do I retrieve the non-corrupted portion of my data from that\ntable and get back up and running?\n\nIn the grand scheme of things, #1 and #2 are the most important\nquestions, but when something like this actually happens, #3 tends to\nbe the most urgent question, and it's a lot harder to get the\nuncorrupted data out if the system keeps crashing.\n\nAlso, a seg fault tends to lead customers to think that the database\nhas a bug, rather than that the database is corrupted.\n\nSlightly off-topic here, but I think our error reporting in this area\nis pretty lame. I've learned over the years that when a customer\nreports that they get a complaint about a too-large memory allocation\nevery time they access a table, they've probably got a corrupted\nvarlena header. However, that's extremely non-obvious to a typical\nuser. We should try to report errors indicative of corruption in a way\nthat gives the user some clue that corruption has happened. Peter made\na stab at improving things there by adding\nerrcode(ERRCODE_DATA_CORRUPTED) in a bunch of places, but a lot of\nusers will never see the error code, only the message, and a lot of\ncorruption still produces errors that weren't changed by that\ncommit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 May 2020 14:32:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, May 13, 2020 at 7:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I agree that this (a test tool that exercises our code against\n> arbitrarily corrupted data pages) is not going to work as a test that\n> all buildfarm members run -- it seems something for specialized\n> buildfarm members to run, or even something that's run outside of the\n> buildfarm, like sqlsmith. 
Obviously such a tool would not be able to\n> run against an assertion-enabled build, and we shouldn't even try.\n\nI have a question about what you mean here by \"arbitrarily.\"\n\nIf you mean that we shouldn't have the buildfarm run the proposed heap\ncorruption checker against heap pages full of randomly-generated\ngarbage, I tend to agree. Such a test wouldn't be very stable and\nmight fail in lots of low-probability ways that could require\nunreasonable effort to find and fix.\n\nIf you mean that we shouldn't have the buildfarm run the proposed heap\ncorruption checker against any corrupted heap pages at all, I tend to\ndisagree. If we did that, then we'd basically be releasing a heap\ncorruption checker with very limited test coverage. Like, we shouldn't\nonly have negative test cases, where the absence of corruption\nproduces no results. We should also have positive test cases, where\nthe thing finds some problem...\n\nAt least, that's what I think.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 May 2020 14:43:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, May 14, 2020 at 11:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I have seen that, I believe. I think it's more common to fail with\n> errors about not being able to palloc>1GB, not being able to look up\n> an xid or mxid, etc. but I am pretty sure I've seen multiple cases\n> involving seg faults, too. Unfortunately for my credibility, I can't\n> remember the details right now.\n\nI believe you, both in general, and also because what you're saying\nhere is plausible, even if it doesn't fit my own experience.\n\nCorruption is by its very nature exceptional. 
At least, if that isn't\ntrue then something must be seriously wrong, so the idea that it will\nbe different in some way each time seems like a good working\nassumption. Your exceptional cases are not necessarily the same as\nmine, especially where hardware problems are concerned. On the other\nhand, it's also possible for corruption that originates from very\ndifferent sources to exhibit the same basic inconsistencies and\nsymptoms.\n\nI've noticed that SLRU corruption is often a leading indicator of\ngeneral storage problems. The inconsistencies between certain SLRU\nstate and the heap happens to be far easier to notice in practice,\nparticularly when VACUUM runs. But it's not fundamentally different to\ninconsistencies from pages within one single main fork of some heap\nrelation.\n\n> > I personally don't recall seeing that. If it happened, the segfaults\n> > themselves probably wouldn't be the main concern.\n>\n> I don't really agree. Hypothetically speaking, suppose you corrupt\n> your only copy of a critical table in such a way that every time you\n> select from it, the system seg faults. A user in this situation might\n> ask questions like:\n\nI agree that that could be a problem. But that's not what I've seen\nhappen in production systems myself.\n\nMaybe there is some low hanging fruit here. Perhaps we can make the\nreal PageGetItemId() a little closer to PageGetItemIdCareful() without\nnoticeable overhead, as I suggested already. Are there any real\ngeneralizations that we can make about why backends segfault with\ncorrupt data? Maybe there is. That seems important.\n\n> Slightly off-topic here, but I think our error reporting in this area\n> is pretty lame. 
I've learned over the years that when a customer\n> reports that they get a complaint about a too-large memory allocation\n> every time they access a table, they've probably got a corrupted\n> varlena header.\n\nI certainly learned the same lesson in the same way.\n\n> However, that's extremely non-obvious to a typical\n> user. We should try to report errors indicative of corruption in a way\n> that gives the user some clue that corruption has happened. Peter made\n> a stab at improving things there by adding\n> errcode(ERRCODE_DATA_CORRUPTED) in a bunch of places, but a lot of\n> users will never see the error code, only the message, and a lot of\n> corruption still produces errors that weren't changed by that\n> commit.\n\nThe theory is that \"can't happen\" errors have an errcode that should\nbe considered similar to or equivalent to ERRCODE_DATA_CORRUPTED. I\ndoubt that it works out that way in practice, though.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 14 May 2020 12:03:57 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On 2020-May-14, Robert Haas wrote:\n\n> I have a question about what you mean here by \"arbitrarily.\"\n> \n> If you mean that we shouldn't have the buildfarm run the proposed heap\n> corruption checker against heap pages full of randomly-generated\n> garbage, I tend to agree. Such a test wouldn't be very stable and\n> might fail in lots of low-probability ways that could require\n> unreasonable effort to find and fix.\n\nThis is what I meant. 
I was thinking of blocks generated randomly.\n\n> If you mean that we shouldn't have the buildfarm run the proposed heap\n> corruption checker against any corrupted heap pages at all, I tend to\n> disagree.\n\nYeah, IMV those would not be arbitrarily corrupted -- instead they're\ncrafted to be corrupted in some specific way.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 May 2020 15:31:40 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-May-14, Robert Haas wrote:\n>> If you mean that we shouldn't have the buildfarm run the proposed heap\n>> corruption checker against heap pages full of randomly-generated\n>> garbage, I tend to agree. Such a test wouldn't be very stable and\n>> might fail in lots of low-probability ways that could require\n>> unreasonable effort to find and fix.\n\n> This is what I meant. I was thinking of blocks generated randomly.\n\nYeah, -1 for using random data --- when it fails, how you gonna\nreproduce the problem?\n\n>> If you mean that we shouldn't have the buildfarm run the proposed heap\n>> corruption checker against any corrupted heap pages at all, I tend to\n>> disagree.\n\n> Yeah, IMV those would not be arbitrarily corrupted -- instead they're\n> crafted to be corrupted in some specific way.\n\nI think there's definitely value in corrupting data in some predictable\n(reproducible) way and verifying that the check code catches it and\nresponds as expected. 
Sure, this will not be 100% coverage, but it'll be\na lot better than 0% coverage.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 May 2020 15:50:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On 2020-05-11 19:21, Mark Dilger wrote:\n> 1) A new module, pg_amcheck, which includes a command line client for checking a database or subset of a database. Internally it functions by querying the database for a list of tables which are appropriate given the command line switches, and then calls amcheck's functions to validate each table and/or index. The options for selecting/excluding tables and schemas is patterned on pg_dump, on the assumption that interface is already familiar to users.\n\nWhy is this useful over just using the extension's functions via psql?\n\nI suppose you could make an argument for a command-line wrapper around \nalmost every admin-focused contrib module (pageinspect, pg_prewarm, \npgstattuple, ...), but that doesn't seem very sensible.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 May 2020 22:02:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On May 14, 2020, at 1:02 PM, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-05-11 19:21, Mark Dilger wrote:\n>> 1) A new module, pg_amcheck, which includes a command line client for checking a database or subset of a database. Internally it functions by querying the database for a list of tables which are appropriate given the command line switches, and then calls amcheck's functions to validate each table and/or index. 
The options for selecting/excluding tables and schemas is patterned on pg_dump, on the assumption that interface is already familiar to users.\n> \n> Why is this useful over just using the extension's functions via psql?\n\nThe tool doesn't hold a single snapshot or transaction for the lifetime of checking the entire database. A future improvement to the tool might add parallelism. Users could do all of this in scripts, but having a single tool with the most commonly useful options avoids duplication of effort.\n \n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 14 May 2020 14:53:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, May 11, 2020 at 10:51 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> Here is v5 of the patch. Major changes in this version include:\n>\n> 1) A new module, pg_amcheck, which includes a command line client for checking a database or subset of a database. Internally it functions by querying the database for a list of tables which are appropriate given the command line switches, and then calls amcheck's functions to validate each table and/or index. The options for selecting/excluding tables and schemas is patterned on pg_dump, on the assumption that interface is already familiar to users.\n>\n> 2) amcheck's btree checking functions have been refactored to be able to operate in two modes; the original mode in which all errors are reported via ereport, and a new mode for returning errors as rows from a set returning function. The new mode is used by a new function verify_btreeam(), analogous to verify_heapam(), both of which are used by the pg_amcheck command line tool.\n>\n> 3) The regression test which generates corruption within a table uses the pageinspect module to determine the location of each tuple on disk for corrupting. 
This was suggested upthread.\n>\n> Testing on the command line shows that the pre-existing btree checking code could use some hardening, as it currently crashes the backend on certain corruptions. When I corrupt relation files for tables and indexes in the backend and then use pg_amcheck to check all objects in the database, I keep getting assertions from the btree checking code. I think I need to harden this code, but wanted to post an updated patch and solicit opinions before doing so. Here are some example problems I'm seeing. Note the stack trace when calling from the command line tool includes the new verify_btreeam function, but you can get the same crashes using the old interface via psql:\n>\n> From psql, first error:\n>\n> test=# select bt_index_parent_check('corrupted_idx', true, true);\n> TRAP: FailedAssertion(\"_bt_check_natts(rel, key->heapkeyspace, page, offnum)\", File: \"nbtsearch.c\", Line: 663)\n> 0 postgres 0x0000000106872977 ExceptionalCondition + 103\n> 1 postgres 0x00000001063a33e2 _bt_compare + 1090\n> 2 amcheck.so 0x0000000106d62921 bt_target_page_check + 6033\n> 3 amcheck.so 0x0000000106d5fd2f bt_index_check_internal + 2847\n> 4 amcheck.so 0x0000000106d60433 bt_index_parent_check + 67\n> 5 postgres 0x00000001064d6762 ExecInterpExpr + 1634\n> 6 postgres 0x000000010650d071 ExecResult + 321\n> 7 postgres 0x00000001064ddc3d standard_ExecutorRun + 301\n> 8 postgres 0x00000001066600c5 PortalRunSelect + 389\n> 9 postgres 0x000000010665fc7f PortalRun + 527\n> 10 postgres 0x000000010665ed59 exec_simple_query + 1641\n> 11 postgres 0x000000010665c99d PostgresMain + 3661\n> 12 postgres 0x00000001065d6a8a BackendRun + 410\n> 13 postgres 0x00000001065d61c4 ServerLoop + 3044\n> 14 postgres 0x00000001065d2fe9 PostmasterMain + 3769\n> 15 postgres 0x000000010652e3b0 help + 0\n> 16 libdyld.dylib 0x00007fff6725fcc9 start + 1\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the 
request.\n> The connection to the server was lost. Attempting reset: 2020-05-11 10:11:47.394 PDT [41091] LOG: server process (PID 41309) was terminated by signal 6: Abort trap: 6\n>\n>\n>\n> From commandline, second error:\n>\n> pgtest % pg_amcheck -i test\n> (relname=corrupted,blkno=0,offnum=16,lp_off=7680,lp_flags=1,lp_len=31,attnum=,chunk=)\n> tuple xmin = 3289393 is in the future\n> (relname=corrupted,blkno=0,offnum=17,lp_off=7648,lp_flags=1,lp_len=31,attnum=,chunk=)\n> tuple xmax = 0 precedes relation relminmxid = 1\n> (relname=corrupted,blkno=0,offnum=17,lp_off=7648,lp_flags=1,lp_len=31,attnum=,chunk=)\n> tuple xmin = 12593 is in the future\n> (relname=corrupted,blkno=0,offnum=17,lp_off=7648,lp_flags=1,lp_len=31,attnum=,chunk=)\n>\n> <snip>\n>\n> (relname=corrupted,blkno=107,offnum=20,lp_off=7392,lp_flags=1,lp_len=34,attnum=,chunk=)\n> tuple xmin = 306 precedes relation relfrozenxid = 487\n> (relname=corrupted,blkno=107,offnum=22,lp_off=7312,lp_flags=1,lp_len=34,attnum=,chunk=)\n> tuple xmax = 0 precedes relation relminmxid = 1\n> (relname=corrupted,blkno=107,offnum=22,lp_off=7312,lp_flags=1,lp_len=34,attnum=,chunk=)\n> tuple xmin = 305 precedes relation relfrozenxid = 487\n> (relname=corrupted,blkno=107,offnum=22,lp_off=7312,lp_flags=1,lp_len=34,attnum=,chunk=)\n> t_hoff > lp_len (54 > 34)\n> (relname=corrupted,blkno=107,offnum=22,lp_off=7312,lp_flags=1,lp_len=34,attnum=,chunk=)\n> t_hoff not max-aligned (54)\n> TRAP: FailedAssertion(\"TransactionIdIsValid(xmax)\", File: \"heapam_visibility.c\", Line: 1319)\n> 0 postgres 0x0000000105b22977 ExceptionalCondition + 103\n> 1 postgres 0x0000000105636e86 HeapTupleSatisfiesVacuum + 1158\n> 2 postgres 0x0000000105634aa1 heapam_index_build_range_scan + 1089\n> 3 amcheck.so 0x00000001060100f3 bt_index_check_internal + 3811\n> 4 amcheck.so 0x000000010601057c verify_btreeam + 316\n> 5 postgres 0x0000000105796266 ExecMakeTableFunctionResult + 422\n> 6 postgres 0x00000001057a8c35 FunctionNext + 101\n> 7 postgres 
0x00000001057bbf3e ExecNestLoop + 478\n> 8 postgres 0x000000010578dc3d standard_ExecutorRun + 301\n> 9 postgres 0x00000001059100c5 PortalRunSelect + 389\n> 10 postgres 0x000000010590fc7f PortalRun + 527\n> 11 postgres 0x000000010590ed59 exec_simple_query + 1641\n> 12 postgres 0x000000010590c99d PostgresMain + 3661\n> 13 postgres 0x0000000105886a8a BackendRun + 410\n> 14 postgres 0x00000001058861c4 ServerLoop + 3044\n> 15 postgres 0x0000000105882fe9 PostmasterMain + 3769\n> 16 postgres 0x00000001057de3b0 help + 0\n> 17 libdyld.dylib 0x00007fff6725fcc9 start + 1\n> pg_amcheck: error: query failed: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n\nI have just browsed through the patch and the idea is quite\ninteresting. I think we can expand it to check that whether the flags\nset in the infomask are sane or not w.r.t other flags and xid status.\nSome examples are\n\n- If HEAP_XMAX_LOCK_ONLY is set in infomask then HEAP_KEYS_UPDATED\nshould not be set in new_infomask2.\n- If HEAP_XMIN(XMAX)_COMMITTED is set in the infomask then can we\nactually cross verify the transaction status from the CLOG and check\nwhether is matching the hint bit or not.\n\nWhile browsing through the code I could not find that we are doing\nthis kind of check, ignore if we are already checking this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jun 2020 21:44:33 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On May 11, 2020, at 10:21 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> <v5-0001-Adding-verify_heapam-and-pg_amcheck.patch>\n\nRebased with some whitespace fixes, but otherwise unmodified from v5.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 11 
Jun 2020 11:52:06 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jun 11, 2020, at 9:14 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> \n> I have just browsed through the patch and the idea is quite\n> interesting. I think we can expand it to check that whether the flags\n> set in the infomask are sane or not w.r.t other flags and xid status.\n> Some examples are\n> \n> - If HEAP_XMAX_LOCK_ONLY is set in infomask then HEAP_KEYS_UPDATED\n> should not be set in new_infomask2.\n> - If HEAP_XMIN(XMAX)_COMMITTED is set in the infomask then can we\n> actually cross verify the transaction status from the CLOG and check\n> whether is matching the hint bit or not.\n> \n> While browsing through the code I could not find that we are doing\n> this kind of check, ignore if we are already checking this.\n\nThanks for taking a look!\n\nHaving both of those bits set simultaneously appears to fall into a different category than what I wrote verify_heapam.c to detect. It doesn't violate any assertion in the backend, nor does it cause the code to crash. (At least, I don't immediately see how it does either of those things.) At first glance it appears invalid to have those bits both set simultaneously, but I'm hesitant to enforce that without good reason. If it is a good thing to enforce, should we also change the backend code to Assert?\n\nI integrated your idea into one of the regression tests. It now sets these two bits in the header of one of the rows in a table. The verify_heapam check output (which includes all detected corruptions) does not change, which verifies your observation that verify_heapam is not checking for this. I've attached that as a patch to this email. 
Note that this patch should be applied atop the v6 patch recently posted in another email.\n\n\n\n \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 11 Jun 2020 12:10:06 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Fri, Jun 12, 2020 at 12:40 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jun 11, 2020, at 9:14 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have just browsed through the patch and the idea is quite\n> > interesting. I think we can expand it to check that whether the flags\n> > set in the infomask are sane or not w.r.t other flags and xid status.\n> > Some examples are\n> >\n> > - If HEAP_XMAX_LOCK_ONLY is set in infomask then HEAP_KEYS_UPDATED\n> > should not be set in new_infomask2.\n> > - If HEAP_XMIN(XMAX)_COMMITTED is set in the infomask then can we\n> > actually cross verify the transaction status from the CLOG and check\n> > whether is matching the hint bit or not.\n> >\n> > While browsing through the code I could not find that we are doing\n> > this kind of check, ignore if we are already checking this.\n>\n> Thanks for taking a look!\n>\n> Having both of those bits set simultaneously appears to fall into a different category than what I wrote verify_heapam.c to detect.\n\nOk\n\n> It doesn't violate any assertion in the backend, nor does it cause\n> the code to crash. (At least, I don't immediately see how it does\n> either of those things.) At first glance it appears invalid to have\n> those bits both set simultaneously, but I'm hesitant to enforce that\n> without good reason. If it is a good thing to enforce, should we also\n> change the backend code to Assert?\n\nYeah, it may not hit assert or crash but it could lead to a wrong\nresult. But I agree that it could be an assertion in the backend\ncode. 
What about the other check, like hint bit is saying the\ntransaction is committed but actually as per the clog the status is\nsomething else. I think in general processing it is hard to check\nsuch things in backend no? because if the hint bit is set saying that\nthe transaction is committed then we will directly check its\nvisibility with the snapshot. I think a corruption checker may be a\ngood tool for catching such anomalies.\n\n> I integrated your idea into one of the regression tests. It now sets these two bits in the header of one of the rows in a table. The verify_heapam check output (which includes all detected corruptions) does not change, which verifies your observation that verify_heapam is not checking for this. I've attached that as a patch to this email. Note that this patch should be applied atop the v6 patch recently posted in another email.\n\nOk.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jun 2020 12:05:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jun 11, 2020, at 11:35 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> \n> On Fri, Jun 12, 2020 at 12:40 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> \n>> \n>> \n>>> On Jun 11, 2020, at 9:14 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>> \n>>> I have just browsed through the patch and the idea is quite\n>>> interesting. 
I think we can expand it to check that whether the flags\n>>> set in the infomask are sane or not w.r.t other flags and xid status.\n>>> Some examples are\n>>> \n>>> - If HEAP_XMAX_LOCK_ONLY is set in infomask then HEAP_KEYS_UPDATED\n>>> should not be set in new_infomask2.\n>>> - If HEAP_XMIN(XMAX)_COMMITTED is set in the infomask then can we\n>>> actually cross verify the transaction status from the CLOG and check\n>>> whether is matching the hint bit or not.\n>>> \n>>> While browsing through the code I could not find that we are doing\n>>> this kind of check, ignore if we are already checking this.\n>> \n>> Thanks for taking a look!\n>> \n>> Having both of those bits set simultaneously appears to fall into a different category than what I wrote verify_heapam.c to detect.\n> \n> Ok\n> \n> \n>> It doesn't violate any assertion in the backend, nor does it cause\n>> the code to crash. (At least, I don't immediately see how it does\n>> either of those things.) At first glance it appears invalid to have\n>> those bits both set simultaneously, but I'm hesitant to enforce that\n>> without good reason. If it is a good thing to enforce, should we also\n>> change the backend code to Assert?\n> \n> Yeah, it may not hit assert or crash but it could lead to a wrong\n> result. But I agree that it could be an assertion in the backend\n> code. \n\nFor v7, I've added an assertion for this. Per heap/README.tuplock, \"We currently never set the HEAP_XMAX_COMMITTED when the HEAP_XMAX_IS_MULTI bit is set.\" I added an assertion for that, too. Both new assertions are in RelationPutHeapTuple(). 
I'm not sure if that is the best place to put the assertion, but I am confident that the assertion needs to only check tuples destined for disk, as in memory tuples can and do violate the assertion.\n\nAlso for v7, I've updated contrib/amcheck to report these two conditions as corruption.\n\n> What about the other check, like hint bit is saying the\n> transaction is committed but actually as per the clog the status is\n> something else. I think in general processing it is hard to check\n> such things in backend no? because if the hint bit is set saying that\n> the transaction is committed then we will directly check its\n> visibility with the snapshot. I think a corruption checker may be a\n> good tool for catching such anomalies.\n\nI already made some design changes to this patch to avoid taking the CLogTruncationLock too often. I'm happy to incorporate this idea, but perhaps you could provide a design on how to do it without all the extra locking? If not, I can try to get this into v8 as an optional check, so users can turn it on at their discretion. 
Having the check enabled by default is probably a non-starter.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 12 Jun 2020 14:06:18 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On 2020-06-12 23:06, Mark Dilger wrote:\n\n> [v7-0001-Adding-verify_heapam-and-pg_amcheck.patch]\n> [v7-0002-Adding-checks-o...ations-of-hint-bit.patch]\n\nI came across these typos in the sgml:\n\n--exclude-scheam should be\n--exclude-schema\n\n<option>table</option> should be\n<option>--table</option>\n\n\nI found this connection problem (or perhaps it is as designed):\n\n$ env | grep ^PG\nPGPORT=6965\nPGPASSFILE=/home/aardvark/.pg_aardvark\nPGDATABASE=testdb\nPGDATA=/home/aardvark/pg_stuff/pg_installations/pgsql.amcheck/data\n\n-- just to show that psql is connecting (via $PGPASSFILE and $PGPORT and \n$PGDATABASE):\n-- and showing a table t that I made earlier\n\n$ psql\nSET\nTiming is on.\npsql (14devel_amcheck_0612_2f48)\nType \"help\" for help.\n\ntestdb=# \\dt+ t\n List of relations\n Schema | Name | Type | Owner | Persistence | Size | Description\n--------+------+-------+----------+-------------+--------+-------------\n public | t | table | aardvark | permanent | 346 MB |\n(1 row)\n\ntestdb=# \\q\n\nI think this should work:\n\n$ pg_amcheck -i -t t\npg_amcheck: error: no matching tables were found\n\nIt seems a bug that I have to add '-d testdb':\n\nThis works OK:\npg_amcheck -i -t t -d testdb\n\nIs that error as expected?\n\n\nthanks,\n\nErik Rijkers\n\n\n", "msg_date": "Sat, 13 Jun 2020 23:13:03 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module (typos)" }, { "msg_contents": "> On Jun 13, 2020, at 2:13 PM, Erik Rijkers <er@xs4all.nl> wrote:\n\nThanks for the review!\n\n> On 2020-06-12 23:06, Mark Dilger wrote:\n> \n>> 
[v7-0001-Adding-verify_heapam-and-pg_amcheck.patch]\n>> [v7-0002-Adding-checks-o...ations-of-hint-bit.patch]\n> \n> I came across these typos in the sgml:\n> \n> --exclude-scheam should be\n> --exclude-schema\n> \n> <option>table</option> should be\n> <option>--table</option>\n\nYeah, I agree and have made these changes for v8.\n\n> I found this connection problem (or perhaps it is as designed):\n> \n> $ env | grep ^PG\n> PGPORT=6965\n> PGPASSFILE=/home/aardvark/.pg_aardvark\n> PGDATABASE=testdb\n> PGDATA=/home/aardvark/pg_stuff/pg_installations/pgsql.amcheck/data\n> \n> -- just to show that psql is connecting (via $PGPASSFILE and $PGPORT and $PGDATABASE):\n> -- and showing a table t that I made earlier\n> \n> $ psql\n> SET\n> Timing is on.\n> psql (14devel_amcheck_0612_2f48)\n> Type \"help\" for help.\n> \n> testdb=# \\dt+ t\n> List of relations\n> Schema | Name | Type | Owner | Persistence | Size | Description\n> --------+------+-------+----------+-------------+--------+-------------\n> public | t | table | aardvark | permanent | 346 MB |\n> (1 row)\n> \n> testdb=# \\q\n> \n> I think this should work:\n> \n> $ pg_amcheck -i -t t\n> pg_amcheck: error: no matching tables were found\n> \n> It seems a bug that I have to add '-d testdb':\n> \n> This works OK:\n> pg_amcheck -i -t t -d testdb\n> \n> Is that error as expected?\n\nIt was expected, but looking more broadly at other tools, your expectations seem to be more typical. I've changed it in v8. 
Thanks again for having a look at this patch!\n\nNote that I've merge the two patches (v7-0001 and v7-0002) back into a single patch, since the separation introduced in v7 was only for illustration of changes in v7.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 13 Jun 2020 15:11:42 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module (typos)" }, { "msg_contents": "On Sat, Jun 13, 2020 at 2:36 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jun 11, 2020, at 11:35 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Jun 12, 2020 at 12:40 AM Mark Dilger\n> > <mark.dilger@enterprisedb.com> wrote:\n> >>\n> >>\n> >>\n> >>> On Jun 11, 2020, at 9:14 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>>\n> >>> I have just browsed through the patch and the idea is quite\n> >>> interesting. I think we can expand it to check that whether the flags\n> >>> set in the infomask are sane or not w.r.t other flags and xid status.\n> >>> Some examples are\n> >>>\n> >>> - If HEAP_XMAX_LOCK_ONLY is set in infomask then HEAP_KEYS_UPDATED\n> >>> should not be set in new_infomask2.\n> >>> - If HEAP_XMIN(XMAX)_COMMITTED is set in the infomask then can we\n> >>> actually cross verify the transaction status from the CLOG and check\n> >>> whether is matching the hint bit or not.\n> >>>\n> >>> While browsing through the code I could not find that we are doing\n> >>> this kind of check, ignore if we are already checking this.\n> >>\n> >> Thanks for taking a look!\n> >>\n> >> Having both of those bits set simultaneously appears to fall into a different category than what I wrote verify_heapam.c to detect.\n> >\n> > Ok\n> >\n> >\n> >> It doesn't violate any assertion in the backend, nor does it cause\n> >> the code to crash. (At least, I don't immediately see how it does\n> >> either of those things.) 
At first glance it appears invalid to have\n> >> those bits both set simultaneously, but I'm hesitant to enforce that\n> >> without good reason. If it is a good thing to enforce, should we also\n> >> change the backend code to Assert?\n> >\n> > Yeah, it may not hit assert or crash but it could lead to a wrong\n> > result. But I agree that it could be an assertion in the backend\n> > code.\n>\n> For v7, I've added an assertion for this. Per heap/README.tuplock, \"We currently never set the HEAP_XMAX_COMMITTED when the HEAP_XMAX_IS_MULTI bit is set.\" I added an assertion for that, too. Both new assertions are in RelationPutHeapTuple(). I'm not sure if that is the best place to put the assertion, but I am confident that the assertion needs to only check tuples destined for disk, as in memory tuples can and do violate the assertion.\n>\n> Also for v7, I've updated contrib/amcheck to report these two conditions as corruption.\n>\n> > What about the other check, like hint bit is saying the\n> > transaction is committed but actually as per the clog the status is\n> > something else. I think in general processing it is hard to check\n> > such things in backend no? because if the hint bit is set saying that\n> > the transaction is committed then we will directly check its\n> > visibility with the snapshot. I think a corruption checker may be a\n> > good tool for catching such anomalies.\n>\n> I already made some design changes to this patch to avoid taking the CLogTruncationLock too often. I'm happy to incorporate this idea, but perhaps you could provide a design on how to do it without all the extra locking? If not, I can try to get this into v8 as an optional check, so users can turn it on at their discretion. 
Having the check enabled by default is probably a non-starter.\n\nOkay, even I can't think of a way to do it without extra locking.\n\nI have looked into the 0001 patch and I have a few comments.\n\n1.\n+\n+ /* Skip over unused/dead/redirected line pointers */\n+ if (!ItemIdIsUsed(ctx.itemid) ||\n+ ItemIdIsDead(ctx.itemid) ||\n+ ItemIdIsRedirected(ctx.itemid))\n+ continue;\n\nIsn't it a good idea to verify the redirected ItemId? Because we\nwill still access the redirected item id to find the\nactual tuple from the index scan. Maybe not exactly at this level,\nbut we can verify that the link itemid stored in it\nis within the itemid range of the page.\n\n2.\n\n+ /* Check for tuple header corruption */\n+ if (ctx->tuphdr->t_hoff < SizeofHeapTupleHeader)\n+ {\n+ confess(ctx,\n+ psprintf(\"t_hoff < SizeofHeapTupleHeader (%u < %u)\",\n+ ctx->tuphdr->t_hoff,\n+ (unsigned) SizeofHeapTupleHeader));\n+ fatal = true;\n+ }\n\nI think we can also check that if there are no NULL attributes\n(i.e. !(t_infomask & HEAP_HASNULL)), then\nctx->tuphdr->t_hoff should be equal to SizeofHeapTupleHeader.\n\n\n3.\n+ ctx->offset = 0;\n+ for (ctx->attnum = 0; ctx->attnum < ctx->natts; ctx->attnum++)\n+ {\n+ if (!check_tuple_attribute(ctx))\n+ break;\n+ }\n+ ctx->offset = -1;\n+ ctx->attnum = -1;\n\nSo we first set ctx->offset to 0, then inside\ncheck_tuple_attribute we keep updating the offset as we process\nthe attributes, and after the loop is over we set ctx->offset to -1. I\ndid not understand why we need to reset it to -1; do we ever\ncheck for that.
We don't even initialize the ctx->offset to -1 while\ninitializing the context for the tuple so I do not understand what is\nthe meaning of the random value -1.\n\n4.\n+ if (!VARATT_IS_EXTENDED(chunk))\n+ {\n+ chunksize = VARSIZE(chunk) - VARHDRSZ;\n+ chunkdata = VARDATA(chunk);\n+ }\n+ else if (VARATT_IS_SHORT(chunk))\n+ {\n+ /*\n+ * could happen due to heap_form_tuple doing its thing\n+ */\n+ chunksize = VARSIZE_SHORT(chunk) - VARHDRSZ_SHORT;\n+ chunkdata = VARDATA_SHORT(chunk);\n+ }\n+ else\n+ {\n+ /* should never happen */\n+ confess(ctx,\n+ pstrdup(\"toast chunk is neither short nor extended\"));\n+ return;\n+ }\n\nI think the error message \"toast chunk is neither short nor extended\".\nBecause ideally, the toast chunk should not be further toasted.\nSo I think the check is correct, but the error message is not correct.\n\n5.\n\n+ ctx.rel = relation_open(relid, ShareUpdateExclusiveLock);\n+ check_relation_relkind_and_relam(ctx.rel);\n+\n+ /*\n+ * Open the toast relation, if any, also protected from concurrent\n+ * vacuums.\n+ */\n+ if (ctx.rel->rd_rel->reltoastrelid)\n+ {\n+ int offset;\n+\n+ /* Main relation has associated toast relation */\n+ ctx.toastrel = table_open(ctx.rel->rd_rel->reltoastrelid,\n+ ShareUpdateExclusiveLock);\n+ offset = toast_open_indexes(ctx.toastrel,\n....\n+ if (TransactionIdIsNormal(ctx.relfrozenxid) &&\n+ TransactionIdPrecedes(ctx.relfrozenxid, ctx.oldestValidXid))\n+ {\n+ confess(&ctx, psprintf(\"relfrozenxid %u precedes global \"\n+ \"oldest valid xid %u \",\n+ ctx.relfrozenxid, ctx.oldestValidXid));\n+ PG_RETURN_NULL();\n+ }\n\nDon't we need to close the relation/toastrel/toastindexrel in such\nreturn which is without an abort? 
IIRC, we\nwill get relcache leak WARNING on commit if we left them open in commit path.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 21 Jun 2020 15:24:42 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jun 21, 2020, at 2:54 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> \n> I have looked into 0001 patch and I have a few comments.\n> \n> 1.\n> +\n> + /* Skip over unused/dead/redirected line pointers */\n> + if (!ItemIdIsUsed(ctx.itemid) ||\n> + ItemIdIsDead(ctx.itemid) ||\n> + ItemIdIsRedirected(ctx.itemid))\n> + continue;\n> \n> Isn't it a good idea to verify the Redirected Itemtid? Because we\n> will still access the redirected item id to find the\n> actual tuple from the index scan. Maybe not exactly at this level,\n> but we can verify that the link itemid store in that\n> is within the itemid range of the page or not.\n\nGood idea. 
I've added checks that the redirection is valid, both in terms of being within bounds and in terms of alignment.\n\n> 2.\n> \n> + /* Check for tuple header corruption */\n> + if (ctx->tuphdr->t_hoff < SizeofHeapTupleHeader)\n> + {\n> + confess(ctx,\n> + psprintf(\"t_hoff < SizeofHeapTupleHeader (%u < %u)\",\n> + ctx->tuphdr->t_hoff,\n> + (unsigned) SizeofHeapTupleHeader));\n> + fatal = true;\n> + }\n> \n> I think we can also check that if there is no NULL attributes (if\n> (!(t_infomask & HEAP_HASNULL)) then\n> ctx->tuphdr->t_hoff should be equal to SizeofHeapTupleHeader.\n\nYou have to take alignment padding into account, but otherwise yes, and I've added a check for that.\n\n> 3.\n> + ctx->offset = 0;\n> + for (ctx->attnum = 0; ctx->attnum < ctx->natts; ctx->attnum++)\n> + {\n> + if (!check_tuple_attribute(ctx))\n> + break;\n> + }\n> + ctx->offset = -1;\n> + ctx->attnum = -1;\n> \n> So we are first setting ctx->offset to 0, then inside\n> check_tuple_attribute, we will keep updating the offset as we process\n> the attributes and after the loop is over we set ctx->offset to -1, I\n> did not understand that why we need to reset it to -1, do we ever\n> check for that. We don't even initialize the ctx->offset to -1 while\n> initializing the context for the tuple so I do not understand what is\n> the meaning of the random value -1.\n\nAhh, right, those are left over from a previous design of the code. Thanks for pointing them out. 
They are now removed.\n\n> 4.\n> + if (!VARATT_IS_EXTENDED(chunk))\n> + {\n> + chunksize = VARSIZE(chunk) - VARHDRSZ;\n> + chunkdata = VARDATA(chunk);\n> + }\n> + else if (VARATT_IS_SHORT(chunk))\n> + {\n> + /*\n> + * could happen due to heap_form_tuple doing its thing\n> + */\n> + chunksize = VARSIZE_SHORT(chunk) - VARHDRSZ_SHORT;\n> + chunkdata = VARDATA_SHORT(chunk);\n> + }\n> + else\n> + {\n> + /* should never happen */\n> + confess(ctx,\n> + pstrdup(\"toast chunk is neither short nor extended\"));\n> + return;\n> + }\n> \n> I think the error message \"toast chunk is neither short nor extended\".\n> Because ideally, the toast chunk should not be further toasted.\n> So I think the check is correct, but the error message is not correct.\n\nI agree the error message was wrongly stated, and I've changed it, but you might suggest a better wording than what I came up with, \"corrupt toast chunk va_header\".\n\n> 5.\n> \n> + ctx.rel = relation_open(relid, ShareUpdateExclusiveLock);\n> + check_relation_relkind_and_relam(ctx.rel);\n> +\n> + /*\n> + * Open the toast relation, if any, also protected from concurrent\n> + * vacuums.\n> + */\n> + if (ctx.rel->rd_rel->reltoastrelid)\n> + {\n> + int offset;\n> +\n> + /* Main relation has associated toast relation */\n> + ctx.toastrel = table_open(ctx.rel->rd_rel->reltoastrelid,\n> + ShareUpdateExclusiveLock);\n> + offset = toast_open_indexes(ctx.toastrel,\n> ....\n> + if (TransactionIdIsNormal(ctx.relfrozenxid) &&\n> + TransactionIdPrecedes(ctx.relfrozenxid, ctx.oldestValidXid))\n> + {\n> + confess(&ctx, psprintf(\"relfrozenxid %u precedes global \"\n> + \"oldest valid xid %u \",\n> + ctx.relfrozenxid, ctx.oldestValidXid));\n> + PG_RETURN_NULL();\n> + }\n> \n> Don't we need to close the relation/toastrel/toastindexrel in such\n> return which is without an abort? 
IIRC, we\n> will get relcache leak WARNING on commit if we left them open in commit path.\n\nOk, I've added logic to close them.\n\nAll changes inspired by your review are included in the v9-0001 patch. The differences since v8 are pulled out into v9_diffs for easier review.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 21 Jun 2020 17:14:39 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Jun 22, 2020 at 5:44 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jun 21, 2020, at 2:54 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have looked into 0001 patch and I have a few comments.\n> >\n> > 1.\n> > +\n> > + /* Skip over unused/dead/redirected line pointers */\n> > + if (!ItemIdIsUsed(ctx.itemid) ||\n> > + ItemIdIsDead(ctx.itemid) ||\n> > + ItemIdIsRedirected(ctx.itemid))\n> > + continue;\n> >\n> > Isn't it a good idea to verify the Redirected Itemtid? Because we\n> > will still access the redirected item id to find the\n> > actual tuple from the index scan. Maybe not exactly at this level,\n> > but we can verify that the link itemid store in that\n> > is within the itemid range of the page or not.\n>\n> Good idea. 
I've added checks that the redirection is valid, both in terms of being within bounds and in terms of alignment.\n>\n> > 2.\n> >\n> > + /* Check for tuple header corruption */\n> > + if (ctx->tuphdr->t_hoff < SizeofHeapTupleHeader)\n> > + {\n> > + confess(ctx,\n> > + psprintf(\"t_hoff < SizeofHeapTupleHeader (%u < %u)\",\n> > + ctx->tuphdr->t_hoff,\n> > + (unsigned) SizeofHeapTupleHeader));\n> > + fatal = true;\n> > + }\n> >\n> > I think we can also check that if there is no NULL attributes (if\n> > (!(t_infomask & HEAP_HASNULL)) then\n> > ctx->tuphdr->t_hoff should be equal to SizeofHeapTupleHeader.\n>\n> You have to take alignment padding into account, but otherwise yes, and I've added a check for that.\n>\n> > 3.\n> > + ctx->offset = 0;\n> > + for (ctx->attnum = 0; ctx->attnum < ctx->natts; ctx->attnum++)\n> > + {\n> > + if (!check_tuple_attribute(ctx))\n> > + break;\n> > + }\n> > + ctx->offset = -1;\n> > + ctx->attnum = -1;\n> >\n> > So we are first setting ctx->offset to 0, then inside\n> > check_tuple_attribute, we will keep updating the offset as we process\n> > the attributes and after the loop is over we set ctx->offset to -1, I\n> > did not understand that why we need to reset it to -1, do we ever\n> > check for that. We don't even initialize the ctx->offset to -1 while\n> > initializing the context for the tuple so I do not understand what is\n> > the meaning of the random value -1.\n>\n> Ahh, right, those are left over from a previous design of the code. Thanks for pointing them out. 
They are now removed.\n>\n> > 4.\n> > + if (!VARATT_IS_EXTENDED(chunk))\n> > + {\n> > + chunksize = VARSIZE(chunk) - VARHDRSZ;\n> > + chunkdata = VARDATA(chunk);\n> > + }\n> > + else if (VARATT_IS_SHORT(chunk))\n> > + {\n> > + /*\n> > + * could happen due to heap_form_tuple doing its thing\n> > + */\n> > + chunksize = VARSIZE_SHORT(chunk) - VARHDRSZ_SHORT;\n> > + chunkdata = VARDATA_SHORT(chunk);\n> > + }\n> > + else\n> > + {\n> > + /* should never happen */\n> > + confess(ctx,\n> > + pstrdup(\"toast chunk is neither short nor extended\"));\n> > + return;\n> > + }\n> >\n> > I think the error message \"toast chunk is neither short nor extended\".\n> > Because ideally, the toast chunk should not be further toasted.\n> > So I think the check is correct, but the error message is not correct.\n>\n> I agree the error message was wrongly stated, and I've changed it, but you might suggest a better wording than what I came up with, \"corrupt toast chunk va_header\".\n>\n> > 5.\n> >\n> > + ctx.rel = relation_open(relid, ShareUpdateExclusiveLock);\n> > + check_relation_relkind_and_relam(ctx.rel);\n> > +\n> > + /*\n> > + * Open the toast relation, if any, also protected from concurrent\n> > + * vacuums.\n> > + */\n> > + if (ctx.rel->rd_rel->reltoastrelid)\n> > + {\n> > + int offset;\n> > +\n> > + /* Main relation has associated toast relation */\n> > + ctx.toastrel = table_open(ctx.rel->rd_rel->reltoastrelid,\n> > + ShareUpdateExclusiveLock);\n> > + offset = toast_open_indexes(ctx.toastrel,\n> > ....\n> > + if (TransactionIdIsNormal(ctx.relfrozenxid) &&\n> > + TransactionIdPrecedes(ctx.relfrozenxid, ctx.oldestValidXid))\n> > + {\n> > + confess(&ctx, psprintf(\"relfrozenxid %u precedes global \"\n> > + \"oldest valid xid %u \",\n> > + ctx.relfrozenxid, ctx.oldestValidXid));\n> > + PG_RETURN_NULL();\n> > + }\n> >\n> > Don't we need to close the relation/toastrel/toastindexrel in such\n> > return which is without an abort? 
IIRC, we\n> > will get relcache leak WARNING on commit if we left them open in commit path.\n>\n> Ok, I've added logic to close them.\n>\n> All changes inspired by your review are included in the v9-0001 patch. The differences since v8 are pulled out into v9_diffs for easier review.\n\nI have reviewed the changes in v9_diffs and looks fine to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 28 Jun 2020 20:59:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Sun, Jun 28, 2020 at 8:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jun 22, 2020 at 5:44 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> >\n> >\n> >\n> > > On Jun 21, 2020, at 2:54 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I have looked into 0001 patch and I have a few comments.\n> > >\n> > > 1.\n> > > +\n> > > + /* Skip over unused/dead/redirected line pointers */\n> > > + if (!ItemIdIsUsed(ctx.itemid) ||\n> > > + ItemIdIsDead(ctx.itemid) ||\n> > > + ItemIdIsRedirected(ctx.itemid))\n> > > + continue;\n> > >\n> > > Isn't it a good idea to verify the Redirected Itemtid? Because we\n> > > will still access the redirected item id to find the\n> > > actual tuple from the index scan. Maybe not exactly at this level,\n> > > but we can verify that the link itemid store in that\n> > > is within the itemid range of the page or not.\n> >\n> > Good idea. 
I've added checks that the redirection is valid, both in terms of being within bounds and in terms of alignment.\n> >\n> > > 2.\n> > >\n> > > + /* Check for tuple header corruption */\n> > > + if (ctx->tuphdr->t_hoff < SizeofHeapTupleHeader)\n> > > + {\n> > > + confess(ctx,\n> > > + psprintf(\"t_hoff < SizeofHeapTupleHeader (%u < %u)\",\n> > > + ctx->tuphdr->t_hoff,\n> > > + (unsigned) SizeofHeapTupleHeader));\n> > > + fatal = true;\n> > > + }\n> > >\n> > > I think we can also check that if there is no NULL attributes (if\n> > > (!(t_infomask & HEAP_HASNULL)) then\n> > > ctx->tuphdr->t_hoff should be equal to SizeofHeapTupleHeader.\n> >\n> > You have to take alignment padding into account, but otherwise yes, and I've added a check for that.\n> >\n> > > 3.\n> > > + ctx->offset = 0;\n> > > + for (ctx->attnum = 0; ctx->attnum < ctx->natts; ctx->attnum++)\n> > > + {\n> > > + if (!check_tuple_attribute(ctx))\n> > > + break;\n> > > + }\n> > > + ctx->offset = -1;\n> > > + ctx->attnum = -1;\n> > >\n> > > So we are first setting ctx->offset to 0, then inside\n> > > check_tuple_attribute, we will keep updating the offset as we process\n> > > the attributes and after the loop is over we set ctx->offset to -1, I\n> > > did not understand that why we need to reset it to -1, do we ever\n> > > check for that. We don't even initialize the ctx->offset to -1 while\n> > > initializing the context for the tuple so I do not understand what is\n> > > the meaning of the random value -1.\n> >\n> > Ahh, right, those are left over from a previous design of the code. Thanks for pointing them out. 
They are now removed.\n> >\n> > > 4.\n> > > + if (!VARATT_IS_EXTENDED(chunk))\n> > > + {\n> > > + chunksize = VARSIZE(chunk) - VARHDRSZ;\n> > > + chunkdata = VARDATA(chunk);\n> > > + }\n> > > + else if (VARATT_IS_SHORT(chunk))\n> > > + {\n> > > + /*\n> > > + * could happen due to heap_form_tuple doing its thing\n> > > + */\n> > > + chunksize = VARSIZE_SHORT(chunk) - VARHDRSZ_SHORT;\n> > > + chunkdata = VARDATA_SHORT(chunk);\n> > > + }\n> > > + else\n> > > + {\n> > > + /* should never happen */\n> > > + confess(ctx,\n> > > + pstrdup(\"toast chunk is neither short nor extended\"));\n> > > + return;\n> > > + }\n> > >\n> > > I think the error message \"toast chunk is neither short nor extended\".\n> > > Because ideally, the toast chunk should not be further toasted.\n> > > So I think the check is correct, but the error message is not correct.\n> >\n> > I agree the error message was wrongly stated, and I've changed it, but you might suggest a better wording than what I came up with, \"corrupt toast chunk va_header\".\n> >\n> > > 5.\n> > >\n> > > + ctx.rel = relation_open(relid, ShareUpdateExclusiveLock);\n> > > + check_relation_relkind_and_relam(ctx.rel);\n> > > +\n> > > + /*\n> > > + * Open the toast relation, if any, also protected from concurrent\n> > > + * vacuums.\n> > > + */\n> > > + if (ctx.rel->rd_rel->reltoastrelid)\n> > > + {\n> > > + int offset;\n> > > +\n> > > + /* Main relation has associated toast relation */\n> > > + ctx.toastrel = table_open(ctx.rel->rd_rel->reltoastrelid,\n> > > + ShareUpdateExclusiveLock);\n> > > + offset = toast_open_indexes(ctx.toastrel,\n> > > ....\n> > > + if (TransactionIdIsNormal(ctx.relfrozenxid) &&\n> > > + TransactionIdPrecedes(ctx.relfrozenxid, ctx.oldestValidXid))\n> > > + {\n> > > + confess(&ctx, psprintf(\"relfrozenxid %u precedes global \"\n> > > + \"oldest valid xid %u \",\n> > > + ctx.relfrozenxid, ctx.oldestValidXid));\n> > > + PG_RETURN_NULL();\n> > > + }\n> > >\n> > > Don't we need to close the 
relation/toastrel/toastindexrel in such\n> > > return which is without an abort? IIRC, we\n> > > will get relcache leak WARNING on commit if we left them open in commit path.\n> >\n> > Ok, I've added logic to close them.\n> >\n> > All changes inspired by your review are included in the v9-0001 patch. The differences since v8 are pulled out into v9_diffs for easier review.\n>\n> I have reviewed the changes in v9_diffs and looks fine to me.\n\nSome more comments on v9_0001.\n1.\n+ LWLockAcquire(XidGenLock, LW_SHARED);\n+ nextFullXid = ShmemVariableCache->nextFullXid;\n+ ctx.oldestValidXid = ShmemVariableCache->oldestXid;\n+ LWLockRelease(XidGenLock);\n+ ctx.nextKnownValidXid = XidFromFullTransactionId(nextFullXid);\n...\n...\n+\n+ for (ctx.blkno = startblock; ctx.blkno < endblock; ctx.blkno++)\n+ {\n+ int32 mapbits;\n+ OffsetNumber maxoff;\n+ PageHeader ph;\n+\n+ /* Optionally skip over all-frozen or all-visible blocks */\n+ if (skip_all_frozen || skip_all_visible)\n+ {\n+ mapbits = (int32) visibilitymap_get_status(ctx.rel, ctx.blkno,\n+ &vmbuffer);\n+ if (skip_all_visible && (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0)\n+ continue;\n+ if (skip_all_frozen && (mapbits & VISIBILITYMAP_ALL_FROZEN) != 0)\n+ continue;\n+ }\n+\n+ /* Read and lock the next page. */\n+ ctx.buffer = ReadBufferExtended(ctx.rel, MAIN_FORKNUM, ctx.blkno,\n+ RBM_NORMAL, ctx.bstrategy);\n+ LockBuffer(ctx.buffer, BUFFER_LOCK_SHARE);\n\nI might be missing something, but it appears that first we are getting\nthe nextFullXid and after that, we are scanning the block by block.\nSo while we are scanning the block if the nextXid is advanced and it\nhas updated some tuple in the heap pages, then it seems the current\nlogic will complain about out of range xid. 
I did not test this\nbehavior so please point me to the logic which is protecting this.\n\n2.\n/*\n * Helper function to construct the TupleDesc needed by verify_heapam.\n */\nstatic TupleDesc\nverify_heapam_tupdesc(void)\n\n From function name, it appeared that it is verifying tuple descriptor\nbut this is just creating the tuple descriptor.\n\n3.\n+ /* Optionally skip over all-frozen or all-visible blocks */\n+ if (skip_all_frozen || skip_all_visible)\n+ {\n+ mapbits = (int32) visibilitymap_get_status(ctx.rel, ctx.blkno,\n+ &vmbuffer);\n+ if (skip_all_visible && (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0)\n+ continue;\n+ if (skip_all_frozen && (mapbits & VISIBILITYMAP_ALL_FROZEN) != 0)\n+ continue;\n+ }\n\nHere, do we want to test that in VM the all visible bit is set whereas\non the page it is not set? That can lead to a wrong result in an\nindex-only scan.\n\n4. One cosmetic comment\n\n+ /* Skip non-varlena values, but update offset first */\n..\n+\n+ /* Ok, we're looking at a varlena attribute. */\n\nThroughout the patch, I have noticed that some of your single-line\ncomments have \"full stop\" whereas other don't. 
Can we keep them\nconsistent?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 28 Jun 2020 21:35:19 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jun 28, 2020, at 9:05 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> \n> Some more comments on v9_0001.\n> 1.\n> + LWLockAcquire(XidGenLock, LW_SHARED);\n> + nextFullXid = ShmemVariableCache->nextFullXid;\n> + ctx.oldestValidXid = ShmemVariableCache->oldestXid;\n> + LWLockRelease(XidGenLock);\n> + ctx.nextKnownValidXid = XidFromFullTransactionId(nextFullXid);\n> ...\n> ...\n> +\n> + for (ctx.blkno = startblock; ctx.blkno < endblock; ctx.blkno++)\n> + {\n> + int32 mapbits;\n> + OffsetNumber maxoff;\n> + PageHeader ph;\n> +\n> + /* Optionally skip over all-frozen or all-visible blocks */\n> + if (skip_all_frozen || skip_all_visible)\n> + {\n> + mapbits = (int32) visibilitymap_get_status(ctx.rel, ctx.blkno,\n> + &vmbuffer);\n> + if (skip_all_visible && (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0)\n> + continue;\n> + if (skip_all_frozen && (mapbits & VISIBILITYMAP_ALL_FROZEN) != 0)\n> + continue;\n> + }\n> +\n> + /* Read and lock the next page. */\n> + ctx.buffer = ReadBufferExtended(ctx.rel, MAIN_FORKNUM, ctx.blkno,\n> + RBM_NORMAL, ctx.bstrategy);\n> + LockBuffer(ctx.buffer, BUFFER_LOCK_SHARE);\n> \n> I might be missing something, but it appears that first we are getting\n> the nextFullXid and after that, we are scanning the block by block.\n> So while we are scanning the block if the nextXid is advanced and it\n> has updated some tuple in the heap pages, then it seems the current\n> logic will complain about out of range xid. I did not test this\n> behavior so please point me to the logic which is protecting this.\n\nWe know the oldest valid Xid cannot advance, because we hold a lock that would prevent it from doing so. 
We cannot know that the newest Xid will not advance, but when we see an Xid beyond the end of the known valid range, we check its validity, and either report it as a corruption or advance our idea of the newest valid Xid, depending on that check. That logic is in TransactionIdValidInRel.\n\n> 2.\n> /*\n> * Helper function to construct the TupleDesc needed by verify_heapam.\n> */\n> static TupleDesc\n> verify_heapam_tupdesc(void)\n> \n> From function name, it appeared that it is verifying tuple descriptor\n> but this is just creating the tuple descriptor.\n\nIn amcheck--1.2--1.3.sql we define a function named verify_heapam which returns a set of records. This is the tuple descriptor for that function. I understand that the name can be parsed as verify_(heapam_tupdesc), but it is meant as (verify_heapam)_tupdesc. Do you have a name you would prefer?\n\n> 3.\n> + /* Optionally skip over all-frozen or all-visible blocks */\n> + if (skip_all_frozen || skip_all_visible)\n> + {\n> + mapbits = (int32) visibilitymap_get_status(ctx.rel, ctx.blkno,\n> + &vmbuffer);\n> + if (skip_all_visible && (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0)\n> + continue;\n> + if (skip_all_frozen && (mapbits & VISIBILITYMAP_ALL_FROZEN) != 0)\n> + continue;\n> + }\n> \n> Here, do we want to test that in VM the all visible bit is set whereas\n> on the page it is not set? That can lead to a wrong result in an\n> index-only scan.\n\nIf the caller has specified that the corruption check should skip over all-frozen or all-visible data, then we cannot load the page that the VM claims is all-frozen or all-visible without defeating the purpose of the caller having specified these options. Without loading the page, we cannot check the page's header bits.\n\nWhen not skipping all-visible or all-frozen blocks, we might like to pin both the heap page and the visibility map page in order to compare the two, being careful not to hold a pin on the one while performing I/O on the other. 
See for example the logic in heap_delete(). But I'm not sure what guarantees the system makes about agreement between these two bits. Certainly, the VM should not claim a page is all visible when it isn't, but are we guaranteed that a page that is all-visible will always have its all-visible bit set? I don't know if (possibly transient) disagreement between these two bits constitutes corruption. Perhaps others following this thread can advise?\n\n> 4. One cosmetic comment\n> \n> + /* Skip non-varlena values, but update offset first */\n> ..\n> +\n> + /* Ok, we're looking at a varlena attribute. */\n> \n> Throughout the patch, I have noticed that some of your single-line\n> comments have \"full stop\" whereas other don't. Can we keep them\n> consistent?\n\nI try to use a \"full stop\" at the end of sentences, but not at the end of sentence fragments. To me, a \"full stop\" means that a sentence has reached its conclusion. I don't intentionally use one at the end of a fragment, unless the fragment precedes a full sentence, in which case the \"full stop\" is needed to separate the two. Of course, I may have violated my own rule in a few places, but before I submit a v10 patch with comment punctuation changes, perhaps we can agree on what the rule is? (This has probably been discussed before and agreed before. A link to the appropriate email thread would be sufficient.)\n\nFor example:\n\n\t/* red, green, or blue */\n\t/* set to pink */\n\t/* set to blue. We have not closed the file. */\n\t/* At this point, we have chosen the color. */\n\nThe first comment is not a sentence, but the fourth is. The third comment is a fragment followed by a full sentence, and a \"full stop\" separates the two. As for the second comment, as I recall, verb phrases can be interpreted as a full sentence, as in \"Close the door!\", when they are meant as commands to the listener, but not otherwise. 
\"set to pink\" is not a command to the reader, but rather a description of what the code is doing at that point, so I think of it as a mere verb phrase and not a full sentence.\n\nMaking matters even more complicated, portions of the logic in verify_heapam were taken from sections of code that would ereport(), elog(), or Assert() on corruption, and when I took such code, I sometimes also took the comments in unmodified form. That means that my normal commenting rules don't apply, as I'm not the comment author in such cases.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 28 Jun 2020 10:48:38 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "I think there are two very large patches here. One adds checking of\nheapam tables to amcheck, and the other adds a binary that eases calling\namcheck from the command line. I think these should be two separate\npatches.\n\nI don't know what to think of a module contrib/pg_amcheck. I kinda lean\ntowards fitting it in src/bin/scripts rather than as a contrib module.\nHowever, it seems a bit weird that it depends on a contrib module.\nMaybe amcheck should not be a contrib module at all but rather a new\nextension in src/extensions/ that is compiled and installed (in the\nfilesystem, not in databases) by default.\n\nI strongly agree with hardening backend code so that all the crashes\nthat Mark has found can be repaired. 
(We discussed this topic\nbefore[1]: we'd repair all crashes when run with production code, not\nall assertion crashes.)\n\n[1] https://postgr.es/m/20200513221051.GA26592@alvherre.pgsql\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 14:44:44 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jun 30, 2020, at 11:44 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> I think there are two very large patches here. One adds checking of\n> heapam tables to amcheck, and the other adds a binary that eases calling\n> amcheck from the command line. I think these should be two separate\n> patches.\n\ncontrib/amcheck has pretty limited regression test coverage. I wrote pg_amcheck in large part because the infrastructure I was writing for testing contrib/amcheck was starting to look like a stand-alone tool, so I made it one. I can split contrib/pg_amcheck into a separate patch, but I would expect reviewers to use it to review contrib/amcheck. Say the word, and I'll resubmit as two separate patches.\n\n> I don't know what to think of a module contrib/pg_amcheck. I kinda lean\n> towards fitting it in src/bin/scripts rather than as a contrib module.\n> However, it seems a bit weird that it depends on a contrib module.\n\nAgreed.\n\n> Maybe amcheck should not be a contrib module at all but rather a new\n> extension in src/extensions/ that is compiled and installed (in the\n> filesystem, not in databases) by default.\n\nFine with me, but I'll have to see what others think about that.\n\n> I strongly agree with hardening backend code so that all the crashes\n> that Mark has found can be repaired. 
(We discussed this topic\n> before[1]: we'd repair all crashes when run with production code, not\n> all assertion crashes.)\n\nI'm guessing that hardening the backend would be a separate patch? Or did you want that as part of this one?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 30 Jun 2020 14:28:54 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On 2020-Jun-30, Mark Dilger wrote:\n\n> I'm guessing that hardening the backend would be a separate patch? Or\n> did you want that as part of this one?\n\nLately, to me the foremost criterion to determine what is a separate\npatch and what isn't is the way the commit message is structured. If it\nlooks too much like a bullet list of unrelated things, that suggests\nthat the commit should be split into one commit per bullet point; of\ncourse, there are counterexamples. But when I have a commit message\nthat says \"I do A, and I also do B because I need it for A\", then it\nmakes more sense to do B first standalone and then A on top. OTOH if\ntwo things are done because they're heavily intermixed (e.g. 
commit\n850196b610d2, bullet points galore), that suggests that one commit is a\ndecent approach.\n\nJust my opinion, of course.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 17:55:52 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Sun, Jun 28, 2020 at 11:18 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jun 28, 2020, at 9:05 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > Some more comments on v9_0001.\n> > 1.\n> > + LWLockAcquire(XidGenLock, LW_SHARED);\n> > + nextFullXid = ShmemVariableCache->nextFullXid;\n> > + ctx.oldestValidXid = ShmemVariableCache->oldestXid;\n> > + LWLockRelease(XidGenLock);\n> > + ctx.nextKnownValidXid = XidFromFullTransactionId(nextFullXid);\n> > ...\n> > ...\n> > +\n> > + for (ctx.blkno = startblock; ctx.blkno < endblock; ctx.blkno++)\n> > + {\n> > + int32 mapbits;\n> > + OffsetNumber maxoff;\n> > + PageHeader ph;\n> > +\n> > + /* Optionally skip over all-frozen or all-visible blocks */\n> > + if (skip_all_frozen || skip_all_visible)\n> > + {\n> > + mapbits = (int32) visibilitymap_get_status(ctx.rel, ctx.blkno,\n> > + &vmbuffer);\n> > + if (skip_all_visible && (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0)\n> > + continue;\n> > + if (skip_all_frozen && (mapbits & VISIBILITYMAP_ALL_FROZEN) != 0)\n> > + continue;\n> > + }\n> > +\n> > + /* Read and lock the next page. 
*/\n> > + ctx.buffer = ReadBufferExtended(ctx.rel, MAIN_FORKNUM, ctx.blkno,\n> > + RBM_NORMAL, ctx.bstrategy);\n> > + LockBuffer(ctx.buffer, BUFFER_LOCK_SHARE);\n> >\n> > I might be missing something, but it appears that first we are getting\n> > the nextFullXid and after that, we are scanning the block by block.\n> > So while we are scanning the block if the nextXid is advanced and it\n> > has updated some tuple in the heap pages, then it seems the current\n> > logic will complain about out of range xid. I did not test this\n> > behavior so please point me to the logic which is protecting this.\n>\n> We know the oldest valid Xid cannot advance, because we hold a lock that would prevent it from doing so. We cannot know that the newest Xid will not advance, but when we see an Xid beyond the end of the known valid range, we check its validity, and either report it as a corruption or advance our idea of the newest valid Xid, depending on that check. That logic is in TransactionIdValidInRel.\n\nThat makes sense to me.\n\n>\n> > 2.\n> > /*\n> > * Helper function to construct the TupleDesc needed by verify_heapam.\n> > */\n> > static TupleDesc\n> > verify_heapam_tupdesc(void)\n> >\n> > From function name, it appeared that it is verifying tuple descriptor\n> > but this is just creating the tuple descriptor.\n>\n> In amcheck--1.2--1.3.sql we define a function named verify_heapam which returns a set of records. This is the tuple descriptor for that function. I understand that the name can be parsed as verify_(heapam_tupdesc), but it is meant as (verify_heapam)_tupdesc. 
Do you have a name you would prefer?\n\nNot very particular, but perhaps a name like\nverify_heapam_get_tupdesc. It's just a suggestion, though, so it's your choice;\nif you prefer the current name I have no objection.\n\n>\n> > 3.\n> > + /* Optionally skip over all-frozen or all-visible blocks */\n> > + if (skip_all_frozen || skip_all_visible)\n> > + {\n> > + mapbits = (int32) visibilitymap_get_status(ctx.rel, ctx.blkno,\n> > + &vmbuffer);\n> > + if (skip_all_visible && (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0)\n> > + continue;\n> > + if (skip_all_frozen && (mapbits & VISIBILITYMAP_ALL_FROZEN) != 0)\n> > + continue;\n> > + }\n> >\n> > Here, do we want to test that in VM the all visible bit is set whereas\n> > on the page it is not set? That can lead to a wrong result in an\n> > index-only scan.\n>\n> If the caller has specified that the corruption check should skip over all-frozen or all-visible data, then we cannot load the page that the VM claims is all-frozen or all-visible without defeating the purpose of the caller having specified these options. Without loading the page, we cannot check the page's header bits.\n>\n> When not skipping all-visible or all-frozen blocks, we might like to pin both the heap page and the visibility map page in order to compare the two, being careful not to hold a pin on the one while performing I/O on the other. See for example the logic in heap_delete(). But I'm not sure what guarantees the system makes about agreement between these two bits. Certainly, the VM should not claim a page is all visible when it isn't, but are we guaranteed that a page that is all-visible will always have its all-visible bit set? I don't know if (possibly transient) disagreement between these two bits constitutes corruption. 
Perhaps others following this thread can advise?\n\nRight, the VM should not claim it's all visible when it actually is not.\nBut, IIRC, it is not guaranteed that if the page is all visible then\nthe VM must have the all-visible flag set.\n\n> > 4. One cosmetic comment\n> >\n> > + /* Skip non-varlena values, but update offset first */\n> > ..\n> > +\n> > + /* Ok, we're looking at a varlena attribute. */\n> >\n> > Throughout the patch, I have noticed that some of your single-line\n> > comments have \"full stop\" whereas other don't. Can we keep them\n> > consistent?\n>\n> I try to use a \"full stop\" at the end of sentences, but not at the end of sentence fragments. To me, a \"full stop\" means that a sentence has reached its conclusion. I don't intentionally use one at the end of a fragment, unless the fragment precedes a full sentence, in which case the \"full stop\" is needed to separate the two. Of course, I may have violated my own rule in a few places, but before I submit a v10 patch with comment punctuation changes, perhaps we can agree on what the rule is? (This has probably been discussed before and agreed before. A link to the appropriate email thread would be sufficient.)\n\nI can see in different files we have followed different rules. I am\nfine as long as those are consistent across the file.\n\n> For example:\n>\n> /* red, green, or blue */\n> /* set to pink */\n> /* set to blue. We have not closed the file. */\n> /* At this point, we have chosen the color. */\n>\n> The first comment is not a sentence, but the fourth is. The third comment is a fragment followed by a full sentence, and a \"full stop\" separates the two. As for the second comment, as I recall, verb phrases can be interpreted as a full sentence, as in \"Close the door!\", when they are meant as commands to the listener, but not otherwise. 
\"set to pink\" is not a command to the reader, but rather a description of what the code is doing at that point, so I think of it as a mere verb phrase and not a full sentence.\n\n> Making matters even more complicated, portions of the logic in verify_heapam were taken from sections of code that would ereport(), elog(), or Assert() on corruption, and when I took such code, I sometimes also took the comments in unmodified form. That means that my normal commenting rules don't apply, as I'm not the comment author in such cases.\n\nI agree.\n\nA few more comments.\n1.\n\n+ if (!VARATT_IS_EXTERNAL_ONDISK(attr))\n+ {\n+ confess(ctx,\n+ pstrdup(\"attribute is external but not marked as on disk\"));\n+ return true;\n+ }\n+\n....\n+\n+ /*\n+ * Must dereference indirect toast pointers before we can check them\n+ */\n+ if (VARATT_IS_EXTERNAL_INDIRECT(attr))\n+ {\n\n\nSo first we are checking that if the varatt is not\nVARATT_IS_EXTERNAL_ONDISK then we are returning, but just a\nfew statements down we are checking if the varatt is\nVARATT_IS_EXTERNAL_INDIRECT, so seems like unreachable code.\n\n2. Another point related to the same code is that toast_save_datum\nalways set the VARTAG_ONDISK tag. IIUC, we use\nVARTAG_INDIRECT in reorderbuffer for generating temp tuple so ideally\nwhile scanning the heap we should never get\nVARATT_IS_EXTERNAL_INDIRECT tuple. 
Am I missing something here?\n\n3.\n+ if (VARATT_IS_1B_E(tp + ctx->offset))\n+ {\n+ uint8 va_tag = va_tag = VARTAG_EXTERNAL(tp + ctx->offset);\n+\n+ if (va_tag != VARTAG_ONDISK)\n+ {\n+ confess(ctx, psprintf(\"unexpected TOAST vartag %u for \"\n+ \"attribute #%u at t_hoff = %u, \"\n+ \"offset = %u\",\n+ va_tag, ctx->attnum,\n+ ctx->tuphdr->t_hoff, ctx->offset));\n+ return false; /* We can't know where the next attribute\n+ * begins */\n+ }\n+ }\n\n+ /* Skip values that are not external */\n+ if (!VARATT_IS_EXTERNAL(attr))\n+ return true;\n+\n+ /* It is external, and we're looking at a page on disk */\n+ if (!VARATT_IS_EXTERNAL_ONDISK(attr))\n+ {\n+ confess(ctx,\n+ pstrdup(\"attribute is external but not marked as on disk\"));\n+ return true;\n+ }\n\nFirst, we are checking if VARATT_IS_1B_E, and if so we will check\nwhether its tag is VARTAG_ONDISK or not. But just after that, we will\nget the actual attribute pointer and\nagain check the same thing with 2 different checks. Can you explain\nwhy this is necessary?\n\n4.\n+ if ((ctx->tuphdr->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n+ (ctx->tuphdr->t_infomask2 & HEAP_KEYS_UPDATED))\n+ {\n+ confess(ctx,\n+ psprintf(\"HEAP_XMAX_LOCK_ONLY and HEAP_KEYS_UPDATED both set\"));\n+ }\n+ if ((ctx->tuphdr->t_infomask & HEAP_XMAX_COMMITTED) &&\n+ (ctx->tuphdr->t_infomask & HEAP_XMAX_IS_MULTI))\n+ {\n+ confess(ctx,\n+ psprintf(\"HEAP_XMAX_COMMITTED and HEAP_XMAX_IS_MULTI both set\"));\n+ }\n\nMaybe we can further expand these checks, like if the tuple is\nHEAP_XMAX_LOCK_ONLY then HEAP_UPDATED or HEAP_HOT_UPDATED should not\nbe set.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 4 Jul 2020 18:34:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jul 4, 2020, at 6:04 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> \n> A few more comments.\n\nYour comments all pertain to 
function check_tuple_attribute(), which follows the logic of heap_deform_tuple() and detoast_external_attr(). The idea is that any error that could result in an assertion or crash in those functions should be checked carefully by check_tuple_attribute(), and checked *before* any such asserts or crashes might be triggered.\n\nI obviously did not explain this thinking in the function comment. That is rectified in the v10 patch, attached.\n\n> 1.\n> \n> + if (!VARATT_IS_EXTERNAL_ONDISK(attr))\n> + {\n> + confess(ctx,\n> + pstrdup(\"attribute is external but not marked as on disk\"));\n> + return true;\n> + }\n> +\n> ....\n> +\n> + /*\n> + * Must dereference indirect toast pointers before we can check them\n> + */\n> + if (VARATT_IS_EXTERNAL_INDIRECT(attr))\n> + {\n> \n> \n> So first we are checking that if the varatt is not\n> VARATT_IS_EXTERNAL_ONDISK then we are returning, but just a\n> few statements down we are checking if the varatt is\n> VARATT_IS_EXTERNAL_INDIRECT, so it seems like unreachable code.\n\nTrue. I've removed the VARATT_IS_EXTERNAL_INDIRECT check.\n\n\n> 2. Another point related to the same code is that toast_save_datum\n> always sets the VARTAG_ONDISK tag. IIUC, we use\n> VARTAG_INDIRECT in reorderbuffer for generating temp tuples, so ideally\n> while scanning the heap we should never get a\n> VARATT_IS_EXTERNAL_INDIRECT tuple. Am I missing something here?\n\nI think you are right that we cannot get a VARATT_IS_EXTERNAL_INDIRECT tuple. 
That check is removed in v10.\n\n> 3.\n> + if (VARATT_IS_1B_E(tp + ctx->offset))\n> + {\n> + uint8 va_tag = va_tag = VARTAG_EXTERNAL(tp + ctx->offset);\n> +\n> + if (va_tag != VARTAG_ONDISK)\n> + {\n> + confess(ctx, psprintf(\"unexpected TOAST vartag %u for \"\n> + \"attribute #%u at t_hoff = %u, \"\n> + \"offset = %u\",\n> + va_tag, ctx->attnum,\n> + ctx->tuphdr->t_hoff, ctx->offset));\n> + return false; /* We can't know where the next attribute\n> + * begins */\n> + }\n> + }\n> \n> + /* Skip values that are not external */\n> + if (!VARATT_IS_EXTERNAL(attr))\n> + return true;\n> +\n> + /* It is external, and we're looking at a page on disk */\n> + if (!VARATT_IS_EXTERNAL_ONDISK(attr))\n> + {\n> + confess(ctx,\n> + pstrdup(\"attribute is external but not marked as on disk\"));\n> + return true;\n> + }\n> \n> First, we are checking if VARATT_IS_1B_E, and if so we will check\n> whether its tag is VARTAG_ONDISK or not. But just after that, we will\n> get the actual attribute pointer and\n> again check the same thing with 2 different checks. Can you explain\n> why this is necessary?\n\nThe code that calls check_tuple_attribute() expects it to check the current attribute, but also to safely advance the ctx->offset value to the next attribute, as the caller is iterating over all attributes. The first check verifies that it is safe to call att_addlength_pointer, as we must not call att_addlength_pointer on a corrupt datum. The second check simply returns on non-external attributes; having advanced ctx->offset, there is nothing left to do. The third check is validating the external attribute, now that we know that it is external. You are right that the third check cannot fail, as the first check would already have confess()ed and returned false. 
The third check is removed in v10, attached.\n\n> 4.\n> + if ((ctx->tuphdr->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> + (ctx->tuphdr->t_infomask2 & HEAP_KEYS_UPDATED))\n> + {\n> + confess(ctx,\n> + psprintf(\"HEAP_XMAX_LOCK_ONLY and HEAP_KEYS_UPDATED both set\"));\n> + }\n> + if ((ctx->tuphdr->t_infomask & HEAP_XMAX_COMMITTED) &&\n> + (ctx->tuphdr->t_infomask & HEAP_XMAX_IS_MULTI))\n> + {\n> + confess(ctx,\n> + psprintf(\"HEAP_XMAX_COMMITTED and HEAP_XMAX_IS_MULTI both set\"));\n> + }\n> \n> Maybe we can further expand these checks, like if the tuple is\n> HEAP_XMAX_LOCK_ONLY then HEAP_UPDATED or HEAP_HOT_UPDATED should not\n> be set.\n\nAdding Asserts in src/backend/access/heap/hio.c against those two conditions, the regression tests fail in quite a lot of places where HEAP_XMAX_LOCK_ONLY and HEAP_UPDATED are both true. I'm leaving this idea out for v10, since it doesn't work, but in case you want to tell me what I did wrong, here are the changed I made on top of v10:\n\ndiff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c\nindex 00de10b7c9..76d23e141a 100644\n--- a/src/backend/access/heap/hio.c\n+++ b/src/backend/access/heap/hio.c\n@@ -57,6 +57,10 @@ RelationPutHeapTuple(Relation relation,\n (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_COMMITTED) &&\n (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI)));\n+ Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n+ (tuple->t_data->t_infomask & HEAP_UPDATED)));\n+ Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n+ (tuple->t_data->t_infomask2 & HEAP_HOT_UPDATED)));\n\ndiff --git a/contrib/amcheck/verify_heapam.c b/contrib/amcheck/verify_heapam.c\nindex 49d3d5618a..60e4ad5be0 100644\n--- a/contrib/amcheck/verify_heapam.c\n+++ b/contrib/amcheck/verify_heapam.c\n@@ -969,12 +969,19 @@ check_tuple(HeapCheckContext * ctx)\n ctx->tuphdr->t_hoff));\n fatal = true;\n }\n- if ((ctx->tuphdr->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n- 
(ctx->tuphdr->t_infomask2 & HEAP_KEYS_UPDATED))\n+ if (ctx->tuphdr->t_infomask & HEAP_XMAX_LOCK_ONLY)\n {\n- confess(ctx,\n- psprintf(\"HEAP_XMAX_LOCK_ONLY and HEAP_KEYS_UPDATED both set\"));\n+ if (ctx->tuphdr->t_infomask2 & HEAP_KEYS_UPDATED)\n+ confess(ctx,\n+ psprintf(\"HEAP_XMAX_LOCK_ONLY and HEAP_KEYS_UPDATED both set\"));\n+ if (ctx->tuphdr->t_infomask & HEAP_UPDATED)\n+ confess(ctx,\n+ psprintf(\"HEAP_XMAX_LOCK_ONLY and HEAP_UPDATED both set\"));\n+ if (ctx->tuphdr->t_infomask2 & HEAP_HOT_UPDATED)\n+ confess(ctx,\n+ psprintf(\"HEAP_XMAX_LOCK_ONLY and HEAP_HOT_UPDATED both set\"));\n }\n+\n if ((ctx->tuphdr->t_infomask & HEAP_XMAX_COMMITTED) &&\n (ctx->tuphdr->t_infomask & HEAP_XMAX_IS_MULTI))\n {\n\n\nThe v10 patch without these ideas is here:\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 6 Jul 2020 11:06:16 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Jul 6, 2020 at 2:06 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> The v10 patch without these ideas is here:\n\nAlong the lines of what Alvaro was saying before, I think this\ndefinitely needs to be split up into a series of patches. The commit\nmessage for v10 describes it doing three pretty separate things, and I\nthink that argues for splitting it into a series of three patches. I'd\nargue for this ordering:\n\n0001 Refactoring existing amcheck btree checking functions to optionally\nreturn corruption information rather than ereport'ing it. 
This is\nused by the new pg_amcheck command line tool for reporting back to\nthe caller.\n\n0002 Adding new function verify_heapam for checking a heap relation and\nassociated toast relation, if any, to contrib/amcheck.\n\n0003 Adding new contrib module pg_amcheck, which is a command line\ninterface for running amcheck's verifications against tables and\nindexes.\n\nIt's too hard to review things like this when it's all mixed together.\n\n+++ b/contrib/amcheck/t/skipping.pl\n\nThe name of this file is inconsistent with the tree's usual\nconvention, which is all stuff like 001_whatever.pl, except for\nsrc/test/modules/brin, which randomly decided to use two digits\ninstead of three. There's no precedent for a test file with no leading\nnumeric digits. Also, what does \"skipping\" even have to do with what\nthe test is checking? Maybe it's intended to refer to the new error\nhandling \"skipping\" the actual error in favor of just reporting it\nwithout stopping, but that's not really what the word \"skipping\"\nnormally means. Finally, it seems a bit over-engineered: do we really\nneed 183 test cases to check that detecting a problem doesn't lead to\nan abort? Like, if that's the purpose of the test, I'd expect it to\ncheck one corrupt relation and one non-corrupt relation, each with and\nwithout the no-error behavior. And that's about it. Or maybe it's\ntalking about skipping pages during the checks, because those pages\nare all-visible or all-frozen? It's not very clear to me what's going\non here.\n\n+ TransactionId nextKnownValidXid;\n+ TransactionId oldestValidXid;\n\nPlease add explanatory comments indicating what these are intended to\nmean. For most of the the structure members, the brief comments\nalready present seem sufficient; but here, more explanation looks\nnecessary and less is provided. 
The \"Values for returning tuples\"\ncould possibly also use some more detail.\n\n+#define HEAPCHECK_RELATION_COLS 8\n\nI think this should really be at the top of the file someplace.\nSometimes people have adopted this style when the #define is only used\nwithin the function that contains it, but that's not the case here.\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"unrecognized parameter for 'skip': %s\", skip),\n+ errhint(\"please choose from 'all visible', 'all frozen', \"\n+ \"or NULL\")));\n\nI think it would be better if we had three string values selecting the\ndifferent behaviors, and made the parameter NOT NULL but with a\ndefault. It seems like that would be easier to understand. Right now,\nI can tell that my options for what to skip are \"all visible\", \"all\nfrozen\", and, uh, some other thing that I don't know what it is. I'm\ngonna guess the third option is to skip nothing, but it seems best to\nmake that explicit. Also, should we maybe consider spelling this\n'all-visible' and 'all-frozen' with dashes, instead of using spaces?\nSpaces in an option value seems a little icky to me somehow.\n\n+ int64 startblock = -1;\n+ int64 endblock = -1;\n...\n+ if (!PG_ARGISNULL(3))\n+ startblock = PG_GETARG_INT64(3);\n+ if (!PG_ARGISNULL(4))\n+ endblock = PG_GETARG_INT64(4);\n...\n+ if (startblock < 0)\n+ startblock = 0;\n+ if (endblock < 0 || endblock > ctx.nblocks)\n+ endblock = ctx.nblocks;\n+\n+ for (ctx.blkno = startblock; ctx.blkno < endblock; ctx.blkno++)\n\nSo, the user can specify a negative value explicitly and it will be\ntreated as the default, and an endblock value that's larger than the\nrelation size will be treated as the relation size. The way pg_prewarm\ndoes the corresponding checks seems superior: null indicates the\ndefault value, and any non-null value must be within range or you get\nan error. 
Also, you seem to be treating endblock as the first block\nthat should not be checked, whereas pg_prewarm takes what seems to me\nto be the more natural interpretation: the end block is the last block\nthat IS checked. If you do it this way, then someone who specifies the\nsame start and end block will check no blocks -- silently, I think.\n\n+ if (skip_all_frozen || skip_all_visible)\n\nSince you can't skip all frozen without skipping all visible, this\ntest could be simplified. Or you could introduce a three-valued enum\nand test that skip_pages != SKIP_PAGES_NONE, which might be even\nbetter.\n\n+ /* We must unlock the page from the prior iteration, if any */\n+ Assert(ctx.blkno == InvalidBlockNumber || ctx.buffer != InvalidBuffer);\n\nI don't understand this assertion, and I don't understand the comment,\neither. I think ctx.blkno can never be equal to InvalidBlockNumber\nbecause we never set it to anything outside the range of 0..(endblock\n- 1), and I think ctx.buffer must always be unequal to InvalidBuffer\nbecause we just initialized it by calling ReadBufferExtended(). So I\nthink this assertion would still pass if we wrote && rather than ||.\nBut even then, I don't know what that has to do with the comment or\nwhy it even makes sense to have an assertion for that in the first\nplace.\n\n+ /*\n+ * Open the relation. We use ShareUpdateExclusive to prevent concurrent\n+ * vacuums from changing the relfrozenxid, relminmxid, or advancing the\n+ * global oldestXid to be newer than those. This protection\nsaves us from\n+ * having to reacquire the locks and recheck those minimums for every\n+ * tuple, which would be expensive.\n+ */\n+ ctx.rel = relation_open(relid, ShareUpdateExclusiveLock);\n\nI don't think we'd need to recheck for every tuple, would we? Just for\ncases where there's an apparent violation of the rules. 
I guess that\ncould still be expensive if there's a lot of them, but needing\nShareUpdateExclusiveLock rather than only AccessShareLock is a little\nunfortunate.\n\nIt's also unclear to me why this concerns itself with relfrozenxid and\nthe cluster-wide oldestXid value but not with datfrozenxid. It seems\nlike if we're going to sanity-check the relfrozenxid against the\ncluster-wide value, we ought to also check it against the\ndatabase-wide value. Checking neither would also seem like a plausible\nchoice. But it seems very strange to only check against the\ncluster-wide value.\n\n+ StaticAssertStmt(InvalidOffsetNumber + 1 == FirstOffsetNumber,\n+ \"InvalidOffsetNumber\nincrements to FirstOffsetNumber\");\n\nIf you are going to rely on this property, I agree that it is good to\ncheck it. But it would be better to NOT rely on this property, and I\nsuspect the code can be written quite cleanly without relying on it.\nAnd actually, that's what you did, because you first set ctx.offnum =\nInvalidOffsetNumber but then just after that you set ctx.offnum = 0 in\nthe loop initializer. So AFAICS the first initializer, and the static\nassert, are pointless.\n\n+ if (ItemIdIsRedirected(ctx.itemid))\n+ {\n+ uint16 redirect = ItemIdGetRedirect(ctx.itemid);\n+ if (redirect <= SizeOfPageHeaderData\n|| redirect >= ph->pd_lower)\n...\n+ if ((redirect - SizeOfPageHeaderData)\n% sizeof(uint16))\n\nI think that ItemIdGetRedirect() returns an offset, not a byte\nposition. So the expectation that I would have is that it would be any\ninteger >= 0 and <= maxoff. Am I confused? 
BTW, it seems like it might\nbe good to complain if the item to which it points is LP_UNUSED...\nAFAIK that shouldn't happen.\n\n+ errmsg(\"\\\"%s\\\" is not a heap AM\",\n\nI think the correct wording would be just \"is not a heap.\" The \"heap\nAM\" is the thing in pg_am, not a specific table.\n\n+confess(HeapCheckContext * ctx, char *msg)\n+TransactionIdValidInRel(TransactionId xid, HeapCheckContext * ctx)\n+check_tuphdr_xids(HeapTupleHeader tuphdr, HeapCheckContext * ctx)\n\nThis is what happens when you pgindent without adding all the right\nthings to typedefs.list first ... or when you don't pgindent and have\nodd ideas about how to indent things.\n\n\n+ /*\n+ * In principle, there is nothing to prevent a scan over a large, highly\n+ * corrupted table from using workmem worth of memory building up the\n+ * tuplestore. Don't leak the msg argument memory.\n+ */\n+ pfree(msg);\n\nMaybe change the second sentence to something like: \"That should be\nOK, else the user can lower work_mem, but we'd better not leak any\nadditional memory.\"\n\n+/*\n+ * check_tuphdr_xids\n+ *\n+ * Determine whether tuples are visible for verification. Similar to\n+ * HeapTupleSatisfiesVacuum, but with critical differences.\n+ *\n+ * 1) Does not touch hint bits. It seems imprudent to write hint bits\n+ * to a table during a corruption check.\n+ * 2) Only makes a boolean determination of whether verification should\n+ * see the tuple, rather than doing extra work for vacuum-related\n+ * categorization.\n+ *\n+ * The caller should already have checked that xmin and xmax are not out of\n+ * bounds for the relation.\n+ */\n\nFirst, check_tuphdr_xids() doesn't seem like a very good name. If you\nhave a function with that name and, like this one, it returns Boolean,\nwhat does true mean? What does false mean? Kinda hard to tell. And\nalso, check the tuple header XIDs *for what*? 
If you called it, say,\ntuple_is_visible(), that would be self-evident.\n\nSecond, consider that we hold at least AccessShareLock on the relation\n- actually, ATM we hold ShareUpdateExclusiveLock. Either way, there\ncannot be a concurrent modification to the tuple descriptor in\nprogress. Therefore, I think that only a HEAPTUPLE_DEAD tuple is\npotentially using a non-current schema. If the tuple is\nHEAPTUPLE_INSERT_IN_PROGRESS, there's either no ADD COLUMN in the\ninserting transaction, or that transaction committed before we got our\nlock. Similarly if it's HEAPTUPLE_DELETE_IN_PROGRESS or\nHEAPTUPLE_RECENTLY_DEAD, the original inserter must've committed\nbefore we got our lock. Or if it's both inserted and deleted in the\nsame transaction, say, then that transaction committed before we got\nour lock or else contains no relevant DDL. IOW, I think you can check\neverything but dead tuples here.\n\nCapitalization and punctuation for messages complaining about problems\nneed to be consistent. verify_heapam() has \"Invalid redirect line\npointer offset %u out of bounds\" which starts with a capital letter,\nbut check_tuphdr_xids() has \"heap tuple with XMAX_IS_MULTI is neither\nLOCKED_ONLY nor has a valid xmax\" which does not. I vote for lower\ncase, but in any event it should be the same. Also,\ncheck_tuphdr_xids() has \"tuple xvac = %u invalid\" which is either a\ndebugging leftover or a very unclear complaint. I think some real work\nneeds to be put into the phrasing of these messages so that it's more\nclear exactly what is going on and why it's bad. For example the first\nexample in this paragraph is clearly a problem of some kind, but it's\nnot very clear exactly what is happening: is %u the offset of the\ninvalid line redirect or the value to which it points? 
I don't think\nthe phrasing is very grammatical, which makes it hard to tell which is\nmeant, and I actually think it would be a good idea to include both\nthings.\n\nProject policy is generally against splitting a string across multiple\nlines to fit within 80 characters. We like to fit within 80\ncharacters, but we like to be able to grep for strings more, and\nbreaking them up like this makes that harder.\n\n+ confess(ctx,\n+ pstrdup(\"corrupt toast chunk va_header\"));\n\nThis is another message that I don't think is very clear. There's two\nelements to that. One is that the phrasing is not very good, and the\nother is that there are no % escapes. What's somebody going to do when\nthey see this message? First, they're probably going to have to look\nat the code to figure out in which circumstances it gets generated;\nthat's a sign that the message isn't phrased clearly enough. That will\ntell them that an unexpected bit pattern has been found, but not what\nthat unexpected bit pattern actually was. So then, they're going to\nhave to try to find the relevant va_header by some other means and\nfish out the relevant bit so that they can see what actually went\nwrong.\n\n+ * Checks the current attribute as tracked in ctx for corruption. Records\n+ * any corruption found in ctx->corruption.\n+ *\n+ *\n\nExtra blank line.\n\n+ Form_pg_attribute thisatt = TupleDescAttr(RelationGetDescr(ctx->rel),\n+\n ctx->attnum);\n\nMaybe you could avoid the line wrap by declaring this without\ninitializing it, and then initializing it as a separate statement.\n\n+ confess(ctx, psprintf(\"t_hoff + offset > lp_len (%u + %u > %u)\",\n+\nctx->tuphdr->t_hoff, ctx->offset,\n+ ctx->lp_len));\n\nUggh! This isn't even remotely an English sentence. I don't think\nformulas are the way to go here, but I like the idea of formulas in\nsome places and written-out messages in others even less. 
I guess the\ncomplaint here in English is something like \"tuple attribute %d should\nstart at offset %u, but tuple length is only %u\" or something of that\nsort. Also, it seems like this complaint really ought to have been\nreported on the *preceding* loop iteration, either complaining that\n(1) the fixed length attribute is more than the number of remaining\nbytes in the tuple or (2) the varlena header for the tuple specifies\nan excessively high length. It seems like you're blaming the wrong\nattribute for the problem.\n\nBTW, the header comments for this function (check_tuple_attribute)\nneglect to document the meaning of the return value.\n\n+ confess(ctx, psprintf(\"tuple xmax = %u\nprecedes relation \"\n+\n\"relfrozenxid = %u\",\n\nThis is another example of these messages needing work. The\ncorresponding message from heap_prepare_freeze_tuple() is \"found\nupdate xid %u from before relfrozenxid %u\". That's better, because we\ndon't normally include equals signs in our messages like this, and\nalso because \"relation relfrozenxid\" is redundant. I think this should\nsay something like \"tuple xmax %u precedes relfrozenxid %u\".\n\n+ confess(ctx, psprintf(\"tuple xmax = %u is in\nthe future\",\n+ xmax));\n\nAnd then this could be something like \"tuple xmax %u follows\nlast-assigned xid %u\". That would be more symmetric and more\ninformative.\n\n+ if (SizeofHeapTupleHeader + BITMAPLEN(ctx->natts) >\nctx->tuphdr->t_hoff)\n\nI think we should be able to predict the exact value of t_hoff and\ncomplain if it isn't precisely equal to the expected value. Or is that\nnot possible for some reason?\n\nIs there some place that's checking that lp_len >=\nSizeOfHeapTupleHeader before check_tuple() goes and starts poking into\nthe header? 
If not, there should be.\n\n+$node->command_ok(\n\n+ [\n+ 'pg_amcheck', '-p', $port, 'postgres'\n+ ],\n+ 'pg_amcheck all schemas and tables implicitly');\n+\n+$node->command_ok(\n+ [\n+ 'pg_amcheck', '-i', '-p', $port, 'postgres'\n+ ],\n+ 'pg_amcheck all schemas, tables and indexes');\n\nI haven't really looked through the btree-checking and pg_amcheck\nparts of this much yet, but this caught my eye. Why would the default\nbe to check tables but not indexes? I think the default ought to be to\ncheck everything we know how to check.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 16 Jul 2020 15:38:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, May 14, 2020 at 03:50:52PM -0400, Tom Lane wrote:\n> I think there's definitely value in corrupting data in some predictable\n> (reproducible) way and verifying that the check code catches it and\n> responds as expected. Sure, this will not be 100% coverage, but it'll be\n> a lot better than 0% coverage.\n\nSkimming quickly through the patch, that's what is done in a way\nsimilar to pg_checksums's 002_actions.pl. So it seems fine to me to\nuse something like that for some basic coverage. We may want to\nrefactor the test APIs to unify all that though.\n--\nMichael", "msg_date": "Fri, 17 Jul 2020 10:25:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jul 16, 2020, at 12:38 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Jul 6, 2020 at 2:06 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> The v10 patch without these ideas is here:\n> \n> Along the lines of what Alvaro was saying before, I think this\n> definitely needs to be split up into a series of patches. 
The commit\n> message for v10 describes it doing three pretty separate things, and I\n> think that argues for splitting it into a series of three patches. I'd\n> argue for this ordering:\n> \n> 0001 Refactoring existing amcheck btree checking functions to optionally\n> return corruption information rather than ereport'ing it. This is\n> used by the new pg_amcheck command line tool for reporting back to\n> the caller.\n> \n> 0002 Adding new function verify_heapam for checking a heap relation and\n> associated toast relation, if any, to contrib/amcheck.\n> \n> 0003 Adding new contrib module pg_amcheck, which is a command line\n> interface for running amcheck's verifications against tables and\n> indexes.\n> \n> It's too hard to review things like this when it's all mixed together.\n\nThe v11 patch series is broken up as you suggest.\n\n> +++ b/contrib/amcheck/t/skipping.pl\n> \n> The name of this file is inconsistent with the tree's usual\n> convention, which is all stuff like 001_whatever.pl, except for\n> src/test/modules/brin, which randomly decided to use two digits\n> instead of three. There's no precedent for a test file with no leading\n> numeric digits. Also, what does \"skipping\" even have to do with what\n> the test is checking? Maybe it's intended to refer to the new error\n> handling \"skipping\" the actual error in favor of just reporting it\n> without stopping, but that's not really what the word \"skipping\"\n> normally means. Finally, it seems a bit over-engineered: do we really\n> need 183 test cases to check that detecting a problem doesn't lead to\n> an abort? Like, if that's the purpose of the test, I'd expect it to\n> check one corrupt relation and one non-corrupt relation, each with and\n> without the no-error behavior. And that's about it. Or maybe it's\n> talking about skipping pages during the checks, because those pages\n> are all-visible or all-frozen? 
It's not very clear to me what's going\n> on here.\n\nThe \"skipping\" did originally refer to testing verify_heapam()'s option to skip all-visible or all-frozen blocks. I have renamed it 001_verify_heapam.pl, since it tests that function.\n\n> \n> + TransactionId nextKnownValidXid;\n> + TransactionId oldestValidXid;\n> \n> Please add explanatory comments indicating what these are intended to\n> mean.\n\nDone.\n\n> For most of the the structure members, the brief comments\n> already present seem sufficient; but here, more explanation looks\n> necessary and less is provided. The \"Values for returning tuples\"\n> could possibly also use some more detail.\n\nOk, I've expanded the comments for these.\n\n> +#define HEAPCHECK_RELATION_COLS 8\n> \n> I think this should really be at the top of the file someplace.\n> Sometimes people have adopted this style when the #define is only used\n> within the function that contains it, but that's not the case here.\n\nDone.\n\n> \n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"unrecognized parameter for 'skip': %s\", skip),\n> + errhint(\"please choose from 'all visible', 'all frozen', \"\n> + \"or NULL\")));\n> \n> I think it would be better if we had three string values selecting the\n> different behaviors, and made the parameter NOT NULL but with a\n> default. It seems like that would be easier to understand. Right now,\n> I can tell that my options for what to skip are \"all visible\", \"all\n> frozen\", and, uh, some other thing that I don't know what it is. I'm\n> gonna guess the third option is to skip nothing, but it seems best to\n> make that explicit. Also, should we maybe consider spelling this\n> 'all-visible' and 'all-frozen' with dashes, instead of using spaces?\n> Spaces in an option value seems a little icky to me somehow.\n\nI've made the options 'all-visible', 'all-frozen', and 'none'. It defaults to 'none'. 
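To make that concrete, the option handling amounts to something like the following self-contained sketch (illustrative only: the identifiers parse_skip_option and SkipPages are stand-ins and need not match the exact names in verify_heapam.c, and the real code raises an error for unrecognized input rather than returning a sentinel):

```c
#include <assert.h>
#include <string.h>

/* One enum instead of two booleans; the page-skipping test then
 * becomes simply: skip_pages != SKIP_PAGES_NONE. */
typedef enum SkipPages
{
    SKIP_PAGES_NONE,
    SKIP_PAGES_ALL_VISIBLE,
    SKIP_PAGES_ALL_FROZEN,
    SKIP_PAGES_INVALID          /* unrecognized option string */
} SkipPages;

/* Map the text argument to the enum value. */
SkipPages
parse_skip_option(const char *skip)
{
    if (strcmp(skip, "none") == 0)
        return SKIP_PAGES_NONE;
    if (strcmp(skip, "all-visible") == 0)
        return SKIP_PAGES_ALL_VISIBLE;
    if (strcmp(skip, "all-frozen") == 0)
        return SKIP_PAGES_ALL_FROZEN;
    return SKIP_PAGES_INVALID;
}
```

Since skipping all-frozen pages necessarily skips all-visible ones too, a single three-valued enum reads more clearly than a pair of booleans.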
I did not mark the function as strict, as I think NULL is a reasonable value (and the default) for startblock and endblock. \n\n> + int64 startblock = -1;\n> + int64 endblock = -1;\n> ...\n> + if (!PG_ARGISNULL(3))\n> + startblock = PG_GETARG_INT64(3);\n> + if (!PG_ARGISNULL(4))\n> + endblock = PG_GETARG_INT64(4);\n> ...\n> + if (startblock < 0)\n> + startblock = 0;\n> + if (endblock < 0 || endblock > ctx.nblocks)\n> + endblock = ctx.nblocks;\n> +\n> + for (ctx.blkno = startblock; ctx.blkno < endblock; ctx.blkno++)\n> \n> So, the user can specify a negative value explicitly and it will be\n> treated as the default, and an endblock value that's larger than the\n> relation size will be treated as the relation size. The way pg_prewarm\n> does the corresponding checks seems superior: null indicates the\n> default value, and any non-null value must be within range or you get\n> an error. Also, you seem to be treating endblock as the first block\n> that should not be checked, whereas pg_prewarm takes what seems to me\n> to be the more natural interpretation: the end block is the last block\n> that IS checked. If you do it this way, then someone who specifies the\n> same start and end block will check no blocks -- silently, I think.\n\nUnder that regime, for relations with one block of data, (startblock=0, endblock=0) means \"check the zero'th block\", and for relations with no blocks of data, specifying any non-null (startblock,endblock) pair raises an exception. I don't like that too much, but I'm happy to defer to precedent. Since you say pg_prewarm works this way (I did not check), I have changed verify_heapam to do likewise.\n\n> + if (skip_all_frozen || skip_all_visible)\n> \n> Since you can't skip all frozen without skipping all visible, this\n> test could be simplified. 
Or you could introduce a three-valued enum\n> and test that skip_pages != SKIP_PAGES_NONE, which might be even\n> better.\n\nIt works now with a three-valued enum.\n\n> + /* We must unlock the page from the prior iteration, if any */\n> + Assert(ctx.blkno == InvalidBlockNumber || ctx.buffer != InvalidBuffer);\n> \n> I don't understand this assertion, and I don't understand the comment,\n> either. I think ctx.blkno can never be equal to InvalidBlockNumber\n> because we never set it to anything outside the range of 0..(endblock\n> - 1), and I think ctx.buffer must always be unequal to InvalidBuffer\n> because we just initialized it by calling ReadBufferExtended(). So I\n> think this assertion would still pass if we wrote && rather than ||.\n> But even then, I don't know what that has to do with the comment or\n> why it even makes sense to have an assertion for that in the first\n> place.\n\nYes, it is vestigial. Removed.\n\n> + /*\n> + * Open the relation. We use ShareUpdateExclusive to prevent concurrent\n> + * vacuums from changing the relfrozenxid, relminmxid, or advancing the\n> + * global oldestXid to be newer than those. This protection\n> saves us from\n> + * having to reacquire the locks and recheck those minimums for every\n> + * tuple, which would be expensive.\n> + */\n> + ctx.rel = relation_open(relid, ShareUpdateExclusiveLock);\n> \n> I don't think we'd need to recheck for every tuple, would we? Just for\n> cases where there's an apparent violation of the rules.\n\nIt's a bit fuzzy what an \"apparent violation\" might be if both ends of the range of valid xids may be moving, and arbitrarily much. It's also not clear how often to recheck, since you'd be dealing with a race condition no matter how often you check. Perhaps the comments shouldn't mention how often you'd have to recheck, since there is no really defensible choice for that. 
I removed the offending sentence.

> I guess that
> could still be expensive if there's a lot of them, but needing
> ShareUpdateExclusiveLock rather than only AccessShareLock is a little
> unfortunate.

I welcome strategies that would allow for taking a lesser lock.

> It's also unclear to me why this concerns itself with relfrozenxid and
> the cluster-wide oldestXid value but not with datfrozenxid. It seems
> like if we're going to sanity-check the relfrozenxid against the
> cluster-wide value, we ought to also check it against the
> database-wide value. Checking neither would also seem like a plausible
> choice. But it seems very strange to only check against the
> cluster-wide value.

If the relation has a normal relfrozenxid, then the oldest valid xid we can encounter in the table is relfrozenxid. Otherwise, each row needs to be compared against some other minimum xid value.

Logically, that other minimum xid value should be the oldest valid xid for the database, which must be at least as old as any valid row in the table and no older than the oldest valid xid for the cluster.

Unfortunately, if the comments in commands/vacuum.c circa line 1572 can be believed, and if I am reading them correctly, the stored value for the oldest valid xid in the database has been known to be corrupted by bugs in pg_upgrade. This is awful. If I compare the xid of a row in a table against the oldest xid value for the database, and the xid of the row is older, what can I do? I don't have a principled basis for determining which one of them is wrong. 

The logic in verify_heapam is conservative; it makes no guarantees about finding and reporting all corruption, but if it does report a row as corrupt, you can bank on that, bugs in verify_heapam itself notwithstanding. I think this is a good choice; a tool with only false negatives is much more useful than one with both false positives and false negatives. 
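In outline, the per-xid bounds check described above reduces to something like this sketch (simplified: it treats xids as plain integers and ignores transaction-ID wraparound, which the real TransactionIdPrecedes/TransactionIdFollows machinery must handle; the function name is illustrative, not the one in the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * A normal xid found in a tuple must not precede the relation's
 * relfrozenxid (the oldest xid that can validly appear in the table)
 * and must not follow the newest xid known to have been assigned.
 * Simplified sketch: ordinary integer comparison, no wraparound.
 */
bool
xid_within_valid_bounds(TransactionId xid,
                        TransactionId relfrozenxid,
                        TransactionId next_xid)
{
    return xid >= relfrozenxid && xid < next_xid;
}
```

Anything outside that window is reported as corruption; anything inside it is given the benefit of the doubt, which is what keeps the check free of false positives.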
\n\nI have added a comment about my reasoning to verify_heapam.c. I'm happy to be convinced of a better strategy for handling this situation.\n\n> \n> + StaticAssertStmt(InvalidOffsetNumber + 1 == FirstOffsetNumber,\n> + \"InvalidOffsetNumber\n> increments to FirstOffsetNumber\");\n> \n> If you are going to rely on this property, I agree that it is good to\n> check it. But it would be better to NOT rely on this property, and I\n> suspect the code can be written quite cleanly without relying on it.\n> And actually, that's what you did, because you first set ctx.offnum =\n> InvalidOffsetNumber but then just after that you set ctx.offnum = 0 in\n> the loop initializer. So AFAICS the first initializer, and the static\n> assert, are pointless.\n\nAh, right you are. Removed.\n\n> \n> + if (ItemIdIsRedirected(ctx.itemid))\n> + {\n> + uint16 redirect = ItemIdGetRedirect(ctx.itemid);\n> + if (redirect <= SizeOfPageHeaderData\n> || redirect >= ph->pd_lower)\n> ...\n> + if ((redirect - SizeOfPageHeaderData)\n> % sizeof(uint16))\n> \n> I think that ItemIdGetRedirect() returns an offset, not a byte\n> position. So the expectation that I would have is that it would be any\n> integer >= 0 and <= maxoff. Am I confused?\n\nI think you are right about it returning an offset, which should be between FirstOffsetNumber and maxoff, inclusive. I have updated the checks.\n\n> BTW, it seems like it might\n> be good to complain if the item to which it points is LP_UNUSED...\n> AFAIK that shouldn't happen.\n\nThanks for mentioning that. 
It now checks for that.\n\n> + errmsg(\"\\\"%s\\\" is not a heap AM\",\n> \n> I think the correct wording would be just \"is not a heap.\" The \"heap\n> AM\" is the thing in pg_am, not a specific table.\n\nFixed.\n\n> +confess(HeapCheckContext * ctx, char *msg)\n> +TransactionIdValidInRel(TransactionId xid, HeapCheckContext * ctx)\n> +check_tuphdr_xids(HeapTupleHeader tuphdr, HeapCheckContext * ctx)\n> \n> This is what happens when you pgindent without adding all the right\n> things to typedefs.list first ... or when you don't pgindent and have\n> odd ideas about how to indent things.\n\nHmm. I don't see the three lines of code you are quoting. Which patch is that from?\n\n> \n> + /*\n> + * In principle, there is nothing to prevent a scan over a large, highly\n> + * corrupted table from using workmem worth of memory building up the\n> + * tuplestore. Don't leak the msg argument memory.\n> + */\n> + pfree(msg);\n> \n> Maybe change the second sentence to something like: \"That should be\n> OK, else the user can lower work_mem, but we'd better not leak any\n> additional memory.\"\n\nIt may be a little wordy, but I went with\n\n /*\n * In principle, there is nothing to prevent a scan over a large, highly\n * corrupted table from using workmem worth of memory building up the\n * tuplestore. That's ok, but if we also leak the msg argument memory\n * until the end of the query, we could exceed workmem by more than a\n * trivial amount. Therefore, free the msg argument each time we are\n * called rather than waiting for our current memory context to be freed.\n */\n\n> +/*\n> + * check_tuphdr_xids\n> + *\n> + * Determine whether tuples are visible for verification. Similar to\n> + * HeapTupleSatisfiesVacuum, but with critical differences.\n> + *\n> + * 1) Does not touch hint bits. 
It seems imprudent to write hint bits\n> + * to a table during a corruption check.\n> + * 2) Only makes a boolean determination of whether verification should\n> + * see the tuple, rather than doing extra work for vacuum-related\n> + * categorization.\n> + *\n> + * The caller should already have checked that xmin and xmax are not out of\n> + * bounds for the relation.\n> + */\n> \n> First, check_tuphdr_xids() doesn't seem like a very good name. If you\n> have a function with that name and, like this one, it returns Boolean,\n> what does true mean? What does false mean? Kinda hard to tell. And\n> also, check the tuple header XIDs *for what*? If you called it, say,\n> tuple_is_visible(), that would be self-evident.\n\nChanged.\n\n> Second, consider that we hold at least AccessShareLock on the relation\n> - actually, ATM we hold ShareUpdateExclusiveLock. Either way, there\n> cannot be a concurrent modification to the tuple descriptor in\n> progress. Therefore, I think that only a HEAPTUPLE_DEAD tuple is\n> potentially using a non-current schema. If the tuple is\n> HEAPTUPLE_INSERT_IN_PROGRESS, there's either no ADD COLUMN in the\n> inserting transaction, or that transaction committed before we got our\n> lock. Similarly if it's HEAPTUPLE_DELETE_IN_PROGRESS or\n> HEAPTUPLE_RECENTLY_DEAD, the original inserter must've committed\n> before we got our lock. Or if it's both inserted and deleted in the\n> same transaction, say, then that transaction committed before we got\n> our lock or else contains no relevant DDL. IOW, I think you can check\n> everything but dead tuples here.\n\nOk, I have changed tuple_is_visible to return true rather than false for those other cases.\n\n> Capitalization and punctuation for messages complaining about problems\n> need to be consistent. 
verify_heapam() has \"Invalid redirect line\n> pointer offset %u out of bounds\" which starts with a capital letter,\n> but check_tuphdr_xids() has \"heap tuple with XMAX_IS_MULTI is neither\n> LOCKED_ONLY nor has a valid xmax\" which does not. I vote for lower\n> case, but in any event it should be the same.\n\nI standardized on all lowercase text, though I left embedded symbols and constants such as LOCKED_ONLY alone.\n\n> Also,\n> check_tuphdr_xids() has \"tuple xvac = %u invalid\" which is either a\n> debugging leftover or a very unclear complaint.\n\nRight. That has been changed to \"old-style VACUUM FULL transaction ID %u is invalid in this relation\".\n\n> I think some real work\n> needs to be put into the phrasing of these messages so that it's more\n> clear exactly what is going on and why it's bad. For example the first\n> example in this paragraph is clearly a problem of some kind, but it's\n> not very clear exactly what is happening: is %u the offset of the\n> invalid line redirect or the value to which it points? I don't think\n> the phrasing is very grammatical, which makes it hard to tell which is\n> meant, and I actually think it would be a good idea to include both\n> things.\n\nBeware that every row returned from amcheck has more fields than just the error message.\n\n blkno OUT bigint,\n offnum OUT integer,\n lp_off OUT smallint,\n lp_flags OUT smallint,\n lp_len OUT smallint,\n attnum OUT integer,\n chunk OUT integer,\n msg OUT text\n\nRather than including blkno, offnum, lp_off, lp_flags, lp_len, attnum, or chunk in the message, it would be better to remove these things from messages that include them. For the specific message under consideration, I've converted the text to \"line pointer redirection to item at offset number %u is outside valid bounds %u .. %u\". 
That avoids duplicating the offset information of the referring item, while reporting the offset of the referred item.

> Project policy is generally against splitting a string across multiple
> lines to fit within 80 characters. We like to fit within 80
> characters, but we like to be able to grep for strings more, and
> breaking them up like this makes that harder.

Thanks for clarifying the project policy. I joined these message strings back together.

> + confess(ctx,
> + pstrdup(\"corrupt toast chunk va_header\"));
> 
> This is another message that I don't think is very clear. There's two
> elements to that. One is that the phrasing is not very good, and the
> other is that there are no % escapes

Changed to \"corrupt extended toast chunk with sequence number %d has invalid varlena header %0x\". I think all the other information about where the corruption was found is already present in the other returned columns.

> What's somebody going to do when
> they see this message? First, they're probably going to have to look
> at the code to figure out in which circumstances it gets generated;
> that's a sign that the message isn't phrased clearly enough. That will
> tell them that an unexpected bit pattern has been found, but not what
> that unexpected bit pattern actually was. So then, they're going to
> have to try to find the relevant va_header by some other means and
> fish out the relevant bit so that they can see what actually went
> wrong.

Right.

> 
> + * Checks the current attribute as tracked in ctx for corruption. Records
> + * any corruption found in ctx->corruption.
> + *
> + *
> 
> Extra blank line.

Fixed.

> + Form_pg_attribute thisatt = TupleDescAttr(RelationGetDescr(ctx->rel),
> +
> ctx->attnum);
> 
> Maybe you could avoid the line wrap by declaring this without
> initializing it, and then initializing it as a separate statement.

Yes, I like that better. 
I did not need to do the same with infomask, but it looks better to me to break the declaration and initialization for both, so I did that.\n\n> \n> + confess(ctx, psprintf(\"t_hoff + offset > lp_len (%u + %u > %u)\",\n> +\n> ctx->tuphdr->t_hoff, ctx->offset,\n> + ctx->lp_len));\n> \n> Uggh! This isn't even remotely an English sentence. I don't think\n> formulas are the way to go here, but I like the idea of formulas in\n> some places and written-out messages in others even less. I guess the\n> complaint here in English is something like \"tuple attribute %d should\n> start at offset %u, but tuple length is only %u\" or something of that\n> sort. Also, it seems like this complaint really ought to have been\n> reported on the *preceding* loop iteration, either complaining that\n> (1) the fixed length attribute is more than the number of remaining\n> bytes in the tuple or (2) the varlena header for the tuple specifies\n> an excessively high length. It seems like you're blaming the wrong\n> attribute for the problem.\n\nYeah, and it wouldn't complain if the final attribute of a tuple was overlong, as there wouldn't be a next attribute to blame it on. I've changed it to report as you suggest, although it also still complains if the first attribute starts outside the bounds of the tuple. The two error messages now read as \"tuple attribute should start at offset %u, but tuple length is only %u\" and \"tuple attribute of length %u ends at offset %u, but tuple length is only %u\".\n\n> BTW, the header comments for this function (check_tuple_attribute)\n> neglect to document the meaning of the return value.\n\nFixed.\n\n> + confess(ctx, psprintf(\"tuple xmax = %u\n> precedes relation \"\n> +\n> \"relfrozenxid = %u\",\n> \n> This is another example of these messages needing work. The\n> corresponding message from heap_prepare_freeze_tuple() is \"found\n> update xid %u from before relfrozenxid %u\". 
That's better, because we\n> don't normally include equals signs in our messages like this, and\n> also because \"relation relfrozenxid\" is redundant. I think this should\n> say something like \"tuple xmax %u precedes relfrozenxid %u\".\n> \n> + confess(ctx, psprintf(\"tuple xmax = %u is in\n> the future\",\n> + xmax));\n> \n> And then this could be something like \"tuple xmax %u follows\n> last-assigned xid %u\". That would be more symmetric and more\n> informative.\n\nBoth of these have been changed.\n\n> + if (SizeofHeapTupleHeader + BITMAPLEN(ctx->natts) >\n> ctx->tuphdr->t_hoff)\n> \n> I think we should be able to predict the exact value of t_hoff and\n> complain if it isn't precisely equal to the expected value. Or is that\n> not possible for some reason?\n\nThat is possible, and I've updated the error message to match. There are cases where you can't know if the HEAP_HASNULL bit is wrong or if the t_hoff value is wrong, but I've changed the code to just compute the length based on the HEAP_HASNULL setting and use that as the expected value, and complain when the actual value does not match the expected. That sidesteps the problem of not knowing exactly which value to blame.\n\n> Is there some place that's checking that lp_len >=\n> SizeOfHeapTupleHeader before check_tuple() goes and starts poking into\n> the header? If not, there should be.\n\nGood catch. check_tuple() now does that before reading the header.\n\n> +$node->command_ok(\n> \n> + [\n> + 'pg_amcheck', '-p', $port, 'postgres'\n> + ],\n> + 'pg_amcheck all schemas and tables implicitly');\n> +\n> +$node->command_ok(\n> + [\n> + 'pg_amcheck', '-i', '-p', $port, 'postgres'\n> + ],\n> + 'pg_amcheck all schemas, tables and indexes');\n> \n> I haven't really looked through the btree-checking and pg_amcheck\n> parts of this much yet, but this caught my eye. Why would the default\n> be to check tables but not indexes? 
I think the default ought to be to\n> check everything we know how to check.\n\nI have changed the default to match your expectations.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 20 Jul 2020 14:02:28 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Hi Mark,\n\nI think new structures should be listed in src/tools/pgindent/typedefs.list,\notherwise, pgindent might disturb its indentation.\n\nRegards,\nAmul\n\n\nOn Tue, Jul 21, 2020 at 2:32 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jul 16, 2020, at 12:38 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Jul 6, 2020 at 2:06 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >> The v10 patch without these ideas is here:\n> >\n> > Along the lines of what Alvaro was saying before, I think this\n> > definitely needs to be split up into a series of patches. The commit\n> > message for v10 describes it doing three pretty separate things, and I\n> > think that argues for splitting it into a series of three patches. I'd\n> > argue for this ordering:\n> >\n> > 0001 Refactoring existing amcheck btree checking functions to optionally\n> > return corruption information rather than ereport'ing it. 
This is\n> > used by the new pg_amcheck command line tool for reporting back to\n> > the caller.\n> >\n> > 0002 Adding new function verify_heapam for checking a heap relation and\n> > associated toast relation, if any, to contrib/amcheck.\n> >\n> > 0003 Adding new contrib module pg_amcheck, which is a command line\n> > interface for running amcheck's verifications against tables and\n> > indexes.\n> >\n> > It's too hard to review things like this when it's all mixed together.\n>\n> The v11 patch series is broken up as you suggest.\n>\n> > +++ b/contrib/amcheck/t/skipping.pl\n> >\n> > The name of this file is inconsistent with the tree's usual\n> > convention, which is all stuff like 001_whatever.pl, except for\n> > src/test/modules/brin, which randomly decided to use two digits\n> > instead of three. There's no precedent for a test file with no leading\n> > numeric digits. Also, what does \"skipping\" even have to do with what\n> > the test is checking? Maybe it's intended to refer to the new error\n> > handling \"skipping\" the actual error in favor of just reporting it\n> > without stopping, but that's not really what the word \"skipping\"\n> > normally means. Finally, it seems a bit over-engineered: do we really\n> > need 183 test cases to check that detecting a problem doesn't lead to\n> > an abort? Like, if that's the purpose of the test, I'd expect it to\n> > check one corrupt relation and one non-corrupt relation, each with and\n> > without the no-error behavior. And that's about it. Or maybe it's\n> > talking about skipping pages during the checks, because those pages\n> > are all-visible or all-frozen? It's not very clear to me what's going\n> > on here.\n>\n> The \"skipping\" did originally refer to testing verify_heapam()'s option to skip all-visible or all-frozen blocks. 
I have renamed it 001_verify_heapam.pl, since it tests that function.\n>\n> >\n> > + TransactionId nextKnownValidXid;\n> > + TransactionId oldestValidXid;\n> >\n> > Please add explanatory comments indicating what these are intended to\n> > mean.\n>\n> Done.\n>\n> > For most of the the structure members, the brief comments\n> > already present seem sufficient; but here, more explanation looks\n> > necessary and less is provided. The \"Values for returning tuples\"\n> > could possibly also use some more detail.\n>\n> Ok, I've expanded the comments for these.\n>\n> > +#define HEAPCHECK_RELATION_COLS 8\n> >\n> > I think this should really be at the top of the file someplace.\n> > Sometimes people have adopted this style when the #define is only used\n> > within the function that contains it, but that's not the case here.\n>\n> Done.\n>\n> >\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"unrecognized parameter for 'skip': %s\", skip),\n> > + errhint(\"please choose from 'all visible', 'all frozen', \"\n> > + \"or NULL\")));\n> >\n> > I think it would be better if we had three string values selecting the\n> > different behaviors, and made the parameter NOT NULL but with a\n> > default. It seems like that would be easier to understand. Right now,\n> > I can tell that my options for what to skip are \"all visible\", \"all\n> > frozen\", and, uh, some other thing that I don't know what it is. I'm\n> > gonna guess the third option is to skip nothing, but it seems best to\n> > make that explicit. Also, should we maybe consider spelling this\n> > 'all-visible' and 'all-frozen' with dashes, instead of using spaces?\n> > Spaces in an option value seems a little icky to me somehow.\n>\n> I've made the options 'all-visible', 'all-frozen', and 'none'. It defaults to 'none'. 
I did not mark the function as strict, as I think NULL is a reasonable value (and the default) for startblock and endblock.\n>\n> > + int64 startblock = -1;\n> > + int64 endblock = -1;\n> > ...\n> > + if (!PG_ARGISNULL(3))\n> > + startblock = PG_GETARG_INT64(3);\n> > + if (!PG_ARGISNULL(4))\n> > + endblock = PG_GETARG_INT64(4);\n> > ...\n> > + if (startblock < 0)\n> > + startblock = 0;\n> > + if (endblock < 0 || endblock > ctx.nblocks)\n> > + endblock = ctx.nblocks;\n> > +\n> > + for (ctx.blkno = startblock; ctx.blkno < endblock; ctx.blkno++)\n> >\n> > So, the user can specify a negative value explicitly and it will be\n> > treated as the default, and an endblock value that's larger than the\n> > relation size will be treated as the relation size. The way pg_prewarm\n> > does the corresponding checks seems superior: null indicates the\n> > default value, and any non-null value must be within range or you get\n> > an error. Also, you seem to be treating endblock as the first block\n> > that should not be checked, whereas pg_prewarm takes what seems to me\n> > to be the more natural interpretation: the end block is the last block\n> > that IS checked. If you do it this way, then someone who specifies the\n> > same start and end block will check no blocks -- silently, I think.\n>\n> Under that regime, for relations with one block of data, (startblock=0, endblock=0) means \"check the zero'th block\", and for relations with no blocks of data, specifying any non-null (startblock,endblock) pair raises an exception. I don't like that too much, but I'm happy to defer to precedent. Since you say pg_prewarm works this way (I did not check), I have changed verify_heapam to do likewise.\n>\n> > + if (skip_all_frozen || skip_all_visible)\n> >\n> > Since you can't skip all frozen without skipping all visible, this\n> > test could be simplified. 
Or you could introduce a three-valued enum\n> > and test that skip_pages != SKIP_PAGES_NONE, which might be even\n> > better.\n>\n> It works now with a three-valued enum.\n>\n> > + /* We must unlock the page from the prior iteration, if any */\n> > + Assert(ctx.blkno == InvalidBlockNumber || ctx.buffer != InvalidBuffer);\n> >\n> > I don't understand this assertion, and I don't understand the comment,\n> > either. I think ctx.blkno can never be equal to InvalidBlockNumber\n> > because we never set it to anything outside the range of 0..(endblock\n> > - 1), and I think ctx.buffer must always be unequal to InvalidBuffer\n> > because we just initialized it by calling ReadBufferExtended(). So I\n> > think this assertion would still pass if we wrote && rather than ||.\n> > But even then, I don't know what that has to do with the comment or\n> > why it even makes sense to have an assertion for that in the first\n> > place.\n>\n> Yes, it is vestigial. Removed.\n>\n> > + /*\n> > + * Open the relation. We use ShareUpdateExclusive to prevent concurrent\n> > + * vacuums from changing the relfrozenxid, relminmxid, or advancing the\n> > + * global oldestXid to be newer than those. This protection\n> > saves us from\n> > + * having to reacquire the locks and recheck those minimums for every\n> > + * tuple, which would be expensive.\n> > + */\n> > + ctx.rel = relation_open(relid, ShareUpdateExclusiveLock);\n> >\n> > I don't think we'd need to recheck for every tuple, would we? Just for\n> > cases where there's an apparent violation of the rules.\n>\n> It's a bit fuzzy what an \"apparent violation\" might be if both ends of the range of valid xids may be moving, and arbitrarily much. It's also not clear how often to recheck, since you'd be dealing with a race condition no matter how often you check. Perhaps the comments shouldn't mention how often you'd have to recheck, since there is no really defensible choice for that. 
I removed the offending sentence.\n>\n> > I guess that\n> > could still be expensive if there's a lot of them, but needing\n> > ShareUpdateExclusiveLock rather than only AccessShareLock is a little\n> > unfortunate.\n>\n> I welcome strategies that would allow for taking a lesser lock.\n>\n> > It's also unclear to me why this concerns itself with relfrozenxid and\n> > the cluster-wide oldestXid value but not with datfrozenxid. It seems\n> > like if we're going to sanity-check the relfrozenxid against the\n> > cluster-wide value, we ought to also check it against the\n> > database-wide value. Checking neither would also seem like a plausible\n> > choice. But it seems very strange to only check against the\n> > cluster-wide value.\n>\n> If the relation has a normal relfrozenxid, then the oldest valid xid we can encounter in the table is relfrozenxid. Otherwise, each row needs to be compared against some other minimum xid value.\n>\n> Logically, that other minimum xid value should be the oldest valid xid for the database, which must logically be at least as old as any valid row in the table and no older than the oldest valid xid for the cluster.\n>\n> Unfortunately, if the comments in commands/vacuum.c circa line 1572 can be believed, and if I am reading them correctly, the stored value for the oldest valid xid in the database has been known to be corrupted by bugs in pg_upgrade. This is awful. If I compare the xid of a row in a table against the oldest xid value for the database, and the xid of the row is older, what can I do? I don't have a principled basis for determining which one of them is wrong.\n>\n> The logic in verify_heapam is conservative; it makes no guarantees about finding and reporting all corruption, but if it does report a row as corrupt, you can bank on that, bugs in verify_heapam itself notwithstanding.
I think this is a good choice; a tool with only false negatives is much more useful than one with both false positives and false negatives.\n>\n> I have added a comment about my reasoning to verify_heapam.c. I'm happy to be convinced of a better strategy for handling this situation.\n>\n> >\n> > + StaticAssertStmt(InvalidOffsetNumber + 1 == FirstOffsetNumber,\n> > + \"InvalidOffsetNumber\n> > increments to FirstOffsetNumber\");\n> >\n> > If you are going to rely on this property, I agree that it is good to\n> > check it. But it would be better to NOT rely on this property, and I\n> > suspect the code can be written quite cleanly without relying on it.\n> > And actually, that's what you did, because you first set ctx.offnum =\n> > InvalidOffsetNumber but then just after that you set ctx.offnum = 0 in\n> > the loop initializer. So AFAICS the first initializer, and the static\n> > assert, are pointless.\n>\n> Ah, right you are. Removed.\n>\n> >\n> > + if (ItemIdIsRedirected(ctx.itemid))\n> > + {\n> > + uint16 redirect = ItemIdGetRedirect(ctx.itemid);\n> > + if (redirect <= SizeOfPageHeaderData\n> > || redirect >= ph->pd_lower)\n> > ...\n> > + if ((redirect - SizeOfPageHeaderData)\n> > % sizeof(uint16))\n> >\n> > I think that ItemIdGetRedirect() returns an offset, not a byte\n> > position. So the expectation that I would have is that it would be any\n> > integer >= 0 and <= maxoff. Am I confused?\n>\n> I think you are right about it returning an offset, which should be between FirstOffsetNumber and maxoff, inclusive. I have updated the checks.\n>\n> > BTW, it seems like it might\n> > be good to complain if the item to which it points is LP_UNUSED...\n> > AFAIK that shouldn't happen.\n>\n> Thanks for mentioning that. 
It now checks for that.\n>\n> > + errmsg(\"\\\"%s\\\" is not a heap AM\",\n> >\n> > I think the correct wording would be just \"is not a heap.\" The \"heap\n> > AM\" is the thing in pg_am, not a specific table.\n>\n> Fixed.\n>\n> > +confess(HeapCheckContext * ctx, char *msg)\n> > +TransactionIdValidInRel(TransactionId xid, HeapCheckContext * ctx)\n> > +check_tuphdr_xids(HeapTupleHeader tuphdr, HeapCheckContext * ctx)\n> >\n> > This is what happens when you pgindent without adding all the right\n> > things to typedefs.list first ... or when you don't pgindent and have\n> > odd ideas about how to indent things.\n>\n> Hmm. I don't see the three lines of code you are quoting. Which patch is that from?\n>\n> >\n> > + /*\n> > + * In principle, there is nothing to prevent a scan over a large, highly\n> > + * corrupted table from using workmem worth of memory building up the\n> > + * tuplestore. Don't leak the msg argument memory.\n> > + */\n> > + pfree(msg);\n> >\n> > Maybe change the second sentence to something like: \"That should be\n> > OK, else the user can lower work_mem, but we'd better not leak any\n> > additional memory.\"\n>\n> It may be a little wordy, but I went with\n>\n> /*\n> * In principle, there is nothing to prevent a scan over a large, highly\n> * corrupted table from using workmem worth of memory building up the\n> * tuplestore. That's ok, but if we also leak the msg argument memory\n> * until the end of the query, we could exceed workmem by more than a\n> * trivial amount. Therefore, free the msg argument each time we are\n> * called rather than waiting for our current memory context to be freed.\n> */\n>\n> > +/*\n> > + * check_tuphdr_xids\n> > + *\n> > + * Determine whether tuples are visible for verification. Similar to\n> > + * HeapTupleSatisfiesVacuum, but with critical differences.\n> > + *\n> > + * 1) Does not touch hint bits. 
It seems imprudent to write hint bits\n> > + * to a table during a corruption check.\n> > + * 2) Only makes a boolean determination of whether verification should\n> > + * see the tuple, rather than doing extra work for vacuum-related\n> > + * categorization.\n> > + *\n> > + * The caller should already have checked that xmin and xmax are not out of\n> > + * bounds for the relation.\n> > + */\n> >\n> > First, check_tuphdr_xids() doesn't seem like a very good name. If you\n> > have a function with that name and, like this one, it returns Boolean,\n> > what does true mean? What does false mean? Kinda hard to tell. And\n> > also, check the tuple header XIDs *for what*? If you called it, say,\n> > tuple_is_visible(), that would be self-evident.\n>\n> Changed.\n>\n> > Second, consider that we hold at least AccessShareLock on the relation\n> > - actually, ATM we hold ShareUpdateExclusiveLock. Either way, there\n> > cannot be a concurrent modification to the tuple descriptor in\n> > progress. Therefore, I think that only a HEAPTUPLE_DEAD tuple is\n> > potentially using a non-current schema. If the tuple is\n> > HEAPTUPLE_INSERT_IN_PROGRESS, there's either no ADD COLUMN in the\n> > inserting transaction, or that transaction committed before we got our\n> > lock. Similarly if it's HEAPTUPLE_DELETE_IN_PROGRESS or\n> > HEAPTUPLE_RECENTLY_DEAD, the original inserter must've committed\n> > before we got our lock. Or if it's both inserted and deleted in the\n> > same transaction, say, then that transaction committed before we got\n> > our lock or else contains no relevant DDL. IOW, I think you can check\n> > everything but dead tuples here.\n>\n> Ok, I have changed tuple_is_visible to return true rather than false for those other cases.\n>\n> > Capitalization and punctuation for messages complaining about problems\n> > need to be consistent. 
verify_heapam() has \"Invalid redirect line\n> > pointer offset %u out of bounds\" which starts with a capital letter,\n> > but check_tuphdr_xids() has \"heap tuple with XMAX_IS_MULTI is neither\n> > LOCKED_ONLY nor has a valid xmax\" which does not. I vote for lower\n> > case, but in any event it should be the same.\n>\n> I standardized on all lowercase text, though I left embedded symbols and constants such as LOCKED_ONLY alone.\n>\n> > Also,\n> > check_tuphdr_xids() has \"tuple xvac = %u invalid\" which is either a\n> > debugging leftover or a very unclear complaint.\n>\n> Right. That has been changed to \"old-style VACUUM FULL transaction ID %u is invalid in this relation\".\n>\n> > I think some real work\n> > needs to be put into the phrasing of these messages so that it's more\n> > clear exactly what is going on and why it's bad. For example the first\n> > example in this paragraph is clearly a problem of some kind, but it's\n> > not very clear exactly what is happening: is %u the offset of the\n> > invalid line redirect or the value to which it points? I don't think\n> > the phrasing is very grammatical, which makes it hard to tell which is\n> > meant, and I actually think it would be a good idea to include both\n> > things.\n>\n> Beware that every row returned from amcheck has more fields than just the error message.\n>\n> blkno OUT bigint,\n> offnum OUT integer,\n> lp_off OUT smallint,\n> lp_flags OUT smallint,\n> lp_len OUT smallint,\n> attnum OUT integer,\n> chunk OUT integer,\n> msg OUT text\n>\n> Rather than including blkno, offnum, lp_off, lp_flags, lp_len, attnum, or chunk in the message, it would be better to remove these things from messages that include them. For the specific message under consideration, I've converted the text to \"line pointer redirection to item at offset number %u is outside valid bounds %u .. %u\". 
That avoids duplicating the offset information of the referring item, while reporting the offset of the referred item.\n>\n> > Project policy is generally against splitting a string across multiple\n> > lines to fit within 80 characters. We like to fit within 80\n> > characters, but we like to be able to grep for strings more, and\n> > breaking them up like this makes that harder.\n>\n> Thanks for clarifying the project policy. I joined these message strings back together.\n>\n> > + confess(ctx,\n> > + pstrdup(\"corrupt toast chunk va_header\"));\n> >\n> > This is another message that I don't think is very clear. There's two\n> > elements to that. One is that the phrasing is not very good, and the\n> > other is that there are no % escapes\n>\n> Changed to \"corrupt extended toast chunk with sequence number %d has invalid varlena header %0x\". I think all the other information about where the corruption was found is already present in the other returned columns.\n>\n> > What's somebody going to do when\n> > they see this message? First, they're probably going to have to look\n> > at the code to figure out in which circumstances it gets generated;\n> > that's a sign that the message isn't phrased clearly enough. That will\n> > tell them that an unexpected bit pattern has been found, but not what\n> > that unexpected bit pattern actually was. So then, they're going to\n> > have to try to find the relevant va_header by some other means and\n> > fish out the relevant bit so that they can see what actually went\n> > wrong.\n>\n> Right.\n>\n> >\n> > + * Checks the current attribute as tracked in ctx for corruption.
Records\n> > + * any corruption found in ctx->corruption.\n> > + *\n> > + *\n> >\n> > Extra blank line.\n>\n> Fixed.\n>\n> > + Form_pg_attribute thisatt = TupleDescAttr(RelationGetDescr(ctx->rel),\n> > +\n> > ctx->attnum);\n> >\n> > Maybe you could avoid the line wrap by declaring this without\n> > initializing it, and then initializing it as a separate statement.\n>\n> Yes, I like that better. I did not need to do the same with infomask, but it looks better to me to break the declaration and initialization for both, so I did that.\n>\n> >\n> > + confess(ctx, psprintf(\"t_hoff + offset > lp_len (%u + %u > %u)\",\n> > +\n> > ctx->tuphdr->t_hoff, ctx->offset,\n> > + ctx->lp_len));\n> >\n> > Uggh! This isn't even remotely an English sentence. I don't think\n> > formulas are the way to go here, but I like the idea of formulas in\n> > some places and written-out messages in others even less. I guess the\n> > complaint here in English is something like \"tuple attribute %d should\n> > start at offset %u, but tuple length is only %u\" or something of that\n> > sort. Also, it seems like this complaint really ought to have been\n> > reported on the *preceding* loop iteration, either complaining that\n> > (1) the fixed length attribute is more than the number of remaining\n> > bytes in the tuple or (2) the varlena header for the tuple specifies\n> > an excessively high length. It seems like you're blaming the wrong\n> > attribute for the problem.\n>\n> Yeah, and it wouldn't complain if the final attribute of a tuple was overlong, as there wouldn't be a next attribute to blame it on. I've changed it to report as you suggest, although it also still complains if the first attribute starts outside the bounds of the tuple. 
The two error messages now read as \"tuple attribute should start at offset %u, but tuple length is only %u\" and \"tuple attribute of length %u ends at offset %u, but tuple length is only %u\".\n>\n> > BTW, the header comments for this function (check_tuple_attribute)\n> > neglect to document the meaning of the return value.\n>\n> Fixed.\n>\n> > + confess(ctx, psprintf(\"tuple xmax = %u\n> > precedes relation \"\n> > +\n> > \"relfrozenxid = %u\",\n> >\n> > This is another example of these messages needing work. The\n> > corresponding message from heap_prepare_freeze_tuple() is \"found\n> > update xid %u from before relfrozenxid %u\". That's better, because we\n> > don't normally include equals signs in our messages like this, and\n> > also because \"relation relfrozenxid\" is redundant. I think this should\n> > say something like \"tuple xmax %u precedes relfrozenxid %u\".\n> >\n> > + confess(ctx, psprintf(\"tuple xmax = %u is in\n> > the future\",\n> > + xmax));\n> >\n> > And then this could be something like \"tuple xmax %u follows\n> > last-assigned xid %u\". That would be more symmetric and more\n> > informative.\n>\n> Both of these have been changed.\n>\n> > + if (SizeofHeapTupleHeader + BITMAPLEN(ctx->natts) >\n> > ctx->tuphdr->t_hoff)\n> >\n> > I think we should be able to predict the exact value of t_hoff and\n> > complain if it isn't precisely equal to the expected value. Or is that\n> > not possible for some reason?\n>\n> That is possible, and I've updated the error message to match. There are cases where you can't know if the HEAP_HASNULL bit is wrong or if the t_hoff value is wrong, but I've changed the code to just compute the length based on the HEAP_HASNULL setting and use that as the expected value, and complain when the actual value does not match the expected. 
That sidesteps the problem of not knowing exactly which value to blame.\n>\n> > Is there some place that's checking that lp_len >=\n> > SizeOfHeapTupleHeader before check_tuple() goes and starts poking into\n> > the header? If not, there should be.\n>\n> Good catch. check_tuple() now does that before reading the header.\n>\n> > +$node->command_ok(\n> >\n> > + [\n> > + 'pg_amcheck', '-p', $port, 'postgres'\n> > + ],\n> > + 'pg_amcheck all schemas and tables implicitly');\n> > +\n> > +$node->command_ok(\n> > + [\n> > + 'pg_amcheck', '-i', '-p', $port, 'postgres'\n> > + ],\n> > + 'pg_amcheck all schemas, tables and indexes');\n> >\n> > I haven't really looked through the btree-checking and pg_amcheck\n> > parts of this much yet, but this caught my eye. Why would the default\n> > be to check tables but not indexes? I think the default ought to be to\n> > check everything we know how to check.\n>\n> I have changed the default to match your expectations.\n>\n>\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n\nOn Tue, Jul 21, 2020 at 10:58 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi Mark,\n>\n> I think new structures should be listed in src/tools/pgindent/typedefs.list,\n> otherwise, pgindent might disturb its indentation.\n>\n> Regards,\n> Amul\n>\n>\n> On Tue, Jul 21, 2020 at 2:32 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:
I don't think\n> > > the phrasing is very grammatical, which makes it hard to tell which is\n> > > meant, and I actually think it would be a good idea to include both\n> > > things.\n> >\n> > Beware that every row returned from amcheck has more fields than just the error message.\n> >\n> > blkno OUT bigint,\n> > offnum OUT integer,\n> > lp_off OUT smallint,\n> > lp_flags OUT smallint,\n> > lp_len OUT smallint,\n> > attnum OUT integer,\n> > chunk OUT integer,\n> > msg OUT text\n> >\n> > Rather than including blkno, offnum, lp_off, lp_flags, lp_len, attnum, or chunk in the message, it would be better to remove these things from messages that include them. For the specific message under consideration, I've converted the text to \"line pointer redirection to item at offset number %u is outside valid bounds %u .. %u\". That avoids duplicating the offset information of the referring item, while reporting to offset of the referred item.\n> >\n> > > Project policy is generally against splitting a string across multiple\n> > > lines to fit within 80 characters. We like to fit within 80\n> > > characters, but we like to be able to grep for strings more, and\n> > > breaking them up like this makes that harder.\n> >\n> > Thanks for clarifying the project policy. 
I joined these message strings back together.\n\nIn v11-0001 and v11-0002 patches, there are still a few more errmsg that need to\nbe joined.\n\ne.g:\n\n+ /* check to see if caller supports us returning a tuplestore */\n+ if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"set-valued function called in context that cannot \"\n+ \"accept a set\")));\n+ if (!(rsinfo->allowedModes & SFRM_Materialize))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"materialize mode required, but it is not allowed \"\n+ \"in this context\")));\n\n> >\n> > > + confess(ctx,\n> > > + pstrdup(\"corrupt toast chunk va_header\"));\n> > >\n> > > This is another message that I don't think is very clear. There's two\n> > > elements to that. One is that the phrasing is not very good, and the\n> > > other is that there are no % escapes\n> >\n> > Changed to \"corrupt extended toast chunk with sequence number %d has invalid varlena header %0x\". I think all the other information about where the corruption was found is already present in the other returned columns.\n> >\n> > > What's somebody going to do when\n> > > they see this message? First, they're probably going to have to look\n> > > at the code to figure out in which circumstances it gets generated;\n> > > that's a sign that the message isn't phrased clearly enough. That will\n> > > tell them that an unexpected bit pattern has been found, but not what\n> > > that unexpected bit pattern actually was. So then, they're going to\n> > > have to try to find the relevant va_header by some other means and\n> > > fish out the relevant bit so that they can see what actually went\n> > > wrong.\n> >\n> > Right.\n> >\n> > >\n> > > + * Checks the current attribute as tracked in ctx for corruption. 
Records\n> > > + * any corruption found in ctx->corruption.\n> > > + *\n> > > + *\n> > >\n> > > Extra blank line.\n> >\n> > Fixed.\n> >\n> > > + Form_pg_attribute thisatt = TupleDescAttr(RelationGetDescr(ctx->rel),\n> > > +\n> > > ctx->attnum);\n> > >\n> > > Maybe you could avoid the line wrap by declaring this without\n> > > initializing it, and then initializing it as a separate statement.\n> >\n> > Yes, I like that better. I did not need to do the same with infomask, but it looks better to me to break the declaration and initialization for both, so I did that.\n> >\n> > >\n> > > + confess(ctx, psprintf(\"t_hoff + offset > lp_len (%u + %u > %u)\",\n> > > +\n> > > ctx->tuphdr->t_hoff, ctx->offset,\n> > > + ctx->lp_len));\n> > >\n> > > Uggh! This isn't even remotely an English sentence. I don't think\n> > > formulas are the way to go here, but I like the idea of formulas in\n> > > some places and written-out messages in others even less. I guess the\n> > > complaint here in English is something like \"tuple attribute %d should\n> > > start at offset %u, but tuple length is only %u\" or something of that\n> > > sort. Also, it seems like this complaint really ought to have been\n> > > reported on the *preceding* loop iteration, either complaining that\n> > > (1) the fixed length attribute is more than the number of remaining\n> > > bytes in the tuple or (2) the varlena header for the tuple specifies\n> > > an excessively high length. It seems like you're blaming the wrong\n> > > attribute for the problem.\n> >\n> > Yeah, and it wouldn't complain if the final attribute of a tuple was overlong, as there wouldn't be a next attribute to blame it on. I've changed it to report as you suggest, although it also still complains if the first attribute starts outside the bounds of the tuple. 
The two error messages now read as \"tuple attribute should start at offset %u, but tuple length is only %u\" and \"tuple attribute of length %u ends at offset %u, but tuple length is only %u\".\n> >\n> > > BTW, the header comments for this function (check_tuple_attribute)\n> > > neglect to document the meaning of the return value.\n> >\n> > Fixed.\n> >\n> > > + confess(ctx, psprintf(\"tuple xmax = %u\n> > > precedes relation \"\n> > > +\n> > > \"relfrozenxid = %u\",\n> > >\n> > > This is another example of these messages needing work. The\n> > > corresponding message from heap_prepare_freeze_tuple() is \"found\n> > > update xid %u from before relfrozenxid %u\". That's better, because we\n> > > don't normally include equals signs in our messages like this, and\n> > > also because \"relation relfrozenxid\" is redundant. I think this should\n> > > say something like \"tuple xmax %u precedes relfrozenxid %u\".\n> > >\n> > > + confess(ctx, psprintf(\"tuple xmax = %u is in\n> > > the future\",\n> > > + xmax));\n> > >\n> > > And then this could be something like \"tuple xmax %u follows\n> > > last-assigned xid %u\". That would be more symmetric and more\n> > > informative.\n> >\n> > Both of these have been changed.\n> >\n> > > + if (SizeofHeapTupleHeader + BITMAPLEN(ctx->natts) >\n> > > ctx->tuphdr->t_hoff)\n> > >\n> > > I think we should be able to predict the exact value of t_hoff and\n> > > complain if it isn't precisely equal to the expected value. Or is that\n> > > not possible for some reason?\n> >\n> > That is possible, and I've updated the error message to match. There are cases where you can't know if the HEAP_HASNULL bit is wrong or if the t_hoff value is wrong, but I've changed the code to just compute the length based on the HEAP_HASNULL setting and use that as the expected value, and complain when the actual value does not match the expected. 
That sidesteps the problem of not knowing exactly which value to blame.\n> >\n> > > Is there some place that's checking that lp_len >=\n> > > SizeOfHeapTupleHeader before check_tuple() goes and starts poking into\n> > > the header? If not, there should be.\n> >\n> > Good catch. check_tuple() now does that before reading the header.\n> >\n> > > +$node->command_ok(\n> > >\n> > > + [\n> > > + 'pg_amcheck', '-p', $port, 'postgres'\n> > > + ],\n> > > + 'pg_amcheck all schemas and tables implicitly');\n> > > +\n> > > +$node->command_ok(\n> > > + [\n> > > + 'pg_amcheck', '-i', '-p', $port, 'postgres'\n> > > + ],\n> > > + 'pg_amcheck all schemas, tables and indexes');\n> > >\n> > > I haven't really looked through the btree-checking and pg_amcheck\n> > > parts of this much yet, but this caught my eye. Why would the default\n> > > be to check tables but not indexes? I think the default ought to be to\n> > > check everything we know how to check.\n> >\n> > I have changed the default to match your expectations.\n> >\n> >\n> >\n> > —\n> > Mark Dilger\n> > EnterpriseDB: http://www.enterprisedb.com\n> > The Enterprise PostgreSQL Company\n> >\n> >\n> >\n\n\n", "msg_date": "Tue, 21 Jul 2020 12:20:52 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jul 20, 2020, at 11:50 PM, Amul Sul <sulamul@gmail.com> wrote:\n> \n> On Tue, Jul 21, 2020 at 10:58 AM Amul Sul <sulamul@gmail.com> wrote:\n>> \n>> Hi Mark,\n>> \n>> I think new structures should be listed in src/tools/pgindent/typedefs.list,\n>> otherwise, pgindent might disturb its indentation.\n>> \n\n<snip>\n\n> \n> In v11-0001 and v11-0002 patches, there are still a few more errmsg that need to\n> be joined.\n> \n> e.g:\n> \n> + /* check to see if caller supports us returning a tuplestore */\n> + if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + 
errmsg(\"set-valued function called in context that cannot \"\n> + \"accept a set\")));\n> + if (!(rsinfo->allowedModes & SFRM_Materialize))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"materialize mode required, but it is not allowed \"\n> + \"in this context\")));\n\nThanks for the review!\n\nI believe these v12 patches resolve the two issues you raised.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 21 Jul 2020 17:47:12 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Jul 21, 2020 at 2:32 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> [....]\n> >\n> > + StaticAssertStmt(InvalidOffsetNumber + 1 == FirstOffsetNumber,\n> > + \"InvalidOffsetNumber\n> > increments to FirstOffsetNumber\");\n> >\n> > If you are going to rely on this property, I agree that it is good to\n> > check it. But it would be better to NOT rely on this property, and I\n> > suspect the code can be written quite cleanly without relying on it.\n> > And actually, that's what you did, because you first set ctx.offnum =\n> > InvalidOffsetNumber but then just after that you set ctx.offnum = 0 in\n> > the loop initializer. So AFAICS the first initializer, and the static\n> > assert, are pointless.\n>\n> Ah, right you are. Removed.\n>\n\nI can see the same assert and the unnecessary assignment in v12-0002, is that\nthe same thing that is supposed to be removed, or am I missing something?\n\n> [....]\n> > +confess(HeapCheckContext * ctx, char *msg)\n> > +TransactionIdValidInRel(TransactionId xid, HeapCheckContext * ctx)\n> > +check_tuphdr_xids(HeapTupleHeader tuphdr, HeapCheckContext * ctx)\n> >\n> > This is what happens when you pgindent without adding all the right\n> > things to typedefs.list first ... 
or when you don't pgindent and have\n> > odd ideas about how to indent things.\n>\n> Hmm. I don't see the three lines of code you are quoting. Which patch is that from?\n>\n\nI think it was the same thing related to my previous suggestion to list new\nstructures to typedefs.list. V12 has listed new structures but I think there\nare still some more adjustments needed in the code e.g. see space between\nHeapCheckContext and * (asterisk) that need to be fixed. I am not sure if the\npgindent will do that or not.\n\nHere are a few more minor comments for the v12-0002 patch & some of them\napply to other patches as well:\n\n #include \"utils/snapmgr.h\"\n-\n+#include \"amcheck.h\"\n\nDoesn't seem to be at the correct place -- need to be in sorted order.\n\n\n+ if (!PG_ARGISNULL(3))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"starting block \" INT64_FORMAT\n+ \" is out of bounds for relation with no blocks\",\n+ PG_GETARG_INT64(3))));\n+ if (!PG_ARGISNULL(4))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"ending block \" INT64_FORMAT\n+ \" is out of bounds for relation with no blocks\",\n+ PG_GETARG_INT64(4))));\n\nI think these errmsg() strings also should be in one line.\n\n\n+ if (fatal)\n+ {\n+ if (ctx.toast_indexes)\n+ toast_close_indexes(ctx.toast_indexes, ctx.num_toast_indexes,\n+ ShareUpdateExclusiveLock);\n+ if (ctx.toastrel)\n+ table_close(ctx.toastrel, ShareUpdateExclusiveLock);\n\nToast index and rel closing block style is not the same as at the ending of\nverify_heapam().\n\n\n+ /* If we get this far, we know the relation has at least one block */\n+ startblock = PG_ARGISNULL(3) ? 0 : PG_GETARG_INT64(3);\n+ endblock = PG_ARGISNULL(4) ? ((int64) ctx.nblocks) - 1 : PG_GETARG_INT64(4);\n+ if (startblock < 0 || endblock >= ctx.nblocks || startblock > endblock)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"block range \" INT64_FORMAT \" .. 
\" INT64_FORMAT\n+ \" is out of bounds for relation with block count %u\",\n+ startblock, endblock, ctx.nblocks)));\n+\n...\n...\n+ if (startblock < 0)\n+ startblock = 0;\n+ if (endblock < 0 || endblock > ctx.nblocks)\n+ endblock = ctx.nblocks;\n\nOther than endblock < 0 case, do we really need that? I think due to the above\nerror check the rest of the cases will not reach this place.\n\n\n+ confess(ctx, psprintf(\n+ \"tuple xmax %u follows last assigned xid %u\",\n+ xmax, ctx->nextKnownValidXid));\n+ fatal = true;\n+ }\n+ }\n+\n+ /* Check for tuple header corruption */\n+ if (ctx->tuphdr->t_hoff < SizeofHeapTupleHeader)\n+ {\n+ confess(ctx,\n+ psprintf(\"tuple's header size is %u bytes which is less than the %u\nbyte minimum valid header size\",\n+ ctx->tuphdr->t_hoff,\n+ (unsigned) SizeofHeapTupleHeader));\n\nconfess() call has two different code styles, first one where psprintf()'s only\nargument got its own line and second style where psprintf has its own line with\nthe argument. I think the 2nd style is what we do follow & correct, not the\nformer.\n\n\n+ if (rel->rd_rel->relam != HEAP_TABLE_AM_OID)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"\\\"%s\\\" is not a heap\",\n+ RelationGetRelationName(rel))));\n\nLike elsewhere, can we have errmsg as \"only heap AM is supported\" and error\ncode is ERRCODE_FEATURE_NOT_SUPPORTED ?\n\n\nThat all, for now, apologize for multiple review emails.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 27 Jul 2020 09:57:04 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jul 26, 2020, at 9:27 PM, Amul Sul <sulamul@gmail.com> wrote:\n> \n> On Tue, Jul 21, 2020 at 2:32 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> [....]\n>>> \n>>> + StaticAssertStmt(InvalidOffsetNumber + 1 == FirstOffsetNumber,\n>>> + \"InvalidOffsetNumber\n>>> increments to FirstOffsetNumber\");\n>>> \n>>> If you are going to 
rely on this property, I agree that it is good to\n>>> check it. But it would be better to NOT rely on this property, and I\n>>> suspect the code can be written quite cleanly without relying on it.\n>>> And actually, that's what you did, because you first set ctx.offnum =\n>>> InvalidOffsetNumber but then just after that you set ctx.offnum = 0 in\n>>> the loop initializer. So AFAICS the first initializer, and the static\n>>> assert, are pointless.\n>> \n>> Ah, right you are. Removed.\n>> \n> \n> I can see the same assert and the unnecessary assignment in v12-0002, is that\n> the same thing that is supposed to be removed, or am I missing something?\n\nThat's the same thing. I removed it, but obviously I somehow removed the removal prior to making the patch. My best guess is that I reverted some set of changes that unintentionally included this one.\n\n> \n>> [....]\n>>> +confess(HeapCheckContext * ctx, char *msg)\n>>> +TransactionIdValidInRel(TransactionId xid, HeapCheckContext * ctx)\n>>> +check_tuphdr_xids(HeapTupleHeader tuphdr, HeapCheckContext * ctx)\n>>> \n>>> This is what happens when you pgindent without adding all the right\n>>> things to typedefs.list first ... or when you don't pgindent and have\n>>> odd ideas about how to indent things.\n>> \n>> Hmm. I don't see the three lines of code you are quoting. Which patch is that from?\n>> \n> \n> I think it was the same thing related to my previous suggestion to list new\n> structures to typedefs.list. V12 has listed new structures but I think there\n> are still some more adjustments needed in the code e.g. see space between\n> HeapCheckContext and * (asterisk) that need to be fixed. I am not sure if the\n> pgindent will do that or not.\n\nHmm. I'm not seeing an example of HeapCheckContext with wrong spacing. Can you provide a file and line number? There was a problem with enum SkipPages. 
I've added that to the typedefs.list and rerun pgindent.\n\nWhile looking at that, I noticed that the function and variable naming conventions in this patch were irregular, with names like TransactionIdValidInRel (init-caps) and tuple_is_visible (underscores), so I spent some time cleaning that up for v13.\n\n> Here are a few more minor comments for the v12-0002 patch & some of them\n> apply to other patches as well:\n> \n> #include \"utils/snapmgr.h\"\n> -\n> +#include \"amcheck.h\"\n> \n> Doesn't seem to be at the correct place -- need to be in sorted order.\n\nFixed.\n\n> + if (!PG_ARGISNULL(3))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"starting block \" INT64_FORMAT\n> + \" is out of bounds for relation with no blocks\",\n> + PG_GETARG_INT64(3))));\n> + if (!PG_ARGISNULL(4))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"ending block \" INT64_FORMAT\n> + \" is out of bounds for relation with no blocks\",\n> + PG_GETARG_INT64(4))));\n> \n> I think these errmsg() strings also should be in one line.\n\nI chose not to do so, because the INT64_FORMAT bit breaks up the text even if placed all on one line. I don't feel strongly about that, though, so I'll join them for v13.\n\n> + if (fatal)\n> + {\n> + if (ctx.toast_indexes)\n> + toast_close_indexes(ctx.toast_indexes, ctx.num_toast_indexes,\n> + ShareUpdateExclusiveLock);\n> + if (ctx.toastrel)\n> + table_close(ctx.toastrel, ShareUpdateExclusiveLock);\n> \n> Toast index and rel closing block style is not the same as at the ending of\n> verify_heapam().\n\nI've harmonized the two. Thanks for noticing.\n\n> + /* If we get this far, we know the relation has at least one block */\n> + startblock = PG_ARGISNULL(3) ? 0 : PG_GETARG_INT64(3);\n> + endblock = PG_ARGISNULL(4) ? 
((int64) ctx.nblocks) - 1 : PG_GETARG_INT64(4);\n> + if (startblock < 0 || endblock >= ctx.nblocks || startblock > endblock)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"block range \" INT64_FORMAT \" .. \" INT64_FORMAT\n> + \" is out of bounds for relation with block count %u\",\n> + startblock, endblock, ctx.nblocks)));\n> +\n> ...\n> ...\n> + if (startblock < 0)\n> + startblock = 0;\n> + if (endblock < 0 || endblock > ctx.nblocks)\n> + endblock = ctx.nblocks;\n> \n> Other than endblock < 0 case\n\nThis case does not need special checking, either. The combination of checking that startblock >= 0 and that startblock <= endblock already handles it.\n\n> , do we really need that? I think due to the above\n> error check the rest of the cases will not reach this place.\n\nWe don't need any of that. Removed in v13.\n\n> + confess(ctx, psprintf(\n> + \"tuple xmax %u follows last assigned xid %u\",\n> + xmax, ctx->nextKnownValidXid));\n> + fatal = true;\n> + }\n> + }\n> +\n> + /* Check for tuple header corruption */\n> + if (ctx->tuphdr->t_hoff < SizeofHeapTupleHeader)\n> + {\n> + confess(ctx,\n> + psprintf(\"tuple's header size is %u bytes which is less than the %u\n> byte minimum valid header size\",\n> + ctx->tuphdr->t_hoff,\n> + (unsigned) SizeofHeapTupleHeader));\n> \n> confess() call has two different code styles, first one where psprintf()'s only\n> argument got its own line and second style where psprintf has its own line with\n> the argument. I think the 2nd style is what we do follow & correct, not the\n> former.\n\nOk, standardized in v13.\n\n> + if (rel->rd_rel->relam != HEAP_TABLE_AM_OID)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> + errmsg(\"\\\"%s\\\" is not a heap\",\n> + RelationGetRelationName(rel))));\n> \n> Like elsewhere, can we have errmsg as \"only heap AM is supported\" and error\n> code is ERRCODE_FEATURE_NOT_SUPPORTED ?\n\nI'm indifferent about that change. 
Done for v13.\n\n> That all, for now, apologize for multiple review emails.\n\nNot at all! I appreciate all the reviews.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 27 Jul 2020 10:01:57 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Jul 20, 2020 at 5:02 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I've made the options 'all-visible', 'all-frozen', and 'none'. It defaults to 'none'.\n\nThat looks nice.\n\n> > I guess that\n> > could still be expensive if there's a lot of them, but needing\n> > ShareUpdateExclusiveLock rather than only AccessShareLock is a little\n> > unfortunate.\n>\n> I welcome strategies that would allow for taking a lesser lock.\n\nI guess I'm not seeing why you need any particular strategy here. Say\nthat at the beginning you note the starting relfrozenxid of the table\n-- I think I would lean toward just ignoring datfrozenxid and the\ncluster-wide value completely. You also note the current value of the\ntransaction ID counter. Those are the two ends of the acceptable\nrange.\n\nLet's first consider the oldest acceptable XID, bounded by\nrelfrozenxid. If you see a value that is older than the relfrozenxid\nvalue that you noted at the start, it is definitely invalid. If you\nsee a newer value, it could still be older than the table's current\nrelfrozenxid, but that doesn't seem very worrisome. If the user\nvacuumed the table while they were running this tool, they can always\nrun the tool again afterward if they wish. 
Forcing the vacuum to wait\nby taking ShareUpdateExclusiveLock doesn't actually solve anything\nanyway: you STILL won't notice any problems the vacuum introduces, and\nin fact you are now GUARANTEED not to notice them, plus now the vacuum\nhappens later.\n\nNow let's consider the newest acceptable XID, bounded by the value of\nthe transaction ID counter. Any time you see a newer XID than the last\nvalue of the transaction ID counter that you observed, you go observe\nit again. If the value from the table still looks invalid, then you\ncomplain about it. Either way, you remember the new observation and\ncheck future tuples against that value. I think the patch is already\ndoing this anyway; if it weren't, you'd need an even stronger lock,\none sufficient to prevent any insert/update/delete activity on the\ntable altogether.\n\nMaybe I'm just being dense here -- exactly what problem are you worried about?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Jul 2020 15:52:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Jul 27, 2020 at 1:02 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Not at all! I appreciate all the reviews.\n\nReviewing 0002, reading through verify_heapam.c:\n\n+typedef enum SkipPages\n+{\n+ SKIP_ALL_FROZEN_PAGES,\n+ SKIP_ALL_VISIBLE_PAGES,\n+ SKIP_PAGES_NONE\n+} SkipPages;\n\nThis looks inconsistent. Maybe just start them all with SKIP_PAGES_.\n\n+ if (PG_ARGISNULL(0))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"missing required parameter for 'rel'\")));\n\nThis doesn't look much like other error messages in the code. 
Do\nsomething like git grep -A4 PG_ARGISNULL | grep -A3 ereport and study\nthe comparables.\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"unrecognized parameter for 'skip': %s\", skip),\n+ errhint(\"please choose from 'all-visible', 'all-frozen', or 'none'\")));\n\nSame problem. Check pg_prewarm's handling of the prewarm type, or\nEXPLAIN's handling of the FORMAT option, or similar examples. Read the\nmessage style guidelines concerning punctuation of hint and detail\nmessages.\n\n+ * Bugs in pg_upgrade are reported (see commands/vacuum.c circa line 1572)\n+ * to have sometimes rendered the oldest xid value for a database invalid.\n+ * It seems unwise to report rows as corrupt for failing to be newer than\n+ * a value which itself may be corrupt. We instead use the oldest xid for\n+ * the entire cluster, which must be at least as old as the oldest xid for\n+ * our database.\n\nThis kind of reference to another comment will not age well; line\nnumbers and files change a lot. But I think the right thing to do here\nis just rely on relfrozenxid and relminmxid. If the table is\ninconsistent with those, then something needs fixing. datfrozenxid and\nthe cluster-wide value can look out for themselves. The corruption\ndetector shouldn't be trying to work around any bugs in setting\nrelfrozenxid itself; such problems are arguably precisely what we're\nhere to find.\n\n+/*\n+ * confess\n+ *\n+ * Return a message about corruption, including information\n+ * about where in the relation the corruption was found.\n+ *\n+ * The msg argument is pfree'd by this function.\n+ */\n+static void\n+confess(HeapCheckContext *ctx, char *msg)\n\nContrary to what the comments say, the function doesn't return a\nmessage about corruption or anything else. It returns void.\n\nI don't really like the name, either. 
I get that it's probably\ninspired by Perl, but I think it should be given a less-clever name\nlike report_corruption() or something.\n\n+ * corrupted table from using workmem worth of memory building up the\n\nThis kind of thing destroys grep-ability. If you're going to refer to\nwork_mem, you gotta spell it the same way we do everywhere else.\n\n+ * Helper function to construct the TupleDesc needed by verify_heapam.\n\nInstead of saying it's the TupleDesc somebody needs, how about saying\nthat it's the TupleDesc that we'll use to report problems that we find\nwhile scanning the heap, or something like that?\n\n+ * Given a TransactionId, attempt to interpret it as a valid\n+ * FullTransactionId, neither in the future nor overlong in\n+ * the past. Stores the inferred FullTransactionId in *fxid.\n\nIt really doesn't, because there's no such thing as 'fxid' referenced\nanywhere here. You should really make the effort to proofread your\npatches before posting, and adjust comments and so on as you go.\nOtherwise reviewing takes longer, and if you keep introducing new\nstuff like this as you fix other stuff, you can fail to ever produce a\ncommittable patch.\n\n+ * Determine whether tuples are visible for verification. Similar to\n+ * HeapTupleSatisfiesVacuum, but with critical differences.\n\nNot accurate, because it also reports problems, which is not mentioned\nanywhere in the function header comment that purports to be a detailed\ndescription of what the function does.\n\n+ else if (TransactionIdIsCurrentTransactionId(raw_xmin))\n+ return true; /* insert or delete in progress */\n+ else if (TransactionIdIsInProgress(raw_xmin))\n+ return true; /* HEAPTUPLE_INSERT_IN_PROGRESS */\n+ else if (!TransactionIdDidCommit(raw_xmin))\n+ {\n+ return false; /* HEAPTUPLE_DEAD */\n+ }\n\nOne of these cases is not punctuated like the others.\n\n+ pstrdup(\"heap tuple with XMAX_IS_MULTI is neither LOCKED_ONLY nor\nhas a valid xmax\"));\n\n1. 
I don't think that's very grammatical.\n\n2. Why abbreviate HEAP_XMAX_IS_MULTI to XMAX_IS_MULTI and\nHEAP_XMAX_IS_LOCKED_ONLY to LOCKED_ONLY? I don't even think you should\nbe referencing C constant names here at all, and if you are I don't\nthink you should abbreviate, and if you do abbreviate I don't think\nyou should omit different numbers of words depending on which constant\nit is.\n\nI wonder what the intended division of responsibility is here,\nexactly. It seems like you've ended up with some sanity checks in\ncheck_tuple() before tuple_is_visible() is called, and others in\ntuple_is_visible() proper. As far as I can see the comments don't\nreally discuss the logic behind the split, but there's clearly a close\nrelationship between the two sets of checks, even to the point where\nyou have \"heap tuple with XMAX_IS_MULTI is neither LOCKED_ONLY nor has\na valid xmax\" in tuple_is_visible() and \"tuple xmax marked\nincompatibly as keys updated and locked only\" in check_tuple(). Now,\nthose are not the same check, but they seem like closely related\nthings, so it's not ideal that they happen in different functions with\ndifferently-formatted messages to report problems and no explanation\nof why it's different.\n\nI think it might make sense here to see whether you could either move\nmore stuff out of tuple_is_visible(), so that it really just checks\nwhether the tuple is visible, or move more stuff into it, so that it\nhas the job not only of checking whether we should continue with\nchecks on the tuple contents but also complaining about any other\nvisibility problems. 
Or if neither of those make sense then there\nshould be a stronger attempt to rationalize in the comments what\nchecks are going where and for what reason, and also a stronger\nattempt to rationalize the message wording.\n\n+ curchunk = DatumGetInt32(fastgetattr(toasttup, 2,\n+ ctx->toast_rel->rd_att, &isnull));\n\nShould we be worrying about the possibility of fastgetattr crapping\nout if the TOAST tuple is corrupted?\n\n+ if (ctx->tuphdr->t_hoff + ctx->offset > ctx->lp_len)\n+ {\n+ confess(ctx,\n+ psprintf(\"tuple attribute should start at offset %u, but tuple\nlength is only %u\",\n+ ctx->tuphdr->t_hoff + ctx->offset, ctx->lp_len));\n+ return false;\n+ }\n+\n+ /* Skip null values */\n+ if (infomask & HEAP_HASNULL && att_isnull(ctx->attnum, ctx->tuphdr->t_bits))\n+ return true;\n+\n+ /* Skip non-varlena values, but update offset first */\n+ if (thisatt->attlen != -1)\n+ {\n+ ctx->offset = att_align_nominal(ctx->offset, thisatt->attalign);\n+ ctx->offset = att_addlength_pointer(ctx->offset, thisatt->attlen,\n+ tp + ctx->offset);\n+ return true;\n+ }\n\nThis looks like it's not going to complain about a fixed-length\nattribute that overruns the tuple length. There's code further down\nthat handles that case for a varlena attribute, but there's nothing\ncomparable for the fixed-length case.\n\n+ confess(ctx,\n+ psprintf(\"%s toast at offset %u is unexpected\",\n+ va_tag == VARTAG_INDIRECT ? \"indirect\" :\n+ va_tag == VARTAG_EXPANDED_RO ? \"expanded\" :\n+ va_tag == VARTAG_EXPANDED_RW ? \"expanded\" :\n+ \"unexpected\",\n+ ctx->tuphdr->t_hoff + ctx->offset));\n\nI suggest \"unexpected TOAST tag %d\", without trying to convert to a\nstring. 
Such a conversion will likely fail in the case of genuine\ncorruption, and isn't meaningful even if it works.\n\nAgain, let's try to standardize terminology here: most of the messages\nin this function are now of the form \"tuple attribute %d has some\nproblem\" or \"attribute %d has some problem\", but some have neither.\nSince we're separately returning attnum I don't see why it should be\nin the message, and if we weren't separately returning attnum then it\nought to be in the message the same way all the time, rather than\nsometimes writing \"attribute\" and other times \"tuple attribute\".\n\n+ /* Check relminmxid against mxid, if any */\n+ xmax = HeapTupleHeaderGetRawXmax(ctx->tuphdr);\n+ if (infomask & HEAP_XMAX_IS_MULTI &&\n+ MultiXactIdPrecedes(xmax, ctx->relminmxid))\n+ {\n+ confess(ctx,\n+ psprintf(\"tuple xmax %u precedes relminmxid %u\",\n+ xmax, ctx->relminmxid));\n+ fatal = true;\n+ }\n\nThere are checks that an XID is neither too old nor too new, and\npresumably something similar could be done for MultiXactIds, but here\nyou only check one end of the range. Seems like you should check both.\n\n+ /* Check xmin against relfrozenxid */\n+ xmin = HeapTupleHeaderGetXmin(ctx->tuphdr);\n+ if (TransactionIdIsNormal(ctx->relfrozenxid) &&\n+ TransactionIdIsNormal(xmin))\n+ {\n+ if (TransactionIdPrecedes(xmin, ctx->relfrozenxid))\n+ {\n+ confess(ctx,\n+ psprintf(\"tuple xmin %u precedes relfrozenxid %u\",\n+ xmin, ctx->relfrozenxid));\n+ fatal = true;\n+ }\n+ else if (!xid_valid_in_rel(xmin, ctx))\n+ {\n+ confess(ctx,\n+ psprintf(\"tuple xmin %u follows last assigned xid %u\",\n+ xmin, ctx->next_valid_xid));\n+ fatal = true;\n+ }\n+ }\n\nHere you do check both ends of the range, but the comment claims\notherwise. 
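For reference, the two-ended check being discussed is easy to state in isolation. The following standalone sketch is not the patch's code (the names are invented here, and it ignores the special bootstrap/frozen XIDs that real code must treat separately), but it shows the wraparound-aware comparison that both ends of the range rely on:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Wraparound-aware comparison: xid1 precedes xid2 if the signed
 * 32-bit difference is negative, in the style of TransactionIdPrecedes().
 */
static bool
xid_precedes(TransactionId xid1, TransactionId xid2)
{
    return (int32_t) (xid1 - xid2) < 0;
}

/*
 * Check both ends of the acceptable range: the xid must not precede
 * relfrozenxid, and must precede the next unassigned xid.
 */
static bool
xid_in_acceptable_range(TransactionId xid,
                        TransactionId relfrozenxid,
                        TransactionId next_xid)
{
    if (xid_precedes(xid, relfrozenxid))
        return false;           /* too old: precedes relfrozenxid */
    if (!xid_precedes(xid, next_xid))
        return false;           /* too new: not yet assigned */
    return true;
}
```

An XID is acceptable only if it does not precede relfrozenxid and does precede the next unassigned XID; the same modular comparison serves both ends, including across wraparound.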
Again, please proof-read for this kind of stuff.\n\n+ /* Check xmax against relfrozenxid */\n\nDitto here.\n\n+ psprintf(\"tuple's header size is %u bytes which is less than the %u\nbyte minimum valid header size\",\n\nI suggest: tuple data begins at byte %u, but the tuple header must be\nat least %u bytes\n\n+ psprintf(\"tuple's %u byte header size exceeds the %u byte length of\nthe entire tuple\",\n\nI suggest: tuple data begins at byte %u, but the entire tuple length\nis only %u bytes\n\n+ psprintf(\"tuple's user data offset %u not maximally aligned to %u\",\n\nI suggest: tuple data begins at byte %u, but that is not maximally aligned\nOr: tuple data begins at byte %u, which is not a multiple of %u\n\nThat makes the messages look much more similar to each other\ngrammatically and is more consistent about calling things by the same\nnames.\n\n+ psprintf(\"tuple with null values has user data offset %u rather than\nthe expected offset %u\",\n+ psprintf(\"tuple without null values has user data offset %u rather\nthan the expected offset %u\",\n\nI suggest merging these: tuple data offset %u, but expected offset %u\n(%u attributes, %s)\nwhere %s is either \"has nulls\" or \"no nulls\"\n\nIn fact, aren't several of the above checks redundant with this one?\nLike, why check for a value less than SizeofHeapTupleHeader or that's\nnot properly aligned first? Just check this straightaway and call it\ngood.\n\n+ * If we get this far, the tuple is visible to us, so it must not be\n+ * incompatible with our relDesc. The natts field could be legitimately\n+ * shorter than rel's natts, but it cannot be longer than rel's natts.\n\nThis is yet another case where you didn't update the comments.\ntuple_is_visible() now checks whether the tuple is visible to anyone,\nnot whether it's visible to us, but the comment doesn't agree. In some\nsense I think this comment is redundant with the previous one anyway,\nbecause that one already talks about the tuple being visible. 
Maybe\njust write: The tuple is visible, so it must be compatible with the\ncurrent version of the relation descriptor. It might have fewer\ncolumns than are present in the relation descriptor, but it cannot\nhave more.\n\n+ psprintf(\"tuple has %u attributes in relation with only %u attributes\",\n+ ctx->natts,\n+ RelationGetDescr(ctx->rel)->natts));\n\nI suggest: tuple has %u attributes, but relation has only %u attributes\n\n+ /*\n+ * Iterate over the attributes looking for broken toast values. This\n+ * roughly follows the logic of heap_deform_tuple, except that it doesn't\n+ * bother building up isnull[] and values[] arrays, since nobody wants\n+ * them, and it unrolls anything that might trip over an Assert when\n+ * processing corrupt data.\n+ */\n+ ctx->offset = 0;\n+ for (ctx->attnum = 0; ctx->attnum < ctx->natts; ctx->attnum++)\n+ {\n+ if (!check_tuple_attribute(ctx))\n+ break;\n+ }\n\nI think this comment is too wordy. This text belongs in the header\ncomment of check_tuple_attribute(), not at the place where it gets\ncalled. Otherwise, as you update what check_tuple_attribute() does,\nyou have to remember to come find this comment and fix it to match,\nand you might forget to do that. In fact... looks like that already\nhappened, because check_tuple_attribute() now checks more than broken\nTOAST attributes. Seems like you could just simplify this down to\nsomething like \"Now check each attribute.\" Also, you could lose the\nextra braces.\n\n- bt_index_check | relname | relpages\n+ bt_index_check | relname | relpages\n\nDon't include unrelated changes in the patch.\n\nI'm not really sure that the list of fields you're displaying for each\nreported problem really makes sense. I think the theory here should be\nthat we want to report the information that the user needs to localize\nthe problem but not everything that they could find out from\ninspecting the page, and not things that are too specific to\nparticular classes of errors. 
So I would vote for keeping blkno,\noffnum, and attnum, but I would lose lp_flags, lp_len, and chunk.\nlp_off feels like it's a more arguable case: technically, it's a\nlocator for the problem, because it gives you the byte offset within\nthe page, but normally we reference tuples by TID, i.e. (blkno,\noffset), not byte offset. On balance I'd be inclined to omit it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 30 Jul 2020 13:59:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jul 29, 2020, at 12:52 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Jul 20, 2020 at 5:02 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I've made the options 'all-visible', 'all-frozen', and 'none'. It defaults to 'none'.\n> \n> That looks nice.\n> \n>>> I guess that\n>>> could still be expensive if there's a lot of them, but needing\n>>> ShareUpdateExclusiveLock rather than only AccessShareLock is a little\n>>> unfortunate.\n>> \n>> I welcome strategies that would allow for taking a lesser lock.\n> \n> I guess I'm not seeing why you need any particular strategy here. Say\n> that at the beginning you note the starting relfrozenxid of the table\n> -- I think I would lean toward just ignoring datfrozenxid and the\n> cluster-wide value completely. You also note the current value of the\n> transaction ID counter. Those are the two ends of the acceptable\n> range.\n> \n> Let's first consider the oldest acceptable XID, bounded by\n> relfrozenxid. If you see a value that is older than the relfrozenxid\n> value that you noted at the start, it is definitely invalid. If you\n> see a newer value, it could still be older than the table's current\n> relfrozenxid, but that doesn't seem very worrisome. 
If the user\n> vacuumed the table while they were running this tool, they can always\n> run the tool again afterward if they wish. Forcing the vacuum to wait\n> by taking ShareUpdateExclusiveLock doesn't actually solve anything\n> anyway: you STILL won't notice any problems the vacuum introduces, and\n> in fact you are now GUARANTEED not to notice them, plus now the vacuum\n> happens later.\n> \n> Now let's consider the newest acceptable XID, bounded by the value of\n> the transaction ID counter. Any time you see a newer XID than the last\n> value of the transaction ID counter that you observed, you go observe\n> it again. If the value from the table still looks invalid, then you\n> complain about it. Either way, you remember the new observation and\n> check future tuples against that value. I think the patch is already\n> doing this anyway; if it weren't, you'd need an even stronger lock,\n> one sufficient to prevent any insert/update/delete activity on the\n> table altogether.\n> \n> Maybe I'm just being dense here -- exactly what problem are you worried about?\n\nPer tuple, tuple_is_visible() potentially checks whether the xmin or xmax committed via TransactionIdDidCommit. I am worried about concurrent truncation of clog entries causing I/O errors on SLRU lookup when performing that check. The three strategies I had for dealing with that were taking the XactTruncationLock (formerly known as CLogTruncationLock, for those reading this thread from the beginning), locking out vacuum, and the idea upthread from Andres about setting PROC_IN_VACUUM and such. Maybe I'm being dense and don't need to worry about this. 
But I haven't convinced myself of that, yet.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 30 Jul 2020 13:18:01 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Hi,\n\nOn 2020-07-30 13:18:01 -0700, Mark Dilger wrote:\n> Per tuple, tuple_is_visible() potentially checks whether the xmin or xmax committed via TransactionIdDidCommit. I am worried about concurrent truncation of clog entries causing I/O errors on SLRU lookup when performing that check. The three strategies I had for dealing with that were taking the XactTruncationLock (formerly known as CLogTruncationLock, for those reading this thread from the beginning), locking out vacuum, and the idea upthread from Andres about setting PROC_IN_VACUUM and such. Maybe I'm being dense and don't need to worry about this. But I haven't convinced myself of that, yet.\n\nI think it's not at all ok to look in the procarray or clog for xids\nthat are older than what you're announcing you may read. IOW I don't\nthink it's OK to just ignore the problem, or try to work around it by\nholding XactTruncationLock.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Jul 2020 13:47:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Jul 30, 2020 at 4:18 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > Maybe I'm just being dense here -- exactly what problem are you worried about?\n>\n> Per tuple, tuple_is_visible() potentially checks whether the xmin or xmax committed via TransactionIdDidCommit. I am worried about concurrent truncation of clog entries causing I/O errors on SLRU lookup when performing that check. 
The three strategies I had for dealing with that were taking the XactTruncationLock (formerly known as CLogTruncationLock, for those reading this thread from the beginning), locking out vacuum, and the idea upthread from Andres about setting PROC_IN_VACUUM and such. Maybe I'm being dense and don't need to worry about this. But I haven't convinced myself of that, yet.\n\nI don't get it. If you've already checked that the XIDs are >=\nrelfrozenxid and <= ReadNewFullTransactionId(), then this shouldn't be\na problem. It could be, if CLOG is hosed, which is possible, because\nif the table is corrupted, why shouldn't CLOG also be corrupted? But\nI'm not sure that's what your concern is here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 30 Jul 2020 17:00:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jul 30, 2020, at 2:00 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Jul 30, 2020 at 4:18 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>>> Maybe I'm just being dense here -- exactly what problem are you worried about?\n>> \n>> Per tuple, tuple_is_visible() potentially checks whether the xmin or xmax committed via TransactionIdDidCommit. I am worried about concurrent truncation of clog entries causing I/O errors on SLRU lookup when performing that check. The three strategies I had for dealing with that were taking the XactTruncationLock (formerly known as CLogTruncationLock, for those reading this thread from the beginning), locking out vacuum, and the idea upthread from Andres about setting PROC_IN_VACUUM and such. Maybe I'm being dense and don't need to worry about this. But I haven't convinced myself of that, yet.\n> \n> I don't get it. 
If you've already checked that the XIDs are >=\n> relfrozenxid and <= ReadNewFullTransactionId(), then this shouldn't be\n> a problem. It could be, if CLOG is hosed, which is possible, because\n> if the table is corrupted, why shouldn't CLOG also be corrupted? But\n> I'm not sure that's what your concern is here.\n\nNo, that wasn't my concern. I was thinking about CLOG entries disappearing during the scan as a consequence of concurrent vacuums, and the effect that would have on the validity of the cached [relfrozenxid..next_valid_xid] range. In the absence of corruption, I don't immediately see how this would cause any problems. But for a corrupt table, I'm less certain how it would play out.\n\nThe kind of scenario I'm worried about may not be possible in practice. I think it would depend on how vacuum behaves when scanning a corrupt table that is corrupt in some way that vacuum doesn't notice, and whether vacuum could finish scanning the table with the false belief that it has frozen all tuples with xids less than some cutoff.\n\nI thought it would be safer if that kind of thing were not happening during verify_heapam's scan of the table. Even if a careful analysis proved it was not an issue with the current coding of vacuum, I don't think there is any coding convention requiring future versions of vacuum to be hardened against corruption, so I don't see how I can rely on vacuum not causing such problems.\n\nI don't think this is necessarily a too-rare-to-care-about type concern, either. If corruption across multiple tables prevents autovacuum from succeeding, and the DBA doesn't get involved in scanning tables for corruption until the lack of successful vacuums impacts the production system, I imagine you could end up with vacuums repeatedly happening (or trying to happen) around the time the DBA is trying to fix tables, or perhaps drop them, or whatever, using verify_heapam for guidance on which tables are corrupted.\n\nAnyway, that's what I was thinking. 
I was imagining that calling TransactionIdDidCommit might keep crashing the backend while the DBA is trying to find and fix corruption, and that could get really annoying.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 30 Jul 2020 15:10:52 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jul 30, 2020, at 1:47 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2020-07-30 13:18:01 -0700, Mark Dilger wrote:\n>> Per tuple, tuple_is_visible() potentially checks whether the xmin or xmax committed via TransactionIdDidCommit. I am worried about concurrent truncation of clog entries causing I/O errors on SLRU lookup when performing that check. The three strategies I had for dealing with that were taking the XactTruncationLock (formerly known as CLogTruncationLock, for those reading this thread from the beginning), locking out vacuum, and the idea upthread from Andres about setting PROC_IN_VACUUM and such. Maybe I'm being dense and don't need to worry about this. But I haven't convinced myself of that, yet.\n> \n> I think it's not at all ok to look in the procarray or clog for xids\n> that are older than what you're announcing you may read. IOW I don't\n> think it's OK to just ignore the problem, or try to work around it by\n> holding XactTruncationLock.\n\nThe current state of the patch is that concurrent vacuums are kept out of the table being checked by means of taking a ShareUpdateExclusive lock on the table being checked. In response to Robert's review, I was contemplating whether that was necessary, but you raise the interesting question of whether it is even sufficient. 
The logic in verify_heapam is currently relying on the ShareUpdateExclusive lock to prevent any of the xids in the range relfrozenxid..nextFullXid from being invalid arguments to TransactionIdDidCommit. Ignoring whether that is a good choice vis-a-vis performance, is that even a valid strategy? It sounds like you are saying it is not.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 30 Jul 2020 15:11:21 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Jul 30, 2020 at 6:10 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> No, that wasn't my concern. I was thinking about CLOG entries disappearing during the scan as a consequence of concurrent vacuums, and the effect that would have on the validity of the cached [relfrozenxid..next_valid_xid] range. In the absence of corruption, I don't immediately see how this would cause any problems. But for a corrupt table, I'm less certain how it would play out.\n\nOh, hmm. I wasn't thinking about that problem. I think the only way\nthis can happen is if we read a page and then, before we try to look\nup the CID, vacuum zooms past, finishes the whole table, and truncates\nclog. But if that's possible, then it seems like it would be an issue\nfor SELECT as well, and it apparently isn't, or we would've done\nsomething about it by now. I think the reason it's not possible is\nbecause of the locking rules described in\nsrc/backend/storage/buffer/README, which require that you hold a\nbuffer lock until you've determined that the tuple is visible. Since\nyou hold a share lock on the buffer, a VACUUM that hasn't already\nprocessed that page can't freeze the tuples in that buffer; it would\nneed an exclusive lock on the buffer to do that. 
Therefore it can't finish and\ntruncate clog either.\n\nNow, you raise the question of whether this is still true if the table\nis corrupt, but I don't really see why that makes any difference.\nVACUUM is supposed to freeze each page it encounters, to the extent\nthat such freezing is necessary, and with Andres's changes, it's\nsupposed to ERROR out if things are messed up. We can postulate a bug\nin that logic, but inserting a VACUUM-blocking lock into this tool to\nguard against a hypothetical vacuum bug seems strange to me. Why would\nthe right solution not be to fix such a bug if and when we find that\nthere is one?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 30 Jul 2020 20:53:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jul 30, 2020, at 5:53 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Jul 30, 2020 at 6:10 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> No, that wasn't my concern. I was thinking about CLOG entries disappearing during the scan as a consequence of concurrent vacuums, and the effect that would have on the validity of the cached [relfrozenxid..next_valid_xid] range. In the absence of corruption, I don't immediately see how this would cause any problems. But for a corrupt table, I'm less certain how it would play out.\n> \n> Oh, hmm. I wasn't thinking about that problem. I think the only way\n> this can happen is if we read a page and then, before we try to look\n> up the CID, vacuum zooms past, finishes the whole table, and truncates\n> clog. But if that's possible, then it seems like it would be an issue\n> for SELECT as well, and it apparently isn't, or we would've done\n> something about it by now. 
I think the reason it's not possible is\n> because of the locking rules described in\n> src/backend/storage/buffer/README, which require that you hold a\n> buffer lock until you've determined that the tuple is visible. Since\n> you hold a share lock on the buffer, a VACUUM that hasn't already\n> processed that freeze the tuples in that buffer; it would need an\n> exclusive lock on the buffer to do that. Therefore it can't finish and\n> truncate clog either.\n> \n> Now, you raise the question of whether this is still true if the table\n> is corrupt, but I don't really see why that makes any difference.\n> VACUUM is supposed to freeze each page it encounters, to the extent\n> that such freezing is necessary, and with Andres's changes, it's\n> supposed to ERROR out if things are messed up. We can postulate a bug\n> in that logic, but inserting a VACUUM-blocking lock into this tool to\n> guard against a hypothetical vacuum bug seems strange to me. Why would\n> the right solution not be to fix such a bug if and when we find that\n> there is one?\n\nSince I can't think of a plausible concrete example of corruption which would elicit the problem I was worrying about, I'll withdraw the argument. But that leaves me wondering about a comment that Andres made upthread:\n\n> On Apr 20, 2020, at 12:42 PM, Andres Freund <andres@anarazel.de> wrote:\n\n> I don't think random interspersed uses of CLogTruncationLock are a good\n> idea. 
If you move to only checking visibility after tuple fits into\n> [relfrozenxid, nextXid), then you don't need to take any locks here, as\n> long as a lock against vacuum is taken (which I think this should do\n> anyway).\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 30 Jul 2020 18:38:43 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Jul 30, 2020 at 9:38 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Jul 30, 2020, at 5:53 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Thu, Jul 30, 2020 at 6:10 PM Mark Dilger\n> Since I can't think of a plausible concrete example of corruption which would elicit the problem I was worrying about, I'll withdraw the argument. But that leaves me wondering about a comment that Andres made upthread:\n>\n> > On Apr 20, 2020, at 12:42 PM, Andres Freund <andres@anarazel.de> wrote:\n>\n> > I don't think random interspersed uses of CLogTruncationLock are a good\n> > idea. If you move to only checking visibility after tuple fits into\n> > [relfrozenxid, nextXid), then you don't need to take any locks here, as\n> > long as a lock against vacuum is taken (which I think this should do\n> > anyway).\n\nThe version of the patch I'm looking at doesn't seem to mention\nCLogTruncationLock at all, so I'm confused about the comment. 
But what is\nit that you are wondering about exactly?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 31 Jul 2020 08:02:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jul 31, 2020, at 5:02 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Jul 30, 2020 at 9:38 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>>> On Jul 30, 2020, at 5:53 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>>> On Thu, Jul 30, 2020 at 6:10 PM Mark Dilger\n>> Since I can't think of a plausible concrete example of corruption which would elicit the problem I was worrying about, I'll withdraw the argument. But that leaves me wondering about a comment that Andres made upthread:\n>> \n>>> On Apr 20, 2020, at 12:42 PM, Andres Freund <andres@anarazel.de> wrote:\n>> \n>>> I don't think random interspersed uses of CLogTruncationLock are a good\n>>> idea. If you move to only checking visibility after tuple fits into\n>>> [relfrozenxid, nextXid), then you don't need to take any locks here, as\n>>> long as a lock against vacuum is taken (which I think this should do\n>>> anyway).\n> \n> The version of the patch I'm looking at doesn't seem to mention\n> CLogTruncationLock at all, so I'm confused about the comment. But what\n> is it that you are wondering about exactly?\n\nIn earlier versions of the patch, I was guarding (perhaps unnecessarily) against clog truncation (perhaps incorrectly) by taking the CLogTruncationLock (aka XactTruncationLock). I thought Andres was arguing that such locks were not necessary \"as long as a lock against vacuum is taken\". That's what motivated me to remove the clog locking business and put in the ShareUpdateExclusive lock. I don't want to remove the ShareUpdateExclusive lock from the patch without a clarification from Andres on the subject. 
His recent reply upthread seems to still support the idea that some kind of protection is required:\n\n> I think it's not at all ok to look in the procarray or clog for xids\n> that are older than what you're announcing you may read. IOW I don't\n> think it's OK to just ignore the problem, or try to work around it by\n> holding XactTruncationLock.\n\nI don't understand that paragraph fully, in particular the part about \"than what you're announcing you may read\", since the cached value of relfrozenxid is not announced; we're just assuming that as long as vacuum cannot advance it during our scan, that we should be safe checking whether xids newer than that value (and not in the future) were committed.\n\nAndres?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 31 Jul 2020 08:51:50 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Hi,\n\nOn 2020-07-31 08:51:50 -0700, Mark Dilger wrote:\n> In earlier versions of the patch, I was guarding (perhaps\n> unnecessarily) against clog truncation, (perhaps incorrectly) by\n> taking the CLogTruncationLock (aka XactTruncationLock.) . I thought\n> Andres was arguing that such locks were not necessary \"as long as a\n> lock against vacuum is taken\". That's what motivated me to remove the\n> clog locking business and put in the ShareUpdateExclusive lock. I\n> don't want to remove the ShareUpdateExclusive lock from the patch\n> without perhaps a clarification from Andres on the subject. His\n> recent reply upthread seems to still support the idea that some kind\n> of protection is required:\n\nI'm not sure what I was thinking \"back then\", but right now I'd argue\nthat the best lock against vacuum isn't a SUE, but announcing the\ncorrect ->xmin, so you can be sure that clog entries won't be yanked out\nfrom under you. 
Potentially with the right flags set to avoid old enough\ntuples being pruned.\n\n\n> > I think it's not at all ok to look in the procarray or clog for xids\n> > that are older than what you're announcing you may read. IOW I don't\n> > think it's OK to just ignore the problem, or try to work around it by\n> > holding XactTruncationLock.\n> \n> I don't understand that paragraph fully, in particular the part about\n> \"than what you're announcing you may read\", since the cached value of\n> relfrozenxid is not announced; we're just assuming that as long as\n> vacuum cannot advance it during our scan, that we should be safe\n> checking whether xids newer than that value (and not in the future)\n> were committed.\n\nWith 'announcing' I mean using the normal mechanism for avoiding the\nclog being truncated for values one might look up. Which is announcing\nthe oldest xid one may look up in PGXACT->xmin.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 31 Jul 2020 09:33:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Fri, Jul 31, 2020 at 12:33 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm not sure what I was thinking \"back then\", but right now I'd argue\n> that the best lock against vacuum isn't a SUE, but announcing the\n> correct ->xmin, so you can be sure that clog entries won't be yanked out\n> from under you. Potentially with the right flags set to avoid old enough\n> tuples being pruned.\n\nSuppose we don't even do anything special in terms of advertising\nxmin. What can go wrong? To have a problem, we've got to be running\nconcurrently with a vacuum that truncates clog. The clog truncation\nmust happen before our XID lookups, but vacuum has to remove the XIDs\nfrom the heap before it can truncate. So we have to observe the XIDs\nbefore vacuum removes them, but then vacuum has to truncate before we\nlook them up. 
But since we observe them and look them up while holding\na ShareLock on the buffer, this seems impossible. What's the flaw in\nthis argument?\n\nIf we do need to do something special in terms of advertising xmin,\nhow would you do it? Normally it happens by registering a snapshot,\nbut here all we would have is an XID; specifically, the value of\nrelfrozenxid that we observed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 31 Jul 2020 12:42:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Hi,\n\nOn 2020-07-31 12:42:51 -0400, Robert Haas wrote:\n> On Fri, Jul 31, 2020 at 12:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm not sure what I was thinking \"back then\", but right now I'd argue\n> > that the best lock against vacuum isn't a SUE, but announcing the\n> > correct ->xmin, so you can be sure that clog entries won't be yanked out\n> > from under you. Potentially with the right flag sets to avoid old enough\n> > tuples eing pruned.\n> \n> Suppose we don't even do anything special in terms of advertising\n> xmin. What can go wrong? To have a problem, we've got to be running\n> concurrently with a vacuum that truncates clog. The clog truncation\n> must happen before our XID lookups, but vacuum has to remove the XIDs\n> from the heap before it can truncate. So we have to observe the XIDs\n> before vacuum removes them, but then vacuum has to truncate before we\n> look them up. But since we observe them and look them up while holding\n> a ShareLock on the buffer, this seems impossible. What's the flaw in\n> this argument?\n\nThe page could have been wrongly marked all-frozen. There could be\ninteractions between heap and toast table that are checked. 
Other bugs\ncould apply, like a broken hot chain or such.\n\n\n> If we do need to do something special in terms of advertising xmin,\n> how would you do it? Normally it happens by registering a snapshot,\n> but here all we would have is an XID; specifically, the value of\n> relfrozenxid that we observed.\n\nAn appropriate procarray or snapmgr function would probably suffice?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 31 Jul 2020 12:05:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Fri, Jul 31, 2020 at 3:05 PM Andres Freund <andres@anarazel.de> wrote:\n> The page could have been wrongly marked all-frozen. There could be\n> interactions between heap and toast table that are checked. Other bugs\n> could apply, like a broken hot chain or such.\n\nOK, at least the first two of these do sound like problems. Not sure\nabout the third one.\n\n> > If we do need to do something special in terms of advertising xmin,\n> > how would you do it? 
Normally it happens by registering a snapshot,\n> > but here all we would have is an XID; specifically, the value of\n> > relfrozenxid that we observed.\n>\n> An appropriate procarray or snapmgr function would probably suffice?\n\nNot sure; I guess that'll need some investigation.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 31 Jul 2020 15:16:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jul 30, 2020, at 10:59 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> + curchunk = DatumGetInt32(fastgetattr(toasttup, 2,\n> + ctx->toast_rel->rd_att, &isnull));\n> \n> Should we be worrying about the possibility of fastgetattr crapping\n> out if the TOAST tuple is corrupted?\n\nI think we should, but I'm not sure we should be worrying about it at this location. If the toast index is corrupt, systable_getnext_ordered could trip over the index corruption in the process of retrieving the toast tuple, so checking the toast tuple only helps if the toast index does not cause a crash first. I think the toast index should be checked before this point, ala verify_nbtree, so that we don't need to worry about that here. It might also make more sense to verify the toast table ala verify_heapam prior to here, so we don't have to worry about that here either. But that raises questions about whose responsibility this all is. If verify_heapam checks the toast table and toast index before the main table, that takes care of it, but makes a mess of the idea of verify_heapam taking a start and end block, since verifying the toast index is an all or nothing proposition, not something to be done in incremental pieces. 
If we leave verify_heapam as it is, then it is up to the caller to check the toast before the main relation, which is more flexible, but is more complicated and requires the user to remember to do it. We could split the difference by having verify_heapam do nothing about toast, leaving it up to the caller, but make pg_amcheck handle it by default, making it easier for users to not think about the issue. Users who want to do incremental checking could still keep track of the chunks that have already been checked, not just for the main relation, but for the toast relation, too, and give start and end block arguments to verify_heapam for the toast table check and then again for the main table check. That doesn't fix the question of incrementally checking the index, though.\n\nLooking at it a slightly different way, I think what is being checked at the point in the code you mention is the logical structure of the toasted value related to the current main table tuple, not the lower level tuple structure of the toast table. We already have a function for checking a heap, namely verify_heapam, and we (or the caller, really) should be using that. 
The clean way to do things is\n\n\tverify_heapam(toast_rel)\n\tverify_btreeam(toast_idx)\n\tverify_heapam(main_rel)\n\nand then depending on how fast and loose you want to be, you can use the start and end block arguments, which are inherently a bit half-baked, given the lack of any way to be sure you check precisely the right range of blocks, and also you can be fast and loose about skipping the index check or not, as you see fit.\n\nThoughts?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 2 Aug 2020 20:17:25 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Jul 27, 2020 at 10:02 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I'm indifferent about that change. Done for v13.\n\nMoving on with verification of the same index in the event of B-Tree\nindex corruption is a categorical mistake. verify_nbtree.c was simply\nnot designed to work that way.\n\nYou were determined to avoid allowing any behavior that can result in\na backend crash in the event of corruption, but this design will\ndefeat various measures I took to avoid crashing with corrupt data\n(e.g. in commit a9ce839a313).\n\nWhat's the point in not just giving up on the index (though not\nnecessarily the table or other indexes) at the first sign of trouble,\nanyway? It makes sense for the heap structure, but not for indexes.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 2 Aug 2020 20:59:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Jul 30, 2020 at 10:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't really like the name, either. 
I get that it's probably\n> inspired by Perl, but I think it should be given a less-clever name\n> like report_corruption() or something.\n\n+1 -- confess() is an awful name for this.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 2 Aug 2020 21:13:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Aug 2, 2020, at 8:59 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> What's the point in not just giving up on the index (though not\n> necessarily the table or other indexes) at the first sign of trouble,\n> anyway? It makes sense for the heap structure, but not for indexes.\n\nThe case that came to mind was an index broken by a glibc update with breaking changes to the collation sort order underlying the index. If the breaking change has already been live in production for quite some time before a DBA notices, they might want to quantify how broken the index has been for the last however many days, not just drop and recreate the index. I'm happy to drop that from the patch, though.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 3 Aug 2020 07:42:19 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Aug 2, 2020, at 9:13 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Thu, Jul 30, 2020 at 10:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I don't really like the name, either. I get that it's probably\n>> inspired by Perl, but I think it should be given a less-clever name\n>> like report_corruption() or something.\n> \n> +1 -- confess() is an awful name for this.\n\nI was trying to limit unnecessary whitespace changes. 
s/ereport/econfess/ leaves the function name nearly the same length such that the following lines of indented error text don't usually get moved by pgindent. Given the unpopularity of the name, it's not worth it, so I'll go with Robert's report_corruption, instead.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 3 Aug 2020 08:02:19 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Aug 3, 2020 at 12:00 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Moving on with verification of the same index in the event of B-Tree\n> index corruption is a categorical mistake. verify_nbtree.c was simply\n> not designed to work that way.\n>\n> You were determined to avoid allowing any behavior that can result in\n> a backend crash in the event of corruption, but this design will\n> defeat various measures I took to avoid crashing with corrupt data\n> (e.g. in commit a9ce839a313).\n>\n> What's the point in not just giving up on the index (though not\n> necessarily the table or other indexes) at the first sign of trouble,\n> anyway? It makes sense for the heap structure, but not for indexes.\n\nI agree that there's a serious design problem with Mark's patch in\nthis regard, but I disagree that the effort is pointless on its own\nterms. You're basically postulating that users don't care how corrupt\ntheir index is: whether there's one problem or one million problems,\nit's all the same. If the user presents an index with one million\nproblems and we tell them about one of them, we've done our job and\ncan go home.\n\nThis doesn't match my experience. When an EDB customer reports\ncorruption, typically one of the first things I want to understand is\nhow widespread the problem is. This same issue came up on the thread\nabout relfrozenxid/relminmxid corruption. 
If you've got a table with\none or two rows where tuple.xmin < relfrozenxid, that's a different\nkind of problem than if 50% of the tuples in the table have tuple.xmin\n< relfrozenxid; the latter might well indicate that relfrozenxid value\nitself is garbage, while the former indicates that a few tuples\nslipped through the cracks somehow. If you're contemplating a recovery\nstrategy like \"nuke the affected tuples from orbit,\" you really need\nto understand which of those cases you've got.\n\nGranted, this is a bit less important with indexes, because in most\ncases you're just going to REINDEX. But, even there, the question is\nnot entirely academic. For instance, consider the case of a user whose\ndatabase crashes and then fails to restart because WAL replay fails.\nTypically, there is little option here but to run pg_resetwal. At this\npoint, you know that there is some damage, but you don't know how bad\nit is. If there was little system activity at the time of the crash,\nthere may be only a handful of problems with the database. If there\nwas a heavy OLTP workload running at the time of the crash, with a\nlong checkpoint interval, the problems may be widespread. If the user\nhas done this repeatedly before bothering to contact support, which is\nmore common than you might suppose, the damage may be extremely\nwidespread.\n\nNow, you could argue (and not unreasonably) that in any case after\nsomething like this happens even once, the user ought to dump and\nrestore to get back to a known good state. However, when the cluster\nis 10TB in size and there's a $100,000 financial loss for every hour\nof downtime, the question naturally arises of how urgent that dump and\nrestore is. Can we wait until our next maintenance window? Can we at\nleast wait until off hours? 
Being able to tell the user whether\nthey've got a tiny bit of corruption or a whole truckload of\ncorruption can enable them to make better decisions in such cases, or\nat least more educated ones.\n\nNow, again, just replacing ereport(ERROR, ...) with something else\nthat does not abort the rest of the checks is clearly not OK. I don't\nendorse that approach, or anything like it. But neither do I accept\nthe argument that it would be useless to report all the errors even if\nwe could do so safely.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:09:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Aug 3, 2020 at 11:02 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I was trying to limit unnecessary whitespace changes. s/ereport/econfess/ leaves the function name nearly the same length such that the following lines of indented error text don't usually get moved by pgindent. Given the unpopularity of the name, it's not worth it, so I'll go with Robert's report_corruption, instead.\n\nYeah, that's not really a good reason for something like that. I think\nwhat you should do is drop the nbtree portion of this for now; the\nlength of the name then doesn't even matter at all, because all the\ncode in which this is used will be new code. 
Even if we were churning\nexisting code, mechanical stuff like this isn't really a huge problem\nmost of the time, but there's no need for that here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:10:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Aug 3, 2020 at 8:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I agree that there's a serious design problem with Mark's patch in\n> this regard, but I disagree that the effort is pointless on its own\n> terms. You're basically postulating that users don't care how corrupt\n> their index is: whether there's one problem or one million problems,\n> it's all the same. If the user presents an index with one million\n> problems and we tell them about one of them, we've done our job and\n> can go home.\n\nIt's not so much that I think that users won't care about whether any\ngiven index is a bit corrupt or very corrupt. It's more like I don't\nthink that it's worth the eye-watering complexity, especially without\na real concrete goal in mind. \"Counting all the errors, not just the\nfirst\" sounds like a tractable goal for the heap/table structure, but\nit's just not like that with indexes. If you really wanted to do this,\nyou'd have to describe a practical scenario under which it made sense\nto soldier on, where we'd definitely be able to count the number of\nproblems in a meaningful way, without much risk of either massively\novercounting or undercounting inconsistencies.\n\nConsider how the search in verify_nbtree.c actually works at a high\nlevel. If you thoroughly corrupted one B-Tree leaf page (let's say you\nreplaced it with an all-zero page image), all pages to the right of\nthe page would be fundamentally inaccessible to the left-to-right\nlevel search that is coordinated within\nbt_check_level_from_leftmost(). 
And yet, most real index scans can\nstill be expected to work. How do you know to skip past that one\ncorrupt leaf page (by going back to the parent to get the next sibling\nleaf page) during index verification? That's what it would take to do\nthis in the general case, I guess. More fundamentally, I wonder how\nmany inconsistencies one should imagine that this index has, before we\neven get into talking about the implementation.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 Aug 2020 10:15:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Aug 3, 2020 at 1:16 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> If you really wanted to do this,\n> you'd have to describe a practical scenario under which it made sense\n> to soldier on, where we'd definitely be able to count the number of\n> problems in a meaningful way, without much risk of either massively\n> overcounting or undercounting inconsistencies.\n\nI completely agree. You have to have a careful plan to make this sort\nof thing work - you want to skip checking the things that are\ndependent on the part already determined to be bad, without skipping\neverything. You need a strategy for where and how to restart checking,\nfirst bypassing whatever needs to be skipped.\n\n> Consider how the search in verify_nbtree.c actually works at a high\n> level. If you thoroughly corrupted one B-Tree leaf page (let's say you\n> replaced it with an all-zero page image), all pages to the right of\n> the page would be fundamentally inaccessible to the left-to-right\n> level search that is coordinated within\n> bt_check_level_from_leftmost(). And yet, most real index scans can\n> still be expected to work. How do you know to skip past that one\n> corrupt leaf page (by going back to the parent to get the next sibling\n> leaf page) during index verification? 
That's what it would take to do\n> this in the general case, I guess.\n\nIn that particular example, you would want the function that verifies\nthat page to return some indicator. If it finds that two keys in the\npage are out-of-order, it tells the caller that it can still follow\nthe right-link. But if it finds that the whole page is garbage, then\nit tells the caller that it doesn't have a valid right-link and the\ncaller's got to do something else, like give up on the rest of the\nchecks or (better) try to recover a pointer to the next page from the\nparent.\n\n> More fundamentally, I wonder how\n> many inconsistencies one should imagine that this index has, before we\n> even get into talking about the implementation.\n\nI think we should try not to imagine anything in particular. Just to\nbe clear, I am not trying to knock what you have; I know it was a lot\nof work to create and it's a huge improvement over having nothing. But\nin my mind, a perfect tool would do just what a human being would do\nif investigating manually: assume initially that you know nothing -\nthe index might be totally fine, mildly corrupted in a very localized\nway, completely hosed, or anything in between. And it would\nsystematically try to track that down by traversing the usable\npointers that it has until it runs out of things to do. It does not\nseem impossible to build a tool that would allow us to take a big\nindex and overwrite a random subset of pages with garbage data and\nhave the tool tell us about all the bad pages that are still reachable\nfrom the root by any path. If you really wanted to go crazy with it,\nyou could even try to find the bad pages that are not reachable from\nthe root, by doing a pass after the fact over all the pages that you\ndidn't otherwise reach. 
It would be a lot of work to build something\nlike that and maybe not the best use of time, but if I got to wave\ntools into existence using my magic wand, I think that would be the\ngold standard.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 4 Aug 2020 10:59:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Aug 4, 2020 at 7:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think we should try not to imagine anything in particular. Just to\n> be clear, I am not trying to knock what you have; I know it was a lot\n> of work to create and it's a huge improvement over having nothing. But\n> in my mind, a perfect tool would do just what a human being would do\n> if investigating manually: assume initially that you know nothing -\n> the index might be totally fine, mildly corrupted in a very localized\n> way, completely hosed, or anything in between. And it would\n> systematically try to track that down by traversing the usable\n> pointers that it has until it runs out of things to do. It does not\n> seem impossible to build a tool that would allow us to take a big\n> index and overwrite a random subset of pages with garbage data and\n> have the tool tell us about all the bad pages that are still reachable\n> from the root by any path. If you really wanted to go crazy with it,\n> you could even try to find the bad pages that are not reachable from\n> the root, by doing a pass after the fact over all the pages that you\n> didn't otherwise reach. It would be a lot of work to build something\n> like that and maybe not the best use of time, but if I got to wave\n> tools into existence using my magic wand, I think that would be the\n> gold standard.\n\nI guess that might be true.\n\nWith indexes you tend to have redundancy in how relationships among\npages are described. 
So you have siblings whose pointers must be in\nagreement (left points to right, right points to left), and it's not\nclear which one you should trust when they don't agree. It's not like\nsimple heuristics get you all that far. I really can't think of a good\none, and detecting corruption should mean detecting truly exceptional\ncases. I guess you could build a model based on Bayesian methods, or\nsomething like that. But that is very complicated, and only used when\nyou actually have corruption -- which is presumably extremely rare in\nreality. That's very unappealing as a project.\n\nI have always believed that the big problem is not \"known unknowns\".\nRather, I think that the problem is \"unknown unknowns\". I accept that\nyou have a point, especially when it comes to heap checking, but even\nthere the most important consideration should be to make corruption\ndetection thorough and cheap. The vast vast majority of databases do\nnot have any corruption at any given time. You're not searching for a\nneedle in a haystack; you're searching for a needle in many many\nhaystacks within a field filled with haystacks, which taken together\nprobably contain no needles at all. (OTOH, once you find one needle\nall bets are off, and you could very well go on to find a huge number\nof them.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 4 Aug 2020 09:00:23 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Fri, Jul 31, 2020 at 12:33 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm not sure what I was thinking \"back then\", but right now I'd argue\n> that the best lock against vacuum isn't a SUE, but announcing the\n> correct ->xmin, so you can be sure that clog entries won't be yanked out\n> from under you. 
Potentially with the right flag sets to avoid old enough\n> tuples being pruned.\n\nI was just thinking about this some more (and talking it over with\nMark) and I think this might actually be a really bad idea. One\nproblem with it is that it means that the oldest-xmin value can go\nbackward, which is something that I think has caused us some problems\nbefore. There are some other cases where it can happen, and I'm not\nsure that there's any necessarily fatal problem with doing it in this\ncase, but it would definitely be a shame if this contrib module broke\nsomething for core in a way that was hard to fix. But let's leave that\naside and suppose that there is no fatal problem there. Essentially\nwhat we're talking about here is advertising the table's relfrozenxid\nas our xmin. How old is that likely to be? Maybe pretty old. The\ndefault value of vacuum_freeze_table_age is 150 million transactions,\nand that's just the trigger to start vacuuming; the actual value of\nage(relfrozenxid) could easily be higher than that. But even if it's\nonly a fraction of that, it's still pretty bad. Advertising an xmin\nhalf that old (75 million transactions) is equivalent to keeping a\nsnapshot open for an amount of time equal to however long it takes you\nto burn through 75 million XIDs. For instance, if you burn 10 million\nXIDs/hour, that's the equivalent of keeping a snapshot open for 7.5\nhours. In other words, it's quite likely that doing this is going to\nmake VACUUM (and HOT pruning) drastically less effective throughout\nthe entire database cluster. To me, this seems a lot worse than just\ntaking ShareUpdateExclusiveLock on the table. 
After all,\nShareUpdateExclusiveLock will prevent VACUUM from running on that\ntable, but it only affects that one table rather than the whole\ncluster, and it \"only\" stops VACUUM from running, which is still\nbetter than having it do lots of I/O but not clean anything up.\n\nI think I see another problem with this approach, too: it's racey. If\nsome other process has entered vac_update_datfrozenxid() and has\ngotten past the calls to GetOldestXmin() and GetOldestMultiXactId(),\nand we then advertise an older xmin (and I guess also oldestMXact) it\ncan still go on to update datfrozenxid/datminmxid and then truncate\nthe SLRUs. Even holding XactTruncationLock is insufficient to protect\nagainst this race condition, and there doesn't seem to be any other\nobvious approach, either.\n\nSo I would like to back up a minute and lay out the possible solutions\nas I understand them. The specific problem here I'm talking about here\nis: how do we keep from looking up an XID or MXID whose information\nmight have been truncated away from the relevant SLRU?\n\n1. Take a ShareUpdateExclusiveLock on the table. This prevents VACUUM\nfrom running concurrently on this table (which sucks), but that for\nsure guarantees that the table's relfrozenxid and relminmxid can't\nadvance, which precludes a concurrent CLOG truncation.\n\n2. Advertise an older xmin and minimum MXID. See above.\n\n3. Acquire XactTruncationLock for each lookup, like pg_xact_status().\nOne downside here is a lot of extra lock acquisitions, but we can\nmitigate that to some degree by caching the results of lookups, and by\nnot doing it for XIDs that our newer than our advertised xmin (which\nmust be OK) or at least as old as the newest XID we previously\ndiscovered to be unsafe to look up (because those must not be OK\neither). The problem case is a table with lots of different XIDs that\nare all new enough to look up but older than our xmin, e.g. a table\npopulated using many single-row inserts. 
But even if we hit this case,\nhow bad is it really? I don't think XactTruncationLock is particularly\nhot, so maybe it just doesn't matter very much. We could contend\nagainst other sessions checking other tables, or against widespread\nuse of pg_xact_status(), but I think that's about it. Another downside\nof this approach is that I'm not sure it does anything to help us with\nthe MXID case; fixing that might require building some new\ninfrastructure similar to XactTruncationLock but for MXIDs.\n\n4. Provide entrypoints for looking up XIDs that fail gently instead of\nthrowing errors. I've got my doubts about how practical this is; if\nit's easy, why didn't we do that instead of inventing\nXactTruncationLock?\n\nMaybe there are other options here, too? At the moment, I'm thinking\nthat (2) and (4) are just bad and so we ought to either do (3) if it\ndoesn't suck too much for performance (which I don't quite see why it\nshould, but it might) or else fall back on (1). (1) doesn't feel\nclever enough but it might be better to be not clever enough than to\nbe too clever.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 4 Aug 2020 12:07:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Aug 4, 2020 at 12:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> With indexes you tend to have redundancy in how relationships among\n> pages are described. So you have siblings whose pointers must be in\n> agreement (left points to right, right points to left), and it's not\n> clear which one you should trust when they don't agree. It's not like\n> simple heuristics get you all that far. I really can't think of a good\n> one, and detecting corruption should mean detecting truly exceptional\n> cases. I guess you could build a model based on Bayesian methods, or\n> something like that. 
But that is very complicated, and only used when\n> you actually have corruption -- which is presumably extremely rare in\n> reality. That's very unappealing as a project.\n\nI think it might be possible to distinguish between different types of\ncorruption and to separate, at least to some degree, the checking\nassociated with each type. I think one can imagine something that\nchecks the structure of a btree without regard to the contents. That\nis, it cares that left and right links are consistent with each other\nand with downlinks from the parent level. So it checks things like the\nleft link of the page to which my right link points is pointing back\nto me, and that's also the page to which my parent's next downlink\npoints. It could also verify that there's a proper tree structure,\nwhere every page has a well-defined tree level. So you assign the root\npage level 1, and each time you traverse a downlink you assign that\npage a level one larger. If you ever try to assign to a page a level\nunequal to the level previously assigned to it, you report that as a\nproblem. You can check, too, that if a page does not have a left or\nright link, it's actually the last page at that level according what\nyou saw at the parent, grandparent, etc. levels. Finally, you can\ncheck that all of the max-level pages you can find are leaf pages, and\nthe others are all internal pages. All of this structural stuff can be\nverified without caring a whit about what keys you've got or what they\nmean or whether there's even a heap associated with this index.\n\nNow a second type of checking, which can also be done without regard\nto keys, is checking that the TIDs in the index point to TIDs that are\non heap pages that actually exist, and that the corresponding items\nare not unused, nor are they tuples which are not the root of a HOT\nchain. 
Passing a check of this type doesn't prove that the index and\nheap are consistent, but failing it proves that they are inconsistent.\nThis kind of check can be done on every leaf index page you can find\nby any means even if it fails the structural checks described above.\nFailure of these checks on one page does not preclude checking the\nsame invariants for other pages. Let's call this kind of thing \"basic\nindex-heap sanity checking.\"\n\nA third type of checking is to verify the relationship between the\nindex keys within and across the index pages: are the keys actually in\norder within a page, and are they in order across pages? The first\npart of this can be checked individually for each page pretty much no\nmatter what other problems we may have; we only have to abandon this\nchecking for a particular page if it's total garbage and we cannot\nidentify any index items on the page at all. The second part, though,\nhas the problem you mention. I think the solution is to skip the\nsecond part of the check for any pages that failed related structural\nchecks. For example, if my right sibling thinks that I am not its left\nsibling, or my right sibling and I agree that we are siblings but do\nnot agree on who our parent is, or if that parent does not agree that\nwe have the same sibling relationship that we think we have, then we\nshould report that problem and forget about issuing any complaints\nabout the relationship between my key space and that sibling's key\nspace. The internal consistency of each page with respect to key\nordering can still be verified, though, and it's possible that my key\nspace can be validly compared to the key space of my other sibling, if\nthe structural checks pass on that side.\n\nA fourth type of checking is to verify the index key against the keys\nin the heap tuples to which they point, but only for index tuples that\npassed the basic index-heap sanity checking and where the tuples have\nnot been pruned. 
This can be sensibly done even if the structural\nchecks or index-ordering checks have failed.\n\nI don't mean to suggest that one would implement all of these things\nas separate phases; that would be crazy expensive, and what if things\nchanged by the time you visit the page? Rather, the checks likely\nought to be interleaved, just keeping track internally of which things\nneed to be skipped because prerequisite checks have already failed.\n\nAside from providing a way to usefully continue after errors, this\nwould also be useful in certain scenarios where you want to know what\nkind of corruption you have. For example, suppose that I start getting\nwrong answers from index lookups on a particular index. Upon\ninvestigation, it turns out that my last glibc update changed my OS\ncollation definitions for the collation I'm using, and therefore it is\nto be expected that some of my keys may appear to be out of order with\nrespect to the new definitions. Now what I really want to know before\nrunning REINDEX is that this is the only problem I have. It would be\namazing if I could run the tool and have it give me a list of problems\nso that I could confirm that I have only index-ordering problems, not\nany other kind, and even more amazing if it could tell me the specific\nkeys that were affected so that I could understand exactly how the\nsorting behavior changed. If I were to discover that my index also has\nstructural problems or inconsistencies with the heap, then I'd know\nthat it couldn't be right to blame it only the collation update;\nsomething else has gone wrong.\n\nI'm speaking here with fairly limited knowledge of the details of how\nall this actually works and, again, I'm not trying to suggest that you\nor anyone is obligated to do any work on this, or that it would be\neasy to accomplish or worth the time it took. 
I'm just trying to\nsketch out what I see as maybe being theoretically possible, and why I\nthink it would be useful if it did.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 4 Aug 2020 12:44:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Aug 4, 2020 at 9:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think it might be possible to distinguish between different types of\n> corruption and to separate, at least to some degree, the checking\n> associated with each type. I think one can imagine something that\n> checks the structure of a btree without regard to the contents. That\n> is, it cares that left and right links are consistent with each other\n> and with downlinks from the parent level. So it checks things like the\n> left link of the page to which my right link points is pointing back\n> to me, and that's also the page to which my parent's next downlink\n> points.\n\nI think that this kind of phased approach to B-Tree verification is\npossible, more or less, but hard to justify. And it seems impossible\nto do with only an AccessShareLock.\n\nIt's not clear that what you describe is much better than just\nchecking a bunch of indexes and seeing what patterns emerge. For\nexample, the involvement of collated text might be a common factor\nacross indexes. That kind of pattern is the first thing that I look\nfor, and often the only thing. It also serves to give me an idea of\nhow messed up things are. There are not that many meaningful degrees\nof messed-up with indexes in my experience. The first error really\ndoes tell you most of what you need to know about any given corrupt\nindex. Kind of like how you can bucket the number of cockroaches in\nyour home into perhaps three meaningful buckets: 0 cockroaches, at\nleast 1 cockroach, and lots of cockroaches. 
(Even there, if you really\ncare about the distinction between the second and third bucket,\nsomething has gone terribly wrong -- so even three buckets seems like\na lot to me.)\n\nFWIW, current DEBUG1 + DEBUG2 output for amcheck shows you quite a lot\nof details about the tree structure. It's a handy way of getting a\nsense of what's going on at a high level. For example, if index\ncorruption is found very early on, that strongly suggests that it's\npretty pervasive.\n\n> Now a second type of checking, which can also be done without regard\n> to keys, is checking that the TIDs in the index point to TIDs that are\n> on heap pages that actually exist, and that the corresponding items\n> are not unused, nor are they tuples which are not the root of a HOT\n> chain. Passing a check of this type doesn't prove that the index and\n> heap are consistent, but failing it proves that they are inconsistent.\n> This kind of check can be done on every leaf index page you can find\n> by any means even if it fails the structural checks described above.\n> Failure of these checks on one page does not preclude checking the\n> same invariants for other pages. Let's call this kind of thing \"basic\n> index-heap sanity checking.\"\n\nOne real weakness in the current code is our inability to detect index\ntuples that are in the correct order and so on, but point to the wrong\nthing -- we can detect that if it manifests itself as the absence of\nan index tuple that should be in the index (when you use\nheapallindexed verification), but we cannot *reliably* detect the\npresence of an index tuple that shouldn't be in the index at all\n(though in practice it probably mostly gets caught).\n\nThe checks on the tree structure itself are excellent with\nbt_index_parent_check() following Alexander's commit d114cc53 (which I\nthought was really excellent work). But we still have that one\nremaining blind spot in verify_nbtree.c, even when you opt in to every\npossible type of verification (i.e. 
bt_index_parent_check() with all\noptions). I'd much rather fix that, or help with the new heap checker\nstuff.\n\n> A fourth type of checking is to verify the index key against the keys\n> in the heap tuples to which they point, but only for index tuples that\n> passed the basic index-heap sanity checking and where the tuples have\n> not been pruned. This can be sensibly done even if the structural\n> checks or index-ordering checks have failed.\n\nThat's going to require the equivalent of a merge join, which is\nterribly expensive relative to such a small benefit.\n\n> Aside from providing a way to usefully continue after errors, this\n> would also be useful in certain scenarios where you want to know what\n> kind of corruption you have. For example, suppose that I start getting\n> wrong answers from index lookups on a particular index. Upon\n> investigation, it turns out that my last glibc update changed my OS\n> collation definitions for the collation I'm using, and therefore it is\n> to be expected that some of my keys may appear to be out of order with\n> respect to the new definitions. Now what I really want to know before\n> running REINDEX is that this is the only problem I have. It would be\n> amazing if I could run the tool and have it give me a list of problems\n> so that I could confirm that I have only index-ordering problems, not\n> any other kind, and even more amazing if it could tell me the specific\n> keys that were affected so that I could understand exactly how the\n> sorting behavior changed.\n\nThis detail seems really hard. There are probably lots of cases where\nthe sorting behavior changed but it just didn't affect you, given the\ndata you had -- it just so happened that you didn't have exactly the\nwrong kind of diacritic mark or whatever. After all, revisions to how\nstrings in a given natural language are supposed to sort are likely to\nbe relatively rare and relatively obscure (even among people that\nspeak the language in question). 
Also, the task of figuring out if the\ntuple to the left or the right is in the wrong order seems kind of\ndaunting.\n\nMeanwhile, a simple smoke test covering many indexes probably gives\nyou a fairly meaningful idea of the extent of the damage, without\nrequiring that we do any hard engineering work.\n\n> I'm speaking here with fairly limited knowledge of the details of how\n> all this actually works and, again, I'm not trying to suggest that you\n> or anyone is obligated to do any work on this, or that it would be\n> easy to accomplish or worth the time it took. I'm just trying to\n> sketch out what I see as maybe being theoretically possible, and why I\n> think it would be useful if it did.\n\nI don't think that your relatively limited knowledge of the B-Tree\ncode is an issue here -- your intuitions seem pretty reasonable. I\nappreciate your perspective here. Corruption detection presents us\nwith some odd qualitative questions of the kind that are just awkward\nto discuss. Discouraging perspectives that don't quite match my own\nwould be quite counterproductive.\n\nThat having been said, I suspect that this is a huge task for a small\nbenefit. It's exceptionally hard to test because you have lots of\nnon-trivial code that only gets used in circumstances that by\ndefinition should never happen. If users really needed to recover the\ndata in the index then maybe it would happen -- but they don't.\n\nThe biggest problem that amcheck currently has is that it isn't used\nenough, because it isn't positioned as a general purpose tool at all.\nI'm hoping that the work from Mark helps with that.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 4 Aug 2020 18:06:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Aug 4, 2020 at 9:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> of messed-up with indexes in my experience. 
The first error really\n> does tell you most of what you need to know about any given corrupt\n> index. Kind of like how you can bucket the number of cockroaches in\n> your home into perhaps three meaningful buckets: 0 cockroaches, at\n> least 1 cockroach, and lots of cockroaches. (Even there, if you really\n> care about the distinction between the second and third bucket,\n> something has gone terribly wrong -- so even three buckets seems like\n> a lot to me.)\n\nNot sure I agree with this. As a homeowner, the distinction between 0\nand 1 is less significant to me than the distinction between a few\n(preferably in places where I'll never see them) and whole lot. I\nagree with you to an extent though: all I really care about is whether\nI have too few to worry about, enough that I'd better try to take care\nof it somehow, or so many that I need a professional exterminator. If,\nhowever, I were a professional exterminator, I would be unhappy with\njust knowing that there are few problems or many. I suspect I would\nwant to know something about where the problems were, and get a more\nnuanced indication of just how bad things are in each location.\n\nFWIW, pg_catcheck is an example of an existing tool (designed by me\nand written partially by me) that uses the kind of model I'm talking\nabout. It does a single SELECT * FROM pg_<whatever> on each catalog\ntable - so that it doesn't get confused if your system catalog indexes\nare messed up - and then performs a bunch of cross-checks on the\ntuples it gets back and tells you about all the messed up stuff. If it\ncan't get data from all your catalog tables it performs whichever\nchecks are valid given what data it was able to get. As a professional\nexterminator of catalog corruption, I find it quite helpful. 
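As a concrete (and heavily simplified) illustration of what one such
cross-check amounts to -- toy structures here, not pg_catcheck's real
code -- consider verifying that every pg_class row's relnamespace
references an existing pg_namespace OID:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a pg_class row; fields invented for illustration. */
typedef struct ToyClassRow
{
    unsigned int oid;
    unsigned int relnamespace;  /* should reference a pg_namespace row */
} ToyClassRow;

/*
 * One pg_catcheck-style cross-check: count pg_class rows whose
 * relnamespace points at no existing pg_namespace OID.
 */
static int
count_dangling_namespaces(const ToyClassRow *classes, size_t nclasses,
                          const unsigned int *nsp_oids, size_t nnsp)
{
    int dangling = 0;

    for (size_t i = 0; i < nclasses; i++)
    {
        bool found = false;

        for (size_t j = 0; j < nnsp; j++)
        {
            if (classes[i].relnamespace == nsp_oids[j])
            {
                found = true;
                break;
            }
        }
        if (!found)
            dangling++;
    }
    return dangling;
}
```

The real tool runs many checks of roughly this shape and reports each
failure individually instead of merely counting, but the principle is
the same.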
If\nsomeone sends me the output from a database cluster, I can tell right\naway whether they are just fine, in a little bit of trouble, or in a\nwhole lot of trouble; I can speculate pretty well about what kind of\nthing might've happened to cause the problem; and I can recommend\nsteps to straighten things out.\n\n> FWIW, current DEBUG1 + DEBUG2 output for amcheck shows you quite a lot\n> of details about the tree structure. It's a handy way of getting a\n> sense of what's going on at a high level. For example, if index\n> corruption is found very early on, that strongly suggests that it's\n> pretty pervasive.\n\nInteresting.\n\n> > A fourth type of checking is to verify the index key against the keys\n> > in the heap tuples to which they point, but only for index tuples that\n> > passed the basic index-heap sanity checking and where the tuples have\n> > not been pruned. This can be sensibly done even if the structural\n> > checks or index-ordering checks have failed.\n>\n> That's going to require the equivalent of a merge join, which is\n> terribly expensive relative to such a small benefit.\n\nI think it depends on how big your data is. If you've got a 2TB table\nand 512GB of RAM, it's pretty impractical no matter the algorithm. But\nfor small tables even a naive nested loop will suffice.\n\n> Meanwhile, a simple smoke test covering many indexes probably gives\n> you a fairly meaningful idea of the extent of the damage, without\n> requiring that we do any hard engineering work.\n\nIn my experience, when EDB customers complain about corruption-related\nproblems, the two most common patterns are: (1) my whole system is\nmessed up and (2) I have one or a few specific objects which are\nmessed up and everything else is fine. The first category is often\nsomething like inability to start the database, or scary messages in\nthe log file complaining about, say, checkpoints failing. The second\ncategory is the one I'm worried about here. 
The people who are in this\ncategory generally already know which things are broken; they've\nfigured that out through trial and error. Sometimes they miss some\nproblems, but more frequently, in my experience, their understanding\nof what problems they have is accurate. Now that category of users can\nbe further decomposed into two groups: the people who don't care what\nhappened and just want to barrel through it, and the people who do\ncare what happened and want to know what happened, why it happened,\nwhether it's a bug, etc. The first group are unproblematic: tell them\nto REINDEX (or restore from backup, or whatever) and you're done.\n\nThe second group is a lot harder. It is in general difficult to\nspeculate about how something that is now wrong got that way given\nknowledge only of the present state of affairs. But good tooling makes\nit easier to speculate intelligently. To take a classic example,\nthere's a great difference between a checksum failure caused by the\nchecksum being incorrect on an otherwise-valid page; a checksum\nfailure on a page the first half of which appears valid and the second\nhalf of which looks like it might be some other database page; and a\nchecksum failure on a page whose contents appear to be taken from a\nMicrosoft Word document. I'm not saying we ever want a tool which\ntries to figure that sort of thing out in an automated way; there's no\nsubstitute for human intelligence (yet, anyway). But, the more the\ntools we do have localize the problems to particular pages or tuples\nand describe them accurately, the easier it is to do manual\ninvestigation as follow-up, when it's necessary.\n\n> That having been said, I suspect that this is a huge task for a small\n> benefit. It's exceptionally hard to test because you have lots of\n> non-trivial code that only gets used in circumstances that by\n> definition should never happen. 
If users really needed to recover the\n> data in the index then maybe it would happen -- but they don't.\n\nYep, that's a very key difference as compared to the heap.\n\n> The biggest problem that amcheck currently has is that it isn't used\n> enough, because it isn't positioned as a general purpose tool at all.\n> I'm hoping that the work from Mark helps with that.\n\nAgreed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 10:08:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Aug 5, 2020 at 7:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Not sure I agree with this. As a homeowner, the distinction between 0\n> and 1 is less significant to me than the distinction between a few\n> (preferably in places where I'll never see them) and whole lot. I\n> agree with you to an extent though: all I really care about is whether\n> I have too few to worry about, enough that I'd better try to take care\n> of it somehow, or so many that I need a professional exterminator. If,\n> however, I were a professional exterminator, I would be unhappy with\n> just knowing that there are few problems or many. I suspect I would\n> want to know something about where the problems were, and get a more\n> nuanced indication of just how bad things are in each location.\n\nRight, but the professional exterminator can be expected to use expert\nlevel tools, where a great deal of technical sophistication is\nrequired to interpret what's going on sensibly. 
An amateur can only\nuse them to determine if something is wrong at all, which is usually\nnot how they add value.\n\n(I think that my analogy is slightly flawed in that it hinged upon\neverybody hating cockroaches as much as I do, which is more than the\nordinary amount.)\n\n> FWIW, pg_catcheck is an example of an existing tool (designed by me\n> and written partially by me) that uses the kind of model I'm talking\n> about. It does a single SELECT * FROM pg_<whatever> on each catalog\n> table - so that it doesn't get confused if your system catalog indexes\n> are messed up - and then performs a bunch of cross-checks on the\n> tuples it gets back and tells you about all the messed up stuff. If it\n> can't get data from all your catalog tables it performs whichever\n> checks are valid given what data it was able to get. As a professional\n> exterminator of catalog corruption, I find it quite helpful.\n\nI myself seem to have had quite different experiences with corruption,\npresumably because it happened at product companies like Heroku. I\ntended to find software bugs (e.g. the one fixed by commit 008c4135)\nthat were rare and novel by casting a wide net over a large number of\nrelatively homogeneous databases. Whereas your experiences tend to\ninvolve large support customers with more opportunity for operator\nerror. Both perspectives are important.\n\n> The second group is a lot harder. It is in general difficult to\n> speculate about how something that is now wrong got that way given\n> knowledge only of the present state of affairs. But good tooling makes\n> it easier to speculate intelligently. 
To take a classic example,\n> there's a great difference between a checksum failure caused by the\n> checksum being incorrect on an otherwise-valid page; a checksum\n> failure on a page the first half of which appears valid and the second\n> half of which looks like it might be some other database page; and a\n> checksum failure on a page whose contents appear to be taken from a\n> Microsoft Word document. I'm not saying we ever want a tool which\n> tries to figure that sort of thing out in an automated way; there's no\n> substitute for human intelligence (yet, anyway).\n\nI wrote my own expert level tool, pg_hexedit. I have to admit that the\nlevel of interest in that tool doesn't seem to be all that great,\nthough I myself have used it to investigate corruption to great\neffect. But I suppose there is no way to know how it's being used.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 Aug 2020 13:36:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Aug 5, 2020 at 4:36 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Right, but the professional exterminator can be expected to use expert\n> level tools, where a great deal of technical sophistication is\n> required to interpret what's going on sensibly. An amatuer can only\n> use them to determine if something is wrong at all, which is usually\n> not how they add value.\n\nQuite true.\n\n> I myself seem to have had quite different experiences with corruption,\n> presumably because it happened at product companies like Heroku. I\n> tended to find software bugs (e.g. the one fixed by commit 008c4135)\n> that were rare and novel by casting a wide net over a large number of\n> relatively homogenous databases. Whereas your experiences tend to\n> involve large support customers with more opportunity for operator\n> error. Both perspectives are important.\n\nI concur.\n\n> I wrote my own expert level tool, pg_hexedit. 
I have to admit that the\n> level of interest in that tool doesn't seem to be all that great,\n> though I myself have used it to investigate corruption to great\n> effect. But I suppose there is no way to know how it's being used.\n\nI admit not to having tried pg_hexedit, but I doubt that it would help\nme very much outside of my own development work. The problem is that\nin a typical case I am trying to help someone in a professional\ncapacity without access to their machines, and without knowledge of\ntheir environment or data. Moreover, sometimes the person I'm trying\nto help is an unreliable narrator. I can ask people to run tools they\nhave and send the output, and then I can look at that output and tell\nthem what to do next. But it has to be a tool they have (or they can\neasily get) and it can't involve any complicated if-then stuff.\nSomething like \"if the page is totally garbled then do X but if it\nlooks mostly OK then do Y\" is radically out of reach. They have no\nclue about that. Hence my interest in tools that automate as much of\nthe investigation as may be practical.\n\nWe're probably beating this topic to death at this point; I don't\nthink we are really in any sort of meaningful disagreement, and the\nnext steps in this particular case seem clear enough.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 6 Aug 2020 12:43:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Jul 30, 2020 at 11:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jul 27, 2020 at 1:02 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > Not at all! 
I appreciate all the reviews.\n>\n> Reviewing 0002, reading through verify_heapam.c:\n>\n> +typedef enum SkipPages\n> +{\n> + SKIP_ALL_FROZEN_PAGES,\n> + SKIP_ALL_VISIBLE_PAGES,\n> + SKIP_PAGES_NONE\n> +} SkipPages;\n>\n> This looks inconsistent. Maybe just start them all with SKIP_PAGES_.\n>\n> + if (PG_ARGISNULL(0))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"missing required parameter for 'rel'\")));\n>\n> This doesn't look much like other error messages in the code. Do\n> something like git grep -A4 PG_ARGISNULL | grep -A3 ereport and study\n> the comparables.\n>\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"unrecognized parameter for 'skip': %s\", skip),\n> + errhint(\"please choose from 'all-visible', 'all-frozen', or 'none'\")));\n>\n> Same problem. Check pg_prewarm's handling of the prewarm type, or\n> EXPLAIN's handling of the FORMAT option, or similar examples. Read the\n> message style guidelines concerning punctuation of hint and detail\n> messages.\n>\n> + * Bugs in pg_upgrade are reported (see commands/vacuum.c circa line 1572)\n> + * to have sometimes rendered the oldest xid value for a database invalid.\n> + * It seems unwise to report rows as corrupt for failing to be newer than\n> + * a value which itself may be corrupt. We instead use the oldest xid for\n> + * the entire cluster, which must be at least as old as the oldest xid for\n> + * our database.\n>\n> This kind of reference to another comment will not age well; line\n> numbers and files change a lot. But I think the right thing to do here\n> is just rely on relfrozenxid and relminmxid. If the table is\n> inconsistent with those, then something needs fixing. datfrozenxid and\n> the cluster-wide value can look out for themselves. 
The corruption\n> detector shouldn't be trying to work around any bugs in setting\n> relfrozenxid itself; such problems are arguably precisely what we're\n> here to find.\n>\n> +/*\n> + * confess\n> + *\n> + * Return a message about corruption, including information\n> + * about where in the relation the corruption was found.\n> + *\n> + * The msg argument is pfree'd by this function.\n> + */\n> +static void\n> +confess(HeapCheckContext *ctx, char *msg)\n>\n> Contrary to what the comments say, the function doesn't return a\n> message about corruption or anything else. It returns void.\n>\n> I don't really like the name, either. I get that it's probably\n> inspired by Perl, but I think it should be given a less-clever name\n> like report_corruption() or something.\n>\n> + * corrupted table from using workmem worth of memory building up the\n>\n> This kind of thing destroys grep-ability. If you're going to refer to\n> work_mem, you gotta spell it the same way we do everywhere else.\n>\n> + * Helper function to construct the TupleDesc needed by verify_heapam.\n>\n> Instead of saying it's the TupleDesc somebody needs, how about saying\n> that it's the TupleDesc that we'll use to report problems that we find\n> while scanning the heap, or something like that?\n>\n> + * Given a TransactionId, attempt to interpret it as a valid\n> + * FullTransactionId, neither in the future nor overlong in\n> + * the past. Stores the inferred FullTransactionId in *fxid.\n>\n> It really doesn't, because there's no such thing as 'fxid' referenced\n> anywhere here. You should really make the effort to proofread your\n> patches before posting, and adjust comments and so on as you go.\n> Otherwise reviewing takes longer, and if you keep introducing new\n> stuff like this as you fix other stuff, you can fail to ever produce a\n> committable patch.\n>\n> + * Determine whether tuples are visible for verification. 
Similar to\n> + * HeapTupleSatisfiesVacuum, but with critical differences.\n>\n> Not accurate, because it also reports problems, which is not mentioned\n> anywhere in the function header comment that purports to be a detailed\n> description of what the function does.\n>\n> + else if (TransactionIdIsCurrentTransactionId(raw_xmin))\n> + return true; /* insert or delete in progress */\n> + else if (TransactionIdIsInProgress(raw_xmin))\n> + return true; /* HEAPTUPLE_INSERT_IN_PROGRESS */\n> + else if (!TransactionIdDidCommit(raw_xmin))\n> + {\n> + return false; /* HEAPTUPLE_DEAD */\n> + }\n>\n> One of these cases is not punctuated like the others.\n>\n> + pstrdup(\"heap tuple with XMAX_IS_MULTI is neither LOCKED_ONLY nor\n> has a valid xmax\"));\n>\n> 1. I don't think that's very grammatical.\n>\n> 2. Why abbreviate HEAP_XMAX_IS_MULTI to XMAX_IS_MULTI and\n> HEAP_XMAX_IS_LOCKED_ONLY to LOCKED_ONLY? I don't even think you should\n> be referencing C constant names here at all, and if you are I don't\n> think you should abbreviate, and if you do abbreviate I don't think\n> you should omit different numbers of words depending on which constant\n> it is.\n>\n> I wonder what the intended division of responsibility is here,\n> exactly. It seems like you've ended up with some sanity checks in\n> check_tuple() before tuple_is_visible() is called, and others in\n> tuple_is_visible() proper. As far as I can see the comments don't\n> really discuss the logic behind the split, but there's clearly a close\n> relationship between the two sets of checks, even to the point where\n> you have \"heap tuple with XMAX_IS_MULTI is neither LOCKED_ONLY nor has\n> a valid xmax\" in tuple_is_visible() and \"tuple xmax marked\n> incompatibly as keys updated and locked only\" in check_tuple(). 
Now,\n> those are not the same check, but they seem like closely related\n> things, so it's not ideal that they happen in different functions with\n> differently-formatted messages to report problems and no explanation\n> of why it's different.\n>\n> I think it might make sense here to see whether you could either move\n> more stuff out of tuple_is_visible(), so that it really just checks\n> whether the tuple is visible, or move more stuff into it, so that it\n> has the job not only of checking whether we should continue with\n> checks on the tuple contents but also complaining about any other\n> visibility problems. Or if neither of those make sense then there\n> should be a stronger attempt to rationalize in the comments what\n> checks are going where and for what reason, and also a stronger\n> attempt to rationalize the message wording.\n>\n> + curchunk = DatumGetInt32(fastgetattr(toasttup, 2,\n> + ctx->toast_rel->rd_att, &isnull));\n>\n> Should we be worrying about the possibility of fastgetattr crapping\n> out if the TOAST tuple is corrupted?\n>\n> + if (ctx->tuphdr->t_hoff + ctx->offset > ctx->lp_len)\n> + {\n> + confess(ctx,\n> + psprintf(\"tuple attribute should start at offset %u, but tuple\n> length is only %u\",\n> + ctx->tuphdr->t_hoff + ctx->offset, ctx->lp_len));\n> + return false;\n> + }\n> +\n> + /* Skip null values */\n> + if (infomask & HEAP_HASNULL && att_isnull(ctx->attnum, ctx->tuphdr->t_bits))\n> + return true;\n> +\n> + /* Skip non-varlena values, but update offset first */\n> + if (thisatt->attlen != -1)\n> + {\n> + ctx->offset = att_align_nominal(ctx->offset, thisatt->attalign);\n> + ctx->offset = att_addlength_pointer(ctx->offset, thisatt->attlen,\n> + tp + ctx->offset);\n> + return true;\n> + }\n>\n> This looks like it's not going to complain about a fixed-length\n> attribute that overruns the tuple length. 
There's code further down\n> that handles that case for a varlena attribute, but there's nothing\n> comparable for the fixed-length case.\n>\n> + confess(ctx,\n> + psprintf(\"%s toast at offset %u is unexpected\",\n> + va_tag == VARTAG_INDIRECT ? \"indirect\" :\n> + va_tag == VARTAG_EXPANDED_RO ? \"expanded\" :\n> + va_tag == VARTAG_EXPANDED_RW ? \"expanded\" :\n> + \"unexpected\",\n> + ctx->tuphdr->t_hoff + ctx->offset));\n>\n> I suggest \"unexpected TOAST tag %d\", without trying to convert to a\n> string. Such a conversion will likely fail in the case of genuine\n> corruption, and isn't meaningful even if it works.\n>\n> Again, let's try to standardize terminology here: most of the messages\n> in this function are now of the form \"tuple attribute %d has some\n> problem\" or \"attribute %d has some problem\", but some have neither.\n> Since we're separately returning attnum I don't see why it should be\n> in the message, and if we weren't separately returning attnum then it\n> ought to be in the message the same way all the time, rather than\n> sometimes writing \"attribute\" and other times \"tuple attribute\".\n>\n> + /* Check relminmxid against mxid, if any */\n> + xmax = HeapTupleHeaderGetRawXmax(ctx->tuphdr);\n> + if (infomask & HEAP_XMAX_IS_MULTI &&\n> + MultiXactIdPrecedes(xmax, ctx->relminmxid))\n> + {\n> + confess(ctx,\n> + psprintf(\"tuple xmax %u precedes relminmxid %u\",\n> + xmax, ctx->relminmxid));\n> + fatal = true;\n> + }\n>\n> There are checks that an XID is neither too old nor too new, and\n> presumably something similar could be done for MultiXactIds, but here\n> you only check one end of the range. 
Seems like you should check both.\n>\n> + /* Check xmin against relfrozenxid */\n> + xmin = HeapTupleHeaderGetXmin(ctx->tuphdr);\n> + if (TransactionIdIsNormal(ctx->relfrozenxid) &&\n> + TransactionIdIsNormal(xmin))\n> + {\n> + if (TransactionIdPrecedes(xmin, ctx->relfrozenxid))\n> + {\n> + confess(ctx,\n> + psprintf(\"tuple xmin %u precedes relfrozenxid %u\",\n> + xmin, ctx->relfrozenxid));\n> + fatal = true;\n> + }\n> + else if (!xid_valid_in_rel(xmin, ctx))\n> + {\n> + confess(ctx,\n> + psprintf(\"tuple xmin %u follows last assigned xid %u\",\n> + xmin, ctx->next_valid_xid));\n> + fatal = true;\n> + }\n> + }\n>\n> Here you do check both ends of the range, but the comment claims\n> otherwise. Again, please proof-read for this kind of stuff.\n>\n> + /* Check xmax against relfrozenxid */\n>\n> Ditto here.\n>\n> + psprintf(\"tuple's header size is %u bytes which is less than the %u\n> byte minimum valid header size\",\n>\n> I suggest: tuple data begins at byte %u, but the tuple header must be\n> at least %u bytes\n>\n> + psprintf(\"tuple's %u byte header size exceeds the %u byte length of\n> the entire tuple\",\n>\n> I suggest: tuple data begins at byte %u, but the entire tuple length\n> is only %u bytes\n>\n> + psprintf(\"tuple's user data offset %u not maximally aligned to %u\",\n>\n> I suggest: tuple data begins at byte %u, but that is not maximally aligned\n> Or: tuple data begins at byte %u, which is not a multiple of %u\n>\n> That makes the messages look much more similar to each other\n> grammatically and is more consistent about calling things by the same\n> names.\n>\n> + psprintf(\"tuple with null values has user data offset %u rather than\n> the expected offset %u\",\n> + psprintf(\"tuple without null values has user data offset %u rather\n> than the expected offset %u\",\n>\n> I suggest merging these: tuple data offset %u, but expected offset %u\n> (%u attributes, %s)\n> where %s is either \"has nulls\" or \"no nulls\"\n>\n> In fact, aren't several of 
the above checks redundant with this one?\n> Like, why check for a value less than SizeofHeapTupleHeader or that's\n> not properly aligned first? Just check this straightaway and call it\n> good.\n>\n> + * If we get this far, the tuple is visible to us, so it must not be\n> + * incompatible with our relDesc. The natts field could be legitimately\n> + * shorter than rel's natts, but it cannot be longer than rel's natts.\n>\n> This is yet another case where you didn't update the comments.\n> tuple_is_visible() now checks whether the tuple is visible to anyone,\n> not whether it's visible to us, but the comment doesn't agree. In some\n> sense I think this comment is redundant with the previous one anyway,\n> because that one already talks about the tuple being visible. Maybe\n> just write: The tuple is visible, so it must be compatible with the\n> current version of the relation descriptor. It might have fewer\n> columns than are present in the relation descriptor, but it cannot\n> have more.\n>\n> + psprintf(\"tuple has %u attributes in relation with only %u attributes\",\n> + ctx->natts,\n> + RelationGetDescr(ctx->rel)->natts));\n>\n> I suggest: tuple has %u attributes, but relation has only %u attributes\n>\n> + /*\n> + * Iterate over the attributes looking for broken toast values. This\n> + * roughly follows the logic of heap_deform_tuple, except that it doesn't\n> + * bother building up isnull[] and values[] arrays, since nobody wants\n> + * them, and it unrolls anything that might trip over an Assert when\n> + * processing corrupt data.\n> + */\n> + ctx->offset = 0;\n> + for (ctx->attnum = 0; ctx->attnum < ctx->natts; ctx->attnum++)\n> + {\n> + if (!check_tuple_attribute(ctx))\n> + break;\n> + }\n>\n> I think this comment is too wordy. This text belongs in the header\n> comment of check_tuple_attribute(), not at the place where it gets\n> called. 
Otherwise, as you update what check_tuple_attribute() does,\n> you have to remember to come find this comment and fix it to match,\n> and you might forget to do that. In fact... looks like that already\n> happened, because check_tuple_attribute() now checks more than broken\n> TOAST attributes. Seems like you could just simplify this down to\n> something like \"Now check each attribute.\" Also, you could lose the\n> extra braces.\n>\n> - bt_index_check | relname | relpages\n> + bt_index_check | relname | relpages\n>\n> Don't include unrelated changes in the patch.\n>\n> I'm not really sure that the list of fields you're displaying for each\n> reported problem really makes sense. I think the theory here should be\n> that we want to report the information that the user needs to localize\n> the problem but not everything that they could find out from\n> inspecting the page, and not things that are too specific to\n> particular classes of errors. So I would vote for keeping blkno,\n> offnum, and attnum, but I would lose lp_flags, lp_len, and chunk.\n> lp_off feels like it's a more arguable case: technically, it's a\n> locator for the problem, because it gives you the byte offset within\n> the page, but normally we reference tuples by TID, i.e. (blkno,\n> offset), not byte offset. On balance I'd be inclined to omit it.\n>\n> --\n\nIn addition to this, I found a few more things while reading v13 patch are as\nbelow:\n\nPatch v13-0001:\n\n-\n+#include \"amcheck.h\"\n\nNot in correct order.\n\n\n+typedef struct BtreeCheckContext\n+{\n+ TupleDesc tupdesc;\n+ Tuplestorestate *tupstore;\n+ bool is_corrupt;\n+ bool on_error_stop;\n+} BtreeCheckContext;\n\nUnnecessary spaces/tabs between } and BtreeCheckContext.\n\n\n static void bt_index_check_internal(Oid indrelid, bool parentcheck,\n- bool heapallindexed, bool rootdescend);\n+ bool heapallindexed, bool rootdescend,\n+ BtreeCheckContext * ctx);\n\nUnnecessary space between * and ctx. 
The same changes needed for other places as\nwell.\n---\n\nPatch v13-0002:\n\n+-- partitioned tables (the parent ones) don't have visibility maps\n+create table test_partitioned (a int, b text default repeat('x', 5000))\n+ partition by list (a);\n+-- these should all fail\n+select * from verify_heapam('test_partitioned',\n+ on_error_stop := false,\n+ skip := NULL,\n+ startblock := NULL,\n+ endblock := NULL);\n+ERROR: \"test_partitioned\" is not a table, materialized view, or TOAST table\n+create table test_partition partition of test_partitioned for values in (1);\n+create index test_index on test_partition (a);\n\nCan't we make it work? If the input is partitioned, I think we could\ncollect all its leaf partitions and process them one by one. Thoughts?\n\n\n+ ctx->chunkno++;\n\nInstead of incrementing in check_toast_tuple(), I think incrementing should\nhappen at the caller -- just after check_toast_tuple() call.\n---\n\nPatch v13-0003:\n\n+ resetPQExpBuffer(query);\n+ destroyPQExpBuffer(query);\n\nresetPQExpBuffer() will be unnecessary if the next call is destroyPQExpBuffer().\n\n\n+ appendPQExpBuffer(query,\n+ \"SELECT c.relname, v.blkno, v.offnum, v.lp_off, \"\n+ \"v.lp_flags, v.lp_len, v.attnum, v.chunk, v.msg\"\n+ \"\\nFROM verify_heapam(rel := %u, on_error_stop := %s, \"\n+ \"skip := %s, startblock := %s, endblock := %s) v, \"\n+ \"pg_class c\"\n+ \"\\nWHERE c.oid = %u\",\n+ tbloid, stop, skip, settings.startblock,\n+ settings.endblock, tbloid\n\npg_class should be schema-qualified like elsewhere. IIUC, pg_class is meant to\nget relname only, instead, we could use '%u'::pg_catalog.regclass in the target\nlist for the relname. 
Thoughts?\n\nAlso I think we should skip '\\n' from the query string (see appendPQExpBuffer()\nin pg_dump.c)\n\n\n+ appendPQExpBuffer(query,\n+ \"SELECT i.indexrelid\"\n+ \"\\nFROM pg_catalog.pg_index i, pg_catalog.pg_class c\"\n+ \"\\nWHERE i.indexrelid = c.oid\"\n+ \"\\n AND c.relam = %u\"\n+ \"\\n AND i.indrelid = %u\",\n+ BTREE_AM_OID, tbloid);\n+\n+ ExecuteSqlStatement(\"RESET search_path\");\n+ res = ExecuteSqlQuery(query->data, PGRES_TUPLES_OK);\n+ PQclear(ExecuteSqlQueryForSingleRow(ALWAYS_SECURE_SEARCH_PATH_SQL));\n\nI don't think we need the search_path query. The main query doesn't have any\ndependencies on it. Same is in check_indexes(), check_index (),\nexpand_table_name_patterns() & get_table_check_list().\nCorrect me if I am missing something.\n\n\n+ output = PageOutput(lines + 2, NULL);\n+ for (lineno = 0; usage_text[lineno]; lineno++)\n+ fprintf(output, \"%s\\n\", usage_text[lineno]);\n+ fprintf(output, \"Report bugs to <%s>.\\n\", PACKAGE_BUGREPORT);\n+ fprintf(output, \"%s home page: <%s>\\n\", PACKAGE_NAME, PACKAGE_URL);\n\nI am not sure why we want PageOutput() if the second argument is always going to\nbe NULL? Can't we directly use printf() instead of PageOutput() + fprintf() ?\ne.g. 
usage() function in pg_basebackup.c.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 17 Aug 2020 10:07:38 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Aug 16, 2020, at 9:37 PM, Amul Sul <sulamul@gmail.com> wrote:\n> \n> In addition to this, I found a few more things while reading v13 patch are as\n> below:\n> \n> Patch v13-0001:\n> \n> -\n> +#include \"amcheck.h\"\n> \n> Not in correct order.\n\nFixed.\n\n> +typedef struct BtreeCheckContext\n> +{\n> + TupleDesc tupdesc;\n> + Tuplestorestate *tupstore;\n> + bool is_corrupt;\n> + bool on_error_stop;\n> +} BtreeCheckContext;\n> \n> Unnecessary spaces/tabs between } and BtreeCheckContext.\n\nThis refers to a change in verify_nbtree.c that has been removed. Per discussions with Peter and Robert, I have simply withdrawn that portion of the patch.\n\n> static void bt_index_check_internal(Oid indrelid, bool parentcheck,\n> - bool heapallindexed, bool rootdescend);\n> + bool heapallindexed, bool rootdescend,\n> + BtreeCheckContext * ctx);\n> \n> Unnecessary space between * and ctx. The same changes needed for other places as\n> well.\n\nSame as above. The changes to verify_nbtree.c have been withdrawn.\n\n> ---\n> \n> Patch v13-0002:\n> \n> +-- partitioned tables (the parent ones) don't have visibility maps\n> +create table test_partitioned (a int, b text default repeat('x', 5000))\n> + partition by list (a);\n> +-- these should all fail\n> +select * from verify_heapam('test_partitioned',\n> + on_error_stop := false,\n> + skip := NULL,\n> + startblock := NULL,\n> + endblock := NULL);\n> +ERROR: \"test_partitioned\" is not a table, materialized view, or TOAST table\n> +create table test_partition partition of test_partitioned for values in (1);\n> +create index test_index on test_partition (a);\n> \n> Can't we make it work? 
If the input is partitioned, I think we could\n> collect all its leaf partitions and process them one by one. Thoughts?\n\nI was following the example from pg_visibility. I haven't thought about your proposal enough to have much opinion as yet, except that if we do this for pg_amcheck we should do likewise to pg_visibility, for consistency of the user interface.\n\n> + ctx->chunkno++;\n> \n> Instead of incrementing in check_toast_tuple(), I think incrementing should\n> happen at the caller -- just after check_toast_tuple() call.\n\nI agree.\n\n> ---\n> \n> Patch v13-0003:\n> \n> + resetPQExpBuffer(query);\n> + destroyPQExpBuffer(query);\n> \n> resetPQExpBuffer() will be unnecessary if the next call is destroyPQExpBuffer().\n\nThanks. I removed it in cases where destroyPQExpBuffer is obviously the very next call.\n\n> + appendPQExpBuffer(query,\n> + \"SELECT c.relname, v.blkno, v.offnum, v.lp_off, \"\n> + \"v.lp_flags, v.lp_len, v.attnum, v.chunk, v.msg\"\n> + \"\\nFROM verify_heapam(rel := %u, on_error_stop := %s, \"\n> + \"skip := %s, startblock := %s, endblock := %s) v, \"\n> + \"pg_class c\"\n> + \"\\nWHERE c.oid = %u\",\n> + tbloid, stop, skip, settings.startblock,\n> + settings.endblock, tbloid\n> \n> pg_class should be schema-qualified like elsewhere.\n\nAgreed, and changed.\n\n> IIUC, pg_class is meant to\n> get relname only, instead, we could use '%u'::pg_catalog.regclass in the target\n> list for the relname. Thoughts?\n\nget_table_check_list() creates the list of all tables to be checked, which check_tables() then iterates over, calling check_table() for each one. I think some verification that the table still exists is in order. 
Using '%u'::pg_catalog.regclass for a table that has since been dropped would pass in the old table Oid and draw an error of the 'ERROR: could not open relation with OID 36311' variety, whereas the current coding will just skip the dropped table.\n\n> Also I think we should skip '\\n' from the query string (see appendPQExpBuffer()\n> in pg_dump.c)\n\nI'm not sure I understand. pg_dump.c uses \"\\n\" in query strings it passes to appendPQExpBuffer(), in a manner very similar to what this patch does.\n\n> + appendPQExpBuffer(query,\n> + \"SELECT i.indexrelid\"\n> + \"\\nFROM pg_catalog.pg_index i, pg_catalog.pg_class c\"\n> + \"\\nWHERE i.indexrelid = c.oid\"\n> + \"\\n AND c.relam = %u\"\n> + \"\\n AND i.indrelid = %u\",\n> + BTREE_AM_OID, tbloid);\n> +\n> + ExecuteSqlStatement(\"RESET search_path\");\n> + res = ExecuteSqlQuery(query->data, PGRES_TUPLES_OK);\n> + PQclear(ExecuteSqlQueryForSingleRow(ALWAYS_SECURE_SEARCH_PATH_SQL));\n> \n> I don't think we need the search_path query. The main query doesn't have any\n> dependencies on it. Same is in check_indexes(), check_index (),\n> expand_table_name_patterns() & get_table_check_list().\n> Correct me if I am missing something.\n\nRight.\n\n> + output = PageOutput(lines + 2, NULL);\n> + for (lineno = 0; usage_text[lineno]; lineno++)\n> + fprintf(output, \"%s\\n\", usage_text[lineno]);\n> + fprintf(output, \"Report bugs to <%s>.\\n\", PACKAGE_BUGREPORT);\n> + fprintf(output, \"%s home page: <%s>\\n\", PACKAGE_NAME, PACKAGE_URL);\n> \n> I am not sure why we want PageOutput() if the second argument is always going to\n> be NULL? Can't we directly use printf() instead of PageOutput() + fprintf() ?\n> e.g. usage() function in pg_basebackup.c.\n\nDone.\n\n\nPlease find attached the next version of the patch. 
In addition to your review comments (above), I have made changes in response to Peter and Robert's review comments upthread.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 19 Aug 2020 19:30:27 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Aug 20, 2020 at 8:00 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Aug 16, 2020, at 9:37 PM, Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > In addition to this, I found a few more things while reading v13 patch are as\n> > below:\n> >\n> > Patch v13-0001:\n> >\n> > -\n> > +#include \"amcheck.h\"\n> >\n> > Not in correct order.\n>\n> Fixed.\n>\n> > +typedef struct BtreeCheckContext\n> > +{\n> > + TupleDesc tupdesc;\n> > + Tuplestorestate *tupstore;\n> > + bool is_corrupt;\n> > + bool on_error_stop;\n> > +} BtreeCheckContext;\n> >\n> > Unnecessary spaces/tabs between } and BtreeCheckContext.\n>\n> This refers to a change in verify_nbtree.c that has been removed. Per discussions with Peter and Robert, I have simply withdrawn that portion of the patch.\n>\n> > static void bt_index_check_internal(Oid indrelid, bool parentcheck,\n> > - bool heapallindexed, bool rootdescend);\n> > + bool heapallindexed, bool rootdescend,\n> > + BtreeCheckContext * ctx);\n> >\n> > Unnecessary space between * and ctx. The same changes needed for other places as\n> > well.\n>\n> Same as above. 
The changes to verify_nbtree.c have been withdrawn.\n>\n> > ---\n> >\n> > Patch v13-0002:\n> >\n> > +-- partitioned tables (the parent ones) don't have visibility maps\n> > +create table test_partitioned (a int, b text default repeat('x', 5000))\n> > + partition by list (a);\n> > +-- these should all fail\n> > +select * from verify_heapam('test_partitioned',\n> > + on_error_stop := false,\n> > + skip := NULL,\n> > + startblock := NULL,\n> > + endblock := NULL);\n> > +ERROR: \"test_partitioned\" is not a table, materialized view, or TOAST table\n> > +create table test_partition partition of test_partitioned for values in (1);\n> > +create index test_index on test_partition (a);\n> >\n> > Can't we make it work? If the input is partitioned, I think we could\n> > collect all its leaf partitions and process them one by one. Thoughts?\n>\n> I was following the example from pg_visibility. I haven't thought about your proposal enough to have much opinion as yet, except that if we do this for pg_amcheck we should do likewise to pg_visibility, for consistency of the user interface.\n>\n\npg_visibility does exist from before the declarative partitioning came\nin, I think it's time to improve that as well.\n\n> > + ctx->chunkno++;\n> >\n> > Instead of incrementing in check_toast_tuple(), I think incrementing should\n> > happen at the caller -- just after check_toast_tuple() call.\n>\n> I agree.\n>\n> > ---\n> >\n> > Patch v13-0003:\n> >\n> > + resetPQExpBuffer(query);\n> > + destroyPQExpBuffer(query);\n> >\n> > resetPQExpBuffer() will be unnecessary if the next call is destroyPQExpBuffer().\n>\n> Thanks. 
I removed it in cases where destroyPQExpBuffer is obviously the very next call.\n>\n> > + appendPQExpBuffer(query,\n> > + \"SELECT c.relname, v.blkno, v.offnum, v.lp_off, \"\n> > + \"v.lp_flags, v.lp_len, v.attnum, v.chunk, v.msg\"\n> > + \"\\nFROM verify_heapam(rel := %u, on_error_stop := %s, \"\n> > + \"skip := %s, startblock := %s, endblock := %s) v, \"\n> > + \"pg_class c\"\n> > + \"\\nWHERE c.oid = %u\",\n> > + tbloid, stop, skip, settings.startblock,\n> > + settings.endblock, tbloid\n> >\n> > pg_class should be schema-qualified like elsewhere.\n>\n> Agreed, and changed.\n>\n> > IIUC, pg_class is meant to\n> > get relname only, instead, we could use '%u'::pg_catalog.regclass in the target\n> > list for the relname. Thoughts?\n>\n> get_table_check_list() creates the list of all tables to be checked, which check_tables() then iterates over, calling check_table() for each one. I think some verification that the table still exists is in order. Using '%u'::pg_catalog.regclass for a table that has since been dropped would pass in the old table Oid and draw an error of the 'ERROR: could not open relation with OID 36311' variety, whereas the current coding will just skip the dropped table.\n>\n> > Also I think we should skip '\\n' from the query string (see appendPQExpBuffer()\n> > in pg_dump.c)\n>\n> I'm not sure I understand. 
pg_dump.c uses \"\\n\" in query strings it passes to appendPQExpBuffer(), in a manner very similar to what this patch does.\n>\n\nI see there is a mix of styles, I was referring to dumpDatabase() from pg_dump.c\nwhich doesn't include '\\n'.\n\n> > + appendPQExpBuffer(query,\n> > + \"SELECT i.indexrelid\"\n> > + \"\\nFROM pg_catalog.pg_index i, pg_catalog.pg_class c\"\n> > + \"\\nWHERE i.indexrelid = c.oid\"\n> > + \"\\n AND c.relam = %u\"\n> > + \"\\n AND i.indrelid = %u\",\n> > + BTREE_AM_OID, tbloid);\n> > +\n> > + ExecuteSqlStatement(\"RESET search_path\");\n> > + res = ExecuteSqlQuery(query->data, PGRES_TUPLES_OK);\n> > + PQclear(ExecuteSqlQueryForSingleRow(ALWAYS_SECURE_SEARCH_PATH_SQL));\n> >\n> > I don't think we need the search_path query. The main query doesn't have any\n> > dependencies on it. Same is in check_indexes(), check_index (),\n> > expand_table_name_patterns() & get_table_check_list().\n> > Correct me if I am missing something.\n>\n> Right.\n>\n> > + output = PageOutput(lines + 2, NULL);\n> > + for (lineno = 0; usage_text[lineno]; lineno++)\n> > + fprintf(output, \"%s\\n\", usage_text[lineno]);\n> > + fprintf(output, \"Report bugs to <%s>.\\n\", PACKAGE_BUGREPORT);\n> > + fprintf(output, \"%s home page: <%s>\\n\", PACKAGE_NAME, PACKAGE_URL);\n> >\n> > I am not sure why we want PageOutput() if the second argument is always going to\n> > be NULL? Can't we directly use printf() instead of PageOutput() + fprintf() ?\n> > e.g. usage() function in pg_basebackup.c.\n>\n> Done.\n>\n>\n> Please find attached the next version of the patch. 
In addition to your review comments (above), I have made changes in response to Peter and Robert's review comments upthread.\n\nThanks for the updated version, I'll have a look.\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 20 Aug 2020 17:17:08 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Few comments for v14 version:\n\nv14-0001:\n\nverify_heapam.c: In function ‘verify_heapam’:\nverify_heapam.c:339:14: warning: variable ‘ph’ set but not used\n[-Wunused-but-set-variable]\n PageHeader ph;\n ^\nverify_heapam.c: In function ‘check_toast_tuple’:\nverify_heapam.c:877:8: warning: variable ‘chunkdata’ set but not used\n[-Wunused-but-set-variable]\n char *chunkdata;\n\nGot these compilation warnings\n\n\n+++ b/contrib/amcheck/amcheck.h\n@@ -0,0 +1,5 @@\n+#include \"postgres.h\"\n+\n+Datum verify_heapam(PG_FUNCTION_ARGS);\n+Datum bt_index_check(PG_FUNCTION_ARGS);\n+Datum bt_index_parent_check(PG_FUNCTION_ARGS);\n\nbt_index_* are needed?\n\n\n#include \"access/htup_details.h\"\n#include \"access/xact.h\"\n#include \"catalog/pg_type.h\"\n#include \"catalog/storage_xlog.h\"\n#include \"storage/smgr.h\"\n#include \"utils/lsyscache.h\"\n#include \"utils/rel.h\"\n#include \"utils/snapmgr.h\"\n#include \"utils/syscache.h\"\n\nThese header file inclusion to verify_heapam.c. can be omitted. Some of those\nmight be implicitly got added by other header files or no longer need due to\nrecent changes.\n\n\n+ * on_error_stop:\n+ * Whether to stop at the end of the first page for which errors are\n+ * detected. Note that multiple rows may be returned.\n+ *\n+ * check_toast:\n+ * Whether to check each toasted attribute against the toast table to\n+ * verify that it can be found there.\n+ *\n+ * skip:\n+ * What kinds of pages in the heap relation should be skipped. 
Valid\n+ * options are \"all-visible\", \"all-frozen\", and \"none\".\n\nI think it would be good if the description also includes what will be default\nvalue otherwise.\n\n\n+ /*\n+ * Optionally open the toast relation, if any, also protected from\n+ * concurrent vacuums.\n+ */\n\nNow lock is changed to AccessShareLock, I think we need to rephrase this comment\nas well since we are not really doing anything extra explicitly to protect from\nthe concurrent vacuum.\n\n\n+/*\n+ * Return wehter a multitransaction ID is in the cached valid range.\n+ */\n\nTypo: s/wehter/whether\n\n\nv14-0002:\n\n+#define NOPAGER 0\n\nUnused macro.\n\n\n+ appendPQExpBuffer(querybuf,\n+ \"SELECT c.relname, v.blkno, v.offnum, v.attnum, v.msg\"\n+ \"\\nFROM public.verify_heapam(\"\n+ \"\\nrelation := %u,\"\n+ \"\\non_error_stop := %s,\"\n+ \"\\nskip := %s,\"\n+ \"\\ncheck_toast := %s,\"\n+ \"\\nstartblock := %s,\"\n+ \"\\nendblock := %s) v, \"\n+ \"\\npg_catalog.pg_class c\"\n+ \"\\nWHERE c.oid = %u\",\n+ tbloid, stop, skip, toast, startblock, endblock, tbloid);\n[....]\n+ appendPQExpBuffer(querybuf,\n+ \"SELECT public.bt_index_parent_check('%s'::regclass, %s, %s)\",\n+ idxoid,\n+ settings.heapallindexed ? \"true\" : \"false\",\n+ settings.rootdescend ? \"true\" : \"false\");\n\nThe assumption that the amcheck extension will be always installed in the public\nschema doesn't seem to be correct. 
This will not work if amcheck install\nsomewhere else.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 24 Aug 2020 15:18:22 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Aug 24, 2020, at 2:48 AM, Amul Sul <sulamul@gmail.com> wrote:\n> \n> Few comments for v14 version:\n> \n> v14-0001:\n> \n> verify_heapam.c: In function ‘verify_heapam’:\n> verify_heapam.c:339:14: warning: variable ‘ph’ set but not used\n> [-Wunused-but-set-variable]\n> PageHeader ph;\n> ^\n> verify_heapam.c: In function ‘check_toast_tuple’:\n> verify_heapam.c:877:8: warning: variable ‘chunkdata’ set but not used\n> [-Wunused-but-set-variable]\n> char *chunkdata;\n> \n> Got these compilation warnings\n\nRemoved.\n\n> \n> \n> +++ b/contrib/amcheck/amcheck.h\n> @@ -0,0 +1,5 @@\n> +#include \"postgres.h\"\n> +\n> +Datum verify_heapam(PG_FUNCTION_ARGS);\n> +Datum bt_index_check(PG_FUNCTION_ARGS);\n> +Datum bt_index_parent_check(PG_FUNCTION_ARGS);\n> \n> bt_index_* are needed?\n\nThis entire header file is not needed. Removed.\n\n> #include \"access/htup_details.h\"\n> #include \"access/xact.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"catalog/storage_xlog.h\"\n> #include \"storage/smgr.h\"\n> #include \"utils/lsyscache.h\"\n> #include \"utils/rel.h\"\n> #include \"utils/snapmgr.h\"\n> #include \"utils/syscache.h\"\n> \n> These header file inclusion to verify_heapam.c. can be omitted. Some of those\n> might be implicitly got added by other header files or no longer need due to\n> recent changes.\n\nRemoved.\n\n\n> + * on_error_stop:\n> + * Whether to stop at the end of the first page for which errors are\n> + * detected. 
Note that multiple rows may be returned.\n> + *\n> + * check_toast:\n> + * Whether to check each toasted attribute against the toast table to\n> + * verify that it can be found there.\n> + *\n> + * skip:\n> + * What kinds of pages in the heap relation should be skipped. Valid\n> + * options are \"all-visible\", \"all-frozen\", and \"none\".\n> \n> I think it would be good if the description also includes what will be default\n> value otherwise.\n\nThe defaults are defined in amcheck--1.2--1.3.sql, and I was concerned that documenting them in verify_heapam.c would create a hazard of the defaults and their documented values getting out of sync. The handling of null arguments in verify_heapam.c was, however, duplicating the defaults from the .sql file, so I've changed that to just ereport error on null. (I can't make the whole function strict, as some other arguments are allowed to be null.) I have not documented the defaults in either file, as they are quite self-evident in the .sql file. I've updated some tests that were passing null to get the default behavior to now either pass nothing or explicitly pass the argument they want.\n\n> \n> + /*\n> + * Optionally open the toast relation, if any, also protected from\n> + * concurrent vacuums.\n> + */\n> \n> Now lock is changed to AccessShareLock, I think we need to rephrase this comment\n> as well since we are not really doing anything extra explicitly to protect from\n> the concurrent vacuum.\n\nRight. 
Comment changed.\n\n> +/*\n> + * Return wehter a multitransaction ID is in the cached valid range.\n> + */\n> \n> Typo: s/wehter/whether\n\nChanged.\n\n> v14-0002:\n> \n> +#define NOPAGER 0\n> \n> Unused macro.\n\nRemoved.\n\n> + appendPQExpBuffer(querybuf,\n> + \"SELECT c.relname, v.blkno, v.offnum, v.attnum, v.msg\"\n> + \"\\nFROM public.verify_heapam(\"\n> + \"\\nrelation := %u,\"\n> + \"\\non_error_stop := %s,\"\n> + \"\\nskip := %s,\"\n> + \"\\ncheck_toast := %s,\"\n> + \"\\nstartblock := %s,\"\n> + \"\\nendblock := %s) v, \"\n> + \"\\npg_catalog.pg_class c\"\n> + \"\\nWHERE c.oid = %u\",\n> + tbloid, stop, skip, toast, startblock, endblock, tbloid);\n> [....]\n> + appendPQExpBuffer(querybuf,\n> + \"SELECT public.bt_index_parent_check('%s'::regclass, %s, %s)\",\n> + idxoid,\n> + settings.heapallindexed ? \"true\" : \"false\",\n> + settings.rootdescend ? \"true\" : \"false\");\n> \n> The assumption that the amcheck extension will be always installed in the public\n> schema doesn't seem to be correct. This will not work if amcheck install\n> somewhere else.\n\nRight. I removed the schema qualification, leaving it up to the search path. \n\nThanks for the review!\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 25 Aug 2020 07:36:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On 25 Aug 2020, at 19:36, Mark Dilger <mark.dilger@enterprisedb.com> wrote:
\n\nHi Mark!\n\nThanks for working on this important feature.\n\nI was experimenting a bit with our internal heapcheck and found out that it's not helping with truncated CLOG anyhow.\nWill your module be able to gather tid's of similar corruptions?\n\nserver/db M # select * from heap_check('pg_toast.pg_toast_4848601');\nERROR: 58P01: could not access status of transaction 636558742\nDETAIL: Could not open file \"pg_xact/025F\": No such file or directory.\nLOCATION: SlruReportIOError, slru.c:913\nTime: 3439.915 ms (00:03.440)\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 28 Aug 2020 10:07:45 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Fri, Aug 28, 2020 at 1:07 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> I was experimenting a bit with our internal heapcheck and found out that it's not helping with truncated CLOG anyhow.\n> Will your module be able to gather tid's of similar corruptions?\n>\n> server/db M # select * from heap_check('pg_toast.pg_toast_4848601');\n> ERROR: 58P01: could not access status of transaction 636558742\n> DETAIL: Could not open file \"pg_xact/025F\": No such file or directory.\n> LOCATION: SlruReportIOError, slru.c:913\n> Time: 3439.915 ms (00:03.440)\n\nThis kind of thing gets really tricky. PostgreSQL uses errors in tons\nof places to report problems, and if you want to accumulate a list of\nerrors and report them all rather than just letting the first one\ncancel the operation, you need special handling for each individual\nerror you want to bypass. A tool like this naturally wants to use as\nmuch PostgreSQL infrastructure as possible, to avoid duplicating a ton\nof code and creating a bloated monstrosity, but all that code can\nthrow errors. 
I think the code in its current form is trying to be\nresilient against problems on the table pages that it is actually\nchecking, but it can't necessarily handle gracefully corruption in\nother parts of the system. For instance:\n\n- CLOG could be truncated, as in your example\n- the disk files could have had their permissions changed so that they\ncan't be accessed\n- the PageIsVerified() check might fail when pages are read\n- the TOAST table's metadata in pg_class/pg_attribute/etc. could be corrupted\n- ...or the files for those system catalogs could've had their\npermissions changed\n- ....or they could contain invalid pages\n- ...or their indexes could be messed up\n\nI think there are probably a bunch more, and I don't think it's\npractical to allow this tool to continue after arbitrary stuff goes\nwrong. It'll be too much code and impossible to maintain. In the case\nyou mention, I think we should view that as a problem with clog rather\nthan a problem with the table, and thus out of scope.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 Aug 2020 09:58:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Aug 27, 2020, at 10:07 PM, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> \n>> 25 авг. 
2020 г., в 19:36, Mark Dilger <mark.dilger@enterprisedb.com> написал(а):\n> \n> Hi Mark!\n> \n> Thanks for working on this important feature.\n> \n> I was experimenting a bit with our internal heapcheck and found out that it's not helping with truncated CLOG anyhow.\n> Will your module be able to gather tid's of similar corruptions?\n> \n> server/db M # select * from heap_check('pg_toast.pg_toast_4848601');\n> ERROR: 58P01: could not access status of transaction 636558742\n> DETAIL: Could not open file \"pg_xact/025F\": No such file or directory.\n> LOCATION: SlruReportIOError, slru.c:913\n> Time: 3439.915 ms (00:03.440)\n\nThe design principle for verify_heapam.c is, if the rest of the system is not corrupt, corruption in the table being checked should not cause a crash during the table check. This is a very limited principle. Even corruption in the associated toast table or toast index could cause a crash. That is why checking against the toast table is optional, and false by default.\n\nPerhaps a more extensive effort could be made later. I think it is out of scope for this release cycle. It is a very interesting area for further research, though.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 28 Aug 2020 08:12:44 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> 28 авг. 2020 г., в 18:58, Robert Haas <robertmhaas@gmail.com> написал(а):\n> In the case\n> you mention, I think we should view that as a problem with clog rather\n> than a problem with the table, and thus out of scope.\n\nI don't think so. 
ISTM It's the same problem of xmax<relfrozenxid actually, just hidden behind detoasing.\nOur regular heap_check was checking xmin\\xmax invariants for tables, but failed to recognise the problem in toast (while toast was accessible until CLOG truncation).\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 28 Aug 2020 23:10:34 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Aug 28, 2020, at 11:10 AM, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> \n>> 28 авг. 2020 г., в 18:58, Robert Haas <robertmhaas@gmail.com> написал(а):\n>> In the case\n>> you mention, I think we should view that as a problem with clog rather\n>> than a problem with the table, and thus out of scope.\n> \n> I don't think so. ISTM It's the same problem of xmax<relfrozenxid actually, just hidden behind detoasing.\n> Our regular heap_check was checking xmin\\xmax invariants for tables, but failed to recognise the problem in toast (while toast was accessible until CLOG truncation).\n> \n> Best regards, Andrey Borodin.\n\nIf you lock the relations involved, check the toast table first, the toast index second, and the main table third, do you still get the problem? Look at how pg_amcheck handles this and let me know if you still see a problem. 
There is the ever present problem that external forces, like a rogue process deleting backend files, will strike at precisely the wrong moment, but barring that kind of concurrent corruption, I think the toast table being checked prior to the main table being checked solves some of the issues you are worried about.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 28 Aug 2020 11:23:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Fri, Aug 28, 2020 at 2:10 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> I don't think so. ISTM It's the same problem of xmax<relfrozenxid actually, just hidden behind detoasing.\n> Our regular heap_check was checking xmin\\xmax invariants for tables, but failed to recognise the problem in toast (while toast was accessible until CLOG truncation).\n\nThe code can (and should, and I think does) refrain from looking up\nXIDs that are out of the range thought to be valid -- but how do you\npropose that it avoid looking up XIDs that ought to have clog data\nassociated with them despite being >= relfrozenxid and < nextxid?\nTransactionIdDidCommit() does not have a suppress-errors flag, adding\none would be quite invasive, yet we cannot safely perform a\nsignificant number of checks without knowing whether the inserting\ntransaction committed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 Aug 2020 15:56:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> 29 авг. 2020 г., в 00:56, Robert Haas <robertmhaas@gmail.com> написал(а):\n> \n> On Fri, Aug 28, 2020 at 2:10 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> I don't think so. 
ISTM It's the same problem of xmax<relfrozenxid actually, just hidden behind detoasing.\n>> Our regular heap_check was checking xmin\\xmax invariants for tables, but failed to recognise the problem in toast (while toast was accessible until CLOG truncation).\n> \n> The code can (and should, and I think does) refrain from looking up\n> XIDs that are out of the range thought to be valid -- but how do you\n> propose that it avoid looking up XIDs that ought to have clog data\n> associated with them despite being >= relfrozenxid and < nextxid?\n> TransactionIdDidCommit() does not have a suppress-errors flag, adding\n> one would be quite invasive, yet we cannot safely perform a\n> significant number of checks without knowing whether the inserting\n> transaction committed.\n\nWhat you write seems completely correct to me. I agree that CLOG thresholds lookup seems unnecessary.\n\nBut I have a real corruption at hand (on testing site). If I have proposed here heapcheck. And I have pg_surgery from the thread nearby. Yet I cannot fix the problem, because cannot list affected tuples. These tools do not solve the problem neglected for long enough. It would be supercool if they could.\n\nThis corruption like a caries had 3 stages:\n1. incorrect VM flag that page do not need vacuum\n2. xmin and xmax < relfrozenxid\n3. CLOG truncated\n\nStage 2 is curable with proposed toolset, stage 3 is not. But they are not that different.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 29 Aug 2020 15:27:03 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Aug 29, 2020, at 3:27 AM, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> \n>> 29 авг. 2020 г., в 00:56, Robert Haas <robertmhaas@gmail.com> написал(а):\n>> \n>> On Fri, Aug 28, 2020 at 2:10 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>> I don't think so. 
ISTM It's the same problem of xmax<relfrozenxid actually, just hidden behind detoasing.\n>>> Our regular heap_check was checking xmin\\xmax invariants for tables, but failed to recognise the problem in toast (while toast was accessible until CLOG truncation).\n>> \n>> The code can (and should, and I think does) refrain from looking up\n>> XIDs that are out of the range thought to be valid -- but how do you\n>> propose that it avoid looking up XIDs that ought to have clog data\n>> associated with them despite being >= relfrozenxid and < nextxid?\n>> TransactionIdDidCommit() does not have a suppress-errors flag, adding\n>> one would be quite invasive, yet we cannot safely perform a\n>> significant number of checks without knowing whether the inserting\n>> transaction committed.\n> \n> What you write seems completely correct to me. I agree that CLOG thresholds lookup seems unnecessary.\n> \n> But I have a real corruption at hand (on testing site). If I have proposed here heapcheck. And I have pg_surgery from the thread nearby. Yet I cannot fix the problem, because cannot list affected tuples. These tools do not solve the problem neglected for long enough. It would be supercool if they could.\n> \n> This corruption like a caries had 3 stages:\n> 1. incorrect VM flag that page do not need vacuum\n> 2. xmin and xmax < relfrozenxid\n> 3. CLOG truncated\n> \n> Stage 2 is curable with proposed toolset, stage 3 is not. But they are not that different.\n\nI had an earlier version of the verify_heapam patch that included a non-throwing interface to clog. Ultimately, I ripped that out. My reasoning was that a simpler patch submission was more likely to be acceptable to the community.\n\nIf you want to submit a separate patch that creates a non-throwing version of the clog interface, and get the community to accept and commit it, I would seriously consider using that from verify_heapam. If it gets committed in time, I might even do so for this release cycle. 
But I don't want to make this patch dependent on that hypothetical patch getting written and accepted.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 29 Aug 2020 10:48:44 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Aug 25, 2020 at 10:36 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Thanks for the review!\n\n+ msg OUT text\n+ )\n\nLooks like atypical formatting.\n\n+REVOKE ALL ON FUNCTION\n+verify_heapam(regclass, boolean, boolean, cstring, bigint, bigint)\n+FROM PUBLIC;\n\nThis too.\n\n+-- Don't want this to be available to public\n\nAdd \"by default, but superusers can grant access\" or so?\n\nI think there should be a call to pg_class_aclcheck() here, just like\nthe one in pg_prewarm, so that if the superuser does choose to grant\naccess, users given access can check tables they anyway have\npermission to access, but not others. Maybe put that in\ncheck_relation_relkind_and_relam() and rename it. Might want to look\nat the pg_surgery precedent, too. Oh, and that functions header\ncomment is also wrong.\n\nI think that the way the checks on the block range are performed could\nbe improved. Generally, we want to avoid reporting the same problem\nwith a variety of different message strings, because it adds burden\nfor translators and is potentially confusing for users. You've got two\nmessage strings that are only going to be used for empty relations and\na third message string that is only going to be used for non-empty\nrelations. What stops you from just ripping off the way that this is\ndone in pg_prewarm, which requires only 2 messages? Then you'd be\nadding a net total of 0 new messages instead of 3, and in my view they\nwould be clearer than your third message, \"block range is out of\nbounds for relation with block count %u: \" INT64_FORMAT \" .. 
\"\nINT64_FORMAT, which doesn't say very precisely what the problem is,\nand also falls afoul of our usual practice of avoiding the use of\nINT64_FORMAT in error messages that are subject to translation. I\nnotice that pg_prewarm just silently does nothing if the start and end\nblocks are swapped, rather than generating an error. We could choose\nto do differently here, but I'm not sure why we should bother.\n\n+ all_frozen = mapbits & VISIBILITYMAP_ALL_VISIBLE;\n+ all_visible = mapbits & VISIBILITYMAP_ALL_FROZEN;\n+\n+ if ((all_frozen && skip_option ==\nSKIP_PAGES_ALL_FROZEN) ||\n+ (all_visible && skip_option ==\nSKIP_PAGES_ALL_VISIBLE))\n+ {\n+ continue;\n+ }\n\nThis isn't horrible style, but why not just get rid of the local\nvariables? e.g. if (skip_option == SKIP_PAGES_ALL_FROZEN) { if\n((mapbits & VISIBILITYMAP_ALL_FROZEN) != 0) continue; } else { ... }\n\nTypically no braces around a block containing only one line.\n\n+ * table contains corrupt all frozen bits, a concurrent vacuum might skip the\n\nall-frozen?\n\n+ * relfrozenxid beyond xid.) Reporting the xid as valid under such conditions\n+ * seems acceptable, since if we had checked it earlier in our scan it would\n+ * have truly been valid at that time, and we break no MVCC guarantees by\n+ * failing to notice the concurrent change in its status.\n\nI agree with the first half of this sentence, but I don't know what\nMVCC guarantees have to do with anything. I'd just delete the second\npart, or make it a lot clearer.\n\n+ * Some kinds of tuple header corruption make it unsafe to check the tuple\n+ * attributes, for example when the tuple is foreshortened and such checks\n+ * would read beyond the end of the line pointer (and perhaps the page). In\n\nI think of foreshortening mostly as an art term, though I guess it has\nother meanings. 
Maybe it would be clearer to say something like \"Some\nkinds of corruption make it unsafe to check the tuple attributes, for\nexample when the line pointer refers to a range of bytes outside the\npage\"?\n\n+ * Other kinds of tuple header corruption do not bare on the question of\n\nbear\n\n+ pstrdup(_(\"updating\ntransaction ID marked incompatibly as keys updated and locked\nonly\")));\n+ pstrdup(_(\"updating\ntransaction ID marked incompatibly as committed and as a\nmultitransaction ID\")));\n\n\"updating transaction ID\" might scare somebody who thinks that you are\ntelling them that you changed something. That's not what it means, but\nit might not be totally clear. Maybe:\n\ntuple is marked as only locked, but also claims key columns were updated\nmultixact should not be marked committed\n\n+\npsprintf(_(\"data offset differs from expected: %u vs. %u (1 attribute,\nhas nulls)\"),\n\nFor these, how about:\n\ntuple data should begin at byte %u, but actually begins at byte %u (1\nattribute, has nulls)\netc.\n\n+\npsprintf(_(\"old-style VACUUM FULL transaction ID is in the future:\n%u\"),\n+\npsprintf(_(\"old-style VACUUM FULL transaction ID precedes freeze\nthreshold: %u\"),\n+\npsprintf(_(\"old-style VACUUM FULL transaction ID is invalid in this\nrelation: %u\"),\n\nold-style VACUUM FULL transaction ID %u is in the future\nold-style VACUUM FULL transaction ID %u precedes freeze threshold %u\nold-style VACUUM FULL transaction ID %u out of range %u..%u\n\nDoesn't the second of these overlap with the third?\n\nSimilarly in other places, e.g.\n\n+\npsprintf(_(\"inserting transaction ID is in the future: %u\"),\n\nI think this should change to: inserting transaction ID %u is in the future\n\n+ else if (VARATT_IS_SHORT(chunk))\n+ /*\n+ * could happen due to heap_form_tuple doing its thing\n+ */\n+ chunksize = VARSIZE_SHORT(chunk) - VARHDRSZ_SHORT;\n\nAdd braces here, since there are multiple lines.\n\n+ psprintf(_(\"toast\nchunk sequence number not the expected 
sequence number: %u vs. %u\"),\n\ntoast chunk sequence number %u does not match expected sequence number %u\n\nThere are more instances of this kind of thing.\n\n+\npsprintf(_(\"toasted attribute has unexpected TOAST tag: %u\"),\n\nRemove colon.\n\n+\npsprintf(_(\"attribute ends at offset beyond total tuple length: %u vs.\n%u (attribute length %u)\"),\n\nLet's try to specify the attribute number in the attribute messages\nwhere we can, e.g.\n\n+\npsprintf(_(\"attribute ends at offset beyond total tuple length: %u vs.\n%u (attribute length %u)\"),\n\nHow about: attribute %u with length %u should end at offset %u, but\nthe tuple length is only %u\n\n+ if (TransactionIdIsNormal(ctx->relfrozenxid) &&\n+ TransactionIdPrecedes(xmin, ctx->relfrozenxid))\n+ {\n+ report_corruption(ctx,\n+ /*\ntranslator: Both %u are transaction IDs. */\n+\npsprintf(_(\"inserting transaction ID is from before freeze cutoff: %u\nvs. %u\"),\n+\n xmin, ctx->relfrozenxid));\n+ fatal = true;\n+ }\n+ else if (!xid_valid_in_rel(xmin, ctx))\n+ {\n+ report_corruption(ctx,\n+ /*\ntranslator: %u is a transaction ID. */\n+\npsprintf(_(\"inserting transaction ID is in the future: %u\"),\n+\n xmin));\n+ fatal = true;\n+ }\n\nThis seems like good evidence that xid_valid_in_rel needs some\nrethinking. As far as I can see, every place where you call\nxid_valid_in_rel, you have checks beforehand that duplicate some of\nwhat it does, so that you can give a more accurate error message.\nThat's not good. Either the message should be adjusted so that it\ncovers all the cases \"e.g. tuple xmin %u is outside acceptable range\n%u..%u\" or we should just get rid of xid_valid_in_rel() and have\nseparate error messages for each case, e.g. tuple xmin %u precedes\nrelfrozenxid %u\". I think it's OK to use terms like xmin and xmax in\nthese messages, rather than inserting transaction ID etc. We have\nexisting instances of that, and while someone might judge it\nuser-unfriendly, I disagree. 
A person who is qualified to interpret\nthis output must know what 'tuple xmin' means immediately, but whether\nthey can understand that 'inserting transaction ID' means the same\nthing is questionable, I think.\n\nThis is not a full review, but in general I think that this is getting\npretty close to being committable. The error messages seem to still\nneed some polishing and I wouldn't be surprised if there are a few\nmore bugs lurking yet, but I think it's come a long way.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 21 Sep 2020 17:09:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Sep 21, 2020, at 2:09 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I think there should be a call to pg_class_aclcheck() here, just like\n> the one in pg_prewarm, so that if the superuser does choose to grant\n> access, users given access can check tables they anyway have\n> permission to access, but not others. Maybe put that in\n> check_relation_relkind_and_relam() and rename it. Might want to look\n> at the pg_surgery precedent, too. \n\nIn the presence of corruption, verify_heapam() reports to the user (in other words, leaks) metadata about the corrupted rows. Reasoning about the attack vectors this creates is hard, but a conservative approach is to assume that an attacker can cause corruption in order to benefit from the leakage, and make sure the leakage does not violate any reasonable security expectations.\n\nBasing the security decision on whether the user has access to read the table seems insufficient, as it ignores row level security. Perhaps that is ok if row level security is not enabled for the table or if the user has been granted BYPASSRLS. There is another problem, though. There is no grantable privilege to read dead rows.
In the case of corruption, verify_heapam() may well report metadata about dead rows.\n\npg_surgery also appears to leak information about dead rows. Owners of tables can probe whether supplied TIDs refer to dead rows. If a table containing sensitive information has rows deleted prior to ownership being transferred, the new owner of the table could probe each page of deleted data to determine something of the content that was there. Information about the number of deleted rows is already available through the pg_stat_* views, but those views don't give such a fine-grained approach to figuring out how large each deleted row was. For a table with fixed content options, the content can sometimes be completely inferred from the length of the row. (Consider a table with a single text column containing either \"approved\" or \"denied\".)\n\nBut pg_surgery is understood to be a collection of sharp tools only to be used under fairly exceptional conditions. amcheck, on the other hand, is something that feels safer and more reasonable to use on a regular basis, perhaps from a cron job executed by a less trusted user. Forcing the user to be superuser makes it clearer that this feeling of safety is not justified.\n\nI am inclined to just restrict verify_heapam() to superusers and be done. What do you think?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 22 Sep 2020 10:55:35 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Sep 22, 2020 at 10:55 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I am inclined to just restrict verify_heapam() to superusers and be done. What do you think?\n\nThe existing amcheck functions were designed to have execute privilege\ngranted to non-superusers, though we never actually advertised that\nfact. 
Maybe now would be a good time to start doing so.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 22 Sep 2020 10:58:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Sep 22, 2020 at 1:55 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I am inclined to just restrict verify_heapam() to superusers and be done. What do you think?\n\nI think that's an old and largely failed approach. If you want to use\npg_class_ownercheck here rather than pg_class_aclcheck or something\nlike that, seems fair enough. But I don't think there should be an\nis-superuser check in the code, because we've been trying really hard\nto get rid of those in most places. And I also don't think there\nshould be no secondary permissions check, because if somebody does\ngrant execute permission on these functions, it's unlikely that they\nwant the person getting that permission to be able to check every\nrelation in the system even those on which they have no other\nprivileges at all.\n\nBut now I see that there's no secondary permission check in the\nverify_nbtree.c code. Is that intentional? Peter, what's the\njustification for that?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 22 Sep 2020 15:41:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Sep 22, 2020 at 12:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> But now I see that there's no secondary permission check in the\n> verify_nbtree.c code. Is that intentional? Peter, what's the\n> justification for that?\n\nAs noted by comments in contrib/amcheck/sql/check_btree.sql (the\nverify_nbtree.c tests), this is intentional. 
Note that we explicitly\ntest that a non-superuser role can perform verification following\nGRANT EXECUTE ON FUNCTION ... .\n\nAs I mentioned earlier, this is supported (or at least it is supported\nin my interpretation of things). It just isn't documented anywhere\noutside the test itself.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 22 Sep 2020 13:17:59 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Sep 21, 2020 at 2:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> +REVOKE ALL ON FUNCTION\n> +verify_heapam(regclass, boolean, boolean, cstring, bigint, bigint)\n> +FROM PUBLIC;\n>\n> This too.\n\nDo we really want to use a cstring as an enum-like argument?\n\nI think that I see a bug at this point in check_tuple() (in\nv15-0001-Adding-function-verify_heapam-to-amcheck-module.patch):\n\n> + /* If xmax is a multixact, it should be within valid range */\n> + xmax = HeapTupleHeaderGetRawXmax(ctx->tuphdr);\n> + if ((infomask & HEAP_XMAX_IS_MULTI) && !mxid_valid_in_rel(xmax, ctx))\n> + {\n\n*** SNIP ***\n\n> + }\n> +\n> + /* If xmax is normal, it should be within valid range */\n> + if (TransactionIdIsNormal(xmax))\n> + {\n\nWhy should it be okay to call TransactionIdIsNormal(xmax) at this\npoint? It isn't certain that xmax is an XID at all (could be a\nMultiXactId, since you called HeapTupleHeaderGetRawXmax() to get the\nvalue in the first place). Don't you need to check \"(infomask &\nHEAP_XMAX_IS_MULTI) == 0\" here?\n\nThis does look like it's shaping up. 
Thanks for working on it, Mark.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 22 Sep 2020 16:18:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Sat, Aug 29, 2020 at 10:48 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I had an earlier version of the verify_heapam patch that included a non-throwing interface to clog. Ultimately, I ripped that out. My reasoning was that a simpler patch submission was more likely to be acceptable to the community.\n\nIsn't some kind of pragmatic compromise possible?\n\n> But I don't want to make this patch dependent on that hypothetical patch getting written and accepted.\n\nFair enough, but if you're alluding to what I said then about\ncheck_tuphdr_xids()/clog checking a while back then FWIW I didn't\nintend to block progress on clog/xact status verification at all. I\njust don't think that it is sensible to impose an iron clad guarantee\nabout having no assertion failures with corrupt clog data -- that\nleads to far too much code duplication. But why should you need to\nprovide an absolute guarantee of that?\n\nI for one would be fine with making the clog checks an optional extra,\nthat rescinds the no crash guarantee that you're keen on -- just like\nwith the TOAST checks that you have already in v15. It might make\nsense to review how often crashes occur with simulated corruption, and\nthen to minimize the number of occurrences in the real world. Maybe we\ncould tolerate a usually-no-crash interface to clog -- if it could\nstill have assertion failures. Making a strong guarantee about\nassertions seems unnecessary.\n\nI don't see how verify_heapam will avoid raising an error during basic\nvalidation from PageIsVerified(), which will violate the guarantee\nabout not throwing errors. 
I don't see that as a problem myself, but\npresumably you will.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 22 Sep 2020 17:16:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Sep 21, 2020 at 2:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> +REVOKE ALL ON FUNCTION\n>> +verify_heapam(regclass, boolean, boolean, cstring, bigint, bigint)\n>> +FROM PUBLIC;\n>> \n>> This too.\n\n> Do we really want to use a cstring as an enum-like argument?\n\nUgh. We should not be using cstring as a SQL-exposed datatype\nunless there really is no alternative. Why wasn't this argument\ndeclared \"text\"?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Sep 2020 21:17:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Tue, Sep 22, 2020 at 12:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > But now I see that there's no secondary permission check in the\n> > verify_nbtree.c code. Is that intentional? Peter, what's the\n> > justification for that?\n> \n> As noted by comments in contrib/amcheck/sql/check_btree.sql (the\n> verify_nbtree.c tests), this is intentional. Note that we explicitly\n> test that a non-superuser role can perform verification following\n> GRANT EXECUTE ON FUNCTION ... .\n\n> As I mentioned earlier, this is supported (or at least it is supported\n> in my interpretation of things). 
It just isn't documented anywhere\n> outside the test itself.\n\nWould certainly be good to document this but I tend to agree with the\ncomments that ideally-\n\na) it'd be nice if a relatively low-privileged user/process could run\n   the tests in an ongoing manner\nb) we don't want to add more is-superuser checks\nc) users shouldn't really be given the ability to see rows they're not\n   supposed to have access to\n\nIn other places in the code, when an error is generated and the user\ndoesn't have access to the underlying table or doesn't have BYPASSRLS,\nwe don't include the details or the actual data in the error. Perhaps\nthat approach would make sense here (or perhaps not, but it doesn't seem\nentirely crazy to me, anyway). In other words:\n\na) keep the ability for someone who has EXECUTE on the function to be\n   able to run the function against any relation\nb) when we detect an issue, perform a permissions check to see if the\n   user calling the function has rights to read the rows of the table\n   and, if RLS is enabled on the table, if they have BYPASSRLS\nc) if the user has appropriate privileges, log the detailed error, if\n   not, return a generic error with a HINT that details weren't\n   available due to lack of privileges on the relation\n\nI can appreciate the concerns regarding dead rows ending up being\nvisible to someone who wouldn't normally be able to see them but I'd\nargue we could simply document that fact rather than try to build\nsomething to address it, for this particular case.
If there's push back\non that then I'd suggest we have a \"can read dead rows\" or some such\ncapability that can be GRANT'd (in the form of a default role, I would\nthink) which a user would also have to have in order to get detailed\nerror reports from this function.\n\nThanks,\n\nStephen", "msg_date": "Wed, 23 Sep 2020 09:46:50 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Aug 25, 2020 at 07:36:53AM -0700, Mark Dilger wrote:\n> Removed.\n\nThis patch is failing to compile on Windows:\nC:\\projects\\postgresql\\src\\include\\fe_utils/print.h(18): fatal error\n C1083: Cannot open include file: 'libpq-fe.h': No such file or\n directory [C:\\projects\\postgresql\\pg_amcheck.vcxproj]\n\nIt looks like you forgot to tweak the scripts in src/tools/msvc/.\n--\nMichael", "msg_date": "Tue, 29 Sep 2020 14:56:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Robert, Peter, Andrey, Stephen, and Michael,\n\nAttached is a new version based in part on your review comments, quoted and responded to below as necessary.\n\nThere remain a few open issues and/or things I did not implement:\n\n- This version follows Robert's suggestion of using pg_class_aclcheck() to check that the caller has permission to select from the table being checked. This is inconsistent with the btree checking logic, which does no such check. These two approaches should be reconciled, but there was apparently no agreement on this issue.\n\n- The public facing documentation, currently live at https://www.postgresql.org/docs/13/amcheck.html, claims \"amcheck functions may only be used by superusers.\" The docs on master still say the same. 
This patch replaces that language with alternate language explaining that execute permissions may be granted to non-superusers, along with a warning about the risk of data leakage. Perhaps some portion of that language in this patch should be back-patched?\n\n- Stephen's comments about restricting how much information goes into the returned corruption report depending on the permissions of the caller has not been implemented. I may implement some of this if doing so is consistent with whatever we decide to do for the aclcheck issue, above, though probably not. It seems overly complicated.\n\n- This version does not change clog handling, which leaves Andrey's concern unaddressed. Peter also showed some support for (or perhaps just a lack of opposition to) doing more of what Andrey suggests. I may come back to this issue, depending on time available and further feedback.\n\n\nMoving on to Michael's review....\n\n> On Sep 28, 2020, at 10:56 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Aug 25, 2020 at 07:36:53AM -0700, Mark Dilger wrote:\n>> Removed.\n> \n> This patch is failing to compile on Windows:\n> C:\\projects\\postgresql\\src\\include\\fe_utils/print.h(18): fatal error\n> C1083: Cannot open include file: 'libpq-fe.h': No such file or\n> directory [C:\\projects\\postgresql\\pg_amcheck.vcxproj]\n> \n> It looks like you forgot to tweak the scripts in src/tools/msvc/.\n\nFixed, I think. I have not tested on windows.\n\n\nMoving on to Stephen's review....\n\n> On Sep 23, 2020, at 6:46 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n> \n> * Peter Geoghegan (pg@bowt.ie) wrote:\n>> On Tue, Sep 22, 2020 at 12:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>> But now I see that there's no secondary permission check in the\n>>> verify_nbtree.c code. Is that intentional? 
Peter, what's the\n>>> justification for that?\n>> \n>> As noted by comments in contrib/amcheck/sql/check_btree.sql (the\n>> verify_nbtree.c tests), this is intentional. Note that we explicitly\n>> test that a non-superuser role can perform verification following\n>> GRANT EXECUTE ON FUNCTION ... .\n> \n>> As I mentioned earlier, this is supported (or at least it is supported\n>> in my interpretation of things). It just isn't documented anywhere\n>> outside the test itself.\n> \n> Would certainly be good to document this but I tend to agree with the\n> comments that ideally-\n> \n> a) it'd be nice for a relatively low-privileged user/process could run\n> the tests in an ongoing manner\n> b) we don't want to add more is-superuser checks\n> c) users shouldn't really be given the ability to see rows they're not\n> supposed to have access to\n> \n> In other places in the code, when an error is generated and the user\n> doesn't have access to the underlying table or doesn't have BYPASSRLS,\n> we don't include the details or the actual data in the error. Perhaps\n> that approach would make sense here (or perhaps not, but it doesn't seem\n> entirely crazy to me, anyway). In other words:\n> \n> a) keep the ability for someone who has EXECUTE on the function to be\n> able to run the function against any relation\n> b) when we detect an issue, perform a permissions check to see if the\n> user calling the function has rights to read the rows of the table\n> and, if RLS is enabled on the table, if they have BYPASSRLS\n> c) if the user has appropriate privileges, log the detailed error, if\n> not, return a generic error with a HINT that details weren't\n> available due to lack of privileges on the relation\n> \n> I can appreciate the concerns regarding dead rows ending up being\n> visible to someone who wouldn't normally be able to see them but I'd\n> argue we could simply document that fact rather than try to build\n> something to address it, for this particular case. 
If there's push back\n> on that then I'd suggest we have a \"can read dead rows\" or some such\n> capability that can be GRANT'd (in the form of a default role, I would\n> think) which a user would also have to have in order to get detailed\n> error reports from this function.\n\nThere wasn't enough agreement on the thread about how this should work, so I left this idea unimplemented.\n\nI'm a bit concerned that restricting the results for non-superusers would create a perverse incentive to use a superuser role to connect and check tables. On the other hand, there would not be any difference in the output in the common case that no corruption exists, so maybe the perverse incentive would not be too significant.\n\nImplementing the idea you outline would complicate the patch a fair amount, as we'd need to tailor all the reports in this way, and extend the tests to verify we're not leaking any information to non-superusers. I would prefer to find a simpler solution.\n\n\nMoving on to Robert's review....\n\n> On Sep 21, 2020, at 2:09 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Aug 25, 2020 at 10:36 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Thanks for the review!\n> \n> + msg OUT text\n> + )\n> \n> Looks like atypical formatting.\n> \n> +REVOKE ALL ON FUNCTION\n> +verify_heapam(regclass, boolean, boolean, cstring, bigint, bigint)\n> +FROM PUBLIC;\n> \n> This too.\n\nChanged in this next version.\n\n> +-- Don't want this to be available to public\n> \n> Add \"by default, but superusers can grant access\" or so?\n\nHmm. 
I borrowed the verbiage from elsewhere.\n\ncontrib/pg_buffercache/pg_buffercache--1.2.sql:-- Don't want these to be available to public.\ncontrib/pg_freespacemap/pg_freespacemap--1.1.sql:-- Don't want these to be available to public.\ncontrib/pg_visibility/pg_visibility--1.1.sql:-- Don't want these to be available to public.\n\n> I think there should be a call to pg_class_aclcheck() here, just like\n> the one in pg_prewarm, so that if the superuser does choose to grant\n> access, users given access can check tables they anyway have\n> permission to access, but not others. Maybe put that in\n> check_relation_relkind_and_relam() and rename it. Might want to look\n> at the pg_surgery precedent, too.\n\nI don't think there are any great options here, but for this next version I've done it with pg_class_aclcheck().\n\n> Oh, and that functions header\n> comment is also wrong.\n\nChanged in this next version.\n\n> I think that the way the checks on the block range are performed could\n> be improved. Generally, we want to avoid reporting the same problem\n> with a variety of different message strings, because it adds burden\n> for translators and is potentially confusing for users. You've got two\n> message strings that are only going to be used for empty relations and\n> a third message string that is only going to be used for non-empty\n> relations. What stops you from just ripping off the way that this is\n> done in pg_prewarm, which requires only 2 messages? Then you'd be\n> adding a net total of 0 new messages instead of 3, and in my view they\n> would be clearer than your third message, \"block range is out of\n> bounds for relation with block count %u: \" INT64_FORMAT \" .. \"\n> INT64_FORMAT, which doesn't say very precisely what the problem is,\n> and also falls afoul of our usual practice of avoiding the use of\n> INT64_FORMAT in error messages that are subject to translation. 
I\n> notice that pg_prewarm just silently does nothing if the start and end\n> blocks are swapped, rather than generating an error. We could choose\n> to do differently here, but I'm not sure why we should bother.\n\nThis next version borrows pg_prewarm's messages as you suggest, except that pg_prewarm embeds INT64_FORMAT in the message strings, which are replaced with %u in this next patch. Also, there is no good way to report an invalid block range for empty tables using these messages, so the patch now just exits early in such a case for invalid ranges without throwing an error. This is a little bit non-orthogonal with how invalid block ranges are handled on non-empty tables, but perhaps that's ok. \n\n> \n> + all_frozen = mapbits & VISIBILITYMAP_ALL_VISIBLE;\n> + all_visible = mapbits & VISIBILITYMAP_ALL_FROZEN;\n> +\n> + if ((all_frozen && skip_option ==\n> SKIP_PAGES_ALL_FROZEN) ||\n> + (all_visible && skip_option ==\n> SKIP_PAGES_ALL_VISIBLE))\n> + {\n> + continue;\n> + }\n> \n> This isn't horrible style, but why not just get rid of the local\n> variables? e.g. if (skip_option == SKIP_PAGES_ALL_FROZEN) { if\n> ((mapbits & VISIBILITYMAP_ALL_FROZEN) != 0) continue; } else { ... }\n> \n> Typically no braces around a block containing only one line.\n\nChanged in this next version.\n\n> + * table contains corrupt all frozen bits, a concurrent vacuum might skip the\n\nall-frozen?\n\nChanged in this next version.\n\n> + * relfrozenxid beyond xid.) Reporting the xid as valid under such conditions\n> + * seems acceptable, since if we had checked it earlier in our scan it would\n> + * have truly been valid at that time, and we break no MVCC guarantees by\n> + * failing to notice the concurrent change in its status.\n\nI agree with the first half of this sentence, but I don't know what\n> MVCC guarantees have to do with anything.
I'd just delete the second\n> part, or make it a lot clearer.\n\nChanged in this next version to simply omit the MVCC related language.\n\n> \n> + * Some kinds of tuple header corruption make it unsafe to check the tuple\n> + * attributes, for example when the tuple is foreshortened and such checks\n> + * would read beyond the end of the line pointer (and perhaps the page). In\n> \n> I think of foreshortening mostly as an art term, though I guess it has\n> other meanings. Maybe it would be clearer to say something like \"Some\n> kinds of corruption make it unsafe to check the tuple attributes, for\n> example when the line pointer refers to a range of bytes outside the\n> page\"?\n> \n> + * Other kinds of tuple header corruption do not bare on the question of\n> \n> bear\n\nChanged.\n\n> + pstrdup(_(\"updating\n> transaction ID marked incompatibly as keys updated and locked\n> only\")));\n> + pstrdup(_(\"updating\n> transaction ID marked incompatibly as committed and as a\n> multitransaction ID\")));\n> \n> \"updating transaction ID\" might scare somebody who thinks that you are\n> telling them that you changed something. That's not what it means, but\n> it might not be totally clear. Maybe:\n> \n> tuple is marked as only locked, but also claims key columns were updated\n> multixact should not be marked committed\n\nChanged to use your verbiage.\n\n> +\n> psprintf(_(\"data offset differs from expected: %u vs. %u (1 attribute,\n> has nulls)\"),\n> \n> For these, how about:\n> \n> tuple data should begin at byte %u, but actually begins at byte %u (1\n> attribute, has nulls)\n> etc.\n\nIs it ok to embed interpolated values into the message string like that? I thought that made it harder for translators. I agree that your language is easier to understand, and have used it in this next version of the patch. 
Many of your comments that follow raise the same issue, but I'm using your verbiage anyway.\n\n> +\n> psprintf(_(\"old-style VACUUM FULL transaction ID is in the future:\n> %u\"),\n> +\n> psprintf(_(\"old-style VACUUM FULL transaction ID precedes freeze\n> threshold: %u\"),\n> +\n> psprintf(_(\"old-style VACUUM FULL transaction ID is invalid in this\n> relation: %u\"),\n> \n> old-style VACUUM FULL transaction ID %u is in the future\n> old-style VACUUM FULL transaction ID %u precedes freeze threshold %u\n> old-style VACUUM FULL transaction ID %u out of range %u..%u\n> \n> Doesn't the second of these overlap with the third?\n\nGood point. If the second one reports, so will the third. I've changed it to use if/else if logic to avoid that, and to use your suggested verbiage.\n\n> \n> Similarly in other places, e.g.\n> \n> +\n> psprintf(_(\"inserting transaction ID is in the future: %u\"),\n> \n> I think this should change to: inserting transaction ID %u is in the future\n\nChanged, along with similarly formatted messages.\n\n> \n> + else if (VARATT_IS_SHORT(chunk))\n> + /*\n> + * could happen due to heap_form_tuple doing its thing\n> + */\n> + chunksize = VARSIZE_SHORT(chunk) - VARHDRSZ_SHORT;\n> \n> Add braces here, since there are multiple lines.\n\nChanged.\n\n> \n> + psprintf(_(\"toast\n> chunk sequence number not the expected sequence number: %u vs. 
%u\"),\n> \n> toast chunk sequence number %u does not match expected sequence number %u\n> \n> There are more instances of this kind of thing.\n\nChanged.\n\n> +\n> psprintf(_(\"toasted attribute has unexpected TOAST tag: %u\"),\n> \n> Remove colon.\n\nChanged.\n\n> +\n> psprintf(_(\"attribute ends at offset beyond total tuple length: %u vs.\n> %u (attribute length %u)\"),\n> \n> Let's try to specify the attribute number in the attribute messages\n> where we can, e.g.\n> \n> +\n> psprintf(_(\"attribute ends at offset beyond total tuple length: %u vs.\n> %u (attribute length %u)\"),\n> \n> How about: attribute %u with length %u should end at offset %u, but\n> the tuple length is only %u\n\nI had omitted the attribute numbers from the attribute corruption messages because attnum is one of the OUT parameters from verify_heapam. I'm including attnum in the message text for this next version, as you request.\n\n> + if (TransactionIdIsNormal(ctx->relfrozenxid) &&\n> + TransactionIdPrecedes(xmin, ctx->relfrozenxid))\n> + {\n> + report_corruption(ctx,\n> + /*\n> translator: Both %u are transaction IDs. */\n> +\n> psprintf(_(\"inserting transaction ID is from before freeze cutoff: %u\n> vs. %u\"),\n> +\n> xmin, ctx->relfrozenxid));\n> + fatal = true;\n> + }\n> + else if (!xid_valid_in_rel(xmin, ctx))\n> + {\n> + report_corruption(ctx,\n> + /*\n> translator: %u is a transaction ID. */\n> +\n> psprintf(_(\"inserting transaction ID is in the future: %u\"),\n> +\n> xmin));\n> + fatal = true;\n> + }\n> \n> This seems like good evidence that xid_valid_in_rel needs some\n> rethinking. As far as I can see, every place where you call\n> xid_valid_in_rel, you have checks beforehand that duplicate some of\n> what it does, so that you can give a more accurate error message.\n> That's not good. Either the message should be adjusted so that it\n> covers all the cases \"e.g. 
tuple xmin %u is outside acceptable range\n> %u..%u\" or we should just get rid of xid_valid_in_rel() and have\n> separate error messages for each case, e.g. tuple xmin %u precedes\n> relfrozenxid %u\".\n\nThis next version is refactored, removing the function xid_valid_in_rel entirely, and structuring get_xid_status differently.\n\n> I think it's OK to use terms like xmin and xmax in\n> these messages, rather than inserting transaction ID etc. We have\n> existing instances of that, and while someone might judge it\n> user-unfriendly, I disagree. A person who is qualified to interpret\n> this output must know what 'tuplex min' means immediately, but whether\n> they can understand that 'inserting transaction ID' means the same\n> thing is questionable, I think.\n\nDone.\n\n> This is not a full review, but in general I think that this is getting\n> pretty close to being committable. The error messages seem to still\n> need some polishing and I wouldn't be surprised if there are a few\n> more bugs lurking yet, but I think it's come a long way.\n\nThis next version has some other message rewording. While testing, I found it odd to report an xid as out of bounds (in the future, or before the freeze threshold, etc.), without mentioning the xid value against which it is being compared unfavorably. We don't normally need to think about the epoch when comparing two xids against each other, as they must both make sense relative to the current epoch; but for corruption, you can't assume the corrupt xid was written relative to any particular epoch, and only the 32-bit xid value can be printed since the epoch is unknown. The other xid value (freeze threshold, etc) can be printed with the epoch information, but printing the epoch+xid merely as xid8out does (in other words, as a UINT64) makes the messages thoroughly confusing. 
I went with the equivalent of sprintf(\"%u:%u\", epoch, xid), which follows the precedent from pg_controldata.c, gistdesc.c, and elsewhere.\n\n\nMoving on to Peter's reviews....\n\n> On Sep 22, 2020, at 4:18 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Mon, Sep 21, 2020 at 2:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> +REVOKE ALL ON FUNCTION\n>> +verify_heapam(regclass, boolean, boolean, cstring, bigint, bigint)\n>> +FROM PUBLIC;\n>> \n>> This too.\n> \n> Do we really want to use a cstring as an enum-like argument?\n\nPerhaps not. This next version has that as text.\n\n> \n> I think that I see a bug at this point in check_tuple() (in\n> v15-0001-Adding-function-verify_heapam-to-amcheck-module.patch):\n> \n>> + /* If xmax is a multixact, it should be within valid range */\n>> + xmax = HeapTupleHeaderGetRawXmax(ctx->tuphdr);\n>> + if ((infomask & HEAP_XMAX_IS_MULTI) && !mxid_valid_in_rel(xmax, ctx))\n>> + {\n> \n> *** SNIP ***\n> \n>> + }\n>> +\n>> + /* If xmax is normal, it should be within valid range */\n>> + if (TransactionIdIsNormal(xmax))\n>> + {\n> \n> Why should it be okay to call TransactionIdIsNormal(xmax) at this\n> point? It isn't certain that xmax is an XID at all (could be a\n> MultiXactId, since you called HeapTupleHeaderGetRawXmax() to get the\n> value in the first place). Don't you need to check \"(infomask &\n> HEAP_XMAX_IS_MULTI) == 0\" here?\n\nI think you are right. This check you suggest is used in this next version.\n\n\n> On Sep 22, 2020, at 5:16 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Sat, Aug 29, 2020 at 10:48 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I had an earlier version of the verify_heapam patch that included a non-throwing interface to clog. Ultimately, I ripped that out. 
My reasoning was that a simpler patch submission was more likely to be acceptable to the community.\n> \n> Isn't some kind of pragmatic compromise possible?\n> \n>> But I don't want to make this patch dependent on that hypothetical patch getting written and accepted.\n> \n> Fair enough, but if you're alluding to what I said then about\n> check_tuphdr_xids()/clog checking a while back then FWIW I didn't\n> intend to block progress on clog/xact status verification at all.\n\nI don't recall your comments factoring into my thinking on this specific issue, but rather a conversation I had off-list with Robert. The clog interface may be a hot enough code path that adding a flag for non-throwing behavior merely to support a contrib module might be resisted. If folks generally like such a change to the clog interface, I could consider adding that as a third patch in this set.\n\n> I\n> just don't think that it is sensible to impose an iron clad guarantee\n> about having no assertion failures with corrupt clog data -- that\n> leads to far too much code duplication. But why should you need to\n> provide an absolute guarantee of that?\n> \n> I for one would be fine with making the clog checks an optional extra,\n> that rescinds the no crash guarantee that you're keen on -- just like\n> with the TOAST checks that you have already in v15. It might make\n> sense to review how often crashes occur with simulated corruption, and\n> then to minimize the number of occurrences in the real world. Maybe we\n> could tolerate a usually-no-crash interface to clog -- if it could\n> still have assertion failures. Making a strong guarantee about\n> assertions seems unnecessary.\n> \n> I don't see how verify_heapam will avoid raising an error during basic\n> validation from PageIsVerified(), which will violate the guarantee\n> about not throwing errors. 
I don't see that as a problem myself, but\n> presumably you will.\n\nMy concern is not so much that verify_heapam will stop with an error, but rather that it might trigger a panic that stops all backends. Stopping with an error merely because it hits corruption is not ideal, as I would rather it completed the scan and reported all corruptions found, but that's minor compared to the damage done if verify_heapam creates downtime in a production environment offering high availability guarantees. That statement might seem nuts, given that the corrupt table itself would be causing downtime, but that analysis depends on assumptions about table access patterns, and there is no a priori reason to think that corrupt pages are necessarily ever being accessed, or accessed in a way that causes crashes (rather than merely wrong results) outside verify_heapam scanning the whole table.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 5 Oct 2020 17:24:38 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Oct 5, 2020, at 5:24 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> - This version does not change clog handling, which leaves Andrey's concern unaddressed. Peter also showed some support for (or perhaps just a lack of opposition to) doing more of what Andrey suggests. I may come back to this issue, depending on time available and further feedback.\n\nAttached is a patch set that includes the clog handling as discussed. 
The 0001 and 0002 are effectively unchanged since version 16 posted yesterday, but this now includes 0003 which creates a non-throwing interface to clog, and 0004 which uses the non-throwing interface from within amcheck's heap checking functions.\n\nI think this is a pretty good sketch for discussion, though I am unsatisfied with the lack of regression test coverage of verify_heapam in the presence of clog truncation. I was hoping to have that as part of v17, but since it is taking a bit longer than I anticipated, I'll have to come back with that in a later patch.\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 6 Oct 2020 16:20:22 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> 7 окт. 2020 г., в 04:20, Mark Dilger <mark.dilger@enterprisedb.com> написал(а):\n> \n> \n> \n>> On Oct 5, 2020, at 5:24 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> \n>> - This version does not change clog handling, which leaves Andrey's concern unaddressed. Peter also showed some support for (or perhaps just a lack of opposition to) doing more of what Andrey suggests. I may come back to this issue, depending on time available and further feedback.\n> \n> Attached is a patch set that includes the clog handling as discussed. The 0001 and 0002 are effectively unchanged since version 16 posted yesterday, but this now includes 0003 which creates a non-throwing interface to clog, and 0004 which uses the non-throwing interface from within amcheck's heap checking functions.\n> \n> I think this is a pretty good sketch for discussion, though I am unsatisfied with the lack of regression test coverage of verify_heapam in the presence of clog truncation. 
I was hoping to have that as part of v17, but since it is taking a bit longer than I anticipated, I'll have to come back with that in a later patch.\n> \n\nMany thanks, Mark! I really appreciate this functionality. It could save me many hours of recreating clogs.\n\nI'm not entire sure this message is correct: psprintf(_(\"xmax %u commit status is lost\")\nIt seems to me to be not commit status, but rather transaction status.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 7 Oct 2020 11:27:13 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 6, 2020, at 11:27 PM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> \n>> 7 окт. 2020 г., в 04:20, Mark Dilger <mark.dilger@enterprisedb.com> написал(а):\n>> \n>> \n>> \n>>> On Oct 5, 2020, at 5:24 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>> \n>>> - This version does not change clog handling, which leaves Andrey's concern unaddressed. Peter also showed some support for (or perhaps just a lack of opposition to) doing more of what Andrey suggests. I may come back to this issue, depending on time available and further feedback.\n>> \n>> Attached is a patch set that includes the clog handling as discussed. The 0001 and 0002 are effectively unchanged since version 16 posted yesterday, but this now includes 0003 which creates a non-throwing interface to clog, and 0004 which uses the non-throwing interface from within amcheck's heap checking functions.\n>> \n>> I think this is a pretty good sketch for discussion, though I am unsatisfied with the lack of regression test coverage of verify_heapam in the presence of clog truncation. I was hoping to have that as part of v17, but since it is taking a bit longer than I anticipated, I'll have to come back with that in a later patch.\n>> \n> \n> Many thanks, Mark! I really appreciate this functionality. 
It could save me many hours of recreating clogs.\n\nYou are quite welcome, though the thanks may be premature. I posted 0003 and 0004 patches mostly as concrete implementation examples that can be criticized.\n\n> I'm not entire sure this message is correct: psprintf(_(\"xmax %u commit status is lost\")\n> It seems to me to be not commit status, but rather transaction status.\n\nI have changed several such messages to say \"transaction status\" rather than \"commit status\". I'll be posting it in a separate email, shortly.\n\nThanks for reviewing!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 7 Oct 2020 17:37:21 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Oct 5, 2020, at 5:24 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> There remain a few open issues and/or things I did not implement:\n> \n> - This version follows Robert's suggestion of using pg_class_aclcheck() to check that the caller has permission to select from the table being checked. This is inconsistent with the btree checking logic, which does no such check. 
These two approaches should be reconciled, but there was apparently no agreement on this issue.\n\nThis next version, attached, has the acl checking and associated documentation changes split out into patch 0005, making it easier to review in isolation from the rest of the patch series.\n\nIndependently of acl considerations, this version also has some verbiage changes in 0004, in response to Andrey's review upthread.\n\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 7 Oct 2020 18:01:24 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Oct 5, 2020 at 5:24 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > I don't see how verify_heapam will avoid raising an error during basic\n> > validation from PageIsVerified(), which will violate the guarantee\n> > about not throwing errors. I don't see that as a problem myself, but\n> > presumably you will.\n>\n> My concern is not so much that verify_heapam will stop with an error, but rather that it might trigger a panic that stops all backends. Stopping with an error merely because it hits corruption is not ideal, as I would rather it completed the scan and reported all corruptions found, but that's minor compared to the damage done if verify_heapam creates downtime in a production environment offering high availability guarantees. That statement might seem nuts, given that the corrupt table itself would be causing downtime, but that analysis depends on assumptions about table access patterns, and there is no a priori reason to think that corrupt pages are necessarily ever being accessed, or accessed in a way that causes crashes (rather than merely wrong results) outside verify_heapam scanning the whole table.\n\nThat seems reasonable to me. 
I think that it makes sense to never take\ndown the server in a non-debug build with verify_heapam. That's not\nwhat I took away from your previous remarks on the issue, but perhaps\nit doesn't matter now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 7 Oct 2020 18:42:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Oct 7, 2020 at 9:01 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> This next version, attached, has the acl checking and associated documentation changes split out into patch 0005, making it easier to review in isolation from the rest of the patch series.\n>\n> Independently of acl considerations, this version also has some verbiage changes in 0004, in response to Andrey's review upthread.\n\nI was about to commit 0001, after making some cosmetic changes, when I\ndiscovered that it won't link for me. I think there must be something\nwrong with the NLS stuff. My version of 0001 is attached. 
The error I\ngot is:\n\nccache clang -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n-fwrapv -Wno-unused-command-line-argument -g -O2 -Wall -Werror\n-fno-omit-frame-pointer -bundle -multiply_defined suppress -o\namcheck.so verify_heapam.o verify_nbtree.o -L../../src/port\n-L../../src/common -L/opt/local/lib -L/opt/local/lib\n-L/opt/local/lib -L/opt/local/lib -L/opt/local/lib\n-Wl,-dead_strip_dylibs -Wall -Werror -fno-omit-frame-pointer\n-bundle_loader ../../src/backend/postgres\nUndefined symbols for architecture x86_64:\n \"_libintl_gettext\", referenced from:\n _verify_heapam in verify_heapam.o\n _check_tuple in verify_heapam.o\nld: symbol(s) not found for architecture x86_64\nclang: error: linker command failed with exit code 1 (use -v to see invocation)\nmake: *** [amcheck.so] Error 1\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 21 Oct 2020 15:46:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On 2020-Oct-21, Robert Haas wrote:\n\n> On Wed, Oct 7, 2020 at 9:01 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > This next version, attached, has the acl checking and associated documentation changes split out into patch 0005, making it easier to review in isolation from the rest of the patch series.\n> >\n> > Independently of acl considerations, this version also has some verbiage changes in 0004, in response to Andrey's review upthread.\n> \n> I was about to commit 0001, after making some cosmetic changes, when I\n> discovered that it won't link for me. I think there must be something\n> wrong with the NLS stuff. My version of 0001 is attached. The error I\n> got is:\n\nHmm ... I don't think we have translation support in contrib, do we? 
I\nthink you could solve that by adding a \"#undef _, #define _(...) (...)\"\nor similar at the top of the offending C files, assuming you don't want\nto rip out all use of _() there.\n\nTBH the usage of \"translation:\" comments in this patch seems\nover-enthusiastic to me.\n\n\n\n", "msg_date": "Wed, 21 Oct 2020 17:13:22 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 21, 2020, at 1:13 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2020-Oct-21, Robert Haas wrote:\n> \n>> On Wed, Oct 7, 2020 at 9:01 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>> This next version, attached, has the acl checking and associated documentation changes split out into patch 0005, making it easier to review in isolation from the rest of the patch series.\n>>> \n>>> Independently of acl considerations, this version also has some verbiage changes in 0004, in response to Andrey's review upthread.\n>> \n>> I was about to commit 0001, after making some cosmetic changes, when I\n>> discovered that it won't link for me. I think there must be something\n>> wrong with the NLS stuff. My version of 0001 is attached. The error I\n>> got is:\n> \n> Hmm ... I don't think we have translation support in contrib, do we? I\n> think you could solve that by adding a \"#undef _, #define _(...) 
(...)\"\n> or similar at the top of the offending C files, assuming you don't want\n> to rip out all use of _() there.\n\nThere is still something screwy here, though, as this compiles, links and runs fine for me on mac and linux, but not for Robert.\n\nOn mac, I'm using the toolchain from XCode, whereas Robert is using MacPorts.\n\nMine reports:\n\nApple clang version 11.0.0 (clang-1100.0.33.17)\nTarget: x86_64-apple-darwin19.6.0\nThread model: posix\nInstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin\n\nRobert's reports:\n\nclang version 5.0.2 (tags/RELEASE_502/final)\nTarget: x86_64-apple-darwin19.4.0\nThread model: posix\nInstalledDir: /opt/local/libexec/llvm-5.0/bin\n\nOn linux, I'm using gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36) \n\nSearching around on the web, there are various reports of MacPort's clang not linking libintl correctly, though I don't know if that is a real problem with MacPorts or just a few cases of user error. Has anybody else following this thread had issues with MacPort's version of clang vis-a-vis linking libintl's gettext?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 21 Oct 2020 13:43:23 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I was about to commit 0001, after making some cosmetic changes, when I\n> discovered that it won't link for me. I think there must be something\n> wrong with the NLS stuff. My version of 0001 is attached. The error I\n> got is:\n\nWell, the short answer would be \"you need to add\n\nSHLIB_LINK += $(filter -lintl, $(LIBS))\n\nto the Makefile\". However, I would vote against that, because in point\nof fact amcheck has no translation support, just like all our other\ncontrib modules. 
What should likely happen instead is to rip out\nwhatever code is overoptimistically expecting it needs to support\ntranslation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Oct 2020 16:47:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> There is still something screwy here, though, as this compiles, links and runs fine for me on mac and linux, but not for Robert.\n\nAre you using --enable-nls at all on your Mac build? Because for sure it\nshould not work there, given the failure to include -lintl in amcheck's\nlink step. Some platforms are forgiving of that, but not Mac.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Oct 2020 16:51:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 21, 2020, at 1:51 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> There is still something screwy here, though, as this compiles, links and runs fine for me on mac and linux, but not for Robert.\n> \n> Are you using --enable-nls at all on your Mac build? Because for sure it\n> should not work there, given the failure to include -lintl in amcheck's\n> link step. Some platforms are forgiving of that, but not Mac.\n\nThanks, Tom!\n\nNo, that's the answer. I had a typo/thinko in my configure options, --with-nls instead of --enable-nls, and the warning about it being an invalid flag went by so fast I didn't see it. 
I had it spelled correctly on linux, but I guess that's one of the platforms that is more forgiving.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 21 Oct 2020 13:58:22 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Oct 21, 2020, at 1:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>> I was about to commit 0001, after making some cosmetic changes, when I\n>> discovered that it won't link for me. I think there must be something\n>> wrong with the NLS stuff. My version of 0001 is attached. The error I\n>> got is:\n> \n> Well, the short answer would be \"you need to add\n> \n> SHLIB_LINK += $(filter -lintl, $(LIBS))\n> \n> to the Makefile\". However, I would vote against that, because in point\n> of fact amcheck has no translation support, just like all our other\n> contrib modules. What should likely happen instead is to rip out\n> whatever code is overoptimistically expecting it needs to support\n> translation.\n\nDone that way in the attached, which also include Robert's changes from v19 he posted earlier today.\n\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 21 Oct 2020 20:45:08 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Oct 21, 2020 at 11:45 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Done that way in the attached, which also include Robert's changes from v19 he posted earlier today.\n\nCommitted. 
Let's see what the buildfarm thinks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 22 Oct 2020 08:51:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Oct 22, 2020 at 8:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Committed. Let's see what the buildfarm thinks.\n\nIt is mostly happy, but thorntail is not:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2020-10-22%2012%3A58%3A11\n\nI thought that the problem might be related to the fact that thorntail\nis using force_parallel_mode, but I tried that here and it did not\ncause a failure. So my next guess is that it is related to the fact\nthat this is a sparc64 machine, but it's hard to tell, since none of\nthe other sparc64 critters have run yet. In any case I don't know why\nthat would cause a failure. The messages in the log aren't very\nilluminating, unfortunately. :-(\n\nMark, any ideas what might cause specifically that set of tests to fail?\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 22 Oct 2020 10:06:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The messages in the log aren't very\n> illuminating, unfortunately. :-(\n\nConsidering this is a TAP test, why in the world is it designed to hide\nall details of any unexpected amcheck messages? Surely being able to\nsee what amcheck is saying would be helpful here.\n\nIOW, don't have the tests abbreviate the module output with count(*),\nbut return the full thing, and then use a regex to see if you got what\nwas expected. 
If you didn't, the output will show what you did get.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 10:28:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Oct 22, 2020 at 10:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Considering this is a TAP test, why in the world is it designed to hide\n> all details of any unexpected amcheck messages? Surely being able to\n> see what amcheck is saying would be helpful here.\n>\n> IOW, don't have the tests abbreviate the module output with count(*),\n> but return the full thing, and then use a regex to see if you got what\n> was expected. If you didn't, the output will show what you did get.\n\nYeah, that thought crossed my mind, too. But I'm not sure it would\nhelp in the case of this particular failure, because I think the\nproblem is that we're expecting to get complaints and instead getting\nnone.\n\nIt might be good to change it anyway, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 22 Oct 2020 10:46:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "lapwing just spit up a possibly relevant issue:\n\nccache gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -Werror -fPIC -I. -I. 
-I../../src/include -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include/et -c -o verify_heapam.o verify_heapam.c\nverify_heapam.c: In function 'get_xid_status':\nverify_heapam.c:1432:5: error: 'fxid.value' may be used uninitialized in this function [-Werror=maybe-uninitialized]\ncc1: all warnings being treated as errors\n\n\n", "msg_date": "Thu, 22 Oct 2020 10:58:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 7:06 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Oct 22, 2020 at 8:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Committed. Let's see what the buildfarm thinks.\n> \n> It is mostly happy, but thorntail is not:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2020-10-22%2012%3A58%3A11\n> \n> I thought that the problem might be related to the fact that thorntail\n> is using force_parallel_mode, but I tried that here and it did not\n> cause a failure. So my next guess is that it is related to the fact\n> that this is a sparc64 machine, but it's hard to tell, since none of\n> the other sparc64 critters have run yet. In any case I don't know why\n> that would cause a failure. The messages in the log aren't very\n> illuminating, unfortunately. :-(\n> \n> Mark, any ideas what might cause specifically that set of tests to fail?\n\nThe code is correctly handling an uncorrupted table, but then more or less randomly failing some of the time when processing a corrupt table.\n\nTom identified a problem with an uninitialized variable. 
I'm putting together a new patch set to address it.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 09:01:00 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Oct 22, 2020, at 9:01 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Oct 22, 2020, at 7:06 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> \n>> On Thu, Oct 22, 2020 at 8:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>>> Committed. Let's see what the buildfarm thinks.\n>> \n>> It is mostly happy, but thorntail is not:\n>> \n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2020-10-22%2012%3A58%3A11\n>> \n>> I thought that the problem might be related to the fact that thorntail\n>> is using force_parallel_mode, but I tried that here and it did not\n>> cause a failure. So my next guess is that it is related to the fact\n>> that this is a sparc64 machine, but it's hard to tell, since none of\n>> the other sparc64 critters have run yet. In any case I don't know why\n>> that would cause a failure. The messages in the log aren't very\n>> illuminating, unfortunately. :-(\n>> \n>> Mark, any ideas what might cause specifically that set of tests to fail?\n> \n> The code is correctly handling an uncorrupted table, but then more or less randomly failing some of the time when processing a corrupt table.\n> \n> Tom identified a problem with an uninitialized variable. 
I'm putting together a new patch set to address it.\n\nThe 0001 attached patch addresses the -Werror=maybe-uninitialized problem.\n\nThe 0002 attached patch addresses the test failures:\n\nThe failing test is designed to stop the server, create blunt force trauma to the heap and toast files through overwriting garbage bytes, restart the server, and verify that corruption is detected by amcheck's verify_heapam(). The exact trauma is intended to be the same on all platforms, in terms of the number of bytes written and the location in the file that it gets written, but owing to differences between platforms, by design the test does not expect a particular corruption message.\n\nThe test was overwriting far fewer bytes than I had intended, but since it was still sufficient to create corruption on the platforms where I tested, I failed to notice. It should do a more thorough job now.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 22 Oct 2020 12:15:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Oct 22, 2020 at 3:15 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The 0001 attached patch addresses the -Werror=maybe-uninitialized problem.\n\nI am skeptical. Why so much code churn to fix a compiler warning? And\neven in the revised code, *status isn't set in all cases, so I don't\nsee why this would satisfy the compiler. Even if it satisfies this\nparticular compiler for some other reason, some other compiler is\nbound to be unhappy sometime. It's better to just arrange to set\n*status always, and use a dummy value in cases where it doesn't\nmatter. 
Also, \"return XID_BOUNDS_OK;;\" has exceeded its recommended\nallowance of semicolons.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 22 Oct 2020 16:00:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 1:00 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Oct 22, 2020 at 3:15 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> The 0001 attached patch addresses the -Werror=maybe-uninitialized problem.\n> \n> I am skeptical. Why so much code churn to fix a compiler warning? And\n> even in the revised code, *status isn't set in all cases, so I don't\n> see why this would satisfy the compiler. Even if it satisfies this\n> particular compiler for some other reason, some other compiler is\n> bound to be unhappy sometime. It's better to just arrange to set\n> *status always, and use a dummy value in cases where it doesn't\n> matter. Also, \"return XID_BOUNDS_OK;;\" has exceeded its recommended\n> allowance of semicolons.\n\nI think the compiler warning was about fxid not being set. The callers pass NULL for status if they don't want status checked, so writing *status unconditionally would be an error. Also, if the xid being checked is out of bounds, we can't check the status of the xid in clog.\n\nAs for the code churn, I probably refactored it a bit more than I needed to fix the compiler warning about fxid, but that was because the old arrangement seemed to make it harder to reason about when and where fxid got set. 
I think that is more clear now.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 13:04:01 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "ooh, looks like prairiedog sees the problem too. That means I should be\nable to reproduce it under a debugger, if you're not certain yet where\nthe problem lies.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 16:09:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "... btw, having now looked more closely at get_xid_status(), I wonder\nhow come there aren't more compilers bitching about it, because it\nis very very obviously broken. In particular, the case of\nrequesting status for an xid that is BootstrapTransactionId or\nFrozenTransactionId *will* fall through to perform\nFullTransactionIdPrecedesOrEquals with an uninitialized fxid.\n\nThe fact that most compilers seem to fail to notice that is quite scary.\nI suppose it has something to do with FullTransactionId being a struct,\nwhich makes me wonder if that choice was quite as wise as we thought.\n\nMeanwhile, so far as this code goes, I wonder why you don't just change it\nto always set that value, ie\n\n\tXidBoundsViolation result;\n\tFullTransactionId fxid;\n\tFullTransactionId clog_horizon;\n\n+\tfxid = FullTransactionIdFromXidAndCtx(xid, ctx);\n+\n\t/* Quick check for special xids */\n\tif (!TransactionIdIsValid(xid))\n\t\tresult = XID_INVALID;\n\telse if (xid == BootstrapTransactionId || xid == FrozenTransactionId)\n\t\tresult = XID_BOUNDS_OK;\n\telse\n\t{\n\t\t/* Check if the xid is within bounds */\n-\t\tfxid = FullTransactionIdFromXidAndCtx(xid, ctx);\n\t\tif (!fxid_in_cached_range(fxid, ctx))\n\t\t{\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 
22 Oct 2020 16:23:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 1:09 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> ooh, looks like prairiedog sees the problem too. That means I should be\n> able to reproduce it under a debugger, if you're not certain yet where\n> the problem lies.\n\nThanks, Tom, but I question whether the regression test failures are from a problem in the verify_heapam.c code. I think they are a busted perl test. The test was supposed to corrupt the heap by overwriting a heap file with a large chunk of garbage, but in fact only wrote a small amount of garbage. The idea was to write about 2000 bytes starting at offset 32 in the page, in order to corrupt the line pointers, but owing to my incorrect use of syswrite in the perl test, that didn't happen.\n\nI think the uninitialized variable warning is warning about a real problem in the c-code, but I have no reason to think that particular problem is causing this particular regression test failure.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 13:26:41 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 1:23 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> ... btw, having now looked more closely at get_xid_status(), I wonder\n> how come there aren't more compilers bitching about it, because it\n> is very very obviously broken. 
In particular, the case of\n> requesting status for an xid that is BootstrapTransactionId or\n> FrozenTransactionId *will* fall through to perform\n> FullTransactionIdPrecedesOrEquals with an uninitialized fxid.\n> \n> The fact that most compilers seem to fail to notice that is quite scary.\n> I suppose it has something to do with FullTransactionId being a struct,\n> which makes me wonder if that choice was quite as wise as we thought.\n> \n> Meanwhile, so far as this code goes, I wonder why you don't just change it\n> to always set that value, ie\n> \n> \tXidBoundsViolation result;\n> \tFullTransactionId fxid;\n> \tFullTransactionId clog_horizon;\n> \n> +\tfxid = FullTransactionIdFromXidAndCtx(xid, ctx);\n> +\n> \t/* Quick check for special xids */\n> \tif (!TransactionIdIsValid(xid))\n> \t\tresult = XID_INVALID;\n> \telse if (xid == BootstrapTransactionId || xid == FrozenTransactionId)\n> \t\tresult = XID_BOUNDS_OK;\n> \telse\n> \t{\n> \t\t/* Check if the xid is within bounds */\n> -\t\tfxid = FullTransactionIdFromXidAndCtx(xid, ctx);\n> \t\tif (!fxid_in_cached_range(fxid, ctx))\n> \t\t{\n\nYeah, I reached the same conclusion before submitting the fix upthread. I structured it a bit differently, but I believe fxid will now always get set before being used, though sometimes the function returns before doing either.\n\nI had the same thought about compilers not catching that, too.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 13:31:04 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Oct 22, 2020, at 1:09 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ooh, looks like prairiedog sees the problem too. 
That means I should be\n>> able to reproduce it under a debugger, if you're not certain yet where\n>> the problem lies.\n\n> Thanks, Tom, but I question whether the regression test failures are from a problem in the verify_heapam.c code. I think they are a busted perl test. The test was supposed to corrupt the heap by overwriting a heap file with a large chunk of garbage, but in fact only wrote a small amount of garbage. The idea was to write about 2000 bytes starting at offset 32 in the page, in order to corrupt the line pointers, but owing to my incorrect use of syswrite in the perl test, that didn't happen.\n\nHm, but why are we seeing the failure only on specific machine\narchitectures? sparc64 and ppc32 is a weird pairing, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 16:31:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 1:31 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> On Oct 22, 2020, at 1:09 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> ooh, looks like prairiedog sees the problem too. That means I should be\n>>> able to reproduce it under a debugger, if you're not certain yet where\n>>> the problem lies.\n> \n>> Thanks, Tom, but I question whether the regression test failures are from a problem in the verify_heapam.c code. I think they are a busted perl test. The test was supposed to corrupt the heap by overwriting a heap file with a large chunk of garbage, but in fact only wrote a small amount of garbage. The idea was to write about 2000 bytes starting at offset 32 in the page, in order to corrupt the line pointers, but owing to my incorrect use of syswrite in the perl test, that didn't happen.\n> \n> Hm, but why are we seeing the failure only on specific machine\n> architectures? 
sparc64 and ppc32 is a weird pairing, too.\n\nIt is seeking to position 32 and writing '\\x77\\x77\\x77\\x77'. x86_64 is little-endian, and ppc32 and sparc64 are both big-endian, right?\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 13:39:15 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Oct 22, 2020, at 1:31 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hm, but why are we seeing the failure only on specific machine\n>> architectures? sparc64 and ppc32 is a weird pairing, too.\n\n> It is seeking to position 32 and writing '\\x77\\x77\\x77\\x77'. x86_64 is\n> little-endian, and ppc32 and sparc64 are both big-endian, right?\n\nThey are, but that should not meaningfully affect the results of\nthat corruption step. You zapped only one line pointer not\nseveral, but it would look the same regardless of endiannness.\n\nI find it more plausible that we might see the bad effects of\nthe uninitialized variable only on those arches --- but that\ntheory is still pretty shaky, since you'd think compiler\nchoices about register or stack-location assignment would\nbe the controlling factor, and those should be all over the\nmap.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 16:49:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "I wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> It is seeking to position 32 and writing '\\x77\\x77\\x77\\x77'. x86_64 is\n>> little-endian, and ppc32 and sparc64 are both big-endian, right?\n\n> They are, but that should not meaningfully affect the results of\n> that corruption step. 
You zapped only one line pointer not\n> several, but it would look the same regardless of endiannness.\n\nOh, wait a second. ItemIdData has the flag bits in the middle:\n\ntypedef struct ItemIdData\n{\n unsigned lp_off:15, /* offset to tuple (from start of page) */\n lp_flags:2, /* state of line pointer, see below */\n lp_len:15; /* byte length of tuple */\n} ItemIdData;\n\nmeaning that for that particular bit pattern, one endianness\nis going to see the flags as 01 (LP_NORMAL) and the other as 10\n(LP_REDIRECT). The offset/len are corrupt either way, but\nI'd certainly expect that amcheck would produce different\ncomplaints about those two cases. So it's unsurprising if\nthis test case's output is endian-dependent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 17:06:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 2:06 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> It is seeking to position 32 and writing '\\x77\\x77\\x77\\x77'. x86_64 is\n>>> little-endian, and ppc32 and sparc64 are both big-endian, right?\n> \n>> They are, but that should not meaningfully affect the results of\n>> that corruption step. You zapped only one line pointer not\n>> several, but it would look the same regardless of endiannness.\n> \n> Oh, wait a second. ItemIdData has the flag bits in the middle:\n> \n> typedef struct ItemIdData\n> {\n> unsigned lp_off:15, /* offset to tuple (from start of page) */\n> lp_flags:2, /* state of line pointer, see below */\n> lp_len:15; /* byte length of tuple */\n> } ItemIdData;\n> \n> meaning that for that particular bit pattern, one endianness\n> is going to see the flags as 01 (LP_NORMAL) and the other as 10\n> (LP_REDIRECT). 
The offset/len are corrupt either way, but\n> I'd certainly expect that amcheck would produce different\n> complaints about those two cases. So it's unsurprising if\n> this test case's output is endian-dependent.\n\nYeah, I'm already looking at that. The logic in verify_heapam skips over line pointers that are unused or dead, and the test is reporting zero corruption (and complaining about that), so it's probably not going to help to overwrite all the line pointers with this particular bit pattern any more than to just overwrite the first one, as it would just skip them all.\n\nI think the test should overwrite the line pointers with a variety of different bit patterns, or one calculated to work on all platforms. I'll have to write that up.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:10:55 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 2:06 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> It is seeking to position 32 and writing '\\x77\\x77\\x77\\x77'. x86_64 is\n>>> little-endian, and ppc32 and sparc64 are both big-endian, right?\n> \n>> They are, but that should not meaningfully affect the results of\n>> that corruption step. You zapped only one line pointer not\n>> several, but it would look the same regardless of endiannness.\n> \n> Oh, wait a second. ItemIdData has the flag bits in the middle:\n> \n> typedef struct ItemIdData\n> {\n> unsigned lp_off:15, /* offset to tuple (from start of page) */\n> lp_flags:2, /* state of line pointer, see below */\n> lp_len:15; /* byte length of tuple */\n> } ItemIdData;\n> \n> meaning that for that particular bit pattern, one endianness\n> is going to see the flags as 01 (LP_NORMAL) and the other as 10\n> (LP_REDIRECT). 
The offset/len are corrupt either way, but\n> I'd certainly expect that amcheck would produce different\n> complaints about those two cases. So it's unsurprising if\n> this test case's output is endian-dependent.\n\nWell, the issue is that on big-endian machines it is not reporting any corruption at all. Are you sure the difference will be LP_NORMAL vs LP_REDIRECT? I was thinking it was LP_DEAD vs LP_REDIRECT, as the little endian platforms are seeing corruption messages about bad redirect line pointers, and the big-endian are apparently skipping over the line pointer entirely, which makes sense if it is LP_DEAD but not if it is LP_NORMAL. It would also skip over LP_UNUSED, but I don't see how that could be stored in lp_flags, because 0x77 is going to either be 01110111 or 11101110, and in neither case do you get two zeros adjacent, but you could get two ones adjacent. (LP_UNUSED = binary 00 and LP_DEAD = binary 11)\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:18:59 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Yeah, I'm already looking at that. The logic in verify_heapam skips over line pointers that are unused or dead, and the test is reporting zero corruption (and complaining about that), so it's probably not going to help to overwrite all the line pointers with this particular bit pattern any more than to just overwrite the first one, as it would just skip them all.\n\n> I think the test should overwrite the line pointers with a variety of different bit patterns, or one calculated to work on all platforms. I'll have to write that up.\n\nWhat we need here is to produce the same test results on either\nendianness. 
So probably the thing to do is apply the equivalent\nof ntohl() to produce a string that looks right for either host\nendianness. As a separate matter, you'd want to test corruption\nproducing any of the four flag bitpatterns, probably.\n\nIt says here you can use Perl's pack/unpack functions to get\nthe equivalent of ntohl(), but I've not troubled to work out how.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 17:23:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Oct 22, 2020 at 5:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Committed. Let's see what the buildfarm thinks.\n\nThis is great work. Thanks Mark and Robert.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:26:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 2:26 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Thu, Oct 22, 2020 at 5:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Committed. Let's see what the buildfarm thinks.\n> \n> This is great work. Thanks Mark and Robert.\n\nThat's the first time I've laughed today. Having turned the build-farm red, this is quite ironic feedback! Thanks all the same for the sentiment.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:39:10 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Oct 22, 2020 at 2:39 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > This is great work. Thanks Mark and Robert.\n>\n> That's the first time I've laughed today. Having turned the build-farm red, this is quite ironic feedback! 
Thanks all the same for the sentiment.\n\nBreaking the buildfarm is not a capital offense. Especially when it\nhappens with patches that are in some sense low level and/or novel,\nand therefore inherently more likely to cause trouble.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:45:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Oct 22, 2020 at 4:04 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I think the compiler warning was about fxid not being set. The callers pass NULL for status if they don't want status checked, so writing *status unconditionally would be an error. Also, if the xid being checked is out of bounds, we can't check the status of the xid in clog.\n\nSorry, you're (partly) right. The new logic is a lot more clear that\nwe never used that uninitialized.\n\nI'll remove the extra semi-colon and commit this.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 22 Oct 2020 18:15:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Oct 22, 2020, at 2:06 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Oh, wait a second. ItemIdData has the flag bits in the middle:\n>> meaning that for that particular bit pattern, one endianness\n>> is going to see the flags as 01 (LP_NORMAL) and the other as 10\n>> (LP_REDIRECT).\n\n> Well, the issue is that on big-endian machines it is not reporting any\n> corruption at all. Are you sure the difference will be LP_NORMAL vs\n> LP_REDIRECT?\n\n[ thinks a bit harder... ] Probably not. The byte/bit string looks\nthe same either way, given that it's four repetitions of the same\nbyte value. 
But which field is which will differ: we have either\n\n\toooooooooooooooFFlllllllllllllll\n\t01110111011101110111011101110111\n\nor\n\n\tlllllllllllllllFFooooooooooooooo\n\t01110111011101110111011101110111\n\nSo now I think this is a REDIRECT on either architecture, but the\noffset and length fields have different values, causing the redirect\npointer to point to different places. Maybe it happens to point\nat a DEAD tuple in the big-endian case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 18:45:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "I wrote:\n> So now I think this is a REDIRECT on either architecture, but the\n> offset and length fields have different values, causing the redirect\n> pointer to point to different places. Maybe it happens to point\n> at a DEAD tuple in the big-endian case.\n\nJust to make sure, I tried this test program:\n\n#include <stdio.h>\n#include <string.h>\n\ntypedef struct ItemIdData\n{\n unsigned lp_off:15, /* offset to tuple (from start of page) */\n lp_flags:2, /* state of line pointer, see below */\n lp_len:15; /* byte length of tuple */\n} ItemIdData;\n\nint main()\n{\n ItemIdData lp;\n\n memset(&lp, 0x77, sizeof(lp));\n printf(\"off = %x, flags = %x, len = %x\\n\",\n lp.lp_off, lp.lp_flags, lp.lp_len);\n return 0;\n}\n\nI get\n\noff = 7777, flags = 2, len = 3bbb\n\non a little-endian machine, and\n\noff = 3bbb, flags = 2, len = 7777\n\non big-endian. It'd be less symmetric if the bytes weren't\nall the same ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 21:41:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "I wrote:\n> I get\n> off = 7777, flags = 2, len = 3bbb\n> on a little-endian machine, and\n> off = 3bbb, flags = 2, len = 7777\n> on big-endian. 
It'd be less symmetric if the bytes weren't\n> all the same ...\n\n... but given that this is the test value we are using, why\ndon't both endiannesses whine about a non-maxalign'd offset?\nThe code really shouldn't even be trying to follow these\nredirects, because we risk SIGBUS on picky architectures.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 21:46:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 6:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> So now I think this is a REDIRECT on either architecture, but the\n>> offset and length fields have different values, causing the redirect\n>> pointer to point to different places. Maybe it happens to point\n>> at a DEAD tuple in the big-endian case.\n> \n> Just to make sure, I tried this test program:\n> \n> #include <stdio.h>\n> #include <string.h>\n> \n> typedef struct ItemIdData\n> {\n> unsigned lp_off:15, /* offset to tuple (from start of page) */\n> lp_flags:2, /* state of line pointer, see below */\n> lp_len:15; /* byte length of tuple */\n> } ItemIdData;\n> \n> int main()\n> {\n> ItemIdData lp;\n> \n> memset(&lp, 0x77, sizeof(lp));\n> printf(\"off = %x, flags = %x, len = %x\\n\",\n> lp.lp_off, lp.lp_flags, lp.lp_len);\n> return 0;\n> }\n> \n> I get\n> \n> off = 7777, flags = 2, len = 3bbb\n> \n> on a little-endian machine, and\n> \n> off = 3bbb, flags = 2, len = 7777\n> \n> on big-endian. It'd be less symmetric if the bytes weren't\n> all the same ...\n\nI think we're going in the wrong direction here. The idea behind this test was to have as little knowledge about the layout of pages as possible and still verify that damaging the pages would result in corruption reports. Of course, not all damage will result in corruption reports, because some damage looks legit. 
I think it was just luck (good or bad depending on your perspective) that the damage in the test as committed works on little-endian but not big-endian.\n\nI can embed this knowledge that you have researched into the test if you want me to, but my instinct is to go the other direction and have even less knowledge about pages in the test. That would work if instead of expecting corruption for every time the test writes the file, instead to have it just make sure that it gets corruption reports at least some of the times that it does so. That seems more maintainable long term.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 18:47:50 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 6:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> I get\n>> off = 7777, flags = 2, len = 3bbb\n>> on a little-endian machine, and\n>> off = 3bbb, flags = 2, len = 7777\n>> on big-endian. It'd be less symmetric if the bytes weren't\n>> all the same ...\n> \n> ... but given that this is the test value we are using, why\n> don't both endiannesses whine about a non-maxalign'd offset?\n> The code really shouldn't even be trying to follow these\n> redirects, because we risk SIGBUS on picky architectures.\n\nAhh, crud. It's because\n\n\tsyswrite($fh, '\\x77\\x77\\x77\\x77', 500)\n\nis wrong twice. The 500 was wrong, but the string there isn't the bit pattern we want -- it's just a string literal with backslashes and such. 
It should have been double-quoted.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 18:50:29 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 22, 2020, at 6:50 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Oct 22, 2020, at 6:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> I wrote:\n>>> I get\n>>> off = 7777, flags = 2, len = 3bbb\n>>> on a little-endian machine, and\n>>> off = 3bbb, flags = 2, len = 7777\n>>> on big-endian. It'd be less symmetric if the bytes weren't\n>>> all the same ...\n>> \n>> ... but given that this is the test value we are using, why\n>> don't both endiannesses whine about a non-maxalign'd offset?\n>> The code really shouldn't even be trying to follow these\n>> redirects, because we risk SIGBUS on picky architectures.\n> \n> Ahh, crud. It's because\n> \n> \tsyswrite($fh, '\\x77\\x77\\x77\\x77', 500)\n> \n> is wrong twice. The 500 was wrong, but the string there isn't the bit pattern we want -- it's just a string literal with backslashes and such. It should have been double-quoted.\n\nThe reason this never came up in testing is what I was talking about elsewhere -- this test isn't designed to create *specific* corruptions. It's just supposed to corrupt the table in some random way. For whatever reasons I'm not too curious about, that string corrupts on little endian machines but not big endian machines. 
If we want to have a test that tailors very specific corruptions, I don't think the way to get there is by debugging this test.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 18:59:03 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Ahh, crud. It's because\n> \tsyswrite($fh, '\\x77\\x77\\x77\\x77', 500)\n> is wrong twice. The 500 was wrong, but the string there isn't the bit pattern we want -- it's just a string literal with backslashes and such. It should have been double-quoted.\n\nArgh. So we really have, using same test except\n\n\tmemcpy(&lp, \"\\\\x77\", sizeof(lp));\n\nlittle endian:\toff = 785c, flags = 2, len = 1b9b\nbig endian:\toff = 2e3c, flags = 0, len = 3737\n\nwhich explains the apparent LP_DEAD result.\n\nI'm not particularly on board with your suggestion of \"well, if it works\nsometimes then it's okay\". Then we have no idea of what we really tested.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 22:01:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Oct 22, 2020, at 7:01 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> Ahh, crud. It's because\n>> \tsyswrite($fh, '\\x77\\x77\\x77\\x77', 500)\n>> is wrong twice. The 500 was wrong, but the string there isn't the bit pattern we want -- it's just a string literal with backslashes and such. It should have been double-quoted.\n> \n> Argh. 
So we really have, using same test except\n> \n> \tmemcpy(&lp, \"\\\\x77\", sizeof(lp));\n> \n> little endian:\toff = 785c, flags = 2, len = 1b9b\n> big endian:\toff = 2e3c, flags = 0, len = 3737\n> \n> which explains the apparent LP_DEAD result.\n> \n> I'm not particularly on board with your suggestion of \"well, if it works\n> sometimes then it's okay\". Then we have no idea of what we really tested.\n> \n> \t\t\tregards, tom lane\n\nOk, I've pruned it down to something you may like better. Instead of just checking that *some* corruption occurs, it checks the returned corruption against an expected regex, and if it fails to match, you should see in the logs what you got vs. what you expected.\n\nIt only corrupts the first two line pointers, the first one with 0x77777777 and the second one with 0xAAAAAAAA, which are consciously chosen to be bitwise reverses of each other and just strings of alternating bits rather than anything that could have a more complicated interpretation.\n\nOn my little-endian mac, the 0x77777777 value creates a line pointer which redirects to an invalid offset 0x7777, which gets reported as decimal 30583 in the corruption report, \"line pointer redirection to item at offset 30583 exceeds maximum offset 38\". The test is indifferent to whether the corruption it is looking for is reported relative to the first line pointer or the second one, so if endian-ness matters, it may be the 0xAAAAAAAA that results in that corruption message. I don't have a machine handy to test that. 
It would be nice to determine the minimum amount of paranoia necessary to make this portable and not commit the rest.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 22 Oct 2020 21:21:11 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Oct 22, 2020, at 9:21 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Oct 22, 2020, at 7:01 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> Ahh, crud. It's because\n>>> \tsyswrite($fh, '\\x77\\x77\\x77\\x77', 500)\n>>> is wrong twice. The 500 was wrong, but the string there isn't the bit pattern we want -- it's just a string literal with backslashes and such. It should have been double-quoted.\n>> \n>> Argh. So we really have, using same test except\n>> \n>> \tmemcpy(&lp, \"\\\\x77\", sizeof(lp));\n>> \n>> little endian:\toff = 785c, flags = 2, len = 1b9b\n>> big endian:\toff = 2e3c, flags = 0, len = 3737\n>> \n>> which explains the apparent LP_DEAD result.\n>> \n>> I'm not particularly on board with your suggestion of \"well, if it works\n>> sometimes then it's okay\". Then we have no idea of what we really tested.\n>> \n>> \t\t\tregards, tom lane\n> \n> Ok, I've pruned it down to something you may like better. Instead of just checking that *some* corruption occurs, it checks the returned corruption against an expected regex, and if it fails to match, you should see in the logs what you got vs. 
what you expected.\n> \n> It only corrupts the first two line pointers, the first one with 0x77777777 and the second one with 0xAAAAAAAA, which are consciously chosen to be bitwise reverses of each other and just strings of alternating bits rather than anything that could have a more complicated interpretation.\n> \n> On my little-endian mac, the 0x77777777 value creates a line pointer which redirects to an invalid offset 0x7777, which gets reported as decimal 30583 in the corruption report, \"line pointer redirection to item at offset 30583 exceeds maximum offset 38\". The test is indifferent to whether the corruption it is looking for is reported relative to the first line pointer or the second one, so if endian-ness matters, it may be the 0xAAAAAAAA that results in that corruption message. I don't have a machine handy to test that. It would be nice to determine the minimum amount of paranoia necessary to make this portable and not commit the rest.\n\nObviously, that should have said 0x55555555 and 0xAAAAAAAA. 
After writing the patch that way, I checked that the old value 0x77777777 also works on my mac, which it does, and checked that writing the line pointers starting at offset 24 rather than 32 works on my mac, which it does, and then went on to write this rather confused email and attached the patch with those changes, which all work (at least on my mac) but are potentially less portable than what I had before testing those changes.\n\nI apologize for any confusion my email from last night may have caused.\n\nThe patch I *should* have attached last night this time:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 23 Oct 2020 07:04:04 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> The patch I *should* have attached last night this time:\n\nThanks, I'll do some big-endian testing with this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 10:28:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "I wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> The patch I *should* have attached last night this time:\n\n> Thanks, I'll do some big-endian testing with this.\n\nSeems to work, so I pushed it (after some compulsive fooling\nabout with whitespace and perltidy-ing). It appears to me that\nthe code coverage for verify_heapam.c is not very good though,\nonly circa 50%. 
Do we care to expend more effort on that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 14:04:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 23, 2020, at 11:04 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> The patch I *should* have attached last night this time:\n> \n>> Thanks, I'll do some big-endian testing with this.\n> \n> Seems to work, so I pushed it (after some compulsive fooling\n> about with whitespace and perltidy-ing).\n\nThanks for all the help!\n\n> It appears to me that\n> the code coverage for verify_heapam.c is not very good though,\n> only circa 50%. Do we care to expend more effort on that?\n\nPart of the issue here is that I developed the heapcheck code as a sequence of patches, and there is much greater coverage in the tests in the 0002 patch, which hasn't been committed yet. (Nor do I know that it ever will be.) Over time, the patch set became:\n\n0001 -- adds verify_heapam.c to contrib/amcheck, with basic test coverage\n0002 -- adds pg_amcheck command line interface to contrib/pg_amcheck, with more extensive test coverage\n0003 -- creates a non-throwing interface to clog\n0004 -- uses the non-throwing clog interface from within verify_heapam\n0005 -- adds some controversial acl checks to verify_heapam\n\nYour question doesn't have much to do with 3,4,5 above, but it definitely matters whether we're going to commit 0002. The test in that patch, in contrib/pg_amcheck/t/004_verify_heapam.pl, does quite a bit of bit twiddling of heap tuples and toast records and checks that the right corruption messages come back. 
Part of the reason I was trying to keep 0001's t/001_verify_heapam.pl test ignorant of the exact page layout information is that I already had this other test that covers that.\n\nSo, should I port that test from (currently non-existant) contrib/pg_amcheck into contrib/amcheck, or should we wait to see if the 0002 patch is going to get committed?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 23 Oct 2020 11:20:40 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Hmm, we're not out of the woods yet: thorntail is even less happy\nthan before.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2020-10-23%2018%3A08%3A11\n\nI do not have 64-bit big-endian hardware to play with unfortunately.\nBut what I suspect is happening here is less about endianness and\nmore about alignment pickiness; or maybe we were unlucky enough to\nindex off the end of the shmem segment. I see that verify_heapam\ndoes this for non-redirect tuples:\n\n /* Set up context information about this next tuple */\n ctx.lp_len = ItemIdGetLength(ctx.itemid);\n ctx.tuphdr = (HeapTupleHeader) PageGetItem(ctx.page, ctx.itemid);\n ctx.natts = HeapTupleHeaderGetNatts(ctx.tuphdr);\n\nwith absolutely no thought for the possibility that lp_off is out of\nrange or not maxaligned. 
The checks for a sane lp_len seem to have\ngone missing as well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 14:51:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Fri, Oct 23, 2020 at 11:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> /* Set up context information about this next tuple */\n> ctx.lp_len = ItemIdGetLength(ctx.itemid);\n> ctx.tuphdr = (HeapTupleHeader) PageGetItem(ctx.page, ctx.itemid);\n> ctx.natts = HeapTupleHeaderGetNatts(ctx.tuphdr);\n>\n> with absolutely no thought for the possibility that lp_off is out of\n> range or not maxaligned. The checks for a sane lp_len seem to have\n> gone missing as well.\n\nThat is surprising. verify_nbtree.c has PageGetItemIdCareful() for\nthis exact reason.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Oct 2020 11:56:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Oct 23, 2020, at 11:51 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Hmm, we're not out of the woods yet: thorntail is even less happy\n> than before.\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2020-10-23%2018%3A08%3A11\n> \n> I do not have 64-bit big-endian hardware to play with unfortunately.\n> But what I suspect is happening here is less about endianness and\n> more about alignment pickiness; or maybe we were unlucky enough to\n> index off the end of the shmem segment. I see that verify_heapam\n> does this for non-redirect tuples:\n> \n> /* Set up context information about this next tuple */\n> ctx.lp_len = ItemIdGetLength(ctx.itemid);\n> ctx.tuphdr = (HeapTupleHeader) PageGetItem(ctx.page, ctx.itemid);\n> ctx.natts = HeapTupleHeaderGetNatts(ctx.tuphdr);\n> \n> with absolutely no thought for the possibility that lp_off is out of\n> range or not maxaligned. 
The checks for a sane lp_len seem to have\n> gone missing as well.\n\nYou certainly appear to be right about that. I've added the extra checks, and extended the regression test to include them. Patch attached.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 23 Oct 2020 13:52:18 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Oct 23, 2020, at 11:51 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I do not have 64-bit big-endian hardware to play with unfortunately.\n>> But what I suspect is happening here is less about endianness and\n>> more about alignment pickiness; or maybe we were unlucky enough to\n>> index off the end of the shmem segment.\n\n> You certainly appear to be right about that. I've added the extra checks, and extended the regression test to include them. Patch attached.\n\nMeanwhile, I've replicated the SIGBUS problem on gaur's host, so\nthat's definitely what's happening.\n\n(Although PPC is also alignment-picky on the hardware level, I believe\nthat both macOS and Linux try to mask that by having kernel trap handlers\nexecute unaligned accesses, leaving only a nasty performance loss behind.\nThat's why I failed to see this effect when checking your previous patch\non an old Apple box. We likely won't see it in the buildfarm either,\nunless maybe on Noah's AIX menagerie.)\n\nI'll check this patch on gaur and push it if it's clean.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 17:20:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> You certainly appear to be right about that. 
I've added the extra checks, and extended the regression test to include them. Patch attached.\n\nPushed with some more fooling about. The \"bit reversal\" idea is not\na sufficient guide to picking values that will hit all the code checks.\nFor instance, I was seeing out-of-range warnings on one endianness and\nnot the other because on the other one the maxalign check rejected the\nvalue first. I ended up manually tweaking the corruption patterns\nuntil they hit all the tests on both endiannesses. Pretty much the\nopposite of black-box testing, but it's not like our notions of line\npointer layout are going to change anytime soon.\n\nI made some logic rearrangements in the C code, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 19:12:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 23, 2020, at 4:12 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> You certainly appear to be right about that. I've added the extra checks, and extended the regression test to include them. Patch attached.\n> \n> Pushed with some more fooling about. The \"bit reversal\" idea is not\n> a sufficient guide to picking values that will hit all the code checks.\n> For instance, I was seeing out-of-range warnings on one endianness and\n> not the other because on the other one the maxalign check rejected the\n> value first. I ended up manually tweaking the corruption patterns\n> until they hit all the tests on both endiannesses. Pretty much the\n> opposite of black-box testing, but it's not like our notions of line\n> pointer layout are going to change anytime soon.\n> \n> I made some logic rearrangements in the C code, too.\n\nThanks Tom! And Peter, your comment earlier saved me some time. Thanks to you, also!
\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 23 Oct 2020 16:22:22 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Fri, Oct 23, 2020 at 2:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Seems to work, so I pushed it (after some compulsive fooling\n> about with whitespace and perltidy-ing). It appears to me that\n> the code coverage for verify_heapam.c is not very good though,\n> only circa 50%. Do we care to expend more effort on that?\n\nThere are two competing goods here. On the one hand, more test\ncoverage is better than less. On the other hand, finicky tests that\nhave platform-dependent results or fail for strange reasons not\nindicative of actual problems with the code are often judged not to be\nworth the trouble. An early version of this patch set had a very\nextensive chunk of Perl code in it that actually understood the page\nlayout and, if we adopt something like that, it would probably be\neasier to test a whole bunch of scenarios. The downside is that it was\na lot of code that basically duplicated a lot of backend logic in\nPerl, and I was (and am) afraid that people will complain about the\namount of code and/or the difficulty of maintaining it. On the other\nhand, having all that code might allow better testing not only of this\nparticular patch but also other scenarios involving corrupted pages,\nso maybe it's wrong to view all that code as a burden that we have to\ncarry specifically to test this; or, alternatively, maybe it's worth\ncarrying even if we only use it for this. On the third hand, as Mark\npoints out, if we get 0002 committed, that will help somewhat with\ntest coverage even if we do nothing else.\n\nThanks for committing (and adjusting) the patches for the existing\nbuildfarm failures. 
If I understand the buildfarm results correctly,\nhornet is still unhappy even after\n321633e17b07968e68ca5341429e2c8bbf15c331?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 26 Oct 2020 09:37:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 26, 2020, at 6:37 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Oct 23, 2020 at 2:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Seems to work, so I pushed it (after some compulsive fooling\n>> about with whitespace and perltidy-ing). It appears to me that\n>> the code coverage for verify_heapam.c is not very good though,\n>> only circa 50%. Do we care to expend more effort on that?\n> \n> There are two competing goods here. On the one hand, more test\n> coverage is better than less. On the other hand, finicky tests that\n> have platform-dependent results or fail for strange reasons not\n> indicative of actual problems with the code are often judged not to be\n> worth the trouble. An early version of this patch set had a very\n> extensive chunk of Perl code in it that actually understood the page\n> layout and, if we adopt something like that, it would probably be\n> easier to test a whole bunch of scenarios. The downside is that it was\n> a lot of code that basically duplicated a lot of backend logic in\n> Perl, and I was (and am) afraid that people will complain about the\n> amount of code and/or the difficulty of maintaining it. On the other\n> hand, having all that code might allow better testing not only of this\n> particular patch but also other scenarios involving corrupted pages,\n> so maybe it's wrong to view all that code as a burden that we have to\n> carry specifically to test this; or, alternatively, maybe it's worth\n> carrying even if we only use it for this. 
On the third hand, as Mark\n> points out, if we get 0002 committed, that will help somewhat with\n> test coverage even if we do nothing else.\n\nMuch of the test in 0002 could be ported to work without committing the rest of 0002, if the pg_amcheck command line utility is not wanted.\n\n> \n> Thanks for committing (and adjusting) the patches for the existing\n> buildfarm failures. If I understand the buildfarm results correctly,\n> hornet is still unhappy even after\n> 321633e17b07968e68ca5341429e2c8bbf15c331?\n\nThat appears to be a failed test for pg_surgery rather than for amcheck. Or am I reading the log wrong?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 26 Oct 2020 06:56:24 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Oct 26, 2020 at 9:56 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Much of the test in 0002 could be ported to work without committing the rest of 0002, if the pg_amcheck command line utility is not wanted.\n\nHow much consensus do we think we have around 0002 at this point? I\nthink I remember a vote in favor and no votes against, but I haven't\nbeen paying a whole lot of attention.\n\n> > Thanks for committing (and adjusting) the patches for the existing\n> > buildfarm failures. If I understand the buildfarm results correctly,\n> > hornet is still unhappy even after\n> > 321633e17b07968e68ca5341429e2c8bbf15c331?\n>\n> That appears to be a failed test for pg_surgery rather than for amcheck. Or am I reading the log wrong?\n\nOh, yeah, you're right. I don't know why it just failed now, though:\nthere are a bunch of successful runs preceding it.
But I guess it's\nunrelated to this thread.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 26 Oct 2020 10:00:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 26, 2020, at 7:00 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Oct 26, 2020 at 9:56 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Much of the test in 0002 could be ported to work without committing the rest of 0002, if the pg_amcheck command line utility is not wanted.\n> \n> How much consensus do we think we have around 0002 at this point? I\n> think I remember a vote in favor and no votes against, but I haven't\n> been paying a whole lot of attention.\n\nMy sense over the course of the thread is that people were very much in favor of having heap checking functionality, but quite vague on whether they wanted the command line interface. I think the interface is useful, but I'd rather hear from others on this list whether it is useful enough to justify maintaining it. As the author of it, I'm biased. Hopefully others with a more objective view of the matter will read this and vote?\n\nI don't recall patches 0003 through 0005 getting any votes. 0003 and 0004, which create and use a non-throwing interface to clog, were written in response to Andrey's request, so I'm guessing that's kind of a vote in favor. 0005 was factored out of 0001 in response to a lack of agreement about whether verify_heapam should have acl checks.
You seemed in favor, and Peter against, but I don't think we heard other opinions.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 26 Oct 2020 07:08:12 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Oct 26, 2020 at 9:56 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>>> hornet is still unhappy even after\n>>> 321633e17b07968e68ca5341429e2c8bbf15c331?\n\n>> That appears to be a failed test for pg_surgery rather than for amcheck. Or am I reading the log wrong?\n\n> Oh, yeah, you're right. I don't know why it just failed now, though:\n> there are a bunch of successful runs preceding it. But I guess it's\n> unrelated to this thread.\n\npg_surgery's been unstable since it went in. I believe Andres is\nworking on a fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Oct 2020 10:13:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Hi,\n\nOn October 26, 2020 7:13:15 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Robert Haas <robertmhaas@gmail.com> writes:\n>> On Mon, Oct 26, 2020 at 9:56 AM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>>> hornet is still unhappy even after\n>>>> 321633e17b07968e68ca5341429e2c8bbf15c331?\n>\n>>> That appears to be a failed test for pg_surgery rather than for\n>amcheck. Or am I reading the log wrong?\n>\n>> Oh, yeah, you're right. I don't know why it just failed now, though:\n>> there are a bunch of successful runs preceding it. But I guess it's\n>> unrelated to this thread.\n>\n>pg_surgery's been unstable since it went in. 
I believe Andres is\n>working on a fix.\n\nI posted one a while ago - was planning to push a cleaned up version soon if nobody comments in the near future.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Mon, 26 Oct 2020 08:27:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Oct 26, 2020, at 7:08 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Oct 26, 2020, at 7:00 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> \n>> On Mon, Oct 26, 2020 at 9:56 AM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> Much of the test in 0002 could be ported to work without committing the rest of 0002, if the pg_amcheck command line utility is not wanted.\n>> \n>> How much consensus do we think we have around 0002 at this point? I\n>> think I remember a vote in favor and no votes against, but I haven't\n>> been paying a whole lot of attention.\n> \n> My sense over the course of the thread is that people were very much in favor of having heap checking functionality, but quite vague on whether they wanted the command line interface. I think the interface is useful, but I'd rather hear from others on this list whether it is useful enough to justify maintaining it. As the author of it, I'm biased. Hopefully others with a more objective view of the matter will read this and vote?\n> \n> I don't recall patches 0003 through 0005 getting any votes. 0003 and 0004, which create and use a non-throwing interface to clog, were written in response to Andrey's request, so I'm guessing that's kind of a vote in favor. 0005 was factored out of 0001 in response to a lack of agreement about whether verify_heapam should have acl checks.
You seemed in favor, and Peter against, but I don't think we heard other opinions.\n\nThe v20 patches 0002, 0003, and 0005 still apply cleanly, but 0004 required a rebase. (0001 was already committed last week.)\n\nHere is a rebased set of 4 patches, numbered 0002..0005 to be consistent with the previous naming. There are no substantial changes.\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 26 Oct 2020 09:11:59 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Oct 21, 2020 at 11:45 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Done that way in the attached, which also include Robert's changes from v19 he posted earlier today.\n\n> Committed. Let's see what the buildfarm thinks.\n\nAnother thing that the buildfarm is pointing out is\n\n[WARN] FOUserAgent - The contents of fo:block line 2 exceed the available area in the inline-progression direction by more than 50 points. (See position 148863:380)\n\nThis is coming from the sample output for verify_heapam(), which is too\nwide to fit in even a normal-size browser window, let alone A4 PDF.\n\nWhile we could perhaps hack it up to allow more line breaks, or see\nif \\x formatting helps, my own suggestion would be to just nuke the\nsample output altogether. 
It doesn't look like it is any sort of\nrepresentative real output, and it is not useful enough to be worth\nspending time to patch up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Oct 2020 12:12:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Oct 26, 2020, at 9:12 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Wed, Oct 21, 2020 at 11:45 PM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> Done that way in the attached, which also include Robert's changes from v19 he posted earlier today.\n> \n>> Committed. Let's see what the buildfarm thinks.\n> \n> Another thing that the buildfarm is pointing out is\n> \n> [WARN] FOUserAgent - The contents of fo:block line 2 exceed the available area in the inline-progression direction by more than 50 points. (See position 148863:380)\n> \n> This is coming from the sample output for verify_heapam(), which is too\n> wide to fit in even a normal-size browser window, let alone A4 PDF.\n> \n> While we could perhaps hack it up to allow more line breaks, or see\n> if \\x formatting helps, my own suggestion would be to just nuke the\n> sample output altogether.\n\nOk.\n\n> It doesn't look like it is any sort of\n> representative real output,\n\nIt is not. It came from artificially created corruption in the regression tests. 
I may even have manually edited that, though I don't recall.\n\n> and it is not useful enough to be worth\n> spending time to patch up.\n\nOk.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 26 Oct 2020 09:28:45 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Oct 26, 2020 at 12:12 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The v20 patches 0002, 0003, and 0005 still apply cleanly, but 0004 required a rebase. (0001 was already committed last week.)\n>\n> Here is a rebased set of 4 patches, numbered 0002..0005 to be consistent with the previous naming. There are no substantial changes.\n\nHere's a review of 0002. I basically like the direction this is going\nbut I guess nobody will be surprised that there are some things in\nhere that I think could be improved.\n\n+const char *usage_text[] = {\n+ \"pg_amcheck is the PostgreSQL command line frontend for the\namcheck database corruption checker.\",\n+ \"\",\n\nThis looks like a novel approach to the problem of printing out the\nusage() information, and I think that it's inferior to the technique\nused elsewhere of just having a bunch of printf() statements, because\nunless I misunderstand, it doesn't permit localization.\n\n+ \" -b, --startblock begin checking table(s) at the\ngiven starting block number\",\n+ \" -e, --endblock check table(s) only up to the\ngiven ending block number\",\n+ \" -B, --toast-startblock begin checking toast table(s)\nat the given starting block\",\n+ \" -E, --toast-endblock check toast table(s) only up\nto the given ending block\",\n\nI am not very convinced by this. What's the use case? If you're just\nchecking a single table, you might want to specify a start and end\nblock, but then you don't need separate options for the TOAST and\nnon-TOAST cases, do you? 
If I want to check pg_statistic, I'll say\npg_amcheck -t pg_catalog.pg_statistic. If I want to check the TOAST\ntable for pg_statistic, I'll say pg_amcheck -t pg_toast.pg_toast_2619.\nIn either case, if I want to check just the first three blocks, I can\nadd -b 0 -e 2.\n\n+ \" -f, --skip-all-frozen do NOT check blocks marked as\nall frozen\",\n+ \" -v, --skip-all-visible do NOT check blocks marked as\nall visible\",\n\nI think this is using up too many one character option names for too\nlittle benefit on things that are too closely related. How about, -s,\n--skip=all-frozen|all-visible|none? And then -v could mean verbose,\nwhich could trigger things like printing all the queries sent to the\nserver, setting PQERRORS_VERBOSE, etc.\n\n+ \" -x, --check-indexes check btree indexes associated\nwith tables being checked\",\n+ \" -X, --skip-indexes do NOT check any btree indexes\",\n+ \" -i, --index=PATTERN check the specified index(es) only\",\n+ \" -I, --exclude-index=PATTERN do NOT check the specified index(es)\",\n\nThis is a lotta controls for something that has gotta have some\ndefault. Either the default is everything, in which case I don't see\nwhy I need -x, or it's nothing, in which case I don't see why I need\n-X.\n\n+ \" -c, --check-corrupt check indexes even if their\nassociated table is corrupt\",\n+ \" -C, --skip-corrupt do NOT check indexes if their\nassociated table is corrupt\",\n\nDitto. (I think the default should be to check corrupt, and there can be an\noption to skip it.)\n\n+ \" -a, --heapallindexed check index tuples against the\ntable tuples\",\n+ \" -A, --no-heapallindexed do NOT check index tuples\nagainst the table tuples\",\n\nDitto. (Not sure what the default should be, though.)\n\n+ \" -r, --rootdescend search from the root page for\neach index tuple\",\n+ \" -R, --no-rootdescend do NOT search from the root\npage for each index tuple\",\n\nDitto.
(Again, not sure about the default.)\n\nI'm also not sure if these descriptions are clear enough, but it may\nalso be hard to do a good job in a brief space. Still, comparing this\nto the documentation of heapallindexed makes me rather nervous. This\nis only trying to verify that the index contains all the tuples in the\nheap, not that the values in the heap and index tuples actually match.\n\n+typedef struct\n+AmCheckSettings\n+{\n+ char *dbname;\n+ char *host;\n+ char *port;\n+ char *username;\n+} ConnectOptions;\n\nMaking the struct name different from the type name seems not good,\nand the struct name also shouldn't be on a separate line.\n\n+typedef enum trivalue\n+{\n+ TRI_DEFAULT,\n+ TRI_NO,\n+ TRI_YES\n+} trivalue;\n\nUgh. It's not this patch's fault, but we really oughta move this to\nsomeplace more centralized.\n\n+typedef struct\n...\n+} AmCheckSettings;\n\nI'm not sure I consider all of these things settings, \"db\" in\nparticular. But maybe that's nitpicking.\n\n+static void expand_schema_name_patterns(const SimpleStringList *patterns,\n+\n const SimpleOidList *exclude_oids,\n+\n SimpleOidList *oids\n+\n bool strict_names);\n\nThis is copied from pg_dump, along with I think at least one other\nfunction from nearby. Unlike the trivalue case above, this would be\nthe first duplication of this logic. Can we push this stuff into\npgcommon, perhaps?\n\n+ /*\n+ * Default behaviors for user settable options. Note that these default\n+ * to doing all the safe checks and none of the unsafe ones,\non the theory\n+ * that if a user says \"pg_amcheck mydb\" without specifying\nany additional\n+ * options, we should check everything we know how to check without\n+ * risking any backend aborts.\n+ */\n\nThis to me seems too conservative. The result is that by default we\ncheck only tables, not indexes. I don't think that's going to be what\nusers want. 
I don't know whether they want the heapallindexed or\nrootdescend behaviors for index checks, but I think they want their\nindexes checked. Happy to hear opinions from actual users on what they\nwant; this is just me guessing that you've guessed wrong. :-)\n\n+ if (settings.db == NULL)\n+ {\n+ pg_log_error(\"no connection to server after\ninitial attempt\");\n+ exit(EXIT_BADCONN);\n+ }\n\nI think this is documented as meaning out of memory, and reported that\nway elsewhere. Anyway I am going to keep complaining until there are\nno cases where we tell the user it broke without telling them what\nbroke. Which means this bit is a problem too:\n\n+ if (!settings.db)\n+ {\n+ pg_log_error(\"no connection to server\");\n+ exit(EXIT_BADCONN);\n+ }\n\nSomething went wrong, good luck figuring out what it was!\n\n+ /*\n+ * All information about corrupt indexes are returned via\nereport, not as\n+ * tuples. We want all the details to report if corruption exists.\n+ */\n+ PQsetErrorVerbosity(settings.db, PQERRORS_VERBOSE);\n\nReally? Why? If I need the source code file name, function name, and\nline number to figure out what went wrong, that is not a great sign\nfor the quality of the error reports it produces.\n\n+ /*\n+ * The btree checking logic which optionally\nchecks the contents\n+ * of an index against the corresponding table\nhas not yet been\n+ * sufficiently hardened against corrupt\ntables. In particular,\n+ * when called with heapallindexed true, it\nsegfaults if the file\n+ * backing the table relation has been\nerroneously unlinked. In\n+ * any event, it seems unwise to reconcile an\nindex against its\n+ * table when we already know the table is corrupt.\n+ */\n+ old_heapallindexed = settings.heapallindexed;\n+ if (corruptions)\n+ settings.heapallindexed = false;\n\nThis seems pretty lame to me. 
Even if the btree checker can't tolerate\ncorruption to the extent that the heap checker does, seg faulting\nbecause of a missing file seems like a bug that we should just fix\n(and probably back-patch). I'm not very convinced by the decision to\noverride the user's decision about heapallindexed either. Maybe I lack\nimagination, but that seems pretty arbitrary. Suppose there's a giant\nindex which is missing entries for 5 million heap tuples and also\nthere's 1 entry in the table which has an xmin that is less than the\npg_class.relfrozenxid value by 1. You are proposing that because I have\nthe latter problem I don't want you to check for the former one. But\nI, John Q. Smartuser, do not want you to second-guess what I told you\non the command line that I wanted. :-)\n\nI think in general you're worrying too much about the possibility of\nthis tool causing backend crashes. I think it's good that you wrote\nthe heapcheck code in a way that's hardened against that, and I think\nwe should try to harden other things as time permits. But I don't\nthink that the remote possibility of a crash due to the lack of such\nhardening should dictate the design behavior of this tool. If the\ncrash possibilities are not remote, then I think the solution is to\nfix them, rather than cutting out important checks.\n\nIt doesn't seem like great design to me that get_table_check_list()\ngets just the OID of the table itself, and then later if we decide to\ncheck the TOAST table we've got to run a separate query for each table\nwe want to check to fetch the TOAST OID, when we could've just fetched\nboth in get_table_check_list() by including two columns in the query\nrather than one and it would've been basically free. Imagine if some\nuser wrote a query that fetched the primary key value for all their\nrows and then had their application run a separate query to fetch the\nentire contents of each of those rows, said contents consisting of one\nmore integer.
And then suppose they complained about performance. We'd\ntell them they were doing it wrong, and so here.\n\n+ if (settings.db == NULL)\n+ fatal(\"no connection on entry to check_table\");\n\nUninformative. Is this basically an Assert? If so maybe just make it\none. If not maybe fail somewhere else with a better message?\n\n+ if (startblock == NULL)\n+ startblock = \"NULL\";\n+ if (endblock == NULL)\n+ endblock = \"NULL\";\n\nIt seems like it would be more elegant to initialize\nsettings.startblock and settings.endblock to \"NULL.\" However, there's\nalso a related problem, which is that the startblock and endblock\nvalues can be anything, and are interpolated with quoting. I don't\nthink that it's good to ship a tool with SQL injection hazards built\ninto it. I think that you should (a) check that these values are\nintegers during argument parsing and error out if they are not and\nthen (b) use either a prepared query or PQescapeLiteral() anyway.\n\n+ stop = (on_error_stop) ? \"true\" : \"false\";\n+ toast = (check_toast) ? \"true\" : \"false\";\n\nThe parens aren't really needed here.\n\n+\nprintf(\"(relname=%s,blkno=%s,offnum=%s,attnum=%s)\\n%s\\n\",\n+ PQgetvalue(res, i, 0), /* relname */\n+ PQgetvalue(res, i, 1), /* blkno */\n+ PQgetvalue(res, i, 2), /* offnum */\n+ PQgetvalue(res, i, 3), /* attnum */\n+ PQgetvalue(res, i, 4)); /* msg */\n\nI am not quite sure how to format the output, but this looks like\nsomething designed by an engineer who knows too much about the topic.\nI suspect users won't find the use of things like \"relname\" and\n\"blkno\" too easy to understand. At least I think we should say\n\"relation, block, offset, attribute\" instead of \"relname, blkno,\noffnum, attnum\". I would probably drop the parenthesis and add spaces,\nso that you end up with something like:\n\nrelation \"%s\", block \"%s\", offset \"%s\", attribute \"%s\":\n\nI would also define variant strings so that we entirely omit things\nthat are NULL. e.g. 
have four strings:\n\nrelation \"%s\":\nrelation \"%s\", block \"%s\":\nrelation \"%s\", block \"%s\", offset \"%s\":\nrelation \"%s\", block \"%s\", offset \"%s\", attribute \"%s\":\n\nWould it make it more readable if we indented the continuation line by\nfour spaces or something?\n\n+ corruption_cnt++;\n+ printf(\"%s\\n\", error);\n+ pfree(error);\n\nSeems like we could still print the relation name in this case, and\nthat it would be a good idea to do so, in case it's not in the message\nthat the server returns.\n\nThe general logic in this part of the code looks a bit strange to me.\nIf ExecuteSqlQuery() returns PGRES_TUPLES_OK, we print out the details\nfor each returned row. Otherwise, if error = true, we print the error.\nBut, what if neither of those things are the case? Then we'd just\nprint nothing despite having gotten back some weird response from the\nserver. That actually can't happen, because ExecuteSqlQuery() always\nsets *error when the return code is not PGRES_TUPLES_OK, but you\nwouldn't know that from looking at this code.\n\nHonestly, as written, ExecSqlQuery() seems like kind of a waste. The\nOrDie() version is useful as a notational shorthand, but this version\nseems to add more confusion than clarity. It has only three callers:\nthe ones in check_table() and check_indexes() have the problem\ndescribed above, and the one in get_toast_oid() could just as well be\nusing the OrDie() version. And also we should probably get rid of it\nentirely by fetching the toast OIDs the first time around, as\nmentioned above.\n\ncheck_indexes() lacks a function comment. It seems to have more or\nless the same problem as get_toast_oid() -- an extra query per table\nto get the list of indexes. I guess it has a better excuse: there\ncould be lots of indexes per table, and we're fetching multiple\ncolumns of data for each one, whereas in the TOAST case we are issuing\nan extra query per table to fetch a single integer.
But, couldn't we\nfetch information about all the indexes we want to check in one go,\nrather than fetching them separately for each table being checked? I'm\nnot sure if that would create too much other complexity, but it seems\nlike it would be quicker.\n\n+ if (settings.db == NULL)\n+ fatal(\"no connection on entry to check_index\");\n+ if (idxname == NULL)\n+ fatal(\"no index name on entry to check_index\");\n+ if (tblname == NULL)\n+ fatal(\"no table name on entry to check_index\");\n\nAgain, probably these should be asserts, or if they're not, the error\nshould be reported better and maybe elsewhere.\n\nSimilarly in some other places, like expand_schema_name_patterns().\n\n+ * The loop below runs multiple SELECTs might sometimes result in\n+ * duplicate entries in the Oid list, but we don't care.\n\nThis is missing a which, like the place you copied it from, but the\nversion in pg_dumpall.c is better.\n\nexpand_table_name_patterns() should be reformatted to not gratuitously\nexceed 80 columns. Ditto for expand_index_name_patterns().\n\nI sort of expected that this patch might use threads to allow parallel\nchecking - seems like it would be a useful feature.\n\nI originally intended to review the docs and regression tests in the\nsame email as the patch itself, but this email has gotten rather long\nand taken rather longer to get together than I had hoped, so I'm going\nto stop here for now and come back to that stuff.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Nov 2020 12:06:36 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Nov 19, 2020 at 9:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'm also not sure if these descriptions are clear enough, but it may\n> also be hard to do a good job in a brief space. 
Still, comparing this\n> to the documentation of heapallindexed makes me rather nervous. This\n> is only trying to verify that the index contains all the tuples in the\n> heap, not that the values in the heap and index tuples actually match.\n\nThat's a good point. As things stand, heapallindexed verification does\nnot notice when there are extra index tuples in the index that are in\nsome way inconsistent with the heap. Hopefully this isn't too much of\na problem in practice because the presence of extra spurious tuples\ngets detected by the index structure verification process. But in\ngeneral that might not happen.\n\nIdeally heapallindexed verification would verify 1:1 correspondence. It\ndoesn't do that right now, but it could.\n\nThis could work by having two bloom filters -- one for the heap,\nanother for the index. The implementation would look for the absence\nof index tuples that should be in the index initially, just like\ntoday. But at the end it would modify the index bloom filter by &= it\nwith the complement of the heap bloom filter. If any bits are left set\nin the index bloom filter, we go back through the index once more and\nlocate index tuples that have at least some matching bits in the index\nbloom filter (we cannot expect all of the bits from each of the hash\nfunctions used by the bloom filter to still be matches).\n\nFrom here we can do some kind of lookup for maybe-not-matching index\ntuples that we locate. Make sure that they point to an LP_DEAD line\nitem in the heap or something. Make sure that they have the same\nvalues as the heap tuple if they're still retrievable (i.e. if we\nhaven't pruned the heap tuple away already).\n\n> This to me seems too conservative. The result is that by default we\n> check only tables, not indexes. I don't think that's going to be what\n> users want. I don't know whether they want the heapallindexed or\n> rootdescend behaviors for index checks, but I think they want their\n> indexes checked.
Happy to hear opinions from actual users on what they\n> want; this is just me guessing that you've guessed wrong. :-)\n\nMy thoughts on these two options:\n\n* I don't think that users will ever want rootdescend verification.\n\nThat option exists now because I wanted to have something that relied\non the uniqueness property of B-Tree indexes following the Postgres 12\nwork. I didn't add retail index tuple deletion, so it seemed like a\ngood idea to have something that makes the same assumptions that it\nwould have to make. To validate the design.\n\nAnother factor is that Alexander Korotkov made the basic\nbt_index_parent_check() tests a lot better for Postgres 13. This\nundermined the practical argument for using rootdescend verification.\n\nFinally, note that bt_index_parent_check() was always supposed to be\nsomething that was to be used only when you already knew that you had\nbig problems, and wanted absolutely thorough verification without\nregard for the costs. This isn't the common case at all. It would be\nreasonable to not expose anything from bt_index_parent_check() at all,\nor to give it much less prominence. Not really sure of what the right\nbalance is here myself, so I'm not insisting on anything. Just telling\nyou what I know about it.\n\n* heapallindexed is kind of expensive, but valuable. But the extra\ncheck is probably less likely to help on the second or subsequent\nindex on a table.\n\nIt might be worth considering an option that only uses it with only\none index: Preferably the primary key index, failing that some unique\nindex, and failing that some other index.\n\n> This seems pretty lame to me. Even if the btree checker can't tolerate\n> corruption to the extent that the heap checker does, seg faulting\n> because of a missing file seems like a bug that we should just fix\n> (and probably back-patch). 
I'm not very convinced by the decision to\n> override the user's decision about heapallindexed either.\n\nI strongly agree.\n\n> Maybe I lack\n> imagination, but that seems pretty arbitrary. Suppose there's a giant\n> index which is missing entries for 5 million heap tuples and also\n> there's 1 entry in the table which has an xmin that is less than the\n> pg_class.relfrozenxid value by 1. You are proposing that because I have\n> the latter problem I don't want you to check for the former one. But\n> I, John Q. Smartuser, do not want you to second-guess what I told you\n> on the command line that I wanted. :-)\n\nEven if your user is just average, they still have one major advantage\nover the architects of pg_amcheck: actual knowledge of the problem in\nfront of them.\n\n> I think in general you're worrying too much about the possibility of\n> this tool causing backend crashes. I think it's good that you wrote\n> the heapcheck code in a way that's hardened against that, and I think\n> we should try to harden other things as time permits. But I don't\n> think that the remote possibility of a crash due to the lack of such\n> hardening should dictate the design behavior of this tool. If the\n> crash possibilities are not remote, then I think the solution is to\n> fix them, rather than cutting out important checks.\n\nI couldn't agree more.\n\nI think that you need to have a kind of epistemic modesty with this\nstuff. Okay, we guarantee that the backend won't crash when certain\namcheck functions are run, based on these caveats. But don't we always\nguarantee something like that? And are the specific caveats actually\nthat different in each case, when you get right down to it? A\nguarantee does not exist in a vacuum. It always has implicit\nlimitations. For example, any guarantee implicitly comes with the\ncaveat \"unless I, the guarantor, am wrong\".
Normally this doesn't\nreally matter because normally we're not concerned about extreme\nevents that will probably never happen even once. But amcheck is very\nmuch not like that. The chances of the guarantor being the weakest\nlink are actually rather high. Everyone is better off with a design\nthat accepts this view of things.\n\nI'm also suspicious of guarantees like this for less philosophical\nreasons. It seems to me like it solves our problem rather than the\nuser's problem. Having data that is so badly corrupt that it's\ndifficult to avoid segfaults when we perform some kind of standard\ntransformations on it is an appalling state of affairs for the user.\nThe segfault itself is very much not the point at all. We should focus\non making the tool as thorough and low overhead as possible. If we\nhave to make the tool significantly more complicated to avoid\nextremely unlikely segfaults then we're actually doing the user a\ndisservice, because we're increasing the chances that we the\nguarantors will be the weakest link (which was already high enough).\nThis smacks of hubris.\n\nI also agree that hardening is a worthwhile exercise here, of course.\nWe should be holding amcheck to a higher standard when it comes to not\nsegfaulting with corrupt data.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 19 Nov 2020 11:47:53 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Nov 19, 2020 at 2:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Ideally heapallindex verification would verify 1:1 correspondence. It\n> doesn't do that right now, but it could.\n\nWell, that might be a cool new mode, but it doesn't necessarily have\nto supplant the thing we have now. 
The problem immediately before us\nis just making sure that the user can understand what we will and\nwon't be checking.\n\n> My thoughts on these two options:\n>\n> * I don't think that users will ever want rootdescend verification.\n\nThat seems too absolute. I think it's fine to say, we don't think that\nusers will want this, so let's not do it by default. But if it's so\nuseless as to not be worth a command-line option, then it was a\nmistake to put it into contrib at all. Let's expose all the things we\nhave, and try to set the defaults according to what we expect to be\nmost useful.\n\n> * heapallindexed is kind of expensive, but valuable. But the extra\n> check is probably less likely to help on the second or subsequent\n> index on a table.\n>\n> It might be worth considering an option that only uses it with only\n> one index: Preferably the primary key index, failing that some unique\n> index, and failing that some other index.\n\nThis seems a bit too clever for me. I would prefer a simpler schema,\nwhere we choose the default we think most people will want and use it\nfor everything -- and allow the user to override.\n\n> Even if your user is just average, they still have one major advantage\n> over the architects of pg_amcheck: actual knowledge of the problem in\n> front of them.\n\nQuite so.\n\n> I think that you need to have a kind of epistemic modesty with this\n> stuff. Okay, we guarantee that the backend won't crash when certain\n> amcheck functions are run, based on these caveats. But don't we always\n> guarantee something like that? And are the specific caveats actually\n> that different in each case, when you get right down to it? A\n> guarantee does not exist in a vacuum. It always has implicit\n> limitations. For example, any guarantee implicitly comes with the\n> caveat \"unless I, the guarantor, am wrong\".\n\nYep.\n\n> I'm also suspicious of guarantees like this for less philosophical\n> reasons. 
It seems to me like it solves our problem rather than the\n> user's problem. Having data that is so badly corrupt that it's\n> difficult to avoid segfaults when we perform some kind of standard\n> transformations on it is an appalling state of affairs for the user.\n> The segfault itself is very much not the point at all.\n\nI mostly agree with everything you say here, but I think we need to be\ncareful not to accept the position that seg faults are no big deal.\nConsider the following users, all of whom start with a database that\nthey believe to be non-corrupt:\n\nAlice runs pg_amcheck. It says that nothing is wrong, and that happens\nto be true.\nBob runs pg_amcheck. It says that there are problems, and there are.\nCarol runs pg_amcheck. It says that nothing is wrong, but in fact\nsomething is wrong.\nDan runs pg_amcheck. It says that there are problems, but in fact\nthere are none.\nErin runs pg_amcheck. The server crashes.\n\nAlice and Bob are clearly in the best shape here, but Carol and Dan\narguably haven't been harmed very much. Sure, Carol enjoys a false\nsense of security, but since she otherwise believed things were OK,\nthe impact of whatever problems exist is evidently not that bad. Dan\nis worrying over nothing, but the damage is only to his psyche, not\nhis database; we can hope he'll eventually sort out what has happened\nwithout grave consequences. Erin, on the other hand, is very possibly\nin a lot of trouble with her boss and her coworkers. She had what\nseemed to be a healthy database, and from their perspective, she shot\nit in the head without any real cause. It will be faint consolation to\nher and her coworkers that the database was corrupt all along: until\nshe ran the %$! tool, they did not have a problem that affected the\nability of their business to generate revenue. 
Now they had an outage,\nand that does.\n\nWhile I obviously haven't seen this exact scenario play out for a\ncustomer, because pg_amcheck is not committed, I have seen similar\nscenarios over and over. It's REALLY bad when the database goes down.\nThen the application goes down, and then it gets really ugly. As long\nas the database was just returning wrong answers or eating data,\nnobody's boss really cared that much, but now that it's down, they\ncare A LOT. This is of course not to say that nobody cares about the\naccuracy of results from the database: many people care a lot, and\nthat's why it's good to have tools like this. But we should not\nunderestimate the horror caused by a crash. A working database, even\nwith some wrong data in it, is a problem people would probably like to\nget fixed. A down database is an emergency. So I think we should\nactually get a lot more serious about ensuring that corrupt data on\ndisk doesn't cause crashes, even for regular SELECT statements. I\ndon't think we can take an arbitrary performance hit to get there,\nwhich is a challenge, but I do think that even a brief outage is\nnothing to take lightly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Nov 2020 16:33:40 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Nov 19, 2020 at 12:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I originally intended to review the docs and regression tests in the\n> same email as the patch itself, but this email has gotten rather long\n> and taken rather longer to get together than I had hoped, so I'm going\n> to stop here for now and come back to that stuff.\n\nBroad question: Does pg_amcheck belong in src/bin, or in contrib? You\nhave it in the latter place, but I'm not sure if that's the right\nidea. 
I'm not saying it *isn't* the right idea, but I'm just wondering\nwhat other people think.\n\nNow, on to the docs:\n\n+ Currently, this requires execute privileges on <xref linkend=\"amcheck\"/>'s\n+ <function>bt_index_parent_check</function> and\n<function>verify_heapam</function>\n\nThis makes me wonder why there isn't an option to call\nbt_index_check() rather than bt_index_parent_check().\n\nIt doesn't seem to be standard practice to include the entire output\nof the command's --help option in the documentation. That means as\nsoon as anybody changes anything they've got to change the\ndocumentation too. I don't see anything like that in the pages for\npsql or vacuumlo or pg_verifybackup. It also doesn't seem like a\nuseful thing to do. Anyone who is reading the documentation probably\nis in a position to try --help if they wish; they don't need that\nduplicated here.\n\nLooking at those other pages, what seems to be typical for an SGML page is\nto list all the options and give a short paragraph on what each one\ndoes. What you have instead is a narrative description. I recommend\nlooking over the reference page for one of those other command-line\nutilities and adapting it to this case.\n\nBack to the code:\n\n+static const char *\n+get_index_relkind_quals(void)\n+{\n+ if (!index_relkind_quals)\n+ index_relkind_quals = psprintf(\"'%c'\", RELKIND_INDEX);\n+ return index_relkind_quals;\n+}\n\nI feel like there ought to be a way to work this out at compile time\nrather than leaving it to runtime. I think that replacing the function\nbody with \"return CppAsString2(RELKIND_INDEX);\" would have the same\nresult, and once you do that you don't really need the function any\nmore. This is arguably cheating a bit: RELKIND_INDEX is defined as 'i'\nand CppAsString2() turns that into a string containing those three\ncharacters.
That happens to work because what we want to do is quote\nthis for use in SQL, and SQL happens to use single quotes for literals\njust like C does for individual characters. It would be more elegant to\nfigure out a way to interpolate just the character into a C string, but\nI don't know of a macro trick that will do that. I think one could\nwrite char *something = { '\\'', RELKIND_INDEX, '\\'', '\\0' } but that\nwould be pretty darn awkward for the table case where you want an ANY\nwith three relkinds in there.\n\nBut maybe you could get around that by changing the query slightly.\nSuppose instead of relkind = BLAH, you write POSITION(relkind IN '%s')\n> 0. Then you could just have the caller pass either:\n\nchar *index_relkinds = { RELKIND_INDEX, '\\0' };\n-or-\nchar *table_relkinds = { RELKIND_RELATION, RELKIND_MATVIEW,\nRELKIND_TOASTVALUE, '\\0' };\n\nThe patch actually has RELKIND_PARTITIONED_TABLE there rather than\nRELKIND_RELATION, but that seems wrong to me, because partitioned\ntables don't have storage, and toast tables do. And if we're going to\ninclude RELKIND_PARTITIONED_TABLE for some reason, then why not\nRELKIND_PARTITIONED_INDEX for the index case?\n\nOn the tests:\n\nI think 003_check.pl needs to stop and restart the server between\npopulating the tables and corrupting them.
Otherwise, how do we know\nthat the subsequent checks are going to actually see the corruption\nrather than something already cached in memory?\n\nThere are some philosophical questions to consider too, about how\nthese tests are written and what our philosophy ought to be here, but\nI am again going to push that off to a future email.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Nov 2020 16:35:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Nov 19, 2020, at 11:47 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n>> I think in general you're worrying too much about the possibility of\n>> this tool causing backend crashes. I think it's good that you wrote\n>> the heapcheck code in a way that's hardened against that, and I think\n>> we should try to harden other things as time permits. But I don't\n>> think that the remote possibility of a crash due to the lack of such\n>> hardening should dictate the design behavior of this tool. If the\n>> crash possibilities are not remote, then I think the solution is to\n>> fix them, rather than cutting out important checks.\n> \n> I couldn't agree more.\n\nOwing to how much run-time overhead it would entail, much of the backend code has not been, and probably will not be, hardened against corruption. The amcheck code uses backend code for accessing heaps and indexes. Only some of those uses can be preceded with sufficient safety checks to avoid stepping on landmines. 
It makes sense to me to have a \"don't run through minefields\" option, and a \"go ahead, run through minefields\" option for pg_amcheck, given that users in differing situations will have differing business consequences to bringing down the server in question.\n\nAs an example that we've already looked at, checking the status of an xid against clog is a dangerous thing to do. I wrote a patch to make it safer to query clog (0003) and a patch for pg_amcheck to use the safer interface (0004) and it looks unlikely either of those will ever be committed. I doubt other backend hardening is any more likely to get committed. It doesn't follow that if crash possibilities are not remote that we should therefore harden the backend. The performance considerations of the backend are not well aligned with the safety considerations of this tool. The backend code is written with the assumption of non-corrupt data, and this tool with the assumption of corrupt data, or at least a fair probability of corrupt data. I don't see how any one-hardening-fits-all will ever work.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 19 Nov 2020 13:50:33 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Nov 19, 2020 at 1:50 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> It makes sense to me to have a \"don't run through minefields\" option, and a \"go ahead, run through minefields\" option for pg_amcheck, given that users in differing situations will have differing business consequences to bringing down the server in question.\n\nThis kind of framing suggests zero-risk bias to me:\n\nhttps://en.wikipedia.org/wiki/Zero-risk_bias\n\nIt's simply not helpful to think of the risks as \"running through a\nminefield\" versus \"not running through a minefield\". 
I also dislike\nthis framing because in reality nobody runs through a minefield,\nunless maybe it's a battlefield and the alternative is probably even\nworse. Risks are not discrete -- they're continuous. And they're\nsituational.\n\nI accept that there are certain reasonable gradations in the degree to\nwhich a segfault is bad, even in contexts in which pg_amcheck runs\ninto actual serious problems. And as Robert points out, experience\nsuggests that on average people care about availability the most when\npush comes to shove (though I hasten to add that that's not the same\nthing as considering a once-off segfault to be the greater evil here).\nEven still, I firmly believe that it's a mistake to assign *infinite*\nweight to not having a segfault. That is likely to have certain\nunintended consequences that could be even worse than a segfault, such\nas not detecting pernicious corruption over many months because our\ncan't-segfault version of core functionality fails to have the same\nbugs as the actual core functionality (and thus fails to detect a\nproblem in the core functionality).\n\nThe problem with giving infinite weight to any one bad outcome is that\nit makes it impossible to draw reasonable distinctions between it and\nsome other extreme bad outcome. For example, I would really not like\nto get infected with Covid-19. But I also think that it would be much\nworse to get infected with Ebola. It follows that Covid-19 must not be\ninfinitely bad, because if it is then I can't make this useful\ndistinction -- which might actually matter. If somebody hears me say\nthis, and takes it as evidence of my lackadaisical attitude towards\nCovid-19, I can live with that. I care about avoiding criticism as\nmuch as the next person, but I refuse to prioritize it over all other\nthings.\n\n> I doubt other backend hardening is any more likely to get committed.\n\nI suspect you're right about that. 
Because of the risks of causing\nreal harm to users.\n\nThe backend code is obviously *not* written with the assumption that\ndata cannot be corrupt. There are lots of specific ways in which it is\nhardened (e.g., there are many defensive \"can't happen\" elog()\nstatements). I really don't know why you insist on this black and\nwhite framing.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 19 Nov 2020 14:51:42 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Oct 27, 2020 at 5:12 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The v20 patches 0002, 0003, and 0005 still apply cleanly, but 0004 required a rebase. (0001 was already committed last week.)\n>\n> Here is a rebased set of 4 patches, numbered 0002..0005 to be consistent with the previous naming. There are no substantial changes.\n\nHi Mark,\n\nThe command line stuff fails to build on Windows[1]. I think it's\njust missing #include \"getopt_long.h\" (see\ncontrib/vacuumlo/vacuumlo.c).\n\n[1] https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.123328\n\n\n", "msg_date": "Fri, 1 Jan 2021 08:31:44 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Nov 19, 2020, at 9:06 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Oct 26, 2020 at 12:12 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> The v20 patches 0002, 0003, and 0005 still apply cleanly, but 0004 required a rebase. (0001 was already committed last week.)\n>> \n>> Here is a rebased set of 4 patches, numbered 0002..0005 to be consistent with the previous naming. There are no substantial changes.\n> \n> Here's a review of 0002. 
I basically like the direction this is going\n> but I guess nobody will be surprised that there are some things in\n> here that I think could be improved.\n\nThanks for the review!\n\nThe tools pg_dump and pg_amcheck both need to allow the user to specify which schemas, tables, and indexes either to dump or to check. There are command line options in pg_dump for this purpose, and functions for compiling lists of corresponding database objects. In prior versions of the pg_amcheck patch, I did some copy-and-pasting of this logic, and then had to fix up the copied functions a bit, given that pg_dump has its own ecosystem with things like fatal() and exit_nicely() and such.\n\nIn hindsight, it would have been better to factor these functions out into a shared location. I have done that, factoring them into fe_utils, and am attaching a series of patches that accomplishes that refactoring. Here are some brief explanations of what these are for. See also the commit comments in each patch:\n\n\nv3-0001-Moving-exit_nicely-and-fatal-into-fe_utils.patch\n\npg_dump allows on-exit callbacks to be registered, which it expects to get called when exit_nicely() is invoked. It doesn't work to factor functions out of pg_dump without having this infrastructure, as the functions being factored out include facilities for logging and exiting on error. Therefore, moving these functions into fe_utils.\n\n\nv3-0002-Refactoring-ExecuteSqlQuery-and-related-functions.patch\n\npg_dump has functions for running queries, but those functions take a pg_dump specific argument of type Archive rather than PGconn, with the expectation that the Archive's connection will be used. This has to be cleaned up a bit before these functions can be moved out of pg_dump to a shared location. Also, pg_dump has a fixed expectation that when a query fails, specific steps will be taken to print out the error information and exit. That's reasonable behavior, but not all callers will want that. 
Since the ultimate goal of this refactoring is to have higher level functions that translate shell patterns into oid lists, it's reasonable to imagine that not all callers will want to exit if the query fails. In particular, pg_amcheck won't want errors to automatically trigger exit() calls, given that pg_amcheck tries to continue in the face of errors. Therefore, adding a default error handler that does what pg_dump expects, but with an eye towards other callers being able to define handlers that behave differently.\n\n\nv3-0003-Creating-query_utils-frontend-utility.patch\n\nMoving the refactored functions to the shared location in fe_utils. This is kept separate from 0002 for ease of review.\n\n\nv3-0004-Adding-CurrentQueryHandler-logic.patch\n\nExtending the query error handling logic begun in the 0002 patch. It wasn't appropriate in the pg_dump project, but now the logic is in fe_utils.\n\n\nv3-0005-Refactoring-pg_dumpall-functions.patch\n\nRefactoring some remaining functions in the pg_dump project to use the new fe_utils facilities.\n\n\nv3-0006-Refactoring-expand_schema_name_patterns-and-frien.patch\n\nRefactoring functions in pg_dump that expand a list of patterns into a list of matching database objects. Specifically, changing them to not take pg_dump-specific argument types, just as was done in 0002.\n\n\nv3-0007-Moving-pg_dump-functions-to-new-file-option_utils.patch\n\nMoving the functions refactored in 0006 into a new location, fe_utils/option_utils.\n\n\nv3-0008-Normalizing-option_utils-interface.patch\n\nReworking the functions moved in 0007 to have a more general purpose interface. The refactoring in 0006 only went so far as to make the functions moveable out of pg_dump.
This refactoring is intentionally kept separate for ease of review.\n\n\nv3-0009-Adding-contrib-module-pg_amcheck.patch\n\nAdding contrib/pg_amcheck project, about which your review comments below apply.\n\n\n\nNot included in this patch set, but generated during the development of this patch set, I refactored processSQLNamePattern. string_utils mixes the logic for converting a shell-style pattern into a SQL style regex with the logic of performing the sql query to look up matching database objects. That makes it hard to look up multiple patterns in a single query, something that an intermediate version of this patch set was doing. I ultimately stopped doing that, as the code was overly complex, but the refactoring of processSQLNamePattern is not over-complicated and probably has some merit in its own right. Since it is not related to the pg_amcheck code, I expect that I will be posting that separately.\n\nAlso not included in this patch set, but likely to be in the next rev, is a patch that adds more interesting table and index corruption via PostgresNode, creating torn pages and such. That work is complete so far as I know, but I don't have all the regression tests that use it written yet, so I'll hold off posting it for now.\n\nNot yet written but still needed is the parallelization of the checking. 
I'll be working on that for the next patch set.\n\nThere is enough work here in need of review that I'm posting this now, hoping to get feedback on the general direction I'm going with this.\n\n\nTo your review....\n\n> \n> +const char *usage_text[] = {\n> + \"pg_amcheck is the PostgreSQL command line frontend for the\n> amcheck database corruption checker.\",\n> + \"\",\n> \n> This looks like a novel approach to the problem of printing out the\n> usage() information, and I think that it's inferior to the technique\n> used elsewhere of just having a bunch of printf() statements, because\n> unless I misunderstand, it doesn't permit localization.\n\nSince contrib modules are not localized, it seemed not to be a problem, but you've raised the question of whether pg_amcheck might be moved into core. I've changed it as suggested so that such a move would incur less code churn. The advantage to how I had it before was that each line was a bit shorter, making it fit better into the 80 column limit.\n\n> + \" -b, --startblock begin checking table(s) at the\n> given starting block number\",\n> + \" -e, --endblock check table(s) only up to the\n> given ending block number\",\n> + \" -B, --toast-startblock begin checking toast table(s)\n> at the given starting block\",\n> + \" -E, --toast-endblock check toast table(s) only up\n> to the given ending block\",\n> \n> I am not very convinced by this. What's the use case? If you're just\n> checking a single table, you might want to specify a start and end\n> block, but then you don't need separate options for the TOAST and\n> non-TOAST cases, do you? If I want to check pg_statistic, I'll say\n> pg_amcheck -t pg_catalog.pg_statistic. 
If I want to check the TOAST\n> table for pg_statistic, I'll say pg_amcheck -t pg_toast.pg_toast_2619.\n> In either case, if I want to check just the first three blocks, I can\n> add -b 0 -e 2.\n\nRemoved -B, --toast-startblock and -E, --toast-endblock.\n\n> \n> + \" -f, --skip-all-frozen do NOT check blocks marked as\n> all frozen\",\n> + \" -v, --skip-all-visible do NOT check blocks marked as\n> all visible\",\n> \n> I think this is using up too many one character option names for too\n> little benefit on things that are too closely related. How about, -s,\n> --skip=all-frozen|all-visible|none?\n\nI'm already using -s for \"strict-names\", but I implemented your suggestion with -S, --skip.\n\n> And then -v could mean verbose,\n> which could trigger things like printing all the queries sent to the\n> server, setting PQERRORS_VERBOSE, etc.\n\nI added -v, --verbose as you suggest.\n\n> + \" -x, --check-indexes check btree indexes associated\n> with tables being checked\",\n> + \" -X, --skip-indexes do NOT check any btree indexes\",\n> + \" -i, --index=PATTERN check the specified index(es) only\",\n> + \" -I, --exclude-index=PATTERN do NOT check the specified index(es)\",\n> \n> This is a lotta controls for something that has gotta have some\n> default. Either the default is everything, in which case I don't see\n> why I need -x, or it's nothing, in which case I don't see why I need\n> -X.\n\nI removed -x, --check-indexes and instead made that the default.\n\n> \n> + \" -c, --check-corrupt check indexes even if their\n> associated table is corrupt\",\n> + \" -C, --skip-corrupt do NOT check indexes if their\n> associated table is corrupt\",\n> \n> Ditto.
(I think the default should be to check corrupt, and there can be an\n> option to skip it.)\n\nLikewise, I removed -c, --check-corrupt and made that the default.\n\n> + \" -a, --heapallindexed check index tuples against the\n> table tuples\",\n> + \" -A, --no-heapallindexed do NOT check index tuples\n> against the table tuples\",\n> \n> Ditto. (Not sure what the default should be, though.)\n\nI removed -A, --no-heapallindexed and made that the default.\n\n> \n> + \" -r, --rootdescend search from the root page for\n> each index tuple\",\n> + \" -R, --no-rootdescend do NOT search from the root\n> page for each index tuple\",\n> \n> Ditto. (Again, not sure about the default.)\n\nI removed -R, --no-rootdescend and made that the default. Peter argued elsewhere for removing this altogether, but as I recall you argued against that, so for now I'm keeping the --rootdescend option.\n\n> I'm also not sure if these descriptions are clear enough, but it may\n> also be hard to do a good job in a brief space.\n\nYes. Better verbiage welcome.\n\n> Still, comparing this\n> to the documentation of heapallindexed makes me rather nervous. This\n> is only trying to verify that the index contains all the tuples in the\n> heap, not that the values in the heap and index tuples actually match.\n\nThis is complicated. The most reasonable approach from the point of view of somebody running pg_amcheck is to have the scan of the table and the scan of the index cooperate so that work is not duplicated. But from the point of view of amcheck (not pg_amcheck), there is no assumption that the table is being scanned just because the index is being checked.
I'm not sure how best to resolve this, except that I'd rather punt this to a future version rather than require the first version of pg_amcheck to deal with it.\n\n> +typedef struct\n> +AmCheckSettings\n> +{\n> + char *dbname;\n> + char *host;\n> + char *port;\n> + char *username;\n> +} ConnectOptions;\n> \n> Making the struct name different from the type name seems not good,\n> and the struct name also shouldn't be on a separate line.\n\nFixed.\n\n> +typedef enum trivalue\n> +{\n> + TRI_DEFAULT,\n> + TRI_NO,\n> + TRI_YES\n> +} trivalue;\n> \n> Ugh. It's not this patch's fault, but we really oughta move this to\n> someplace more centralized.\n\nNot changed in this patch.\n\n> +typedef struct\n> ...\n> +} AmCheckSettings;\n> \n> I'm not sure I consider all of these things settings, \"db\" in\n> particular. But maybe that's nitpicking.\n\nIt is definitely nitpicking, but I agree with it. This next patch uses a static variable named \"conn\" rather than \"settings.db\".\n\n> +static void expand_schema_name_patterns(const SimpleStringList *patterns,\n> +\n> const SimpleOidList *exclude_oids,\n> +\n> SimpleOidList *oids\n> +\n> bool strict_names);\n> \n> This is copied from pg_dump, along with I think at least one other\n> function from nearby. Unlike the trivalue case above, this would be\n> the first duplication of this logic. Can we push this stuff into\n> pgcommon, perhaps?\n\nYes, these functions were largely copied from pg_dump. I have moved them out of pg_dump and into fe_utils, but that was a large enough effort that it deserves its own thread, so I'm creating a thread for that work independent of this thread.\n\n> + /*\n> + * Default behaviors for user settable options. 
Note that these default\n> + * to doing all the safe checks and none of the unsafe ones,\n> on the theory\n> + * that if a user says \"pg_amcheck mydb\" without specifying\n> any additional\n> + * options, we should check everything we know how to check without\n> + * risking any backend aborts.\n> + */\n> \n> This to me seems too conservative. The result is that by default we\n> check only tables, not indexes. I don't think that's going to be what\n> users want.\n\nChecking indexes has been made the default, as discussed above.\n\n> I don't know whether they want the heapallindexed or\n> rootdescend behaviors for index checks, but I think they want their\n> indexes checked. Happy to hear opinions from actual users on what they\n> want; this is just me guessing that you've guessed wrong. :-)\n\nThe heapallindexed and rootdescend options still exist but are false by default.\n\n> + if (settings.db == NULL)\n> + {\n> + pg_log_error(\"no connection to server after\n> initial attempt\");\n> + exit(EXIT_BADCONN);\n> + }\n> \n> I think this is documented as meaning out of memory, and reported that\n> way elsewhere. Anyway I am going to keep complaining until there are\n> no cases where we tell the user it broke without telling them what\n> broke. Which means this bit is a problem too:\n> \n> + if (!settings.db)\n> + {\n> + pg_log_error(\"no connection to server\");\n> + exit(EXIT_BADCONN);\n> + }\n> \n> Something went wrong, good luck figuring out what it was!\n\nI have changed this to more closely follow the behavior in scripts/common.c:connectDatabase. If pg_amcheck were moved into src/bin/scripts, I could just use that function outright.\n\n> + /*\n> + * All information about corrupt indexes are returned via\n> ereport, not as\n> + * tuples. We want all the details to report if corruption exists.\n> + */\n> + PQsetErrorVerbosity(settings.db, PQERRORS_VERBOSE);\n> \n> Really? Why? 
If I need the source code file name, function name, and\n> line number to figure out what went wrong, that is not a great sign\n> for the quality of the error reports it produces.\n\nYeah, you are right about that. In any event, the user can now specify --verbose if they like and get that extra information (not that they need it). I have removed this offending bit of code.\n\n> + /*\n> + * The btree checking logic which optionally\n> checks the contents\n> + * of an index against the corresponding table\n> has not yet been\n> + * sufficiently hardened against corrupt\n> tables. In particular,\n> + * when called with heapallindexed true, it\n> segfaults if the file\n> + * backing the table relation has been\n> erroneously unlinked. In\n> + * any event, it seems unwise to reconcile an\n> index against its\n> + * table when we already know the table is corrupt.\n> + */\n> + old_heapallindexed = settings.heapallindexed;\n> + if (corruptions)\n> + settings.heapallindexed = false;\n> \n> This seems pretty lame to me. Even if the btree checker can't tolerate\n> corruption to the extent that the heap checker does, seg faulting\n> because of a missing file seems like a bug that we should just fix\n> (and probably back-patch). I'm not very convinced by the decision to\n> override the user's decision about heapallindexed either. Maybe I lack\n> imagination, but that seems pretty arbitrary. Suppose there's a giant\n> index which is missing entries for 5 million heap tuples and also\n> there's 1 entry in the table which has an xmin that is less than the\n> pg_class.relfrozenxid value by 1. You are proposing that because I have\n> the latter problem I don't want you to check for the former one. But\n> I, John Q. Smartuser, do not want you to second-guess what I told you\n> on the command line that I wanted. :-)\n\nI've removed this bit.
I'm not sure what I was seeing back when I first wrote this code, but I no longer see any segfaults for missing relation files.\n\n> I think in general you're worrying too much about the possibility of\n> this tool causing backend crashes. I think it's good that you wrote\n> the heapcheck code in a way that's hardened against that, and I think\n> we should try to harden other things as time permits. But I don't\n> think that the remote possibility of a crash due to the lack of such\n> hardening should dictate the design behavior of this tool. If the\n> crash possibilities are not remote, then I think the solution is to\n> fix them, rather than cutting out important checks.\n\nRight. I've been worrying a bit less about this lately, in part because you and Peter are less concerned about it than I was, and in part because I've been banging away with various test cases and don't see all that much worth worrying about.\n\n> It doesn't seem like great design to me that get_table_check_list()\n> gets just the OID of the table itself, and then later if we decide to\n> check the TOAST table we've got to run a separate query for each table\n> we want to check to fetch the TOAST OID, when we could've just fetched\n> both in get_table_check_list() by including two columns in the query\n> rather than one and it would've been basically free. Imagine if some\n> user wrote a query that fetched the primary key value for all their\n> rows and then had their application run a separate query to fetch the\n> entire contents of each of those rows, said contents consisting of one\n> more integer. And then suppose they complained about performance. We'd\n> tell them they were doing it wrong, and so here.\n\nGood points. I've changed get_table_check_list to query both the main table and toast table oids as you suggest.\n\n> + if (settings.db == NULL)\n> + fatal(\"no connection on entry to check_table\");\n> \n> Uninformative. Is this basically an Assert? If so maybe just make it\n> one. 
If not maybe fail somewhere else with a better message?\n\nLooking at this again, I don't think it is even worth making it into an Assert, so I just removed it, along with similar useless checks of the same type elsewhere.\n\n> \n> + if (startblock == NULL)\n> + startblock = \"NULL\";\n> + if (endblock == NULL)\n> + endblock = \"NULL\";\n> \n> It seems like it would be more elegant to initialize\n> settings.startblock and settings.endblock to \"NULL.\" However, there's\n> also a related problem, which is that the startblock and endblock\n> values can be anything, and are interpolated with quoting. I don't\n> think that it's good to ship a tool with SQL injection hazards built\n> into it. I think that you should (a) check that these values are\n> integers during argument parsing and error out if they are not and\n> then (b) use either a prepared query or PQescapeLiteral() anyway.\n\nI've changed the logic to use strtol to parse these, and I'm storing them as long rather than as strings.\n\n> + stop = (on_error_stop) ? \"true\" : \"false\";\n> + toast = (check_toast) ? \"true\" : \"false\";\n> \n> The parens aren't really needed here.\n\nTrue. Removed.\n\n> +\n> printf(\"(relname=%s,blkno=%s,offnum=%s,attnum=%s)\\n%s\\n\",\n> + PQgetvalue(res, i, 0), /* relname */\n> + PQgetvalue(res, i, 1), /* blkno */\n> + PQgetvalue(res, i, 2), /* offnum */\n> + PQgetvalue(res, i, 3), /* attnum */\n> + PQgetvalue(res, i, 4)); /* msg */\n> \n> I am not quite sure how to format the output, but this looks like\n> something designed by an engineer who knows too much about the topic.\n> I suspect users won't find the use of things like \"relname\" and\n> \"blkno\" too easy to understand. At least I think we should say\n> \"relation, block, offset, attribute\" instead of \"relname, blkno,\n> offnum, attnum\". 
I would probably drop the parenthesis and add spaces,\n> so that you end up with something like:\n> \n> relation \"%s\", block \"%s\", offset \"%s\", attribute \"%s\":\n> \n> I would also define variant strings so that we entirely omit things\n> that are NULL. e.g. have four strings:\n> \n> relation \"%s\":\n> relation \"%s\", block \"%s\":(\n> relation \"%s\", block \"%s\", offset \"%s\":\n> relation \"%s\", block \"%s\", offset \"%s\", attribute \"%s\":\n> \n> Would it make it more readable if we indented the continuation line by\n> four spaces or something?\n\nI tried it that way and agree it looks better, including having the msg line indented four spaces. Changed.\n\n> + corruption_cnt++;\n> + printf(\"%s\\n\", error);\n> + pfree(error);\n> \n> Seems like we could still print the relation name in this case, and\n> that it would be a good idea to do so, in case it's not in the message\n> that the server returns.\n\nWe don't know the relation name in this case, only the oid, but I agree that would be useful to have, so I added that.\n\n> The general logic in this part of the code looks a bit strange to me.\n> If ExecuteSqlQuery() returns PGRES_TUPLES_OK, we print out the details\n> for each returned row. Otherwise, if error = true, we print the error.\n> But, what if neither of those things are the case? Then we'd just\n> print nothing despite having gotten back some weird response from the\n> server. That actually can't happen, because ExecuteSqlQuery() always\n> sets *error when the return code is not PGRES_TUPLES_OK, but you\n> wouldn't know that from looking at this code.\n> \n> Honestly, as written, ExecSqlQuery() seems like kind of a waste. The\n> OrDie() version is useful as a notational shorthand, but this version\n> seems to add more confusion than clarity. It has only three callers:\n> the ones in check_table() and check_indexes() have the problem\n> described above, and the one in get_toast_oid() could just as well be\n> using the OrDie() version. 
And also we should probably get rid of it\n> entirely by fetching the toast OIDs the first time around, as\n> mentioned above.\n\nThese functions have been factored out of pg_dump into fe_utils, so this bit of code review doesn't refer to anything now.\n\n> check_indexes() lacks a function comment. It seems to have more or\n> less the same problem as get_toast_oid() -- an extra query per table\n> to get the list of indexes. I guess it has a better excuse: there\n> could be lots of indexes per table, and we're fetching multiple\n> columns of data for each one, whereas in the TOAST case we are issuing\n> an extra query per table to fetch a single integer. But, couldn't we\n> fetch information about all the indexes we want to check in one go,\n> rather than fetching them separately for each table being checked? I'm\n> not sure if that would create too much other complexity, but it seems\n> like it would be quicker.\n\nIf the --skip-corrupt option is given, we need to only check the indexes associated with a table once the table has been found to be non-corrupt. Querying for all the indexes upfront, we'd need to keep information about which table the index came from, and check that against lists of tables that have been checked, etc. It seems pretty messy, even more so when considering the limited list facilities available to frontend code.\n\nI have made no changes in this version, though I'm not rejecting your idea here. Maybe I'll think of a clean way to do this for a later patch? 
\n\n> + if (settings.db == NULL)\n> + fatal(\"no connection on entry to check_index\");\n> + if (idxname == NULL)\n> + fatal(\"no index name on entry to check_index\");\n> + if (tblname == NULL)\n> + fatal(\"no table name on entry to check_index\");\n> \n> Again, probably these should be asserts, or if they're not, the error\n> should be reported better and maybe elsewhere.\n> \n> Similarly in some other places, like expand_schema_name_patterns().\n\nI removed these checks entirely.\n\n> + * The loop below runs multiple SELECTs might sometimes result in\n> + * duplicate entries in the Oid list, but we don't care.\n> \n> This is missing a which, like the place you copied it from, but the\n> version in pg_dumpall.c is better.\n> \n> expand_table_name_patterns() should be reformatted to not gratuitously\n> exceed 80 columns. Ditto for expand_index_name_patterns().\n\nRefactoring into fe_utils, as mentioned above.\n\n> I sort of expected that this patch might use threads to allow parallel\n> checking - seems like it would be a useful feature.\n\nYes, I think that makes sense, but I'm going to work on that in the next patch.\n\n> I originally intended to review the docs and regression tests in the\n> same email as the patch itself, but this email has gotten rather long\n> and taken rather longer to get together than I had hoped, so I'm going\n> to stop here for now and come back to that stuff.\n\n\n\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 6 Jan 2021 23:05:27 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jan 6, 2021, at 11:05 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I have done that, factoring them into fe_utils, and am attaching a series of patches that accomplishes that refactoring.\n\nThe previous set should have been named v30, not v3. 
My apologies for any confusion.\n\nThe attached patches, v31, are mostly the same, but with \"getopt_long.h\" included from pg_amcheck.c per Thomas's review, and a .gitignore file added in contrib/pg_amcheck/\n\n\n\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 7 Jan 2021 09:32:53 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Nov 19, 2020, at 11:47 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Thu, Nov 19, 2020 at 9:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I'm also not sure if these descriptions are clear enough, but it may\n>> also be hard to do a good job in a brief space. Still, comparing this\n>> to the documentation of heapallindexed makes me rather nervous. This\n>> is only trying to verify that the index contains all the tuples in the\n>> heap, not that the values in the heap and index tuples actually match.\n> \n> That's a good point. As things stand, heapallindexed verification does\n> not notice when there are extra index tuples in the index that are in\n> some way inconsistent with the heap. Hopefully this isn't too much of\n> a problem in practice because the presence of extra spurious tuples\n> gets detected by the index structure verification process. But in\n> general that might not happen.\n> \n> Ideally heapallindex verification would verify 1:1 correspondence. It\n> doesn't do that right now, but it could.\n> \n> This could work by having two bloom filters -- one for the heap,\n> another for the index. The implementation would look for the absence\n> of index tuples that should be in the index initially, just like\n> today. But at the end it would modify the index bloom filter by &= it\n> with the complement of the heap bloom filter. 
If any bits are left set\n> in the index bloom filter, we go back through the index once more and\n> locate index tuples that have at least some matching bits in the index\n> bloom filter (we cannot expect all of the bits from each of the hash\n> functions used by the bloom filter to still be matches).\n> \n> From here we can do some kind of lookup for maybe-not-matching index\n> tuples that we locate. Make sure that they point to an LP_DEAD line\n> item in the heap or something. Make sure that they have the same\n> values as the heap tuple if they're still retrievable (i.e. if we\n> haven't pruned the heap tuple away already).\n\nThis approach sounds very good to me, but beyond the scope of what I'm planning for this release cycle.\n\n>> This to me seems too conservative. The result is that by default we\n>> check only tables, not indexes. I don't think that's going to be what\n>> users want. I don't know whether they want the heapallindexed or\n>> rootdescend behaviors for index checks, but I think they want their\n>> indexes checked. Happy to hear opinions from actual users on what they\n>> want; this is just me guessing that you've guessed wrong. :-)\n> \n> My thoughts on these two options:\n> \n> * I don't think that users will ever want rootdescend verification.\n> \n> That option exists now because I wanted to have something that relied\n> on the uniqueness property of B-Tree indexes following the Postgres 12\n> work. I didn't add retail index tuple deletion, so it seemed like a\n> good idea to have something that makes the same assumptions that it\n> would have to make. To validate the design.\n> \n> Another factor is that Alexander Korotkov made the basic\n> bt_index_parent_check() tests a lot better for Postgres 13. This\n> undermined the practical argument for using rootdescend verification.\n\nThe latest version of the patch has rootdescend off by default, but a switch to turn it on. 
The documentation for that switch in doc/src/sgml/pgamcheck.sgml summarizes your comments:\n\n+ This form of verification was originally written to help in the\n+ development of btree index features. It may be of limited or even of no\n+ use in helping detect the kinds of corruption that occur in practice.\n+ In any event, it is known to be a rather expensive check to perform.\n\nFor my own self, I don't care if rootdescend is an option in pg_amcheck. You and Robert expressed somewhat different opinions, and I tried to split the difference. I'm happy to go a different direction if that's what the consensus is.\n\n> Finally, note that bt_index_parent_check() was always supposed to be\n> something that was to be used only when you already knew that you had\n> big problems, and wanted absolutely thorough verification without\n> regard for the costs. This isn't the common case at all. It would be\n> reasonable to not expose anything from bt_index_parent_check() at all,\n> or to give it much less prominence. Not really sure of what the right\n> balance is here myself, so I'm not insisting on anything. Just telling\n> you what I know about it.\n\nThis still needs work. Currently, there is a switch to turn off index checking, with the checks on by default. But there is no switch controlling which kind of check is performed (bt_index_check vs. bt_index_parent_check). Making matters more complicated, selecting both rootdescend and bt_index_check wouldn't make sense, as there is no rootdescend option on that function. So users would need multiple flags to turn on various options, with some flag combinations drawing an error about the flags not being mutually compatible. That's doable, but people may not like that interface.\n\n> * heapallindexed is kind of expensive, but valuable. But the extra\n> check is probably less likely to help on the second or subsequent\n> index on a table.\n\nThere is a switch for enabling this. 
It is off by default.\n\n> It might be worth considering an option that only uses it with only\n> one index: Preferably the primary key index, failing that some unique\n> index, and failing that some other index.\n\nIt might make sense for somebody to submit this for a later release. I don't have any plans to work on this during this release cycle.\n\n>> I'm not very convinced by the decision to\n>> override the user's decision about heapallindexed either.\n> \n> I strongly agree.\n\nI have removed the override.\n\n> \n>> Maybe I lack\n>> imagination, but that seems pretty arbitrary. Suppose there's a giant\n>> index which is missing entries for 5 million heap tuples and also\n>> there's 1 entry in the table which has an xmin that is less than the\n>> pg_clas.relfrozenxid value by 1. You are proposing that because I have\n>> the latter problem I don't want you to check for the former one. But\n>> I, John Q. Smartuser, do not want you to second-guess what I told you\n>> on the command line that I wanted. :-)\n> \n> Even if your user is just average, they still have one major advantage\n> over the architects of pg_amcheck: actual knowledge of the problem in\n> front of them.\n\nThere is a switch for skipping index checks on corrupt tables. 
By default, the indexes will be checked.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 7 Jan 2021 10:11:44 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Fri, Jan 8, 2021 at 6:33 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> The attached patches, v31, are mostly the same, but with \"getopt_long.h\" included from pg_amcheck.c per Thomas's review, and a .gitignore file added in contrib/pg_amcheck/\n\nI couple more little things from Windows CI:\n\n C:\\projects\\postgresql\\src\\include\\fe_utils/option_utils.h(19):\nfatal error C1083: Cannot open include file: 'libpq-fe.h': No such\nfile or directory [C:\\projects\\postgresql\\pg_amcheck.vcxproj]\n\nDoes contrib/amcheck/Makefile need to say \"SHLIB_PREREQS =\nsubmake-libpq\" like other contrib modules that use libpq?\n\n pg_backup_utils.obj : error LNK2001: unresolved external symbol\nexit_nicely [C:\\projects\\postgresql\\pg_dump.vcxproj]\n\nI think this is probably because additions to src/fe_utils/Makefile's\nOBJS list need to be manually replicated in\nsrc/tools/msvc/Mkvcbuild.pm's @pgfeutilsfiles list. 
(If I'm right\nabout that, perhaps it needs a comment to remind us Unix hackers of\nthat, or perhaps it should be automated...)\n\n\n", "msg_date": "Mon, 11 Jan 2021 09:41:38 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jan 10, 2021, at 12:41 PM, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Fri, Jan 8, 2021 at 6:33 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> The attached patches, v31, are mostly the same, but with \"getopt_long.h\" included from pg_amcheck.c per Thomas's review, and a .gitignore file added in contrib/pg_amcheck/\n> \n> I couple more little things from Windows CI:\n> \n> C:\\projects\\postgresql\\src\\include\\fe_utils/option_utils.h(19):\n> fatal error C1083: Cannot open include file: 'libpq-fe.h': No such\n> file or directory [C:\\projects\\postgresql\\pg_amcheck.vcxproj]\n> \n> Does contrib/amcheck/Makefile need to say \"SHLIB_PREREQS =\n> submake-libpq\" like other contrib modules that use libpq?\n\nAdded in v32.\n\n> pg_backup_utils.obj : error LNK2001: unresolved external symbol\n> exit_nicely [C:\\projects\\postgresql\\pg_dump.vcxproj]\n> \n> I think this is probably because additions to src/fe_utils/Makefile's\n> OBJS list need to be manually replicated in\n> src/tools/msvc/Mkvcbuild.pm's @pgfeutilsfiles list. 
(If I'm right\n> about that, perhaps it needs a comment to remind us Unix hackers of\n> that, or perhaps it should be automated...)\n\nAdded in v32, along with adding pg_amcheck to @contrib_uselibpq, @contrib_uselibpgport, and @contrib_uselibpgcommon\n\nThere are also a few additions in v32 to typedefs.list, and some whitespace changes due to running pgindent.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 11 Jan 2021 10:16:07 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Mon, Jan 11, 2021 at 1:16 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Added in v32, along with adding pg_amcheck to @contrib_uselibpq, @contrib_uselibpgport, and @contrib_uselibpgcommon\n\nexit_utils.c fails to achieve the goal of making this code independent\nof pg_dump, because of:\n\n#ifdef WIN32\n if (parallel_init_done && GetCurrentThreadId() != mainThreadId)\n _endthreadex(code);\n#endif\n\nparallel_init_done is a pg_dump-ism. Perhaps this chunk of code could\nbe a handler that gets registered using exit_nicely() rather than\nhard-coded like this. Note that the function comments for\nexit_nicely() are heavily implicated in this problem, since they also\napply to stuff that only happens in pg_dump and not other utilities.\n\nI'm skeptical about the idea of putting functions into string_utils.c\nwith names as generic as include_filter() and exclude_filter().\nExisting cases like fmtId() and fmtQualifiedId() are not great either,\nbut I think this is worse and that we should do some renaming. On a\nrelated note, it's not clear to me why these should be classified as\nstring_utils while stuff like expand_schema_name_patterns() gets\nclassified as option_utils. These are neither generic\nstring-processing functions nor are they generic options-parsing\nfunctions. 
They are functions for expanding shell-glob style patterns\nfor database object names. And they seem like they ought to be\ntogether, because they seem to do closely-related things. I'm open to\nan argument that this is wrongheaded on my part, but it looks weird to\nme the way it is.\n\nI'm pretty unimpressed by query_utils.c. The CurrentResultHandler\nstuff looks grotty, and you don't seem to really use it anywhere. And\nit seems woefully overambitious to me anyway: this doesn't apply to\nevery kind of \"result\" we've got hanging around, absolutely nothing\neven close to that, even though a name like CurrentResultHandler\nsounds very broad. It also means more global variables, which is a\nthing of which the PostgreSQL codebase already has a deplorable\noversupply. quiet_handler() and noop_handler() aren't used anywhere\neither, AFAICS.\n\nI wonder if it would be better to pass in callbacks rather than\nrelying on global variables. e.g.:\n\ntypedef void (*fatal_error_callback)(const char *fmt,...)\npg_attribute_printf(1, 2) pg_attribute_noreturn();\n\nThen you could have a few helper functions that take an argument of\ntype fatal_error_callback and throw the right fatal error for (a)\nwrong PQresultStatus() and (b) result is not one row. Do you need any\nother cases? exiting_handler() seems to think that the caller might\nwant to allow any number of tuples, or any positive number, or any\nparticular count, but I'm not sure if all of those cases are really\nneeded.\n\nThis stuff is finicky and hard to get right. You don't really want to\ncreate a situation where the same code keeps getting duplicated, or\nthe behavior's just a little bit inconsistent everywhere, but it also\nisn't great to build layers upon layers of abstraction around\nsomething like ExecuteSqlQuery which is, in the end, a four-line\nfunction. I don't think there's any problem with something like\npg_dump having its own function to execute-a-query-or-die.
Maybe that\nfunction ends up doing something like\nTheGenericFunctionToExecuteOrDie(my_die_fn, the_query), or maybe\npg_dump can just open-code it but have a my_die_fn to pass down to the\nglob-expansion stuff, or, well, I don't know.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Jan 2021 16:13:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jan 14, 2021, at 1:13 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Jan 11, 2021 at 1:16 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Added in v32, along with adding pg_amcheck to @contrib_uselibpq, @contrib_uselibpgport, and @contrib_uselibpgcommon\n> \n> exit_utils.c fails to achieve the goal of making this code independent\n> of pg_dump, because of:\n> \n> #ifdef WIN32\n> if (parallel_init_done && GetCurrentThreadId() != mainThreadId)\n> _endthreadex(code);\n> #endif\n> \n> parallel_init_done is a pg_dump-ism. Perhaps this chunk of code could\n> be a handler that gets registered using exit_nicely() rather than\n> hard-coded like this. Note that the function comments for\n> exit_nicely() are heavily implicated in this problem, since they also\n> apply to stuff that only happens in pg_dump and not other utilities.\n\nThe 0001 patch has been restructured to not have this problem.\n\n> I'm skeptical about the idea of putting functions into string_utils.c\n> with names as generic as include_filter() and exclude_filter().\n> Existing cases like fmtId() and fmtQualifiedId() are not great either,\n> but I think this is worse and that we should do some renaming. On a\n> related note, it's not clear to me why these should be classified as\n> string_utils while stuff like expand_schema_name_patterns() gets\n> classified as option_utils. These are neither generic\n> string-processing functions nor are they generic options-parsing\n> functions. 
They are functions for expanding shell-glob style patterns\n> for database object names. And they seem like they ought to be\n> together, because they seem to do closely-related things. I'm open to\n> an argument that this is wrongheaded on my part, but it looks weird to\n> me the way it is.\n\nThe logic to filter which relations are checked is completely restructured and is kept in pg_amcheck.c\n\n> I'm pretty unimpressed by query_utils.c. The CurrentResultHandler\n> stuff looks grotty, and you don't seem to really use it anywhere. And\n> it seems woefully overambitious to me anyway: this doesn't apply to\n> every kind of \"result\" we've got hanging around, absolutely nothing\n> even close to that, even though a name like CurrentResultHandler\n> sounds very broad. It also means more global variables, which is a\n> thing of which the PostgreSQL codebase already has a deplorable\n> oversupply. quiet_handler() and noop_handler() aren't used anywhere\n> either, AFAICS.\n> \n> I wonder if it would be better to pass in callbacks rather than\n> relying on global variables. e.g.:\n> \n> typedef void (*fatal_error_callback)(const char *fmt,...)\n> pg_attribute_printf(1, 2) pg_attribute_noreturn();\n> \n> Then you could have a few helper functions that take an argument of\n> type fatal_error_callback and throw the right fatal error for (a)\n> wrong PQresultStatus() and (b) result is not one row. Do you need any\n> other cases? exiting_handler() seems to think that the caller might\n> want to allow any number of tuples, or any positive number, or any\n> particular cout, but I'm not sure if all of those cases are really\n> needed.\n\nThe error callback stuff has been refactored in this next patch set, and also now includes handlers for parallel slots, as the src/bin/scripts/scripts_parallel.c stuff has been moved to fe_utils and made more general. 
As it was, there were hardcoded assumptions that are valid for reindexdb and vacuumdb, but not general enough for pg_amcheck to use. The refactoring in patches 0002 through 0005 make it more generally usable. Patch 0008 uses it in pg_amcheck.\n\n> This stuff is finnicky and hard to get right. You don't really want to\n> create a situation where the same code keeps getting duplicated, or\n> the behavior's just a little bit inconsistent everywhere, but it also\n> isn't great to build layers upon layers of abstraction around\n> something like ExecuteSqlQuery which is, in the end, a four-line\n> function. I don't think there's any problem with something like\n> pg_dump having it's own function to execute-a-query-or-die. Maybe that\n> function ends up doing something like\n> TheGenericFunctionToExecuteOrDie(my_die_fn, the_query), or maybe\n> pg_dump can just open-code it but have a my_die_fn to pass down to the\n> glob-expansion stuff, or, well, I don't know.\n\nThere are some real improvements in this next patch set.\n\nThe number of queries issued to the database to determine the databases to use is much reduced. I had been following the pattern in pg_dump, but abandoned that for something new.\n\nThe parallel slots stuff is now used for parallelism, much like what is done in vacuumdb and reindexdb.\n\nThe pg_amcheck application can now be run over one database, multiple specified databases, or all databases.\n\nRelations, schemas, and databases can be included and excluded by pattern, like \"(db1|db2|db3).myschema.(mytable|myindex)\". The real-world use-cases for this that I have in mind are things like:\n\n pg_amcheck --jobs=12 --all \\\n --exclude-relation=\"db7.schema.known_corrupt_table\" \\\n --exclude-relation=\"db*.schema.known_big_table\"\n\nand\n\n pg_amcheck --jobs=20 \\\n --include-relation=\"*.compliance.audited\"\n\nI might be missing something, but I think the interface is a superset of the interface from reindexdb and vacuumdb. 
None of the new interface stuff (patterns, allowing multiple databases to be given on the command line, etc) is required.\n\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 27 Jan 2021 21:05:02 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "I like 0007 quite a bit and am inclined to commit it soon, as it\ndoesn't depend on the earlier patches. But:\n\n- I think the residual comment in processSQLNamePattern beginning with\n\"Note:\" could use some wordsmithing to account for the new structure\nof things -- maybe just \"this pass\" -> \"this function\".\n- I suggest changing initializations like maxbuf = buf + 2 to maxbuf =\n&buf[2] for clarity.\n\nRegarding 0001:\n\n- My preference would be to dump on_exit_nicely_final() and just rely\non order of registration.\n- I'm not entirely sure it's a good ideal to expose something named\nfatal() like this, because that's a fairly short and general name. On\nthe other hand, it's pretty descriptive and it's not clear why someone\nincluding exit_utils.h would want any other definitiion. I guess we\ncan always change it later if it proves to be problematic; it's got a\nlot of callers and I guess there's no point in churning the code\nwithout a clear reason.\n- I don't quite see why we need this at all. Like, exit_nicely() is a\npg_dump-ism. It would make sense to centralize it if we were going to\nuse it for pg_amcheck, but you don't. If you were going to, you'd need\nto adapt 0003 to use exit_nicely() instead of exit(), but you don't,\nnor do you add any other new calls to exit_nicely() anywhere, except\nfor one in 0002. That makes the PGresultHandler stuff depend on\nexit_nicely(), which might be important if you were going to refactor\npg_dump to use that abstraction, but you don't. 
I'm not opposed to the\nidea of centralized exit processing for frontend utilities; indeed, it\nseems like a good idea. But this doesn't seem to get us there. AFAICS\nit just entangles pg_dump with pg_amcheck unnecessarily in a way that\ndoesn't really benefit either of them.\n\nRegarding 0002:\n\n- I don't think this is separately committable because it adds an\nabstraction but not any uses of that abstraction to demonstrate that\nit's actually any good. Perhaps it should just be merged into 0005,\nand even into parallel_slot.h vs. having its own header. I'm not\nreally sure about that, though\n- Is this really much of an abstraction layer? Like, how generic can\nthis be when the argument list includes ExecStatusType expected_status\nand int expected_ntups?\n- The logic seems to be very similar to some of the stuff that you\nmove around in 0003, like executeQuery() and executeCommand(), but it\ndoesn't get unified. I'm not necessarily saying it should be, but it's\nweird to do all this refactoring and end up with something that still\nlooks like this\n\n0003, 0004, and 0006 look pretty boring; they are just moving code\naround. Is there any point in splitting the code from 0003 across two\nfiles? Maybe it's fine.\n\nIf I run pg_amcheck --all -j4 do I get a serialization boundary across\ndatabases? Like, I have to completely finish db1 before I can go onto\ndb2, even though maybe only one worker is still busy with it?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Jan 2021 12:13:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jan 28, 2021, at 9:13 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> If I run pg_amcheck --all -j4 do I get a serialization boundary across\n> databases?
Like, I have to completely finish db1 before I can go onto\n> db2, even though maybe only one worker is still busy with it?\n\nYes, you do. That's patterned on reindexdb and vacuumdb.\n\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 28 Jan 2021 09:40:13 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jan 28, 2021, at 9:13 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I like 0007 quite a bit and am inclined to commit it soon, as it\n> doesn't depend on the earlier patches. But:\n> \n> - I think the residual comment in processSQLNamePattern beginning with\n> \"Note:\" could use some wordsmithing to account for the new structure\n> of things -- maybe just \"this pass\" -> \"this function\".\n> - I suggest changing initializations like maxbuf = buf + 2 to maxbuf =\n> &buf[2] for clarity.\n\nOk, I should be able to get you an updated version of 0007 with those changes here soon for you to commit.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 28 Jan 2021 09:41:33 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Jan 28, 2021 at 12:40 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Jan 28, 2021, at 9:13 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > If I run pg_amcheck --all -j4 do I get a serialization boundary across\n> > databases? Like, I have to completely finish db1 before I can go onto\n> > db2, even though maybe only one worker is still busy with it?\n>\n> Yes, you do. That's patterned on reindexdb and vacuumdb.\n\nSounds lame, but fair enough. 
We can leave that problem for another day.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Jan 2021 12:49:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Jan 28, 2021, at 9:49 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Jan 28, 2021 at 12:40 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>>> On Jan 28, 2021, at 9:13 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n>>> If I run pg_amcheck --all -j4 do I get a serialization boundary across\n>>> databases? Like, I have to completely finish db1 before I can go onto\n>>> db2, even though maybe only one worker is still busy with it?\n>> \n>> Yes, you do. That's patterned on reindexdb and vacuumdb.\n> \n> Sounds lame, but fair enough. We can leave that problem for another day.\n\nYeah, I agree that it's lame, and should eventually be addressed. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 28 Jan 2021 09:50:51 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jan 28, 2021, at 9:41 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Jan 28, 2021, at 9:13 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> \n>> I like 0007 quite a bit and am inclined to commit it soon, as it\n>> doesn't depend on the earlier patches. 
But:\n>> \n>> - I think the residual comment in processSQLNamePattern beginning with\n>> \"Note:\" could use some wordsmithing to account for the new structure\n>> of things -- maybe just \"this pass\" -> \"this function\".\n>> - I suggest changing initializations like maxbuf = buf + 2 to maxbuf =\n>> &buf[2] for clarity.\n> \n> Ok, I should be able to get you an updated version of 0007 with those changes here soon for you to commit.\n\nI made those changes, and fixed a bug that would impact the pg_amcheck callers. I'll have to extend the regression test coverage in 0008 since it obviously wasn't caught, but that's not part of this patch since there are no callers that use the dbname.schema.relname format as yet.\n\nThis is the only patch for v34, since you want to commit it separately. It's renamed as 0001 here....\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 28 Jan 2021 10:07:18 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jan 28, 2021, at 9:13 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n\nAttached is patch set 35. Per your review comments, I have restructured the patches in the following way:\n\nv33's 0007 is now the first patch, v35's 0001\n\nv33's 0001 is no more. The frontend infrastructure for error handling and exiting may be resubmitted someday in another patch, but they aren't necessary for pg_amcheck\n\nv33's 0002 is no more. 
The PGresultHandler stuff that it defined inspires some of what comes later in v35's 0003, but it isn't sufficiently similar to what v35 does to be thought of as moving from v33-0002 into v35-0003.\n\nv33's 0003, 0004 and 0006 are combined into v35's 0002\n\nv33's 0005 becomes v35's 0003\n\nv33's 0007 becomes v35's 0004\n\nAdditionally, pg_amcheck testing is extended beyond what v33 had in v35's new 0005 patch, but pg_amcheck doesn't depend on this new 0005 patch ever being committed, so if you don't like it, just throw it in the bit bucket.\n\n> \n> I like 0007 quite a bit and am inclined to commit it soon, as it\n> doesn't depend on the earlier patches. But:\n> \n> - I think the residual comment in processSQLNamePattern beginning with\n> \"Note:\" could use some wordsmithing to account for the new structure\n> of things -- maybe just \"this pass\" -> \"this function\".\n> - I suggest changing initializations like maxbuf = buf + 2 to maxbuf =\n> &buf[2] for clarity\n\nAlready responded to this in the v34 development a few days ago. Nothing meaningfully changes between 34 and 35.\n\n> Regarding 0001:\n> \n> - My preference would be to dump on_exit_nicely_final() and just rely\n> on order of registration.\n> - I'm not entirely sure it's a good ideal to expose something named\n> fatal() like this, because that's a fairly short and general name. On\n> the other hand, it's pretty descriptive and it's not clear why someone\n> including exit_utils.h would want any other definitiion. I guess we\n> can always change it later if it proves to be problematic; it's got a\n> lot of callers and I guess there's no point in churning the code\n> without a clear reason.\n> - I don't quite see why we need this at all. Like, exit_nicely() is a\n> pg_dump-ism. It would make sense to centralize it if we were going to\n> use it for pg_amcheck, but you don't. 
If you were going to, you'd need\n> to adapt 0003 to use exit_nicely() instead of exit(), but you don't,\n> nor do you add any other new calls to exit_nicely() anywhere, except\n> for one in 0002. That makes the PGresultHandler stuff depend on\n> exit_nicely(), which might be important if you were going to refactor\n> pg_dump to use that abstraction, but you don't. I'm not opposed to the\n> idea of centralized exit processing for frontend utilities; indeed, it\n> seems like a good idea. But this doesn't seem to get us there. AFAICS\n> it just entangles pg_dump with pg_amcheck unnecessarily in a way that\n> doesn't really benefit either of them.\n\nRemoved from v35.\n\n> Regarding 0002:\n> \n> - I don't think this is separately committable because it adds an\n> abstraction but not any uses of that abstraction to demonstrate that\n> it's actually any good. Perhaps it should just be merged into 0005,\n> and even into parallel_slot.h vs. having its own header. I'm not\n> really sure about that, though\n\nYeah, this is gone from v35, with hints of it moved into 0003 as part of the parallel slots refactoring.\n\n> - Is this really much of an abstraction layer? Like, how generic can\n> this be when the argument list includes ExecStatusType expected_status\n> and int expected_ntups?\n\nThe new format takes a void *context argument.\n\n> - The logic seems to be very similar to some of the stuff that you\n> move around in 0003, like executeQuery() and executeCommand(), but it\n> doesn't get unified. I'm not necessarily saying it should be, but it's\n> weird to do all this refactoring and end up with something that still\n> looks this\n\nYeah, I agree with this. The refactoring is a lot less ambitious in v35, to avoid these issues.\n\n> 0003, 0004, and 0006 look pretty boring; they are just moving code\n> around. Is there any point in splitting the code from 0003 across two\n> files? 
Maybe it's fine.\n\nCombined.\n\n> If I run pg_amcheck --all -j4 do I get a serialization boundary across\n> databases? Like, I have to completely finish db1 before I can go onto\n> db2, even though maybe only one worker is still busy with it?\n\nThe command line interface and corresponding semantics for specifying which tables to check, which schemas to check, and which databases to check should be the same as that for reindexdb and vacuumdb, and the behavior for handing off those targets to be checked/reindexed/vacuumed through the parallel slots interface should be the same. It seems a bit much to refactor reindexdb and vacuumdb to match pg_amcheck when pg_amcheck hasn't been accepted for commit as yet. If/when that happens, and if the project generally approves of going in this direction, I think the next step will be to refactor some of this logic out of pg_amcheck into fe_utils and use it from all three utilities. At that time, I'd like to tackle the serialization choke point in all three, and handle it in the same way for them all.\n\n\nFor the new v35-0005 patch, I have extended PostgresNode.pm with some new corruption abilities. In short, it can now take a snapshot of the files that back a relation, and can corruptly rollback those files to prior versions, in full or in part. This allows creating kinds of corruption that are hard to create through mere bit twiddling. For example, if the relation backing an index is rolled back to a prior version, amcheck's btree checking sees the index as not corrupt, but when asked to reconcile the entries in the heap with the index, it can see that not all of them are present. 
This gives test coverage of corruption checking functionality that is otherwise hard to achieve.\n\nTo check that the PostgresNode.pm changes themselves work, v35-0005 adds src/test/modules/corruption\n\nTo check pg_amcheck, and by implication amcheck, v35-0005 adds contrib/pg_amcheck/t/006_relfile_damage.pl\n\nOnce again, v35-0005 does not need to be committed -- pg_amcheck works just fine without it.\n\n\nYou and I have discussed this off-list, but for the record, amcheck and pg_amcheck currently only check heaps and btree indexes. Other object types, such as sequences and non-btree indexes, are not checked. Some basic sanity checking of other object types would be a good addition, and pg_amcheck has been structured in a way where it should be fairly straightforward to add support for those. The only such sanity checking that I thought could be done in a short timeframe was to check that the relation files backing the objects were not missing, and we decided off-list such checking wasn't worth much, so I didn't add it.\n\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 31 Jan 2021 16:05:15 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Jan 31, 2021, at 4:05 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Attached is patch set 35.\n\nI found some things to improve in the v35 patch set. 
Please find attached the v36 patch set, which differs from v35 in the following ways:\n\n0001 -- no changes\n\n0002 -- fixing omissions in @pgfeutilsfiles in file src/tools/msvc/Mkvcbuild.pm\n\n0003 -- no changes\n\n0004:\n -- Fixes handling of amcheck contrib module installed in non-default schema.\n -- Adds database name to corruption messages to make identifying the relation being complained about unambiguous in multi-database checks\n -- Fixes an instance where pg_amcheck was querying pg_database without schema-qualifying it\n -- Simplifying some functions in pg_amcheck.c\n -- Updating a comment to reflect the renaming of a variable that the comment mentioned by name\n\n0005 -- fixes =pod added in PostgresNode.pm. The =pod was grammatically correct so far I can tell, but rendered strangely in perldoc.\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 2 Feb 2021 15:10:34 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Feb 2, 2021 at 6:10 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> 0001 -- no changes\n\nCommitted.\n\n> 0002 -- fixing omissions in @pgfeutilsfiles in file src/tools/msvc/Mkvcbuild.pm\n\nHere are a few minor cosmetic issues with this patch:\n\n- connect_utils.c lacks a file header comment.\n- Some or perhaps all of the other file header comments need an update for 2021.\n- There's bogus hunks in the diff for string_utils.c.\n\nI think the rest of this looks good. I spent a long time puzzling over\nwhether consumeQueryResult() and processQueryResult() needed to be\nmoved, but then I realized that this patch actually makes them into\nstatic functions inside parallel_slot.c, rather than public functions\nas they were before. I like that. 
The only reason those functions need\nto be moved at all is so that the scripts_parallel/parallel_slot stuff\ncan continue to do its thing, so this is actually a better way of\ngrouping things together than what we have now.\n\n> 0003 -- no changes\n\nI think it would be better if there were no handler by default, and\nfailing to set one leads to an assertion failure when we get to the\npoint where one would be called.\n\nI don't think I understand the point of renaming processQueryResult\nand consumeQueryResult. Isn't that just code churn for its own sake?\n\nPGresultHandler seems too generic. How about ParallelSlotHandler or\nParallelSlotResultHandler?\n\nI'm somewhat inclined to propose s/ParallelSlot/ConnectionSlot/g but I\nguess it's better not to get sucked into renaming things.\n\nIt's a little strange that we end up with mutators to set the slot's\nhandler and handler context when we elsewhere feel free to monkey with\na slot's connection directly, but it's not a perfect world and I can't\nthink of anything I'd like better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Feb 2021 17:03:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Feb 3, 2021, at 2:03 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Feb 2, 2021 at 6:10 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> 0001 -- no changes\n> \n> Committed.\n\nThanks!\n\n>> 0002 -- fixing omissions in @pgfeutilsfiles in file src/tools/msvc/Mkvcbuild.pm\n\nNumbered 0001 in this next patch set.\n\n> Here are a few minor cosmetic issues with this patch:\n> \n> - connect_utils.c lacks a file header comment.\n\nFixed\n\n> - Some or perhaps all of the other file header comments need an update for 2021.\n\nFixed.\n\n> - There's bogus hunks in the diff for string_utils.c.\n\nRemoved.\n\n> I think the rest of this looks good. 
I spent a long time puzzling over\n> whether consumeQueryResult() and processQueryResult() needed to be\n> moved, but then I realized that this patch actually makes them into\n> static functions inside parallel_slot.c, rather than public functions\n> as they were before. I like that. The only reason those functions need\n> to be moved at all is so that the scripts_parallel/parallel_slot stuff\n> can continue to do its thing, so this is actually a better way of\n> grouping things together than what we have now.\n\n\n>> 0003 -- no changes\n\nNumbered 0002 in this next patch set.\n\n> I think it would be better if there were no handler by default, and\n> failing to set one leads to an assertion failure when we get to the\n> point where one would be called.\n\nChanged to have no default handler, and to use Assert(PointerIsValid(handler)) as you suggest.\n\n> I don't think I understand the point of renaming processQueryResult\n> and consumeQueryResult. Isn't that just code churn for its own sake?\n\nI didn't like the names. I had to constantly look back where they were defined to remember which of them processed/consumed all the results and which only processed/consumed one of them. Part of that problem was that their names are both singular. I have restored the names in this next patch set.\n\n> PGresultHandler seems too generic. How about ParallelSlotHandler or\n> ParallelSlotResultHandler?\n\nParallelSlotResultHandler works for me. I'm using that, and renaming s/TableCommandSlotHandler/TableCommandResultHandler/ to be consistent.\n\n> I'm somewhat inclined to propose s/ParallelSlot/ConnectionSlot/g but I\n> guess it's better not to get sucked into renaming things.\n\nI admit that I lost a fair amount of time on this project because I thought \"scripts_parallel.c\" and \"parallel_slot\" referred to some kind of threading, but only later looked closely enough to see that this is an event loop, not a parallel threading system. 
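(Aside, for readers following the thread: the slot-plus-handler arrangement under discussion can be sketched roughly as below. The type and function names mirror the thread, but this is an illustrative model, not the actual patch code; in the real code the handler receives a libpq PGresult and the check is a PostgreSQL Assert rather than C assert.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for a libpq result; the real slot deals in PGresult. */
typedef struct FakeResult { int ok; } FakeResult;

/* Handler signature: returns true if the result was acceptable. */
typedef bool (*ParallelSlotResultHandler) (FakeResult *res, void *context);

typedef struct ParallelSlot
{
	ParallelSlotResultHandler handler;	/* NULL until explicitly set */
	void	   *handler_context;		/* opaque per-slot state for the handler */
} ParallelSlot;

/* Mutator, as in the patch: sets the handler and its context together. */
static void
ParallelSlotSetHandler(ParallelSlot *slot, ParallelSlotResultHandler handler,
					   void *context)
{
	slot->handler = handler;
	slot->handler_context = context;
}

/*
 * Dispatch site: there is no default handler, so forgetting to set one is a
 * programming error caught by the assertion rather than silently ignored.
 * Note the (*slot->handler)(...) spelling, which makes the indirect call
 * visually distinct from a direct function call.
 */
static bool
dispatch_result(ParallelSlot *slot, FakeResult *res)
{
	assert(slot->handler != NULL);
	return (*slot->handler) (res, slot->handler_context);
}

/* A trivial handler: accept the result iff it reports success. */
static bool
accept_if_ok(FakeResult *res, void *context)
{
	(void) context;
	return res->ok != 0;
}
```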
I don't think \"slot\" is terribly informative, and if we rename I don't think it needs to be part of the name we choose. ConnectionEventLoop would be more intuitive to me than either of ParallelSlot/ConnectionSlot, but this seems like bikeshedding so I'm going to ignore it for now.\n\n> It's a little strange that we end up with mutators to set the slot's\n> handler and handler context when we elsewhere feel free to monkey with\n> a slot's connection directly, but it's not a perfect world and I can't\n> think of anything I'd like better.\n\nI created those mutators in an earlier version of the patch where the slot had a few more fields to set, and it helped to have a single function call set all the fields. I agree it looks less nice now that there are only two fields to set.\n\n\nI also made changes to clean up 0003 (formerly numbered 0004)\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 4 Feb 2021 08:10:23 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Feb 4, 2021 at 11:10 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I also made changes to clean up 0003 (formerly numbered 0004)\n\n\"deduplice\" is a typo.\n\nI'm not sure that I agree with check_each_database()'s commentary\nabout why it doesn't make sense to optimize the resolve-the-databases\nstep. Like, suppose I type 'pg_amcheck sasquatch'. I think the way you\nhave it coded it's going to tell me that there are no databases to\ncheck, which might make me think I used the wrong syntax or something.\nI want it to tell me that sasquatch does not exist. 
If I happen to be\na cryptid believer, I may reject that explanation as inaccurate, but\nat least there's no question about what pg_amcheck thinks the problem\nis.\n\nWhy does check_each_database() go out of its way to run the main query\nwithout the always-secure search path? If there's a good reason, I\nthink it deserves a comment saying what the reason is. If there's not\na good reason, then I think it should use the always-secure search\npath for 100% of everything. Same question applies to\ncheck_one_database().\n\nParallelSlotSetHandler(free_slot, VerifyHeapamSlotHandler, sql.data)\ncould stand to be split over two lines, like you do for the nearly\nrun_command() call, so that it doesn't go past 80 columns.\n\nI suggest having two variables instead of one for amcheck_schema.\nUsing the same variable to store the unescaped value and then later\nthe escaped value is, IMHO, confusing. Whatever you call the escaped\nversion, I'd rename the function parameters elsewhere to match.\n\n\"status = PQsendQuery(conn, sql) == 1\" seems a bit uptight to me. Why\nnot just make status an int and then just \"status = PQsendQuery(conn,\nsql)\" and then test for status != 0? I don't really care if you don't\nchange this, it's not actually important. But personally I'd rather\ncode it as if any non-zero value meant success.\n\nI think the pg_log_error() in run_command() could be worded a bit\nbetter. I don't think it's a good idea to try to include the type of\nobject in there like this, because of the translatability guidelines\naround assembling messages from fragments. And I don't think it's good\nto say that the check failed because the reality is that we weren't\nable to ask for the check to be run in the first place. I would rather\nlog this as something like \"unable to send query: %s\". I would also\nassume we need to bail out entirely if that happens. 
I'm not totally\nsure what sorts of things can make PQsendQuery() fail but I bet it\nboils down to having lost the server connection. Should that occur,\ntrying to send queries for all of the remaining objects is going to\nresult in repeating the same error many times, which isn't going to be\nwhat anybody wants. It's unclear to me whether we should give up on\nthe whole operation but I think we have to at least give up on that\nconnection... unless I'm confused about what the failure mode is\nlikely to be here.\n\nIt looks to me like the user won't be able to tell by the exit code\nwhat happened. What I did with pg_verifybackup, and what I suggest we\ndo here, is exit(1) if anything went wrong, either in terms of failing\nto execute queries or in terms of those queries returning problem\nreports. With pg_verifybackup, I thought about trying to make it like\n0 => backup OK, 2 => backup not OK, 2 => trouble, but I found it too\nhard to distinguish what should be exit(1) and what should be exit(2)\nand the coding wasn't trivial either, so I went with the simpler\nscheme.\n\nThe opening line of appendDatabaseSelect() could be adjusted to put\nthe regexps parameter on the next line, avoiding awkward wrapping.\n\nIf they are being run with a safe search path, the queries in\nappendDatabaseSelect(), appendSchemaSelect(), etc. could be run\nwithout all the paranoia. If not, maybe they should be. The casts to\ntext don't include the paranoia: with an unsafe search path, we need\npg_catalog.text here. Or no cast at all, which seems like it ought to\nbe fine too. Not quite sure why you are doing all that casting to\ntext; the datatype is presumably 'name' and ought to collate like\ncollate \"C\" which is probably fine.\n\nIt would probably be a better idea for appendSchemaSelect to declare a\nPQExpBuffer and call initPQExpBuffer just once, and then\nresetPQExpBuffer after each use, and finally termPQExpBuffer just\nonce. 
The way you have it is not expensive enough to really matter,\nbut avoiding repeated allocate/free cycles is probably best.\n\nI wonder if a pattern like .foo.bar ends up meaning the same thing as\na pattern like foo.bar, with the empty database name being treated the\nsame as if nothing were specified.\n\n From the way appendTableCTE() is coded, it seems to me that if I ask\nfor tables named j* excluding tables named jam* I still might get\ntoast tables for my jam, which seems wrong.\n\nThere does not seem to be any clear benefit to defining CT_TABLE = 0\nin this case, so I would let the compiler deal with it. We should not\nbe depending on that to have any particular numeric value.\n\nWhy does pg_amcheck.c have a header file pg_amcheck.h if there's only\none source file? If you had multiple source files then the header\nwould be a reasonable place to put stuff they all need, but you don't.\n\nCopying the definitions of HEAP_TABLE_AM_OID and BTREE_AM_OID into\npg_amcheck.h or anywhere else seems bad. I think you just be doing\n#include \"catalog/pg_am_d.h\".\n\nI think I'm out of steam for today but I'll try to look at this more\nsoon. In general I think this patch and the whole series are pretty\nclose to being ready to commit, even though there are still things I\nthink need fixing here and there.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 16:04:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Thu, Feb 4, 2021 at 11:10 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Numbered 0001 in this next patch set.\n\nHi,\n\nI committed 0001 as you had it and 0002 with some more cleanups. Things I did:\n\n- Adjusted some comments.\n- Changed processQueryResult so that it didn't do foo(bar) with foo\nbeing a pointer. 
Generally we prefer (*foo)(bar) when it can be\nconfused with a direct function call, but wunk->foo(bar) is also\nconsidered acceptable.\n- Changed the return type of ParallelSlotResultHandler to be bool,\nbecause having it return PGresult * seemed to offer no advantages.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 16:13:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Feb 4, 2021, at 1:04 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Feb 4, 2021 at 11:10 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I also made changes to clean up 0003 (formerly numbered 0004)\n> \n> \"deduplice\" is a typo.\n\nFixed.\n\n> I'm not sure that I agree with check_each_database()'s commentary\n> about why it doesn't make sense to optimize the resolve-the-databases\n> step. Like, suppose I type 'pg_amcheck sasquatch'. I think the way you\n> have it coded it's going to tell me that there are no databases to\n> check, which might make me think I used the wrong syntax or something.\n> I want it to tell me that sasquatch does not exist. 
If I happen to be\n> a cryptid believer, I may reject that explanation as inaccurate, but\n> at least there's no question about what pg_amcheck thinks the problem\n> is.\n\nThe way v38 is coded, 'pg_amcheck sasquatch\" will return a non-zero error code with an error message, database \"sasquatch\" does not exist.\n\nThe problem only comes up if you run it like one of the following:\n\n pg_amcheck --maintenance-db postgres sasquatch\n pg_amcheck postgres sasquatch\n pg_amcheck \"sasquatch.myschema.mytable\"\n\nIn each of those, pg_amcheck first connects to the initial database (\"postgres\" or whatever) and tries to resolve all databases to check matching patterns like '^(postgres)$' and '^(sasquatch)$' and doesn't find any sasquatch matches, but also doesn't complain.\n\nIn v39, this is changed to complain when patterns do not match. This can be turned off with --no-strict-names.\n\n> Why does check_each_database() go out of its way to run the main query\n> without the always-secure search path? If there's a good reason, I\n> think it deserves a comment saying what the reason is. If there's not\n> a good reason, then I think it should use the always-secure search\n> path for 100% of everything. Same question applies to\n> check_one_database().\n\nThat bit of code survived some refactoring, but it doesn't make sense to keep it, assuming it ever made sense at all. Removed in v39. The calls to connectDatabase will always secure the search_path, so pg_amcheck need not touch that directly.\n\n> ParallelSlotSetHandler(free_slot, VerifyHeapamSlotHandler, sql.data)\n> could stand to be split over two lines, like you do for the nearly\n> run_command() call, so that it doesn't go past 80 columns.\n\nFair enough. The code has been treated to a pass through pgindent as well.\n\n> I suggest having two variables instead of one for amcheck_schema.\n> Using the same variable to store the unescaped value and then later\n> the escaped value is, IMHO, confusing. 
Whatever you call the escaped\n> version, I'd rename the function parameters elsewhere to match.\n\nThe escaped version is now part of a struct, so there shouldn't be any confusion about this.\n\n> \"status = PQsendQuery(conn, sql) == 1\" seems a bit uptight to me. Why\n> not just make status an int and then just \"status = PQsendQuery(conn,\n> sql)\" and then test for status != 0? I don't really care if you don't\n> change this, it's not actually important. But personally I'd rather\n> code it as if any non-zero value meant success.\n\nI couldn't remember why I coded it like that, since it doesn't look like my style, then noticed I copied that from reindexdb.c, upon which this code is patterned. I agree it looks strange, and I've changed it in v39. Unlike the call site in reindexdb, there isn't any reason for pg_amcheck to store the returned value in a variable, so in v39 it doesn't.\n\n> I think the pg_log_error() in run_command() could be worded a bit\n> better. I don't think it's a good idea to try to include the type of\n> object in there like this, because of the translatability guidelines\n> around assembling messages from fragments. And I don't think it's good\n> to say that the check failed because the reality is that we weren't\n> able to ask for the check to be run in the first place. I would rather\n> log this as something like \"unable to send query: %s\". I would also\n> assume we need to bail out entirely if that happens. I'm not totally\n> sure what sorts of things can make PQsendQuery() fail but I bet it\n> boils down to having lost the server connection. Should that occur,\n> trying to send queries for all of the remaining objects is going to\n> result in repeating the same error many times, which isn't going to be\n> what anybody wants. It's unclear to me whether we should give up on\n> the whole operation but I think we have to at least give up on that\n> connection... 
unless I'm confused about what the failure mode is\n> likely to be here.\n\nChanged in v39 to report the error as you suggest.\n\nIt will reconnect and retry a command one time on error. That should cover the case that the connection to the database was merely lost. If the second attempt also fails, no further retry of the same command is attempted, though commands for remaining relation targets will still be attempted, both for the database that had the error and for other remaining databases in the list.\n\nAssuming something is wrong with \"db2\", the command `pg_amcheck db1 db2 db3` could result in two failures per relation in db2 before finally moving on to db3. That seems pretty awful considering how many relations that could be, but failing to soldier on in the face of errors seems a strange design for a corruption checking tool.\n\n> It looks to me like the user won't be able to tell by the exit code\n> what happened. What I did with pg_verifybackup, and what I suggest we\n> do here, is exit(1) if anything went wrong, either in terms of failing\n> to execute queries or in terms of those queries returning problem\n> reports. With pg_verifybackup, I thought about trying to make it like\n> 0 => backup OK, 2 => backup not OK, 2 => trouble, but I found it too\n> hard to distinguish what should be exit(1) and what should be exit(2)\n> and the coding wasn't trivial either, so I went with the simpler\n> scheme.\n\nIn v39, exit(1) is used for all errors which are intended to stop the program. It is important to recognize that finding corruption is not an error in this sense. A query to verify_heapam() can fail if the relation's checksums are bad, and that happens beyond verify_heapam()'s control when the page is not allowed into the buffers. There can be errors if the file backing a relation is missing. There may be other corruption error cases that I have not yet thought about. 
The connections' errors get reported to the user, but pg_amcheck does not exit as a consequence of them. As discussed above, failing to send the query to the server is not viewed as a reason to exit, either. It would be hard to quantify all the failure modes, but presumably the catalogs for a database could be messed up enough to cause such failures, and I'm not sure that pg_amcheck should just abort.\n\n> \n> The opening line of appendDatabaseSelect() could be adjusted to put\n> the regexps parameter on the next line, avoiding awkward wrapping.\n> \n> If they are being run with a safe search path, the queries in\n> appendDatabaseSelect(), appendSchemaSelect(), etc. could be run\n> without all the paranoia. If not, maybe they should be. The casts to\n> text don't include the paranoia: with an unsafe search path, we need\n> pg_catalog.text here. Or no cast at all, which seems like it ought to\n> be fine too. Not quite sure why you are doing all that casting to\n> text; the datatype is presumably 'name' and ought to collate like\n> collate \"C\" which is probably fine.\n\nIn v39, everything is being run with a safe search path, and the paranoia and casts are largely gone.\n\n> It would probably be a better idea for appendSchemaSelect to declare a\n> PQExpBuffer and call initPQExpBuffer just once, and then\n> resetPQExpBuffer after each use, and finally termPQExpBuffer just\n> once. The way you have it is not expensive enough to really matter,\n> but avoiding repeated allocate/free cycles is probably best.\n\nI'm not sure what this comment refers to, but this function doesn't exist in v39.\n\n> I wonder if a pattern like .foo.bar ends up meaning the same thing as\n> a pattern like foo.bar, with the empty database name being treated the\n> same as if nothing were specified.\n\nThat's really a question of how patternToSQLRegex parses that string. 
In general, \"a.b.c\" => (\"^(a)$\", \"^(b)$\", \"^(c)$\"), so I would expect your example to have a database pattern \"^()$\" which should only match databases with zero length names, presumably none. I've added a regression test for this, and indeed that's what it does.\n\n> From the way appendTableCTE() is coded, it seems to me that if I ask\n> for tables named j* excluding tables named jam* I still might get\n> toast tables for my jam, which seems wrong.\n\nIn v39, the query is entirely reworked, so I can't respond directly to this, though I agree that excluding a table should mean the toast table does not automatically get included. There is an interaction, though, if you select both \"j*' and \"pg_toast.*\" and then exclude \"jam\".\n\n> There does not seem to be any clear benefit to defining CT_TABLE = 0\n> in this case, so I would let the compiler deal with it. We should not\n> be depending on that to have any particular numeric value.\n\nThe enum is removed in v39.\n\n> Why does pg_amcheck.c have a header file pg_amcheck.h if there's only\n> one source file? If you had multiple source files then the header\n> would be a reasonable place to put stuff they all need, but you don't.\n\nEverything is in pg_amcheck.c now.\n\n> Copying the definitions of HEAP_TABLE_AM_OID and BTREE_AM_OID into\n> pg_amcheck.h or anywhere else seems bad. I think you just be doing\n> #include \"catalog/pg_am_d.h\".\n\nGood point. Done.\n\n> I think I'm out of steam for today but I'll try to look at this more\n> soon. In general I think this patch and the whole series are pretty\n> close to being ready to commit, even though there are still things I\n> think need fixing here and there.\n\nReworking the code took a while. 
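(To make the field-splitting behavior described at the top of this message concrete, here is a toy model. It is not the actual patternToSQLRegex, which also translates shell-style wildcards and handles quoting; this only models splitting on dots and anchoring each field, which is enough to show why ".foo.bar" yields the zero-length database pattern "^()$".)

```c
#include <stdio.h>
#include <string.h>

/* Wrap one dot-separated field in an anchored regex group: "a" -> "^(a)$". */
static void
anchor_field(const char *start, size_t len, char *out, size_t outsize)
{
	snprintf(out, outsize, "^(%.*s)$", (int) len, start);
}

/*
 * Split a dotted pattern into up to three anchored patterns; returns the
 * number of fields produced.  Fields past the third are ignored in this toy.
 */
static int
split_pattern(const char *pattern, char out[3][64])
{
	int			nfields = 0;
	const char *p = pattern;

	while (nfields < 3)
	{
		const char *dot = strchr(p, '.');
		size_t		len = dot ? (size_t) (dot - p) : strlen(p);

		anchor_field(p, len, out[nfields++], 64);
		if (!dot)
			break;
		p = dot + 1;
	}
	return nfields;
}
```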
Version 39 patches attached.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 17 Feb 2021 10:46:10 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Feb 17, 2021 at 1:46 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> It will reconnect and retry a command one time on error. That should cover the case that the connection to the database was merely lost. If the second attempt also fails, no further retry of the same command is attempted, though commands for remaining relation targets will still be attempted, both for the database that had the error and for other remaining databases in the list.\n>\n> Assuming something is wrong with \"db2\", the command `pg_amcheck db1 db2 db3` could result in two failures per relation in db2 before finally moving on to db3. That seems pretty awful considering how many relations that could be, but failing to soldier on in the face of errors seems a strange design for a corruption checking tool.\n\nThat doesn't seem right at all. I think a PQsendQuery() failure is so\nremote that it's probably justification for giving up on the entire\noperation. If it's caused by a problem with some object, it probably\nmeans that accessing that object caused the whole database to go down,\nand retrying the object will take the database down again. Retrying\nthe object is betting that the user interrupted connectivity between\npg_amcheck and the database but the interruption is only momentary and\nthe user actually wants to complete the operation. That seems unlikely\nto me. I think it's far more probably that the database crashed or got\nshut down and continuing is futile.\n\nMy proposal is: if we get an ERROR trying to *run* a query, give up on\nthat object but still try the other ones after reconnecting. 
If we get\na FATAL or PANIC trying to *run* a query, give up on the entire\noperation. If even sending a query fails, also give up.\n\n> In v39, exit(1) is used for all errors which are intended to stop the program. It is important to recognize that finding corruption is not an error in this sense. A query to verify_heapam() can fail if the relation's checksums are bad, and that happens beyond verify_heapam()'s control when the page is not allowed into the buffers. There can be errors if the file backing a relation is missing. There may be other corruption error cases that I have not yet thought about. The connections' errors get reported to the user, but pg_amcheck does not exit as a consequence of them. As discussed above, failing to send the query to the server is not viewed as a reason to exit, either. It would be hard to quantify all the failure modes, but presumably the catalogs for a database could be messed up enough to cause such failures, and I'm not sure that pg_amcheck should just abort.\n\nI agree that exit(1) should happen after any error intended to stop\nthe program. But I think it should also happen at the end of the run\nif we hit any problems for which we did not stop, so that exit(0)\nmeans your database is healthy.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Feb 2021 15:56:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Feb 17, 2021 at 1:46 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Reworking the code took a while. Version 39 patches attached.\n\nRegarding the documentation, I think the Usage section at the top is\nfar too extensive and duplicates the option description section to far\ntoo great an extent. You have 21 usage examples for a command with 34\noptions. 
Even if we think it's a good idea to give a brief summary of\nusage, it's got to be brief; we certainly don't need examples of\nobscure special-purpose options like --maintenance-db here. Looking\nthrough the commands in \"PostgreSQL Client Applications\" and\n\"Additional Supplied Programs,\" most of them just have a synopsis\nsection and nothing like this Usage section. Those that do have a\nUsage section typically use it for a narrative description of what to\ndo with the tool (e.g. see pg_test_timing), not a long list of\nexamples. I'm inclined to think you should nuke all the examples and\nincorporate the descriptive text, to the extent that it's needed,\neither into the descriptions of the individual options or, if the\nbehavior spans many options, into the Description section.\n\nA few of these examples could move down into an Examples section at\nthe bottom, perhaps, but I think 21 is still too many. I'd try to\nlimit it to 5-7. Just hit the highlights.\n\nI also think that perhaps it's not best to break up the list of\noptions into so many different categories the way you have. Notice\nthat for example pg_dump and psql don't do this, instead putting\neverything into one ordered list, despite also having a lot of\noptions. This is arguably worse if you want to understand which\noptions are related to each other, but it's better if you are just\nlooking for something based on alphabetical order.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Feb 2021 17:09:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Feb 17, 2021, at 12:56 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Feb 17, 2021 at 1:46 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> It will reconnect and retry a command one time on error. That should cover the case that the connection to the database was merely lost. 
If the second attempt also fails, no further retry of the same command is attempted, though commands for remaining relation targets will still be attempted, both for the database that had the error and for other remaining databases in the list.\n>> \n>> Assuming something is wrong with \"db2\", the command `pg_amcheck db1 db2 db3` could result in two failures per relation in db2 before finally moving on to db3. That seems pretty awful considering how many relations that could be, but failing to soldier on in the face of errors seems a strange design for a corruption checking tool.\n> \n> That doesn't seem right at all. I think a PQsendQuery() failure is so\n> remote that it's probably justification for giving up on the entire\n> operation. If it's caused by a problem with some object, it probably\n> means that accessing that object caused the whole database to go down,\n> and retrying the object will take the database down again. Retrying\n> the object is betting that the user interrupted connectivity between\n> pg_amcheck and the database but the interruption is only momentary and\n> the user actually wants to complete the operation. That seems unlikely\n> to me. I think it's far more probably that the database crashed or got\n> shut down and continuing is futile.\n> \n> My proposal is: if we get an ERROR trying to *run* a query, give up on\n> that object but still try the other ones after reconnecting. If we get\n> a FATAL or PANIC trying to *run* a query, give up on the entire\n> operation. If even sending a query fails, also give up.\n\nThis is changed in v40 as you propose to exit on FATAL and PANIC level errors and on error to send a query. On lesser errors (which includes all corruption reports about btrees and some heap corruption related errors), the slot's connection is still useable, I think. Are there cases where the error is lower than FATAL and yet the connection needs to be reestablished? 
It does not seem so from the testing I have done, but perhaps I'm not thinking of the right sort of non-fatal error?\n\n(I'll wait to post v40 until after hearing your thoughts on this.)\n\n>> In v39, exit(1) is used for all errors which are intended to stop the program. It is important to recognize that finding corruption is not an error in this sense. A query to verify_heapam() can fail if the relation's checksums are bad, and that happens beyond verify_heapam()'s control when the page is not allowed into the buffers. There can be errors if the file backing a relation is missing. There may be other corruption error cases that I have not yet thought about. The connections' errors get reported to the user, but pg_amcheck does not exit as a consequence of them. As discussed above, failing to send the query to the server is not viewed as a reason to exit, either. It would be hard to quantify all the failure modes, but presumably the catalogs for a database could be messed up enough to cause such failures, and I'm not sure that pg_amcheck should just abort.\n> \n> I agree that exit(1) should happen after any error intended to stop\n> the program. But I think it should also happen at the end of the run\n> if we hit any problems for which we did not stop, so that exit(0)\n> means your database is healthy.\n\nIn v40, exit(1) means the program encountered fatal errors leading it to stop, and exit(2) means that a non-fatal error and/or corruption reports occurred somewhere during the processing. 
Otherwise, exit(0) means your database was successfully checked and is healthy.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 23 Feb 2021 09:38:29 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Feb 23, 2021 at 12:38 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> This is changed in v40 as you propose to exit on FATAL and PANIC level errors and on error to send a query. On lesser errors (which includes all corruption reports about btrees and some heap corruption related errors), the slot's connection is still useable, I think. Are there cases where the error is lower than FATAL and yet the connection needs to be reestablished? It does not seem so from the testing I have done, but perhaps I'm not thinking of the right sort of non-fatal error?\n\nI think you should assume that if you get an ERROR you can - and\nshould - continue to use the connection, but still exit non-zero at\nthe end. Perhaps one can contrive some scenario where that's not the\ncase, but if the server does the equivalent of \"ERROR: session\npermanently borked\" we should really change those to FATAL; I think\nyou can discount that possibility.\n\n> In v40, exit(1) means the program encountered fatal errors leading it to stop, and exit(2) means that a non-fatal error and/or corruption reports occurred somewhere during the processing. 
Otherwise, exit(0) means your database was successfully checked and is healthy.\n\nwfm.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Feb 2021 13:40:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Feb 24, 2021, at 10:40 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Feb 23, 2021 at 12:38 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> This is changed in v40 as you propose to exit on FATAL and PANIC level errors and on error to send a query. On lesser errors (which includes all corruption reports about btrees and some heap corruption related errors), the slot's connection is still useable, I think. Are there cases where the error is lower than FATAL and yet the connection needs to be reestablished? It does not seem so from the testing I have done, but perhaps I'm not thinking of the right sort of non-fatal error?\n> \n> I think you should assume that if you get an ERROR you can - and\n> should - continue to use the connection, but still exit non-zero at\n> the end. Perhaps one can contrive some scenario where that's not the\n> case, but if the server does the equivalent of \"ERROR: session\n> permanently borked\" we should really change those to FATAL; I think\n> you can discount that possibility.\n\nOk, that's how I had it, so no changes necessary.\n\n>> In v40, exit(1) means the program encountered fatal errors leading it to stop, and exit(2) means that a non-fatal error and/or corruption reports occurred somewhere during the processing. Otherwise, exit(0) means your database was successfully checked and is healthy.\n\nOther changes in v40 per our off-list discussions but not related to your on-list review comments:\n\nRemoved option --no-tables.\n\nRemoved option --no-dependents. 
This was a synonym for the combination of --exclude-toast and --exclude-indexes, but having such a synonym isn't all that helpful.\n\nRenamed --exclude-toast to --no-toast-expansion and changed its behavior a bit. Likewise, renamed --exclude-indexes to --no-index-expansion and change behavior. The behavioral changes are that these options now only have the effect of not automatically expanding the list of relations to check to include toast or indexes associated with relations already in the list. The prior names didn't exclusively mean that, and the behavior didn't exclusively do that.\n\nUpdated the docs per your other review email.\n\nImplemented --progress to behave much more like how it does in pg_basebackup.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 24 Feb 2021 10:55:28 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Wed, Feb 24, 2021 at 1:55 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> [ new patches ]\n\nRegarding 0001:\n\nThere seem to be whitespace-only changes to the comment for select_loop().\n\nI wonder if the ParallelSlotsSetupOneDB()/ParallelSlotsSetupMinimal()\nchanges could be simpler. First idea: Suppose you had\nParallelSlotsSetup(numslots) that just creates the slot array with 0\nconnections, and then ParallelSlotsAdopt(slots, conn, cparams) if you\nwant to make it own an existing connection. That seems like it might\nbe cleaner. Second idea: Why not get rid of ParallelSlotsSetupOneDB()\naltogether, and just let ParallelSlotsGetIdle() connect the other\nslots as required? Preconnecting all slots before we do anything is\ngood because ... of what?\n\nI also wonder if things might be simplified by introducing a wrapper\nobject, e.g. ParallelSlotArray. 
Suppose a ParallelSlotArray stores the\nnumber of slots (num_slots), the array of actual PGconn objects, and\nthe ConnParams to be used for new connections, and the initcmd to be\nused for new connections. Maybe also the progname. This seems like it\nwould avoid a bunch of repeated parameter passing: you could just\ncreate the ParallelSlotArray with the right contents, and then pass it\naround everywhere, instead of having to keep passing the same stuff\nin. If you want to switch to connecting to a different DB, you tweak\nthe ConnParams - maybe using an accessor function - and the system\nfigures the rest out.\n\nI wonder if it's really useful to generalize this to a point of caring\nabout all the ConnParams fields, too. Like, if you only provide\nParallelSlotUpdateDB(slotarray, dbname), then that's the only field\nthat can change so you don't need to care about the others. And maybe\nyou also don't really need to keep the ConnParams fields in every\nslot, either. Like, couldn't you just say something like: if\n(strcmp(PQdb(conn) , slotarray->cparams->dbname) != 0) { wrong DB,\ncan't reuse without a reconnect }? I know sometimes a dbname is really\na whole connection string, but perhaps we could try to fix that by\nusing PQconninfoParse() in the right place, so that what ends up in\nthe cparams is just the db name, not a whole connection string.\n\nThis is just based on a relatively short amount of time spent studying\nthe patch, so I might well be off-base here. 
What do you think?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Mar 2021 16:14:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "\n\n> On Mar 1, 2021, at 1:14 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Feb 24, 2021 at 1:55 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> [ new patches ]\n> \n> Regarding 0001:\n> \n> There seem to be whitespace-only changes to the comment for select_loop().\n> \n> I wonder if the ParallelSlotsSetupOneDB()/ParallelSlotsSetupMinimal()\n> changes could be simpler. First idea: Suppose you had\n> ParallelSlotsSetup(numslots) that just creates the slot array with 0\n> connections, and then ParallelSlotsAdopt(slots, conn, cparams) if you\n> want to make it own an existing connection. That seems like it might\n> be cleaner. Second idea: Why not get rid of ParallelSlotsSetupOneDB()\n> altogether, and just let ParallelSlotsGetIdle() connect the other\n> slots as required? Preconnecting all slots before we do anything is\n> good because ... of what?\n\nMostly because, if --jobs is set too high, you get an error before launching any work. I don't know that it's really a big deal if vacuumdb or reindexdb have a bunch of tasks kicked off prior to exit(1) due to not being able to open connections for all the slots, but it is a behavioral change.\n\n> I also wonder if things might be simplified by introducing a wrapper\n> object, e.g. ParallelSlotArray. Suppose a ParallelSlotArray stores the\n> number of slots (num_slots), the array of actual PGconn objects, and\n> the ConnParams to be used for new connections, and the initcmd to be\n> used for new connections. Maybe also the progname. 
This seems like it\n> would avoid a bunch of repeated parameter passing: you could just\n> create the ParallelSlotArray with the right contents, and then pass it\n> around everywhere, instead of having to keep passing the same stuff\n> in. If you want to switch to connecting to a different DB, you tweak\n> the ConnParams - maybe using an accessor function - and the system\n> figures the rest out.\n\nThe existing version of parallel slots (before any of my changes) could already have been written that way, but the author chose not to. I thought about making the sort of change you suggest, and decided against, mostly on the principle of stare decisis. But the idea is very appealing, and since you're on board, I think I'll go make that change.\n\n> I wonder if it's really useful to generalize this to a point of caring\n> about all the ConnParams fields, too. Like, if you only provide\n> ParallelSlotUpdateDB(slotarray, dbname), then that's the only field\n> that can change so you don't need to care about the others. And maybe\n> you also don't really need to keep the ConnParams fields in every\n> slot, either. Like, couldn't you just say something like: if\n> (strcmp(PQdb(conn) , slotarray->cparams->dbname) != 0) { wrong DB,\n> can't reuse without a reconnect }? I know sometimes a dbname is really\n> a whole connection string, but perhaps we could try to fix that by\n> using PQconninfoParse() in the right place, so that what ends up in\n> the cparams is just the db name, not a whole connection string.\n\nI went a little out of my way to avoid that, as I didn't want the next application that uses parallel slots to have to refactor it again, if for example they want to process in parallel databases listening on different ports, or to process commands issued under different roles.\n\n> This is just based on a relatively short amount of time spent studying\n> the patch, so I might well be off-base here. 
What do you think?\n\nI like the ParallelSlotArray idea, and will go do that now. I'm happy to defer to your judgement on the other stuff, too, but will wait to hear back from you.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Mar 2021 13:57:16 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "> On Mar 1, 2021, at 1:57 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Mar 1, 2021, at 1:14 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> \n>> On Wed, Feb 24, 2021 at 1:55 PM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> [ new patches ]\n>> \n>> Regarding 0001:\n>> \n>> There seem to be whitespace-only changes to the comment for select_loop().\n\nI believe this is fixed in the attached patch series.\n\n>> I wonder if the ParallelSlotsSetupOneDB()/ParallelSlotsSetupMinimal()\n>> changes could be simpler. First idea: Suppose you had\n>> ParallelSlotsSetup(numslots) that just creates the slot array with 0\n>> connections, and then ParallelSlotsAdopt(slots, conn, cparams) if you\n>> want to make it own an existing connection. That seems like it might\n>> be cleaner.\n\nI used this idea. The functions are ParallelSlotsSetup(numslots, cparams) and ParallelSlotsAdoptConn(sa, conn)\n\n>> Second idea: Why not get rid of ParallelSlotsSetupOneDB()\n>> altogether, and just let ParallelSlotsGetIdle() connect the other\n>> slots as required?\n\nI did this also. \n\n>> Preconnecting all slots before we do anything is\n>> good because ... of what?\n> \n> Mostly because, if --jobs is set too high, you get an error before launching any work. 
I don't know that it's really a big deal if vacuumdb or reindexdb have a bunch of tasks kicked off prior to exit(1) due to not being able to open connections for all the slots, but it is a behavioral change.\n\nOn further reflection, I decided to implement these changes and not worry about the behavioral change.\n\n>> I also wonder if things might be simplified by introducing a wrapper\n>> object, e.g. ParallelSlotArray. Suppose a ParallelSlotArray stores the\n>> number of slots (num_slots), the array of actual PGconn objects, and\n>> the ConnParams to be used for new connections\n\nI did this.\n\n>> , and the initcmd to be\n>> used for new connections.\n\nI skipped this part. The initcmd argument is only handed to ParallelSlotsGetIdle(). Doing as you suggest would not really be simpler, it would just move that argument to ParallelSlotsSetup(). But I don't feel strongly about it, so I can move this, too, if you like.\n\n>> Maybe also the progname.\n\nI didn't do this either, and for the same reason. It's just a parameter to ParallelSlotsGetIdle(), so nothing is really gained by moving it to ParallelSlotsSetup().\n\n>> This seems like it\n>> would avoid a bunch of repeated parameter passing: you could just\n>> create the ParallelSlotArray with the right contents, and then pass it\n>> around everywhere, instead of having to keep passing the same stuff\n>> in. If you want to switch to connecting to a different DB, you tweak\n>> the ConnParams - maybe using an accessor function - and the system\n>> figures the rest out.\n\nRather than the slots user tweak the slot's ConnParams, ParallelSlotsGetIdle() takes a dbname argument, and uses it as ConnParams->override_dbname.\n\n> The existing version of parallel slots (before any of my changes) could already have been written that way, but the author chose not to. I thought about making the sort of change you suggest, and decided against, mostly on the principle of stare decisis. 
But the idea is very appealing, and since you're on board, I think I'll go make that change.\n> \n>> I wonder if it's really useful to generalize this to a point of caring\n>> about all the ConnParams fields, too. Like, if you only provide\n>> ParallelSlotUpdateDB(slotarray, dbname), then that's the only field\n>> that can change so you don't need to care about the others. And maybe\n>> you also don't really need to keep the ConnParams fields in every\n>> slot, either. Like, couldn't you just say something like: if\n>> (strcmp(PQdb(conn) , slotarray->cparams->dbname) != 0) { wrong DB,\n>> can't reuse without a reconnect }? I know sometimes a dbname is really\n>> a whole connection string, but perhaps we could try to fix that by\n>> using PQconninfoParse() in the right place, so that what ends up in\n>> the cparams is just the db name, not a whole connection string.\n> \n> I went a little out of my way to avoid that, as I didn't want the next application that uses parallel slots to have to refactor it again, if for example they want to process in parallel databases listening on different ports, or to process commands issued under different roles.\n\nThis next version has a single ConnParams for the slots array and only contemplates the dbname changing from one connection to another.\n\n>> This is just based on a relatively short amount of time spent studying\n>> the patch, so I might well be off-base here. What do you think?\n> \n> I like the ParallelSlotArray idea, and will go do that now. I'm happy to defer to your judgement on the other stuff, too, but will wait to hear back from you.\n\nRather than waiting to hear back from you, I decided to implement these ideas as separate commits in my development environment, so I can roll some of them back if you don't like them. 
The full patch set is attached:\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 2 Mar 2021 09:10:31 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Mar 2, 2021 at 12:10 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> On further reflection, I decided to implement these changes and not worry about the behavioral change.\n\nThanks.\n\n> I skipped this part. The initcmd argument is only handed to ParallelSlotsGetIdle(). Doing as you suggest would not really be simpler, it would just move that argument to ParallelSlotsSetup(). But I don't feel strongly about it, so I can move this, too, if you like.\n>\n> I didn't do this either, and for the same reason. It's just a parameter to ParallelSlotsGetIdle(), so nothing is really gained by moving it to ParallelSlotsSetup().\n\nOK. I thought it was more natural to pass a bunch of arguments at\nsetup time rather than passing a bunch of arguments at get-idle time,\nbut I don't feel strongly enough about it to insist, and somebody else\ncan always change it later if they decide I had the right idea.\n\n> Rather than the slots user tweak the slot's ConnParams, ParallelSlotsGetIdle() takes a dbname argument, and uses it as ConnParams->override_dbname.\n\nOK, but you forgot to update the comments. ParallelSlotsGetIdle()\nstill talks about a cparams argument that it no longer has.\n\nThe usual idiom for sizing a memory allocation involving\nFLEXIBLE_ARRAY_MEMBER is something like offsetof(ParallelSlotArray,\nslots) + numslots * sizeof(ParallelSlot). 
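To spell that idiom out with stand-in types (the struct fields below are invented for illustration; they are not the patch's actual definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the real types; field names are invented. */
typedef struct ParallelSlot
{
    void   *connection;
    int     inUse;
} ParallelSlot;

typedef struct ParallelSlotArray
{
    int          numslots;
    ParallelSlot slots[];   /* what PostgreSQL spells FLEXIBLE_ARRAY_MEMBER */
} ParallelSlotArray;

/*
 * Size the header with offsetof() rather than sizeof(): sizeof() on a
 * struct that ends in a flexible array member may include trailing
 * padding, and it says nothing about the array elements themselves.
 */
static ParallelSlotArray *
make_slot_array(int numslots)
{
    ParallelSlotArray *sa;

    sa = calloc(1, offsetof(ParallelSlotArray, slots) +
                numslots * sizeof(ParallelSlot));
    if (sa != NULL)
        sa->numslots = numslots;
    return sa;
}
```

(PostgreSQL's headers declare such trailing arrays with the FLEXIBLE_ARRAY_MEMBER macro for portability, but the sizing arithmetic is the same.)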
Your version uses sizeof();\ndon't.\n\nOther than that 0001 looks to me to be in pretty good shape now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Mar 2021 13:24:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" }, { "msg_contents": "On Tue, Mar 2, 2021 at 1:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Other than that 0001 looks to me to be in pretty good shape now.\n\nIncidentally, we might want to move this to a new thread with a better\nsubject line, since the current subject line really doesn't describe\nthe uncommitted portion of the work. And create a new CF entry, too.\n\nMoving onto 0002:\n\nThe index checking options should really be called btree index\nchecking options. I think I'd put the table options first, and the\nbtree options second. Other kinds of indexes could follow some day. I\nwould personally omit the short forms of --heapallindexed and\n--parent-check; I think we'll run out of option names too quickly if\npeople add more kinds of checks.\n\nPerhaps VerifyBtreeSlotHandler should emit a warning of some kind if\nPQntuples(res) != 0.\n\n+ /*\n+ * Test that this function works, but for now we're\nnot using the list\n+ * 'relations' that it builds.\n+ */\n+ conn = connectDatabase(&cparams, progname, opts.echo,\nfalse, true);\n\nThis comment appears to have nothing to do with the code, since\nconnectDatabase() does not build a list of 'relations'.\n\namcheck_sql seems to include paranoia, but do we need that if we're\nusing a secure search path? Similarly for other SQL queries, e.g. in\nprepare_table_command.\n\nIt might not be strictly necessary for the static functions in\npg_amcheck.c to use_three completelyDifferent NamingConventions for\nits static functions.\n\nshould_processing_continue() is one semicolon over budget.\n\nThe initializer for opts puts a comma even after the last member\ninitializer. 
Is that going to be portable to all compilers?\n\n+ for (failed = false, cell = opts.include.head; cell; cell = cell->next)\n\nI think failed has to be false here, because it gets initialized at\nthe top of the function. If we need to reinitialize it for some\nreason, I would prefer you do that on the previous line, separate from\nthe for loop stuff.\n\n+ char *dbrgx; /* Database regexp parsed from pattern, or\n+ * NULL */\n+ char *nsprgx; /* Schema regexp parsed from pattern, or NULL */\n+ char *relrgx; /* Relation regexp parsed from pattern, or\n+ * NULL */\n+ bool tblonly; /* true if relrgx should only match tables */\n+ bool idxonly; /* true if relrgx should only match indexes */\n\nMaybe: db_regex, nsp_regex, rel_regex, table_only, index_only?\n\nJust because it seems theoretically possible that someone will see\nnsprgx and not immediately understand what it's supposed to mean, even\nif they know that nsp is a common abbreviation for namespace in\nPostgreSQL code, and even if they also know what a regular expression\nis.\n\nYour four messages about there being nothing to check seem like they\ncould be consolidated down to one: \"nothing to check for pattern\n\\\"%s\\\"\".\n\nI would favor changing things so that once argument parsing is\ncomplete, we switch to reporting all errors that way. So in other\nwords here, and everything that follows:\n\n+ fprintf(stderr, \"%s: no databases to check\\n\", progname);\n\n+ * ParallelSlots based event loop follows.\n\n\"Main event loop.\"\n\nTo me it would read slightly better to change each reference to\n\"relations list\" to \"list of relations\", but perhaps that is too\nnitpicky.\n\nI think the two instances of goto finish could be avoided with not\nmuch work. 
At most a few things need to happen only if !failed, and\nmaybe not even that, if you just said \"break;\" instead.\n\n+ * Note: Heap relation corruption is returned by verify_heapam() without the\n+ * use of raising errors, but running verify_heapam() on a corrupted table may\n\nHow about \"Heap relation corruption is reported by verify_heapam()\nvia the result set, rather than an ERROR, ...\"\n\nIt seems mighty inefficient to have a whole bunch of consecutive calls\nto remove_relation_file() or corrupt_first_page() when every such call\nstops and restarts the database. I would guess these tests will run\nnoticeably faster if you don't do that. Either the functions need to\ntake a list of arguments, or the stop/start needs to be pulled up and\ndone in the caller.\n\ncorrupt_first_page() could use a comment explaining what exactly we're\noverwriting, and in particular noting that we don't want to just\nclobber the LSN, but rather something where we can detect a wrong\nvalue.\n\nThere's a long list of calls to command_checks_all() in 003_check.pl\nthat don't actually check anything but that the command failed, but\nrun it with a bunch of different options. I don't understand the value\nof that, and suggest reducing the number of cases tested. If you want,\nyou can have tests elsewhere that focus -- perhaps by using verbose\nmode -- on checking that the right tables are being checked.\n\nThis is not yet a full review of everything in this patch -- I haven't\nsorted through all of the tests yet, or all of the new query\nconstruction logic -- but to me this looks pretty close to\ncommittable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Mar 2021 15:39:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: new heapcheck contrib module" } ]
[ { "msg_contents": "Hi!\n\nIt appears that definition of pg_statio_all_tables has bug.\n\nCREATE VIEW pg_statio_all_tables AS\n SELECT\n C.oid AS relid,\n N.nspname AS schemaname,\n C.relname AS relname,\n pg_stat_get_blocks_fetched(C.oid) -\n pg_stat_get_blocks_hit(C.oid) AS heap_blks_read,\n pg_stat_get_blocks_hit(C.oid) AS heap_blks_hit,\n sum(pg_stat_get_blocks_fetched(I.indexrelid) -\n pg_stat_get_blocks_hit(I.indexrelid))::bigint AS\nidx_blks_read,\n sum(pg_stat_get_blocks_hit(I.indexrelid))::bigint AS idx_blks_hit,\n pg_stat_get_blocks_fetched(T.oid) -\n pg_stat_get_blocks_hit(T.oid) AS toast_blks_read,\n pg_stat_get_blocks_hit(T.oid) AS toast_blks_hit,\n sum(pg_stat_get_blocks_fetched(X.indexrelid) -\n pg_stat_get_blocks_hit(X.indexrelid))::bigint AS\ntidx_blks_read,\n sum(pg_stat_get_blocks_hit(X.indexrelid))::bigint AS tidx_blks_hit\n FROM pg_class C LEFT JOIN\n pg_index I ON C.oid = I.indrelid LEFT JOIN\n pg_class T ON C.reltoastrelid = T.oid LEFT JOIN\n pg_index X ON T.oid = X.indrelid\n LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\n WHERE C.relkind IN ('r', 't', 'm')\n GROUP BY C.oid, N.nspname, C.relname, T.oid, X.indrelid;\n\nAmong all the joined tables, only \"pg_index I\" is expected to have\nmultiple rows associated with single relation. But we do sum() for\ntoast index \"pg_index X\" as well. As the result, we multiply\nstatistics for toast index by the number of relation indexes. This is\nobviously wrong.\n\nAttached patch fixes the view definition to count toast index statistics once.\n\nAs a bugfix, I think this should be backpatched. But this patch\nrequires catalog change. Were similar cases there before? 
If so,\nhow did we resolve them?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 21 Apr 2020 02:44:45 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Fix for pg_statio_all_tables" }, { "msg_contents": "On Tue, Apr 21, 2020 at 02:44:45AM +0300, Alexander Korotkov wrote:\n> Among all the joined tables, only \"pg_index I\" is expected to have\n> multiple rows associated with single relation. But we do sum() for\n> toast index \"pg_index X\" as well. As the result, we multiply\n> statistics for toast index by the number of relation indexes. This is\n> obviously wrong.\n\nOops.\n\n> As a bugfix, I think this should be backpatched. But this patch\n> requires catalog change. Were similar cases there before? If so,\n> how did we resolve them?\n\nA backpatch can happen in such cases, see for example b6e39ca9. In\nthis case, the resolution was done with a backpatch to\nsystem_views.sql and the release notes include an additional note\nsaying that the fix applies itself only on already-initialized\nclusters. For other clusters, it was necessary to apply a SQL query,\ngiven also in the release notes, to fix the issue (just grep for \nCVE-2017-7547 in release-9.6.sgml on the REL9_6_STABLE branch).\n--\nMichael", "msg_date": "Tue, 21 Apr 2020 10:38:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix for pg_statio_all_tables" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Apr 21, 2020 at 02:44:45AM +0300, Alexander Korotkov wrote:\n>> As a bugfix, I think this should be backpatched. But this patch\n>> requires catalog change. Were similar cases there before? If so,\n>> how did we resolve them?\n\n> A backpatch can happen in such cases, see for example b6e39ca9. 
In\n> this case, the resolution was done with a backpatch to\n> system_views.sql and the release notes include an additional note\n> saying that the fix applies itself only on already-initialized\n> clusters. For other clusters, it was necessary to apply a SQL query,\n> given also in the release notes, to fix the issue (just grep for \n> CVE-2017-7547 in release-9.6.sgml on the REL9_6_STABLE branch).\n\nYeah, but that was for a security hole. I am doubtful that the\nseverity of this problem is bad enough to justify jumping through\nsimilar hoops. Even if we fixed it and documented it, how many\nusers would bother to apply the manual correction?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Apr 2020 00:58:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for pg_statio_all_tables" }, { "msg_contents": "On Tue, Apr 21, 2020 at 4:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Apr 21, 2020 at 02:44:45AM +0300, Alexander Korotkov wrote:\n> > Among all the joined tables, only \"pg_index I\" is expected to have\n> > multiple rows associated with single relation. But we do sum() for\n> > toast index \"pg_index X\" as well. As the result, we multiply\n> > statistics for toast index by the number of relation indexes. This is\n> > obviously wrong.\n>\n> Oops.\n>\n> > As a bugfix, I think this should be backpatched. But this patch\n> > requires catalog change. Were similar cases there before? If so,\n> > how did we resolve them?\n>\n> A backpatch can happen in such cases, see for example b6e39ca9. In\n> this case, the resolution was done with a backpatch to\n> system_views.sql and the release notes include an additional note\n> saying that the fix applies itself only on already-initialized\n> clusters. 
For other clusters, it was necessary to apply a SQL query,\n> given also in the release notes, to fix the issue (just grep for\n> CVE-2017-7547 in release-9.6.sgml on the REL9_6_STABLE branch).\n\nThank you for pointing!\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 21 Apr 2020 13:17:40 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix for pg_statio_all_tables" }, { "msg_contents": "On Tue, Apr 21, 2020 at 7:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Tue, Apr 21, 2020 at 02:44:45AM +0300, Alexander Korotkov wrote:\n> >> As a bugfix, I think this should be backpatched. But this patch\n> >> requires catalog change. Were similar cases there before? If so,\n> >> how did we resolve them?\n>\n> > A backpatch can happen in such cases, see for example b6e39ca9. In\n> > this case, the resolution was done with a backpatch to\n> > system_views.sql and the release notes include an additional note\n> > saying that the fix applies itself only on already-initialized\n> > clusters. For other clusters, it was necessary to apply a SQL query,\n> > given also in the release notes, to fix the issue (just grep for\n> > CVE-2017-7547 in release-9.6.sgml on the REL9_6_STABLE branch).\n>\n> Yeah, but that was for a security hole. I am doubtful that the\n> severity of this problem is bad enough to justify jumping through\n> similar hoops. Even if we fixed it and documented it, how many\n> users would bother to apply the manual correction?\n\nSure, only most conscious users will do the manual correction. 
But if\nthere are only two option: backpatch it this way or don't backpatch at\nall, then I would choose the first one.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 21 Apr 2020 13:19:56 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix for pg_statio_all_tables" }, { "msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Tue, Apr 21, 2020 at 7:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, but that was for a security hole. I am doubtful that the\n>> severity of this problem is bad enough to justify jumping through\n>> similar hoops. Even if we fixed it and documented it, how many\n>> users would bother to apply the manual correction?\n\n> Sure, only most conscious users will do the manual correction. But if\n> there are only two option: backpatch it this way or don't backpatch at\n> all, then I would choose the first one.\n\nWell, if it were something that you could just do and forget, then\nmaybe. But actually, you are proposing to invest a lot of *other*\npeople's time --- notably me, as the likely author of the next\nset of release notes --- so it's not entirely up to you.\n\nGiven the lack of field complaints, I'm still of the opinion that\nthis isn't really worth back-patching.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Apr 2020 09:59:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for pg_statio_all_tables" }, { "msg_contents": "On Tue, Apr 21, 2020 at 4:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > On Tue, Apr 21, 2020 at 7:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Yeah, but that was for a security hole. I am doubtful that the\n> >> severity of this problem is bad enough to justify jumping through\n> >> similar hoops. 
Even if we fixed it and documented it, how many\n> >> users would bother to apply the manual correction?\n>\n> > Sure, only most conscious users will do the manual correction. But if\n> > there are only two option: backpatch it this way or don't backpatch at\n> > all, then I would choose the first one.\n>\n> Well, if it were something that you could just do and forget, then\n> maybe. But actually, you are proposing to invest a lot of *other*\n> people's time --- notably me, as the likely author of the next\n> set of release notes --- so it's not entirely up to you.\n\nSure, this is not entirely up to me.\n\n> Given the lack of field complaints, I'm still of the opinion that\n> this isn't really worth back-patching.\n\nSo, what exact strategy do you propose?\n\nI don't like idea to postpone decision of what we do with\nbackbranches. We may decide not to fix it in previous releases. But\nin order to handle this decision correctly I think we should document\nthis bug there. I'm OK with doing this. And I can put my efforts on\nfixing it in the head and backpatching the documentation. But does\nthis save significant resources in comparison with fixing bug in\nbackbranches?\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 17:23:57 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix for pg_statio_all_tables" } ]
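The double counting discussed in the thread above can be illustrated with a short, self-contained simulation. This is an illustrative sketch only (Python rather than SQL, with made-up block counts): with R indexes on the relation, the join against pg_index I produces R copies of the toast-index columns, so a plain sum() over pg_index X inflates the toast index statistics by a factor of R.

```python
# Hypothetical numbers, purely for illustration -- not taken from a real catalog.
relation_indexes = [10, 20, 30]   # blocks hit per relation index (pg_index I rows)
toast_index_hits = 5              # blocks hit for the single toast index (pg_index X)

# The join produces one row per relation index; the toast-index columns are
# simply repeated on every one of those rows.
join_rows = [(idx_hits, toast_index_hits) for idx_hits in relation_indexes]

# Buggy view behaviour: sum() over the repeated toast-index column.
buggy_tidx_blks_hit = sum(toast for _, toast in join_rows)

# Fixed behaviour: the toast index statistics are counted only once.
fixed_tidx_blks_hit = toast_index_hits

print(buggy_tidx_blks_hit)  # 15 == 5 * len(relation_indexes)
print(fixed_tidx_blks_hit)  # 5
```

The fix described in the thread amounts to aggregating the toast-index columns so they are not repeated once per relation index.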
[ { "msg_contents": "Over in \"execExprInterp() questions / How to improve scalar array op\nexpr eval?\" [1] I'd mused about how we might be able to optimize\nscalar array ops with OR'd semantics.\n\nThis patch implements a binary search for such expressions when the\narray argument is a constant so that we can avoid needing to teach\nexpression execution to cache stable values or know when a param has\nchanged.\n\nThe speed-up for the target case can be pretty impressive: in my\nadmittedly contrived and relatively unscientific test with a query in\nthe form:\n\nselect count(*) from generate_series(1,100000) n(i) where i in (<1000\nrandom integers in the series>)\n\nshows ~30ms for the patch versus ~640ms on master.\n\nJames\n\n[1]: https://www.postgresql.org/message-id/flat/CAAaqYe-UQBba7sScrucDOyHb7cDoNbWf_rcLrOWeD4ikP3_qTQ%40mail.gmail.com", "msg_date": "Mon, 20 Apr 2020 21:27:34 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Mon, Apr 20, 2020 at 09:27:34PM -0400, James Coleman wrote:\n>Over in \"execExprInterp() questions / How to improve scalar array op\n>expr eval?\" [1] I'd mused about how we might be able to optimize\n>scalar array ops with OR'd semantics.\n>\n>This patch implements a binary search for such expressions when the\n>array argument is a constant so that we can avoid needing to teach\n>expression execution to cache stable values or know when a param has\n>changed.\n>\n>The speed-up for the target case can be pretty impressive: in my\n>admittedly contrived and relatively unscientific test with a query in\n>the form:\n>\n>select count(*) from generate_series(1,100000) n(i) where i in (<1000\n>random integers in the series>)\n>\n>shows ~30ms for the patch versus ~640ms on master.\n>\n\nNice improvement, although 1000 items is probably a bit unusual.
The\nthreshold used in the patch (9 elements) seems a bit too low - what\nresults have you seen with smaller arrays?\n\nAnother idea - would a bloom filter be useful here, as a second\noptimization? That is, for large arrays build a small bloom filter,\nallowing us to skip even the binary search.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Apr 2020 14:47:00 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Thu, Apr 23, 2020 at 8:47 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Mon, Apr 20, 2020 at 09:27:34PM -0400, James Coleman wrote:\n> >Over in \"execExprInterp() questions / How to improve scalar array op\n> >expr eval?\" [1] I'd mused about how we might be able to optimize\n> >scalar array ops with OR'd semantics.\n> >\n> >This patch implements a binary search for such expressions when the\n> >array argument is a constant so that we can avoid needing to teach\n> >expression execution to cache stable values or know when a param has\n> >changed.\n> >\n> >The speed-up for the target case can be pretty impressive: in my\n> >admittedly contrived and relatively unscientific test with a query in\n> >the form:\n> >\n> >select count(*) from generate_series(1,100000) n(i) where i in (<1000\n> >random integers in the series>)\n> >\n> >shows ~30ms for the patch versus ~640ms on master.\n> >\n>\n> Nice improvement, although 1000 items is probably a bit unusual. The\n> threshold used in the patch (9 elements) seems a bit too low - what\n> results have you seen with smaller arrays?\n\nAt least in our systems we regularly work with 1000 batches of items,\nwhich means you get IN clauses of identifiers of that size.
Admittedly\nthe most common case sees those IN clauses as simple index scans\n(e.g., WHERE <primary key> IN (...)), but it's also common to have a\nbroader query that merely filters additionally on something like \"...\nAND <some foreign key> IN (...)\" where it makes sense for the rest of\nthe quals to take precedence in generating a reasonable plan. In that\ncase, the IN becomes a regular filter, hence the idea behind the\npatch.\n\nSide note: I'd love for us to be able to treat \"IN (VALUES)\" the same\nway...but as noted in the other thread that's an extremely large\namount of work, I think. But similarly you could use a hash here\ninstead of a binary search...but this seems quite good.\n\nAs to the choice of 9 elements: I just picked that as a starting\npoint; Andres had previously commented off hand that at 8 elements\nserial scanning was faster, so I figured this was a reasonable\nstarting point for discussion.\n\nPerhaps it would make sense to determine that minimum not as a pure\nconstant but scaled based on how many rows the planner expects us to\nsee? Of course that'd be a more invasive patch...so may or may not be\nas feasible as a reasonable default.\n\n> Another idea - would a bloom filter be useful here, as a second\n> optimization? That is, for large arrays build s small bloom filter,\n> allowing us to skip even the binary search.\n\nThat's an interesting idea. I actually haven't personally worked with\nbloom filters, so didn't think about that.\n\nAre you thinking that you'd also build the filter *and* presort the\narray? 
Or try to get away with using only the bloom filter and not\nexpanding and sorting the array at all?\n\nThanks,\nJames\n\n\n", "msg_date": "Thu, 23 Apr 2020 09:02:26 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Thu, Apr 23, 2020 at 09:02:26AM -0400, James Coleman wrote:\n>On Thu, Apr 23, 2020 at 8:47 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Mon, Apr 20, 2020 at 09:27:34PM -0400, James Coleman wrote:\n>> >Over in \"execExprInterp() questions / How to improve scalar array op\n>> >expr eval?\" [1] I'd mused about how we might be able to optimized\n>> >scalar array ops with OR'd semantics.\n>> >\n>> >This patch implements a binary search for such expressions when the\n>> >array argument is a constant so that we can avoid needing to teach\n>> >expression execution to cache stable values or know when a param has\n>> >changed.\n>> >\n>> >The speed-up for the target case can pretty impressive: in my\n>> >admittedly contrived and relatively unscientific test with a query in\n>> >the form:\n>> >\n>> >select count(*) from generate_series(1,100000) n(i) where i in (<1000\n>> >random integers in the series>)\n>> >\n>> >shows ~30ms for the patch versus ~640ms on master.\n>> >\n>>\n>> Nice improvement, although 1000 items is probably a bit unusual. The\n>> threshold used in the patch (9 elements) seems a bit too low - what\n>> results have you seen with smaller arrays?\n>\n>At least in our systems we regularly work with 1000 batches of items,\n>which means you get IN clauses of identifiers of that size. 
Admittedly\n>the most common case sees those IN clauses as simple index scans\n>(e.g., WHERE <primary key> IN (...)), but it's also common to have a\n>broader query that merely filters additionally on something like \"...\n>AND <some foreign key> IN (...)\" where it makes sense for the rest of\n>the quals to take precedence in generating a reasonable plan. In that\n>case, the IN becomes a regular filter, hence the idea behind the\n>patch.\n>\n>Side note: I'd love for us to be able to treat \"IN (VALUES)\" the same\n>way...but as noted in the other thread that's an extremely large\n>amount of work, I think. But similarly you could use a hash here\n>instead of a binary search...but this seems quite good.\n>\n>As to the choice of 9 elements: I just picked that as a starting\n>point; Andres had previously commented off hand that at 8 elements\n>serial scanning was faster, so I figured this was a reasonable\n>starting point for discussion.\n>\n>Perhaps it would make sense to determine that minimum not as a pure\n>constant but scaled based on how many rows the planner expects us to\n>see? Of course that'd be a more invasive patch...so may or may not be\n>as feasible as a reasonable default.\n>\n\nNot sure. That seems a bit overcomplicated and I don't think it depends\non the number of rows the planner expects to see very much. I think we\nusually assume the linear search is cheaper for small arrays and then at\nsome point the binary search starts winning The question is where this\n\"break even\" point is.\n\nI think we usually use something like 64 or so in other places, but\nmaybe I'm wrong. The current limit 9 seems a bit too low, but I may be\nwrong. Let's not obsess about this too much, let's do some experiments\nand pick a value based on that.\n\n\n>> Another idea - would a bloom filter be useful here, as a second\n>> optimization? That is, for large arrays build s small bloom filter,\n>> allowing us to skip even the binary search.\n>\n>That's an interesting idea. 
I actually haven't personally worked with\n>bloom filters, so didn't think about that.\n>\n>Are you thinking that you'd also build the filter *and* presort the\n>array? Or try to get away with using only the bloom filter and not\n>expanding and sorting the array at all?\n>\n\nYeah, something like that. My intuition is the bloom filter is useful\nonly above some number of items, and the number is higher than for the\nbinary search. So we'd end up with two thresholds, first one enabling\nbinary search, the second one enabling bloom filter.\n\nOf course, the \"unknown\" variable here is how often we actually find the\nvalue in the array. If 100% of the queries has a match, then the bloom\nfilter is a waste of time. If there are no matches, it can make a\nsignificant difference.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Apr 2020 16:55:51 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Fri, 24 Apr 2020 at 02:56, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Thu, Apr 23, 2020 at 09:02:26AM -0400, James Coleman wrote:\n> >On Thu, Apr 23, 2020 at 8:47 AM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >As to the choice of 9 elements: I just picked that as a starting\n> >point; Andres had previously commented off hand that at 8 elements\n> >serial scanning was faster, so I figured this was a reasonable\n> >starting point for discussion.\n> >\n> >Perhaps it would make sense to determine that minimum not as a pure\n> >constant but scaled based on how many rows the planner expects us to\n> >see? Of course that'd be a more invasive patch...so may or may not be\n> >as feasible as a reasonable default.\n> >\n>\n> Not sure. 
That seems a bit overcomplicated and I don't think it depends\n> on the number of rows the planner expects to see very much. I think we\n> usually assume the linear search is cheaper for small arrays and then at\n> some point the binary search starts winning The question is where this\n> \"break even\" point is.\n>\n> I think we usually use something like 64 or so in other places, but\n> maybe I'm wrong. The current limit 9 seems a bit too low, but I may be\n> wrong. Let's not obsess about this too much, let's do some experiments\n> and pick a value based on that.\n\nIf single comparison for a binary search costs about the same as an\nequality check, then wouldn't the crossover point be much lower than\n64? The binary search should find or not find the target in log2(N)\nrather than N. ceil(log2(9)) is 4, which is of course less than 9.\nFor 64, it's 6, so are you not just doing a possible 58 equality\nchecks than necessary? Of course, it's a bit more complex as for\nvalues that *are* in the array, the linear search will, on average,\nonly check half the values. Assuming that, then 9 does not seem too\nfar off. I guess benchmarks at various crossover points would speak a\nthousand words.\n\nDavid\n\n\n", "msg_date": "Fri, 24 Apr 2020 10:09:36 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "Hi,\n\nOn 2020-04-24 10:09:36 +1200, David Rowley wrote:\n> If single comparison for a binary search costs about the same as an\n> equality check, then wouldn't the crossover point be much lower than\n> 64?\n\nThe costs aren't quite as simple as that though. Binary search usually\nhas issues with cache misses: In contrast to linear accesses each step\nwill be a cache miss, as the address is not predictable; and even if the\nCPU couldn't predict accesses in the linear search case, often multiple\nentries fit on a single cache line. 
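As a quick check of the comparison-count arithmetic quoted above (ceil(log2(9)) = 4, ceil(log2(64)) = 6), here is a small sketch. It counts key comparisons only, and deliberately ignores the cache-miss and branch-prediction effects discussed in this message:

```python
import math

def linear_comparisons(n, found=True):
    # A hit is found half-way through on average; a miss scans all n items.
    return n / 2 if found else n

def binary_comparisons(n):
    # A binary search probes at most ceil(log2(n)) elements.
    return math.ceil(math.log2(n))

print(binary_comparisons(9), binary_comparisons(64))               # 4 6
print(linear_comparisons(9), linear_comparisons(64, found=False))  # 4.5 64
```

On pure comparison counts the binary search wins even at 9 elements; the memory-access effects above are why the practical crossover sits higher.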
Additionally out-of-order execution\nis usually a lot more effective for linear searches (e.g. the next\nelements can be compared before the current one is finished if that's\nwhat the branch predictor says is likely).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Apr 2020 15:35:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Thu, Apr 23, 2020 at 10:55 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Thu, Apr 23, 2020 at 09:02:26AM -0400, James Coleman wrote:\n> >On Thu, Apr 23, 2020 at 8:47 AM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> On Mon, Apr 20, 2020 at 09:27:34PM -0400, James Coleman wrote:\n> >> >Over in \"execExprInterp() questions / How to improve scalar array op\n> >> >expr eval?\" [1] I'd mused about how we might be able to optimized\n> >> >scalar array ops with OR'd semantics.\n> >> >\n> >> >This patch implements a binary search for such expressions when the\n> >> >array argument is a constant so that we can avoid needing to teach\n> >> >expression execution to cache stable values or know when a param has\n> >> >changed.\n> >> >\n> >> >The speed-up for the target case can pretty impressive: in my\n> >> >admittedly contrived and relatively unscientific test with a query in\n> >> >the form:\n> >> >\n> >> >select count(*) from generate_series(1,100000) n(i) where i in (<1000\n> >> >random integers in the series>)\n> >> >\n> >> >shows ~30ms for the patch versus ~640ms on master.\n> >> >\n> >>\n> >> Nice improvement, although 1000 items is probably a bit unusual. The\n> >> threshold used in the patch (9 elements) seems a bit too low - what\n> >> results have you seen with smaller arrays?\n> >\n> >At least in our systems we regularly work with 1000 batches of items,\n> >which means you get IN clauses of identifiers of that size. 
Admittedly\n> >the most common case sees those IN clauses as simple index scans\n> >(e.g., WHERE <primary key> IN (...)), but it's also common to have a\n> >broader query that merely filters additionally on something like \"...\n> >AND <some foreign key> IN (...)\" where it makes sense for the rest of\n> >the quals to take precedence in generating a reasonable plan. In that\n> >case, the IN becomes a regular filter, hence the idea behind the\n> >patch.\n> >\n> >Side note: I'd love for us to be able to treat \"IN (VALUES)\" the same\n> >way...but as noted in the other thread that's an extremely large\n> >amount of work, I think. But similarly you could use a hash here\n> >instead of a binary search...but this seems quite good.\n> >\n> >As to the choice of 9 elements: I just picked that as a starting\n> >point; Andres had previously commented off hand that at 8 elements\n> >serial scanning was faster, so I figured this was a reasonable\n> >starting point for discussion.\n> >\n> >Perhaps it would make sense to determine that minimum not as a pure\n> >constant but scaled based on how many rows the planner expects us to\n> >see? Of course that'd be a more invasive patch...so may or may not be\n> >as feasible as a reasonable default.\n> >\n>\n> Not sure. That seems a bit overcomplicated and I don't think it depends\n> on the number of rows the planner expects to see very much. 
I think we\n> usually assume the linear search is cheaper for small arrays and then at\n> some point the binary search starts winning The question is where this\n> \"break even\" point is.\n\nWell since it has to do preprocessing work (expanding the array and\nthen sorting it), then the number of rows processed matters, right?\nFor example, doing a linear search on 1000 items only once is going to\nbe cheaper than preprocessing the array and then doing a binary\nsearch, but only a very large row count the binary search +\npreprocessing might very well win out for only a 10 element array.\n\nI'm not trying to argue for more work for myself here: I think the\noptimization is worth it on its own, and something like this could be\na further improvement on its own. But it is interesting to think\nabout.\n\nJames\n\n\n", "msg_date": "Fri, 24 Apr 2020 09:38:54 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Fri, Apr 24, 2020 at 09:38:54AM -0400, James Coleman wrote:\n>On Thu, Apr 23, 2020 at 10:55 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Thu, Apr 23, 2020 at 09:02:26AM -0400, James Coleman wrote:\n>> >On Thu, Apr 23, 2020 at 8:47 AM Tomas Vondra\n>> ><tomas.vondra@2ndquadrant.com> wrote:\n>> >>\n>> >> On Mon, Apr 20, 2020 at 09:27:34PM -0400, James Coleman wrote:\n>> >> >Over in \"execExprInterp() questions / How to improve scalar array op\n>> >> >expr eval?\" [1] I'd mused about how we might be able to optimized\n>> >> >scalar array ops with OR'd semantics.\n>> >> >\n>> >> >This patch implements a binary search for such expressions when the\n>> >> >array argument is a constant so that we can avoid needing to teach\n>> >> >expression execution to cache stable values or know when a param has\n>> >> >changed.\n>> >> >\n>> >> >The speed-up for the target case can pretty impressive: in my\n>> >> >admittedly contrived 
and relatively unscientific test with a query in\n>> >> >the form:\n>> >> >\n>> >> >select count(*) from generate_series(1,100000) n(i) where i in (<1000\n>> >> >random integers in the series>)\n>> >> >\n>> >> >shows ~30ms for the patch versus ~640ms on master.\n>> >> >\n>> >>\n>> >> Nice improvement, although 1000 items is probably a bit unusual. The\n>> >> threshold used in the patch (9 elements) seems a bit too low - what\n>> >> results have you seen with smaller arrays?\n>> >\n>> >At least in our systems we regularly work with 1000 batches of items,\n>> >which means you get IN clauses of identifiers of that size. Admittedly\n>> >the most common case sees those IN clauses as simple index scans\n>> >(e.g., WHERE <primary key> IN (...)), but it's also common to have a\n>> >broader query that merely filters additionally on something like \"...\n>> >AND <some foreign key> IN (...)\" where it makes sense for the rest of\n>> >the quals to take precedence in generating a reasonable plan. In that\n>> >case, the IN becomes a regular filter, hence the idea behind the\n>> >patch.\n>> >\n>> >Side note: I'd love for us to be able to treat \"IN (VALUES)\" the same\n>> >way...but as noted in the other thread that's an extremely large\n>> >amount of work, I think. But similarly you could use a hash here\n>> >instead of a binary search...but this seems quite good.\n>> >\n>> >As to the choice of 9 elements: I just picked that as a starting\n>> >point; Andres had previously commented off hand that at 8 elements\n>> >serial scanning was faster, so I figured this was a reasonable\n>> >starting point for discussion.\n>> >\n>> >Perhaps it would make sense to determine that minimum not as a pure\n>> >constant but scaled based on how many rows the planner expects us to\n>> >see? Of course that'd be a more invasive patch...so may or may not be\n>> >as feasible as a reasonable default.\n>> >\n>>\n>> Not sure. 
That seems a bit overcomplicated and I don't think it depends\n>> on the number of rows the planner expects to see very much. I think we\n>> usually assume the linear search is cheaper for small arrays and then at\n>> some point the binary search starts winning The question is where this\n>> \"break even\" point is.\n>\n>Well since it has to do preprocessing work (expanding the array and\n>then sorting it), then the number of rows processed matters, right?\n>For example, doing a linear search on 1000 items only once is going to\n>be cheaper than preprocessing the array and then doing a binary\n>search, but only a very large row count the binary search +\n>preprocessing might very well win out for only a 10 element array.\n>\n\nHmmm, good point. Essentially the initialization (sorting of the array)\nhas some cost, and the question is how much extra per-tuple cost this\nadds. It's probably not worth it for a single lookup, but for many\nlookups it's probably OK. Let's see if I can do the math right:\n\n N - number of lookups\n K - number of array elements\n\nCost to sort the array is\n\n O(K * log(K)) = C1 * K * log(K)\n\nand the cost of a lookup is C2 * log(K), so with the extra cost amortized\nfor N lookups, the total \"per lookup\" cost is\n\n C1 * K * log(K) / N + C2 * log(K) = log(K) * (C1 * K / N + C2)\n\nWe need to compare this to the O(K) cost of simple linear search, and\nthe question is at which point the linear search gets more expensive:\n\n C3 * K = log(K) * (C1 * K / N + C2)\n\nI think we can assume that C3 is somewhere in between 0.5 and 1, i.e. if\nthere's a matching item we find it half-way through on average, and if\nthere is not we have to walk the whole array. So let's say it's 1.\n\nC1 and C2 are probably fairly low, I think - C1 is typically ~1.4 for\nrandom pivot choice IIRC, and C2 is probably similar. 
With both values\nbeing ~1.5 we get this:\n\n K = log(K) * (1.5 * K/N + 1.5)\n\nfor a fixed K, we get this formula for N:\n\n N = log(K) * 1.5 * K / (K - 1.5 * log(K))\n\nand for a bunch of K values the results look like this:\n\n K | N\n -------|-------\n 1 | 0\n 10 | 5.27\n 100 | 7.42\n 1000 | 10.47\n 10000 | 13.83\n 100000 | 17.27\n\ni.e. the binary search with 10k values starts winning over linear search\nwith just ~13 lookups.\n\n(Assuming I haven't made some silly mistake in the math ...)\n\nObviously, this only accounts for cost of comparisons and neglects e.g.\nthe indirect costs for less predictable memory access patterns mentioned\nby Andres in his response.\n\nBut I think it still shows the number of lookups needed for the binary\nsearch to be a win is pretty low - at least for reasonable number of\nvalues in the array. Maybe it's 20 and not 10, but I don't think that\nchanges much.\n\nThe other question is if we can get N at all and how reliable the value\nis. We can probably get the number of rows, but that will ignore other\nconditions that may eliminate the row before the binary search.\n\n>I'm not trying to argue for more work for myself here: I think the\n>optimization is worth it on its own, and something like this could be\n>a further improvement on its own. But it is interesting to think\n>about.\n>\n\nI don't know. Clearly, if the user sends a query with 10k values and\nonly does a single lookup, that won't win. And if we can reasonably and\nreliably protect against that, I wouldn't mind doing that, although it\nmeans a risk of not using the bin search in case of underestimates etc.\n\nI don't have any hard data about this, but I think we can assume the\nnumber of rows processed by the clause is (much) higher than the number\nof keys in it. 
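As a numerical sanity check, the break-even formula above can be evaluated directly. This sketch assumes the same constants (C1 = C2 = 1.5, C3 = 1) and natural logarithms, and reproduces the K/N table within rounding:

```python
import math

# N = log(K) * 1.5 * K / (K - 1.5 * log(K)) -- the number of lookups at which
# binary search (including the one-time sort) breaks even with a linear scan.
def break_even_lookups(k, c=1.5):
    log_k = math.log(k)  # natural log, matching the table in the thread
    return log_k * c * k / (k - c * log_k)

for k in (10, 100, 1000, 10000, 100000):
    print(k, round(break_even_lookups(k), 2))
# Matches the table above within rounding: ~5.28, 7.42, 10.47, 13.83, 17.27
```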
If you have a clause with 10k values, then you probably\nexpect it to be applied to many rows, far more than the \"break even\"\npoint of about 10-20 rows ...\n\nSo I wouldn't worry about this too much.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 24 Apr 2020 23:55:15 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Thu, Apr 23, 2020 at 04:55:51PM +0200, Tomas Vondra wrote:\n>On Thu, Apr 23, 2020 at 09:02:26AM -0400, James Coleman wrote:\n>>On Thu, Apr 23, 2020 at 8:47 AM Tomas Vondra\n>><tomas.vondra@2ndquadrant.com> wrote:\n>>>\n>>>On Mon, Apr 20, 2020 at 09:27:34PM -0400, James Coleman wrote:\n>>>>Over in \"execExprInterp() questions / How to improve scalar array op\n>>>>expr eval?\" [1] I'd mused about how we might be able to optimize\n>>>>scalar array ops with OR'd semantics.\n>>>>\n>>>>This patch implements a binary search for such expressions when the\n>>>>array argument is a constant so that we can avoid needing to teach\n>>>>expression execution to cache stable values or know when a param has\n>>>>changed.\n>>>>\n>>>>The speed-up for the target case can be pretty impressive: in my\n>>>>admittedly contrived and relatively unscientific test with a query in\n>>>>the form:\n>>>>\n>>>>select count(*) from generate_series(1,100000) n(i) where i in (<1000\n>>>>random integers in the series>)\n>>>>\n>>>>shows ~30ms for the patch versus ~640ms on master.\n>>>>\n>>>\n>>>Nice improvement, although 1000 items is probably a bit unusual. The\n>>>threshold used in the patch (9 elements) seems a bit too low - what\n>>>results have you seen with smaller arrays?\n>>\n>>At least in our systems we regularly work with 1000 batches of items,\n>>which means you get IN clauses of identifiers of that size.
Admittedly\n>>the most common case sees those IN clauses as simple index scans\n>>(e.g., WHERE <primary key> IN (...)), but it's also common to have a\n>>broader query that merely filters additionally on something like \"...\n>>AND <some foreign key> IN (...)\" where it makes sense for the rest of\n>>the quals to take precedence in generating a reasonable plan. In that\n>>case, the IN becomes a regular filter, hence the idea behind the\n>>patch.\n>>\n>>Side note: I'd love for us to be able to treat \"IN (VALUES)\" the same\n>>way...but as noted in the other thread that's an extremely large\n>>amount of work, I think. But similarly you could use a hash here\n>>instead of a binary search...but this seems quite good.\n>>\n>>As to the choice of 9 elements: I just picked that as a starting\n>>point; Andres had previously commented off hand that at 8 elements\n>>serial scanning was faster, so I figured this was a reasonable\n>>starting point for discussion.\n>>\n>>Perhaps it would make sense to determine that minimum not as a pure\n>>constant but scaled based on how many rows the planner expects us to\n>>see? Of course that'd be a more invasive patch...so may or may not be\n>>as feasible as a reasonable default.\n>>\n>\n>Not sure. That seems a bit overcomplicated and I don't think it depends\n>on the number of rows the planner expects to see very much. I think we\n>usually assume the linear search is cheaper for small arrays and then at\n>some point the binary search starts winning The question is where this\n>\"break even\" point is.\n>\n>I think we usually use something like 64 or so in other places, but\n>maybe I'm wrong. The current limit 9 seems a bit too low, but I may be\n>wrong. Let's not obsess about this too much, let's do some experiments\n>and pick a value based on that.\n>\n>\n>>>Another idea - would a bloom filter be useful here, as a second\n>>>optimization? 
That is, for large arrays build a small bloom filter,\n>>>allowing us to skip even the binary search.\n>>\n>>That's an interesting idea. I actually haven't personally worked with\n>>bloom filters, so didn't think about that.\n>>\n>>Are you thinking that you'd also build the filter *and* presort the\n>>array? Or try to get away with using only the bloom filter and not\n>>expanding and sorting the array at all?\n>>\n>\n>Yeah, something like that. My intuition is the bloom filter is useful\n>only above some number of items, and the number is higher than for the\n>binary search. So we'd end up with two thresholds, first one enabling\n>binary search, the second one enabling bloom filter.\n>\n>Of course, the \"unknown\" variable here is how often we actually find the\n>value in the array. If 100% of the queries have a match, then the bloom\n>filter is a waste of time. If there are no matches, it can make a\n>significant difference.\n>\n\nI did experiment with this a bit, both to get a bit more familiar\nwith this code and to see if the bloom filter might help. The short\nanswer is the bloom filter does not seem to help at all, so I wouldn't\nbother about it too much.\n\nAttached is an updated patch series, a script I used to collect some\nperformance measurements, and a spreadsheet with results. The patch\nseries is broken into four parts:\n\n 0001 - the original patch with binary search\n 0002 - adds GUCs to enable bin search / tweak threshold\n 0003 - allows to use bloom filter + binary search\n 0004 - try using murmurhash\n\nThe test script runs a wide range of queries with different number\nof lookups, keys in the array, match probability (i.e. fraction of\nlookups that find a match) ranging from 1% to 100%. 
And of course, it\nruns this with the binsearch/bloom either enabled or disabled (the\nthreshold was lowered to 1, so it's the on/off GUCs that determine\nwhether the binsearch/bloom is used).\n\nThe results are summarized in the spreadsheet, demonstrating how useless\nthe bloom filter is. There's not a single case where it would beat the\nbinary search. I believe this is because theaccess to bloom filter is\nrandom (determined by the hash function) and we don't save much compared\nto the log(K) lookups in the sorted array.\n\nThat makes sense, I think the bloom filters are meant to be used in\ncases when the main data don't fit into memory - which is not the case\nhere. But I wonder how would this change for cases with more expensive\ncomparisons - this was using just integers, so maybe strings would\nresult in different behavior.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 25 Apr 2020 00:21:06 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Fri, Apr 24, 2020 at 5:55 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Fri, Apr 24, 2020 at 09:38:54AM -0400, James Coleman wrote:\n> >On Thu, Apr 23, 2020 at 10:55 AM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> On Thu, Apr 23, 2020 at 09:02:26AM -0400, James Coleman wrote:\n> >> >On Thu, Apr 23, 2020 at 8:47 AM Tomas Vondra\n> >> ><tomas.vondra@2ndquadrant.com> wrote:\n> >> >>\n> >> >> On Mon, Apr 20, 2020 at 09:27:34PM -0400, James Coleman wrote:\n> >> >> >Over in \"execExprInterp() questions / How to improve scalar array op\n> >> >> >expr eval?\" [1] I'd mused about how we might be able to optimized\n> >> >> >scalar array ops with OR'd semantics.\n> >> >> >\n> >> >> >This patch implements a binary search for such expressions when 
the\n> >> >> >array argument is a constant so that we can avoid needing to teach\n> >> >> >expression execution to cache stable values or know when a param has\n> >> >> >changed.\n> >> >> >\n> >> >> >The speed-up for the target case can pretty impressive: in my\n> >> >> >admittedly contrived and relatively unscientific test with a query in\n> >> >> >the form:\n> >> >> >\n> >> >> >select count(*) from generate_series(1,100000) n(i) where i in (<1000\n> >> >> >random integers in the series>)\n> >> >> >\n> >> >> >shows ~30ms for the patch versus ~640ms on master.\n> >> >> >\n> >> >>\n> >> >> Nice improvement, although 1000 items is probably a bit unusual. The\n> >> >> threshold used in the patch (9 elements) seems a bit too low - what\n> >> >> results have you seen with smaller arrays?\n> >> >\n> >> >At least in our systems we regularly work with 1000 batches of items,\n> >> >which means you get IN clauses of identifiers of that size. Admittedly\n> >> >the most common case sees those IN clauses as simple index scans\n> >> >(e.g., WHERE <primary key> IN (...)), but it's also common to have a\n> >> >broader query that merely filters additionally on something like \"...\n> >> >AND <some foreign key> IN (...)\" where it makes sense for the rest of\n> >> >the quals to take precedence in generating a reasonable plan. In that\n> >> >case, the IN becomes a regular filter, hence the idea behind the\n> >> >patch.\n> >> >\n> >> >Side note: I'd love for us to be able to treat \"IN (VALUES)\" the same\n> >> >way...but as noted in the other thread that's an extremely large\n> >> >amount of work, I think. 
But similarly you could use a hash here\n> >> >instead of a binary search...but this seems quite good.\n> >> >\n> >> >As to the choice of 9 elements: I just picked that as a starting\n> >> >point; Andres had previously commented off hand that at 8 elements\n> >> >serial scanning was faster, so I figured this was a reasonable\n> >> >starting point for discussion.\n> >> >\n> >> >Perhaps it would make sense to determine that minimum not as a pure\n> >> >constant but scaled based on how many rows the planner expects us to\n> >> >see? Of course that'd be a more invasive patch...so may or may not be\n> >> >as feasible as a reasonable default.\n> >> >\n> >>\n> >> Not sure. That seems a bit overcomplicated and I don't think it depends\n> >> on the number of rows the planner expects to see very much. I think we\n> >> usually assume the linear search is cheaper for small arrays and then at\n> >> some point the binary search starts winning The question is where this\n> >> \"break even\" point is.\n> >\n> >Well since it has to do preprocessing work (expanding the array and\n> >then sorting it), then the number of rows processed matters, right?\n> >For example, doing a linear search on 1000 items only once is going to\n> >be cheaper than preprocessing the array and then doing a binary\n> >search, but only a very large row count the binary search +\n> >preprocessing might very well win out for only a 10 element array.\n> >\n>\n> Hmmm, good point. Essentially the initialization (sorting of the array)\n> has some cost, and the question is how much extra per-tuple cost this\n> adds. It's probably not worth it for a single lookup, but for many\n> lookups it's probably OK. 
Let's see if I can do the math right:\n>\n> N - number of lookups\n> K - number of array elements\n>\n> Cost to sort the array is\n>\n> O(K * log(K)) = C1 * K * log(K)\n>\n> and the cost of a lookup is C2 * log(K), so with the extra cost amortized\n> for N lookups, the total \"per lookup\" cost is\n>\n> C1 * K * log(K) / N + C2 * log(K) = log(K) * (C1 * K / N + C2)\n>\n> We need to compare this to the O(K) cost of simple linear search, and\n> the question is at which point the linear search gets more expensive:\n>\n> C3 * K = log(K) * (C1 * K / N + C2)\n>\n> I think we can assume that C3 is somewhere in between 0.5 and 1, i.e. if\n> there's a matching item we find it half-way through on average, and if\n> there is not we have to walk the whole array. So let's say it's 1.\n>\n> C1 and C2 are probably fairly low, I think - C1 is typically ~1.4 for\n> random pivot choice IIRC, and C2 is probably similar. With both values\n> being ~1.5 we get this:\n>\n> K = log(K) * (1.5 * K/N + 1.5)\n>\n> for a fixed K, we get this formula for N:\n>\n> N = log(K) * 1.5 * K / (K - 1.5 * log(K))\n>\n> and for a bunch of K values the results look like this:\n>\n> K | N\n> -------|-------\n> 1 | 0\n> 10 | 5.27\n> 100 | 7.42\n> 1000 | 10.47\n> 10000 | 13.83\n> 100000 | 17.27\n>\n> i.e. the binary search with 10k values starts winning over linear search\n> with just ~13 lookups.\n>\n> (Assuming I haven't made some silly mistake in the math ...)\n>\n> Obviously, this only accounts for cost of comparisons and neglects e.g.\n> the indirect costs for less predictable memory access patterns mentioned\n> by Andres in his response.\n>\n> But I think it still shows the number of lookups needed for the binary\n> search to be a win is pretty low - at least for reasonable number of\n> values in the array. Maybe it's 20 and not 10, but I don't think that\n> changes much.\n>\n> The other question is if we can get N at all and how reliable the value\n> is. 
We can probably get the number of rows, but that will ignore other\n> conditions that may eliminate the row before the binary search.\n>\n> >I'm not trying to argue for more work for myself here: I think the\n> >optimization is worth it on its own, and something like this could be\n> >a further improvement on its own. But it is interesting to think\n> >about.\n> >\n>\n> I don't know. Clearly, if the user sends a query with 10k values and\n> only does a single lookup, that won't win. And if we can reasonably and\n> reliably protect against that, I wouldn't mind doing that, although it\n> means a risk of not using the bin search in case of underestimates etc.\n>\n> I don't have any hard data about this, but I think we can assume the\n> number of rows processed by the clause is (much) higher than the number\n> of keys in it. If you have a clause with 10k values, then you probably\n> expect it to be applied to many rows, far more than the \"beak even\"\n> point of about 10-20 rows ...\n>\n> So I wouldn't worry about this too much.\n\nYeah. I think it becomes a lot more interesting in the future if/when\nwe end up with a way to use this with params and not just constant\narrays. Then the \"group\" size would matter a whole lot more.\n\nFor now, the constant amount of overhead is quite small, so even if we\nonly execute it once we won't make the query that much worse (or, at\nleast, the total query time will still be very small). 
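For concreteness, the one-time preprocessing being discussed is just "flatten the constant array and sort it", after which each row costs one binary search. A rough Python sketch of the idea (illustrative names, not the patch's actual C implementation):

```python
from bisect import bisect_left

def make_in_list_matcher(const_array):
    # One-time setup, paid once per query: O(K log K) sort.
    keys = sorted(const_array)

    def matches(value):
        # Per-row probe: O(log K) instead of scanning up to K elements.
        i = bisect_left(keys, value)
        return i < len(keys) and keys[i] == value

    return matches

match = make_in_list_matcher([7, 42, 99, 3, 42])
print([v for v in range(100) if match(v)])  # prints [3, 7, 42, 99]
```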
Also, because\nit's only applied to constants, there's a natural limit to how much\noverhead we're likely to introduce into a query.\n\nJames\n\n\n", "msg_date": "Fri, 24 Apr 2020 21:22:34 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, Apr 25, 2020 at 12:21:06AM +0200, Tomas Vondra wrote:\n>On Thu, Apr 23, 2020 at 04:55:51PM +0200, Tomas Vondra wrote:\n>>On Thu, Apr 23, 2020 at 09:02:26AM -0400, James Coleman wrote:\n>>>On Thu, Apr 23, 2020 at 8:47 AM Tomas Vondra\n>>><tomas.vondra@2ndquadrant.com> wrote:\n>>>>\n>>>>On Mon, Apr 20, 2020 at 09:27:34PM -0400, James Coleman wrote:\n>>>>>Over in \"execExprInterp() questions / How to improve scalar array op\n>>>>>expr eval?\" [1] I'd mused about how we might be able to optimized\n>>>>>scalar array ops with OR'd semantics.\n>>>>>\n>>>>>This patch implements a binary search for such expressions when the\n>>>>>array argument is a constant so that we can avoid needing to teach\n>>>>>expression execution to cache stable values or know when a param has\n>>>>>changed.\n>>>>>\n>>>>>The speed-up for the target case can pretty impressive: in my\n>>>>>admittedly contrived and relatively unscientific test with a query in\n>>>>>the form:\n>>>>>\n>>>>>select count(*) from generate_series(1,100000) n(i) where i in (<1000\n>>>>>random integers in the series>)\n>>>>>\n>>>>>shows ~30ms for the patch versus ~640ms on master.\n>>>>>\n>>>>\n>>>>Nice improvement, although 1000 items is probably a bit unusual. The\n>>>>threshold used in the patch (9 elements) seems a bit too low - what\n>>>>results have you seen with smaller arrays?\n>>>\n>>>At least in our systems we regularly work with 1000 batches of items,\n>>>which means you get IN clauses of identifiers of that size. 
Admittedly\n>>>the most common case sees those IN clauses as simple index scans\n>>>(e.g., WHERE <primary key> IN (...)), but it's also common to have a\n>>>broader query that merely filters additionally on something like \"...\n>>>AND <some foreign key> IN (...)\" where it makes sense for the rest of\n>>>the quals to take precedence in generating a reasonable plan. In that\n>>>case, the IN becomes a regular filter, hence the idea behind the\n>>>patch.\n>>>\n>>>Side note: I'd love for us to be able to treat \"IN (VALUES)\" the same\n>>>way...but as noted in the other thread that's an extremely large\n>>>amount of work, I think. But similarly you could use a hash here\n>>>instead of a binary search...but this seems quite good.\n>>>\n>>>As to the choice of 9 elements: I just picked that as a starting\n>>>point; Andres had previously commented off hand that at 8 elements\n>>>serial scanning was faster, so I figured this was a reasonable\n>>>starting point for discussion.\n>>>\n>>>Perhaps it would make sense to determine that minimum not as a pure\n>>>constant but scaled based on how many rows the planner expects us to\n>>>see? Of course that'd be a more invasive patch...so may or may not be\n>>>as feasible as a reasonable default.\n>>>\n>>\n>>Not sure. That seems a bit overcomplicated and I don't think it depends\n>>on the number of rows the planner expects to see very much. I think we\n>>usually assume the linear search is cheaper for small arrays and then at\n>>some point the binary search starts winning The question is where this\n>>\"break even\" point is.\n>>\n>>I think we usually use something like 64 or so in other places, but\n>>maybe I'm wrong. The current limit 9 seems a bit too low, but I may be\n>>wrong. Let's not obsess about this too much, let's do some experiments\n>>and pick a value based on that.\n>>\n>>\n>>>>Another idea - would a bloom filter be useful here, as a second\n>>>>optimization? 
That is, for large arrays build s small bloom filter,\n>>>>allowing us to skip even the binary search.\n>>>\n>>>That's an interesting idea. I actually haven't personally worked with\n>>>bloom filters, so didn't think about that.\n>>>\n>>>Are you thinking that you'd also build the filter *and* presort the\n>>>array? Or try to get away with using only the bloom filter and not\n>>>expanding and sorting the array at all?\n>>>\n>>\n>>Yeah, something like that. My intuition is the bloom filter is useful\n>>only above some number of items, and the number is higher than for the\n>>binary search. So we'd end up with two thresholds, first one enabling\n>>binary search, the second one enabling bloom filter.\n>>\n>>Of course, the \"unknown\" variable here is how often we actually find the\n>>value in the array. If 100% of the queries has a match, then the bloom\n>>filter is a waste of time. If there are no matches, it can make a\n>>significant difference.\n>>\n>\n>I did experiment with this is a bit, both to get a bit more familiar\n>with this code and to see if the bloom filter might help. The short\n>answer is the bloom filter does not seem to help at all, so I wouldn't\n>bother about it too much.\n>\n>Attacched is an updated patch series and, script I used to collect some\n>performance measurements, and a spreadsheet with results. The patch\n>series is broken into four parts:\n>\n> 0001 - the original patch with binary search\n> 0002 - adds GUCs to enable bin search / tweak threshold\n> 0003 - allows to use bloom filter + binary search\n> 0004 - try using murmurhash\n>\n>The test script runs a wide range of queries with different number\n>of lookups, keys in the array, match probability (i.e. fraction of\n>lookups that find a match) ranging from 1% to 100%. 
And of course, it\n>runs this with the binsearch/bloom either enabled or disabled (the\n>threshold was lowered to 1, so it's the on/off GUCs that determine\n>whether the binsearch/bloom is used).\n>\n>The results are summarized in the spreadsheet, demonstrating how useless\n>the bloom filter is. There's not a single case where it would beat the\n>binary search. I believe this is because the access to the bloom filter is\n>random (determined by the hash function) and we don't save much compared\n>to the log(K) lookups in the sorted array.\n>\n>That makes sense, I think the bloom filters are meant to be used in\n>cases when the main data don't fit into memory - which is not the case\n>here. But I wonder how would this change for cases with more expensive\n>comparisons - this was using just integers, so maybe strings would\n>result in different behavior.\n\nOK, I tried the same test with text columns (with md5 strings), and the\nresults are about as I predicted - the bloom filter actually makes a\ndifference in this case. Depending on the number of lookups and\nselectivity (i.e. how many lookups have a match in the array) it can\nmean additional speedup up to ~5x compared to binary search alone.\n\nFor the case with 100% selectivity (i.e. all rows have a match) this\ncan't really save any time - it's usually still much faster than master,\nbut it's a bit slower than binary search.\n\nSo I think this might be worth investigating further, once the simple\nbinary search gets committed. 
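For the record, the "bloom filter in front of the sorted array" idea can be sketched roughly like this — illustrative Python only; the sizing and md5-based hashing here are my assumptions, not what the PoC patch actually does:

```python
import hashlib
from bisect import bisect_left

class BloomFilteredArray:
    """Sorted constant array with a small Bloom filter in front: most
    misses are rejected by a few cheap bit tests, so the O(log K)
    binary search (and its comparisons) runs only on possible hits."""

    def __init__(self, values, bits_per_key=10, num_hashes=3):
        self.keys = sorted(values)
        self.m = max(1, bits_per_key * len(self.keys))
        self.k = num_hashes
        self.bits = 0
        for v in self.keys:
            for h in self._hashes(v):
                self.bits |= 1 << h

    def _hashes(self, value):
        # k independent hash positions derived from a seeded md5 digest.
        for seed in range(self.k):
            d = hashlib.md5(f"{seed}:{value}".encode()).digest()
            yield int.from_bytes(d[:8], "little") % self.m

    def __contains__(self, value):
        if not all((self.bits >> h) & 1 for h in self._hashes(value)):
            return False  # definite miss: no comparisons at all
        i = bisect_left(self.keys, value)  # possible hit: binary search
        return i < len(self.keys) and self.keys[i] == value
```

Because a filter hit still falls through to the binary search, the answers stay exact; the filter only pays off when the per-element comparison is expensive relative to the hashing (e.g. text rather than int), which matches the measurements above.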
We'll probably need to factor in the cost\nof the comparison (higher cost -> BF more useful), selectivity of the\nfilter (fewer matches -> BF more useful) and number of lookups.\n\nThis reminds me our attempts to add bloom filters to hash joins, which I\nthink ran into mostly the same challenge of deciding when the bloom\nfilter can be useful and is worth the extra work.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 25 Apr 2020 14:40:24 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Fri, Apr 24, 2020 at 09:22:34PM -0400, James Coleman wrote:\n>On Fri, Apr 24, 2020 at 5:55 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Fri, Apr 24, 2020 at 09:38:54AM -0400, James Coleman wrote:\n>> >On Thu, Apr 23, 2020 at 10:55 AM Tomas Vondra\n>> ><tomas.vondra@2ndquadrant.com> wrote:\n>> >>\n>> >> On Thu, Apr 23, 2020 at 09:02:26AM -0400, James Coleman wrote:\n>> >> >On Thu, Apr 23, 2020 at 8:47 AM Tomas Vondra\n>> >> ><tomas.vondra@2ndquadrant.com> wrote:\n>> >> >>\n>> >> >> On Mon, Apr 20, 2020 at 09:27:34PM -0400, James Coleman wrote:\n>> >> >> >Over in \"execExprInterp() questions / How to improve scalar array op\n>> >> >> >expr eval?\" [1] I'd mused about how we might be able to optimized\n>> >> >> >scalar array ops with OR'd semantics.\n>> >> >> >\n>> >> >> >This patch implements a binary search for such expressions when the\n>> >> >> >array argument is a constant so that we can avoid needing to teach\n>> >> >> >expression execution to cache stable values or know when a param has\n>> >> >> >changed.\n>> >> >> >\n>> >> >> >The speed-up for the target case can pretty impressive: in my\n>> >> >> >admittedly contrived and relatively unscientific test with a query in\n>> >> >> >the form:\n>> >> >> >\n>> >> >> 
>select count(*) from generate_series(1,100000) n(i) where i in (<1000\n>> >> >> >random integers in the series>)\n>> >> >> >\n>> >> >> >shows ~30ms for the patch versus ~640ms on master.\n>> >> >> >\n>> >> >>\n>> >> >> Nice improvement, although 1000 items is probably a bit unusual. The\n>> >> >> threshold used in the patch (9 elements) seems a bit too low - what\n>> >> >> results have you seen with smaller arrays?\n>> >> >\n>> >> >At least in our systems we regularly work with 1000 batches of items,\n>> >> >which means you get IN clauses of identifiers of that size. Admittedly\n>> >> >the most common case sees those IN clauses as simple index scans\n>> >> >(e.g., WHERE <primary key> IN (...)), but it's also common to have a\n>> >> >broader query that merely filters additionally on something like \"...\n>> >> >AND <some foreign key> IN (...)\" where it makes sense for the rest of\n>> >> >the quals to take precedence in generating a reasonable plan. In that\n>> >> >case, the IN becomes a regular filter, hence the idea behind the\n>> >> >patch.\n>> >> >\n>> >> >Side note: I'd love for us to be able to treat \"IN (VALUES)\" the same\n>> >> >way...but as noted in the other thread that's an extremely large\n>> >> >amount of work, I think. But similarly you could use a hash here\n>> >> >instead of a binary search...but this seems quite good.\n>> >> >\n>> >> >As to the choice of 9 elements: I just picked that as a starting\n>> >> >point; Andres had previously commented off hand that at 8 elements\n>> >> >serial scanning was faster, so I figured this was a reasonable\n>> >> >starting point for discussion.\n>> >> >\n>> >> >Perhaps it would make sense to determine that minimum not as a pure\n>> >> >constant but scaled based on how many rows the planner expects us to\n>> >> >see? Of course that'd be a more invasive patch...so may or may not be\n>> >> >as feasible as a reasonable default.\n>> >> >\n>> >>\n>> >> Not sure. 
That seems a bit overcomplicated and I don't think it depends\n>> >> on the number of rows the planner expects to see very much. I think we\n>> >> usually assume the linear search is cheaper for small arrays and then at\n>> >> some point the binary search starts winning The question is where this\n>> >> \"break even\" point is.\n>> >\n>> >Well since it has to do preprocessing work (expanding the array and\n>> >then sorting it), then the number of rows processed matters, right?\n>> >For example, doing a linear search on 1000 items only once is going to\n>> >be cheaper than preprocessing the array and then doing a binary\n>> >search, but only a very large row count the binary search +\n>> >preprocessing might very well win out for only a 10 element array.\n>> >\n>>\n>> Hmmm, good point. Essentially the initialization (sorting of the array)\n>> has some cost, and the question is how much extra per-tuple cost this\n>> adds. It's probably not worth it for a single lookup, but for many\n>> lookups it's probably OK. Let's see if I can do the math right:\n>>\n>> N - number of lookups\n>> K - number of array elements\n>>\n>> Cost to sort the array is\n>>\n>> O(K * log(K)) = C1 * K * log(K)\n>>\n>> and the cost of a lookup is C2 * log(K), so with the extra cost amortized\n>> for N lookups, the total \"per lookup\" cost is\n>>\n>> C1 * K * log(K) / N + C2 * log(K) = log(K) * (C1 * K / N + C2)\n>>\n>> We need to compare this to the O(K) cost of simple linear search, and\n>> the question is at which point the linear search gets more expensive:\n>>\n>> C3 * K = log(K) * (C1 * K / N + C2)\n>>\n>> I think we can assume that C3 is somewhere in between 0.5 and 1, i.e. if\n>> there's a matching item we find it half-way through on average, and if\n>> there is not we have to walk the whole array. So let's say it's 1.\n>>\n>> C1 and C2 are probably fairly low, I think - C1 is typically ~1.4 for\n>> random pivot choice IIRC, and C2 is probably similar. 
With both values\n>> being ~1.5 we get this:\n>>\n>> K = log(K) * (1.5 * K/N + 1.5)\n>>\n>> for a fixed K, we get this formula for N:\n>>\n>> N = log(K) * 1.5 * K / (K - 1.5 * log(K))\n>>\n>> and for a bunch of K values the results look like this:\n>>\n>> K | N\n>> -------|-------\n>> 1 | 0\n>> 10 | 5.27\n>> 100 | 7.42\n>> 1000 | 10.47\n>> 10000 | 13.83\n>> 100000 | 17.27\n>>\n>> i.e. the binary search with 10k values starts winning over linear search\n>> with just ~13 lookups.\n>>\n>> (Assuming I haven't made some silly mistake in the math ...)\n>>\n>> Obviously, this only accounts for cost of comparisons and neglects e.g.\n>> the indirect costs for less predictable memory access patterns mentioned\n>> by Andres in his response.\n>>\n>> But I think it still shows the number of lookups needed for the binary\n>> search to be a win is pretty low - at least for reasonable number of\n>> values in the array. Maybe it's 20 and not 10, but I don't think that\n>> changes much.\n>>\n>> The other question is if we can get N at all and how reliable the value\n>> is. We can probably get the number of rows, but that will ignore other\n>> conditions that may eliminate the row before the binary search.\n>>\n>> >I'm not trying to argue for more work for myself here: I think the\n>> >optimization is worth it on its own, and something like this could be\n>> >a further improvement on its own. But it is interesting to think\n>> >about.\n>> >\n>>\n>> I don't know. Clearly, if the user sends a query with 10k values and\n>> only does a single lookup, that won't win. And if we can reasonably and\n>> reliably protect against that, I wouldn't mind doing that, although it\n>> means a risk of not using the bin search in case of underestimates etc.\n>>\n>> I don't have any hard data about this, but I think we can assume the\n>> number of rows processed by the clause is (much) higher than the number\n>> of keys in it. 
If you have a clause with 10k values, then you probably\n>> expect it to be applied to many rows, far more than the \"beak even\"\n>> point of about 10-20 rows ...\n>>\n>> So I wouldn't worry about this too much.\n>\n>Yeah. I think it becomes a lot more interesting in the future if/when\n>we end up with a way to use this with params and not just constant\n>arrays. Then the \"group\" size would matter a whole lot more.\n>\n\nTrue. That probably changes the calculations quite a bit.\n\n>For now, the constant amount of overhead is quite small, so even if we\n>only execute it once we won't make the query that much worse (or, at\n>least, the total query time will still be very small). Also, because\n>it's only applied to constants, there's a natural limit to how much\n>overhead we're likely to introduce into a query.\n>\n\nFWIW the results from repeated test with both int and text columns that\nI shared in [1] also have data for smaller numbers of rows. I haven't\ntried very much to minimize noise (the original goal was to test speedup\nfor large numbers of rows and large arrays, where this is not an issue).\nBut I think it still shows that the threshold of ~10 elements is in the\nright ballpark. We might use a higher value to be a bit more defensive,\nbut it's never going to be perfect for types with both cheap and more\nexpensive comparisons.\n\nOne more note - shouldn't this also tweak cost_qual_eval_walker which\ncomputes cost for ScalarArrayOpExpr? 
At the moment it does this:\n\n /*\n * Estimate that the operator will be applied to about half of the\n * array elements before the answer is determined.\n */\n\nbut that's appropriate for linear search.\n\n\n[1] https://www.postgresql.org/message-id/20200425124024.hsv7z6bia752uymz%40development\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 25 Apr 2020 14:59:07 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> This reminds me our attempts to add bloom filters to hash joins, which I\n> think ran into mostly the same challenge of deciding when the bloom\n> filter can be useful and is worth the extra work.\n\nSpeaking of that, it would be interesting to see how a test where you\nwrite the query as IN(VALUES(...)) instead of IN() compares. 
It would\nbe interesting to know if the planner is able to make a more suitable\nchoice and also to see how all the work over the years to improve Hash\nJoins compares to the bsearch with and without the bloom filter.\n\nDavid\n\n\n", "msg_date": "Sun, 26 Apr 2020 09:41:39 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > This reminds me our attempts to add bloom filters to hash joins, which I\n> > think ran into mostly the same challenge of deciding when the bloom\n> > filter can be useful and is worth the extra work.\n>\n> Speaking of that, it would be interesting to see how a test where you\n> write the query as IN(VALUES(...)) instead of IN() compares. It would\n> be interesting to know if the planner is able to make a more suitable\n> choice and also to see how all the work over the years to improve Hash\n> Joins compares to the bsearch with and without the bloom filter.\n\nIt would be interesting.\n\nIt also makes one wonder about optimizing these into hash\njoins...which I'd thought about over at [1]. 
I think it'd be a very\nsignificant effort though.\n\nJames\n\n[1]: https://www.postgresql.org/message-id/CAAaqYe_zVVOURfdPbAhssijw7yV0uKi350gQ%3D_QGDz7R%3DHpGGQ%40mail.gmail.com\n\n\n", "msg_date": "Sat, 25 Apr 2020 18:47:41 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n>On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> > This reminds me our attempts to add bloom filters to hash joins, which I\n>> > think ran into mostly the same challenge of deciding when the bloom\n>> > filter can be useful and is worth the extra work.\n>>\n>> Speaking of that, it would be interesting to see how a test where you\n>> write the query as IN(VALUES(...)) instead of IN() compares. It would\n>> be interesting to know if the planner is able to make a more suitable\n>> choice and also to see how all the work over the years to improve Hash\n>> Joins compares to the bsearch with and without the bloom filter.\n>\n>It would be interesting.\n>\n>It also makes one wonder about optimizing these into to hash\n>joins...which I'd thought about over at [1]. I think it'd be a very\n>significant effort though.\n>\n\nI modified the script to also do the join version of the query. I can\nonly run it on my laptop at the moment, so the results may be a bit\ndifferent from those I shared before, but it's interesting I think.\n\nIn most cases it's comparable to the binsearch/bloom approach, and in\nsome cases it actually beats them quite significantly. It seems to\ndepend on how expensive the comparison is - for \"int\" the comparison is\nvery cheap and there's almost no difference. 
For \"text\" the comparison\nis much more expensive, and there are significant speedups.\n\nFor example the test with 100k lookups in array of 10k elements and 10%\nmatch probability, the timings are these\n\n master: 62362 ms\n binsearch: 201 ms\n bloom: 65 ms\n hashjoin: 36 ms\n\nI do think the explanation is fairly simple - the bloom filter\neliminates about 90% of the expensive comparisons, so it's 20ms plus\nsome overhead to build and check the bits. The hash join probably\neliminates a lot of the remaining comparisons, because the hash table\nis sized to have one tuple per bucket.\n\nNote: I also don't claim the PoC has the most efficient bloom filter\nimplementation possible. I'm sure it could be made faster.\n\nAnyway, I'm not sure transforming this to a hash join is worth the\neffort - I agree that seems quite complex. But perhaps this suggest we\nshould not be doing binary search and instead just build a simple hash\ntable - that seems much simpler, and it'll probably give us about the\nsame benefits.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 26 Apr 2020 02:31:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, Apr 25, 2020 at 8:31 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n> >On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >>\n> >> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >> > This reminds me our attempts to add bloom filters to hash joins, which I\n> >> > think ran into mostly the same challenge of deciding when the bloom\n> >> > filter can be useful and is worth the extra work.\n> >>\n> >> Speaking of that, it would be interesting to see how a 
test where you\n> >> write the query as IN(VALUES(...)) instead of IN() compares. It would\n> >> be interesting to know if the planner is able to make a more suitable\n> >> choice and also to see how all the work over the years to improve Hash\n> >> Joins compares to the bsearch with and without the bloom filter.\n> >\n> >It would be interesting.\n> >\n> >It also makes one wonder about optimizing these into to hash\n> >joins...which I'd thought about over at [1]. I think it'd be a very\n> >significant effort though.\n> >\n>\n> I modified the script to also do the join version of the query. I can\n> only run it on my laptop at the moment, so the results may be a bit\n> different from those I shared before, but it's interesting I think.\n>\n> In most cases it's comparable to the binsearch/bloom approach, and in\n> some cases it actually beats them quite significantly. It seems to\n> depend on how expensive the comparison is - for \"int\" the comparison is\n> very cheap and there's almost no difference. For \"text\" the comparison\n> is much more expensive, and there are significant speedups.\n>\n> For example the test with 100k lookups in array of 10k elements and 10%\n> match probability, the timings are these\n>\n> master: 62362 ms\n> binsearch: 201 ms\n> bloom: 65 ms\n> hashjoin: 36 ms\n>\n> I do think the explanation is fairly simple - the bloom filter\n> eliminates about 90% of the expensive comparisons, so it's 20ms plus\n> some overhead to build and check the bits. The hash join probably\n> eliminates a lot of the remaining comparisons, because the hash table\n> is sized to have one tuple per bucket.\n>\n> Note: I also don't claim the PoC has the most efficient bloom filter\n> implementation possible. I'm sure it could be made faster.\n>\n> Anyway, I'm not sure transforming this to a hash join is worth the\n> effort - I agree that seems quite complex. 
But perhaps this suggest we\n> should not be doing binary search and instead just build a simple hash\n> table - that seems much simpler, and it'll probably give us about the\n> same benefits.\n\nThat's actually what I originally thought about doing, but I chose\nbinary search since it seemed a lot easier to get off the ground.\n\nIf we instead build a hash is there anything else we need to be\nconcerned about? For example, work mem? I suppose for the binary\nsearch we already have to expand the array, so perhaps it's not all\nthat meaningful relative to that...\n\nI was looking earlier at what our standard hash implementation was,\nand it seemed less obvious what was needed to set that up (so binary\nsearch seemed a faster proof of concept). If you happen to have any\npointers to similar usages I should look at, please let me know.\n\nJames\n\n\n", "msg_date": "Sun, 26 Apr 2020 14:46:19 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sun, Apr 26, 2020 at 02:46:19PM -0400, James Coleman wrote:\n>On Sat, Apr 25, 2020 at 8:31 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n>> >On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>> >>\n>> >> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> >> > This reminds me our attempts to add bloom filters to hash joins, which I\n>> >> > think ran into mostly the same challenge of deciding when the bloom\n>> >> > filter can be useful and is worth the extra work.\n>> >>\n>> >> Speaking of that, it would be interesting to see how a test where you\n>> >> write the query as IN(VALUES(...)) instead of IN() compares. 
It would\n>> >> be interesting to know if the planner is able to make a more suitable\n>> >> choice and also to see how all the work over the years to improve Hash\n>> >> Joins compares to the bsearch with and without the bloom filter.\n>> >\n>> >It would be interesting.\n>> >\n>> >It also makes one wonder about optimizing these into to hash\n>> >joins...which I'd thought about over at [1]. I think it'd be a very\n>> >significant effort though.\n>> >\n>>\n>> I modified the script to also do the join version of the query. I can\n>> only run it on my laptop at the moment, so the results may be a bit\n>> different from those I shared before, but it's interesting I think.\n>>\n>> In most cases it's comparable to the binsearch/bloom approach, and in\n>> some cases it actually beats them quite significantly. It seems to\n>> depend on how expensive the comparison is - for \"int\" the comparison is\n>> very cheap and there's almost no difference. For \"text\" the comparison\n>> is much more expensive, and there are significant speedups.\n>>\n>> For example the test with 100k lookups in array of 10k elements and 10%\n>> match probability, the timings are these\n>>\n>> master: 62362 ms\n>> binsearch: 201 ms\n>> bloom: 65 ms\n>> hashjoin: 36 ms\n>>\n>> I do think the explanation is fairly simple - the bloom filter\n>> eliminates about 90% of the expensive comparisons, so it's 20ms plus\n>> some overhead to build and check the bits. The hash join probably\n>> eliminates a lot of the remaining comparisons, because the hash table\n>> is sized to have one tuple per bucket.\n>>\n>> Note: I also don't claim the PoC has the most efficient bloom filter\n>> implementation possible. I'm sure it could be made faster.\n>>\n>> Anyway, I'm not sure transforming this to a hash join is worth the\n>> effort - I agree that seems quite complex. 
But perhaps this suggest we\n>> should not be doing binary search and instead just build a simple hash\n>> table - that seems much simpler, and it'll probably give us about the\n>> same benefits.\n>\n>That's actually what I originally thought about doing, but I chose\n>binary search since it seemed a lot easier to get off the ground.\n>\n\nOK, that makes perfect sense.\n\n>If we instead build a hash is there anything else we need to be\n>concerned about? For example, work mem? I suppose for the binary\n>search we already have to expand the array, so perhaps it's not all\n>that meaningful relative to that...\n>\n\nI don't think we need to be particularly concerned about work_mem. We\ndon't care about it now, and it's not clear to me what we could do about\nit - we already have the array in memory anyway, so it's a bit futile.\nFurthermore, if we need to care about it, it probably applies to the\nbinary search too.\n\n>I was looking earlier at what our standard hash implementation was,\n>and it seemed less obvious what was needed to set that up (so binary\n>search seemed a faster proof of concept). If you happen to have any\n>pointers to similar usages I should look at, please let me know.\n>\n\nI think the hash join implementation is far too complicated. It has to\ncare about work_mem, so it implements batching, etc. That's a lot of\ncomplexity we don't need here. IMO we could use either the usual\ndynahash, or maybe even the simpler simplehash.\n\nFWIW it'd be good to verify the numbers I shared, i.e. checking that the\nbenchmarks make sense and running it independently. 
I'm not aware of\nany issues but it was done late at night and only ran on my laptop.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 26 Apr 2020 22:49:27 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sun, Apr 26, 2020 at 4:49 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Apr 26, 2020 at 02:46:19PM -0400, James Coleman wrote:\n> >On Sat, Apr 25, 2020 at 8:31 PM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n> >> >On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >> >>\n> >> >> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >> >> > This reminds me our attempts to add bloom filters to hash joins, which I\n> >> >> > think ran into mostly the same challenge of deciding when the bloom\n> >> >> > filter can be useful and is worth the extra work.\n> >> >>\n> >> >> Speaking of that, it would be interesting to see how a test where you\n> >> >> write the query as IN(VALUES(...)) instead of IN() compares. It would\n> >> >> be interesting to know if the planner is able to make a more suitable\n> >> >> choice and also to see how all the work over the years to improve Hash\n> >> >> Joins compares to the bsearch with and without the bloom filter.\n> >> >\n> >> >It would be interesting.\n> >> >\n> >> >It also makes one wonder about optimizing these into to hash\n> >> >joins...which I'd thought about over at [1]. I think it'd be a very\n> >> >significant effort though.\n> >> >\n> >>\n> >> I modified the script to also do the join version of the query. 
I can\n> >> only run it on my laptop at the moment, so the results may be a bit\n> >> different from those I shared before, but it's interesting I think.\n> >>\n> >> In most cases it's comparable to the binsearch/bloom approach, and in\n> >> some cases it actually beats them quite significantly. It seems to\n> >> depend on how expensive the comparison is - for \"int\" the comparison is\n> >> very cheap and there's almost no difference. For \"text\" the comparison\n> >> is much more expensive, and there are significant speedups.\n> >>\n> >> For example the test with 100k lookups in array of 10k elements and 10%\n> >> match probability, the timings are these\n> >>\n> >> master: 62362 ms\n> >> binsearch: 201 ms\n> >> bloom: 65 ms\n> >> hashjoin: 36 ms\n> >>\n> >> I do think the explanation is fairly simple - the bloom filter\n> >> eliminates about 90% of the expensive comparisons, so it's 20ms plus\n> >> some overhead to build and check the bits. The hash join probably\n> >> eliminates a lot of the remaining comparisons, because the hash table\n> >> is sized to have one tuple per bucket.\n> >>\n> >> Note: I also don't claim the PoC has the most efficient bloom filter\n> >> implementation possible. I'm sure it could be made faster.\n> >>\n> >> Anyway, I'm not sure transforming this to a hash join is worth the\n> >> effort - I agree that seems quite complex. But perhaps this suggest we\n> >> should not be doing binary search and instead just build a simple hash\n> >> table - that seems much simpler, and it'll probably give us about the\n> >> same benefits.\n> >\n> >That's actually what I originally thought about doing, but I chose\n> >binary search since it seemed a lot easier to get off the ground.\n> >\n>\n> OK, that makes perfect sense.\n>\n> >If we instead build a hash is there anything else we need to be\n> >concerned about? For example, work mem? 
I suppose for the binary\n> >search we already have to expand the array, so perhaps it's not all\n> >that meaningful relative to that...\n> >\n>\n> I don't think we need to be particularly concerned about work_mem. We\n> don't care about it now, and it's not clear to me what we could do about\n> it - we already have the array in memory anyway, so it's a bit futile.\n> Furthermore, if we need to care about it, it probably applies to the\n> binary search too.\n>\n> >I was looking earlier at what our standard hash implementation was,\n> >and it seemed less obvious what was needed to set that up (so binary\n> >search seemed a faster proof of concept). If you happen to have any\n> >pointers to similar usages I should look at, please let me know.\n> >\n>\n> I think the hash join implementation is far too complicated. It has to\n> care about work_mem, so it implements batching, etc. That's a lot of\n> complexity we don't need here. IMO we could use either the usual\n> dynahash, or maybe even the simpler simplehash.\n>\n> FWIW it'd be good to verify the numbers I shared, i.e. checking that the\n> benchmarks makes sense and running it independently. I'm not aware of\n> any issues but it was done late at night and only ran on my laptop.\n\nSome quick calculations (don't have the scripting in a form I can\nattach yet; using this as an opportunity to hack on a genericized\nperformance testing framework of sorts) suggest your results are\ncorrect. I was also testing on my laptop, but I showed 1.) roughly\nequivalent results for IN (VALUES ...) 
and IN (<list>) for integers,\nbut when I switch to (short; average 3 characters long) text values I\nshow the hash join on VALUES is twice as fast as the binary search.\n\nGiven that, I'm planning to implement this as a hash lookup and share\na revised patch.\n\nJames\n\n\n", "msg_date": "Sun, 26 Apr 2020 19:41:26 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sun, Apr 26, 2020 at 7:41 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Sun, Apr 26, 2020 at 4:49 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Sun, Apr 26, 2020 at 02:46:19PM -0400, James Coleman wrote:\n> > >On Sat, Apr 25, 2020 at 8:31 PM Tomas Vondra\n> > ><tomas.vondra@2ndquadrant.com> wrote:\n> > >>\n> > >> On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n> > >> >On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com>\nwrote:\n> > >> >>\n> > >> >> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <\ntomas.vondra@2ndquadrant.com> wrote:\n> > >> >> > This reminds me our attempts to add bloom filters to hash\njoins, which I\n> > >> >> > think ran into mostly the same challenge of deciding when the\nbloom\n> > >> >> > filter can be useful and is worth the extra work.\n> > >> >>\n> > >> >> Speaking of that, it would be interesting to see how a test where\nyou\n> > >> >> write the query as IN(VALUES(...)) instead of IN() compares. It\nwould\n> > >> >> be interesting to know if the planner is able to make a more\nsuitable\n> > >> >> choice and also to see how all the work over the years to improve\nHash\n> > >> >> Joins compares to the bsearch with and without the bloom filter.\n> > >> >\n> > >> >It would be interesting.\n> > >> >\n> > >> >It also makes one wonder about optimizing these into to hash\n> > >> >joins...which I'd thought about over at [1]. 
I think it'd be a very\n> > >> >significant effort though.\n> > >> >\n> > >>\n> > >> I modified the script to also do the join version of the query. I can\n> > >> only run it on my laptop at the moment, so the results may be a bit\n> > >> different from those I shared before, but it's interesting I think.\n> > >>\n> > >> In most cases it's comparable to the binsearch/bloom approach, and in\n> > >> some cases it actually beats them quite significantly. It seems to\n> > >> depend on how expensive the comparison is - for \"int\" the comparison\nis\n> > >> very cheap and there's almost no difference. For \"text\" the\ncomparison\n> > >> is much more expensive, and there are significant speedups.\n> > >>\n> > >> For example the test with 100k lookups in array of 10k elements and\n10%\n> > >> match probability, the timings are these\n> > >>\n> > >> master: 62362 ms\n> > >> binsearch: 201 ms\n> > >> bloom: 65 ms\n> > >> hashjoin: 36 ms\n> > >>\n> > >> I do think the explanation is fairly simple - the bloom filter\n> > >> eliminates about 90% of the expensive comparisons, so it's 20ms plus\n> > >> some overhead to build and check the bits. The hash join probably\n> > >> eliminates a lot of the remaining comparisons, because the hash table\n> > >> is sized to have one tuple per bucket.\n> > >>\n> > >> Note: I also don't claim the PoC has the most efficient bloom filter\n> > >> implementation possible. I'm sure it could be made faster.\n> > >>\n> > >> Anyway, I'm not sure transforming this to a hash join is worth the\n> > >> effort - I agree that seems quite complex. 
But perhaps this suggest\nwe\n> > >> should not be doing binary search and instead just build a simple\nhash\n> > >> table - that seems much simpler, and it'll probably give us about the\n> > >> same benefits.\n> > >\n> > >That's actually what I originally thought about doing, but I chose\n> > >binary search since it seemed a lot easier to get off the ground.\n> > >\n> >\n> > OK, that makes perfect sense.\n> >\n> > >If we instead build a hash is there anything else we need to be\n> > >concerned about? For example, work mem? I suppose for the binary\n> > >search we already have to expand the array, so perhaps it's not all\n> > >that meaningful relative to that...\n> > >\n> >\n> > I don't think we need to be particularly concerned about work_mem. We\n> > don't care about it now, and it's not clear to me what we could do about\n> > it - we already have the array in memory anyway, so it's a bit futile.\n> > Furthermore, if we need to care about it, it probably applies to the\n> > binary search too.\n> >\n> > >I was looking earlier at what our standard hash implementation was,\n> > >and it seemed less obvious what was needed to set that up (so binary\n> > >search seemed a faster proof of concept). If you happen to have any\n> > >pointers to similar usages I should look at, please let me know.\n> > >\n> >\n> > I think the hash join implementation is far too complicated. It has to\n> > care about work_mem, so it implements batching, etc. That's a lot of\n> > complexity we don't need here. IMO we could use either the usual\n> > dynahash, or maybe even the simpler simplehash.\n> >\n> > FWIW it'd be good to verify the numbers I shared, i.e. checking that the\n> > benchmarks makes sense and running it independently. 
I'm not aware of\n> > any issues but it was done late at night and only ran on my laptop.\n>\n> Some quick calculations (don't have the scripting in a form I can\n> attach yet; using this as an opportunity to hack on a genericized\n> performance testing framework of sorts) suggest your results are\n> correct. I was also testing on my laptop, but I showed 1.) roughly\n> equivalent results for IN (VALUES ...) and IN (<list>) for integers,\n> but when I switch to (short; average 3 characters long) text values I\n> show the hash join on VALUES is twice as fast as the binary search.\n>\n> Given that, I'm planning to implement this as a hash lookup and share\n> a revised patch.\n\nWhile working on this I noticed that dynahash.c line 499 has this assertion:\n\nAssert(info->entrysize >= info->keysize);\n\nDo you by any chance know why the entry would need to be larger than the\nkey? In this case I'm really treating the hash like a set (if there's a\nhash set implementation that doesn't store a value, then I'd be happy to\nuse that instead) so I've configured the entry as sizeof(bool) which is\nobviously smaller than the key.\n\nIf it helps, that line was added by Tom in fba2a104c6d.\n\nThanks,\nJames", "msg_date": "Sun, 26 Apr 2020 23:12:40 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Mon, 27 Apr 2020 at 15:12, James Coleman <jtc331@gmail.com> wrote:\n> While working on this I noticed that dynahash.c line 499 has this assertion:\n>\n> Assert(info->entrysize >= info->keysize);\n>\n> Do you by any chance know why the entry would need to be larger than the key?\n\nLarger or equal. 
They'd be equal if the key was the data, since\nyou do need to store at least the key. Looking at the code for\nexamples where dynahash is used in that situation, I see\n_hash_finish_split().\n\nDavid\n\n\n", "msg_date": "Mon, 27 Apr 2020 15:44:35 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sun, Apr 26, 2020 at 11:44 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Mon, 27 Apr 2020 at 15:12, James Coleman <jtc331@gmail.com> wrote:\n> > While working on this I noticed that dynahash.c line 499 has this assertion:\n> >\n> > Assert(info->entrysize >= info->keysize);\n> >\n> > Do you by any chance know why the entry would need to be larger than the key?\n>\n> Larger or equal. They'd be equal if the key was the data, since\n> you do need to store at least the key. Looking at the code for\n> examples where dynahash is used in that situation, I see\n> _hash_finish_split().\n\nAh, I was thinking of it as key and value being separate sizes added\ntogether rather than one including the other.\n\nThanks,\nJames\n\n\n", "msg_date": "Mon, 27 Apr 2020 08:40:15 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sun, Apr 26, 2020 at 7:41 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Sun, Apr 26, 2020 at 4:49 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Sun, Apr 26, 2020 at 02:46:19PM -0400, James Coleman wrote:\n> > >On Sat, Apr 25, 2020 at 8:31 PM Tomas Vondra\n> > ><tomas.vondra@2ndquadrant.com> wrote:\n> > >>\n> > >> On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n> > >> >On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > >> >>\n> > >> >> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> 
> >> >> > This reminds me our attempts to add bloom filters to hash joins, which I\n> > >> >> > think ran into mostly the same challenge of deciding when the bloom\n> > >> >> > filter can be useful and is worth the extra work.\n> > >> >>\n> > >> >> Speaking of that, it would be interesting to see how a test where you\n> > >> >> write the query as IN(VALUES(...)) instead of IN() compares. It would\n> > >> >> be interesting to know if the planner is able to make a more suitable\n> > >> >> choice and also to see how all the work over the years to improve Hash\n> > >> >> Joins compares to the bsearch with and without the bloom filter.\n> > >> >\n> > >> >It would be interesting.\n> > >> >\n> > >> >It also makes one wonder about optimizing these into to hash\n> > >> >joins...which I'd thought about over at [1]. I think it'd be a very\n> > >> >significant effort though.\n> > >> >\n> > >>\n> > >> I modified the script to also do the join version of the query. I can\n> > >> only run it on my laptop at the moment, so the results may be a bit\n> > >> different from those I shared before, but it's interesting I think.\n> > >>\n> > >> In most cases it's comparable to the binsearch/bloom approach, and in\n> > >> some cases it actually beats them quite significantly. It seems to\n> > >> depend on how expensive the comparison is - for \"int\" the comparison is\n> > >> very cheap and there's almost no difference. For \"text\" the comparison\n> > >> is much more expensive, and there are significant speedups.\n> > >>\n> > >> For example the test with 100k lookups in array of 10k elements and 10%\n> > >> match probability, the timings are these\n> > >>\n> > >> master: 62362 ms\n> > >> binsearch: 201 ms\n> > >> bloom: 65 ms\n> > >> hashjoin: 36 ms\n> > >>\n> > >> I do think the explanation is fairly simple - the bloom filter\n> > >> eliminates about 90% of the expensive comparisons, so it's 20ms plus\n> > >> some overhead to build and check the bits. 
The hash join probably\n> > >> eliminates a lot of the remaining comparisons, because the hash table\n> > >> is sized to have one tuple per bucket.\n> > >>\n> > >> Note: I also don't claim the PoC has the most efficient bloom filter\n> > >> implementation possible. I'm sure it could be made faster.\n> > >>\n> > >> Anyway, I'm not sure transforming this to a hash join is worth the\n> > >> effort - I agree that seems quite complex. But perhaps this suggest we\n> > >> should not be doing binary search and instead just build a simple hash\n> > >> table - that seems much simpler, and it'll probably give us about the\n> > >> same benefits.\n> > >\n> > >That's actually what I originally thought about doing, but I chose\n> > >binary search since it seemed a lot easier to get off the ground.\n> > >\n> >\n> > OK, that makes perfect sense.\n> >\n> > >If we instead build a hash is there anything else we need to be\n> > >concerned about? For example, work mem? I suppose for the binary\n> > >search we already have to expand the array, so perhaps it's not all\n> > >that meaningful relative to that...\n> > >\n> >\n> > I don't think we need to be particularly concerned about work_mem. We\n> > don't care about it now, and it's not clear to me what we could do about\n> > it - we already have the array in memory anyway, so it's a bit futile.\n> > Furthermore, if we need to care about it, it probably applies to the\n> > binary search too.\n> >\n> > >I was looking earlier at what our standard hash implementation was,\n> > >and it seemed less obvious what was needed to set that up (so binary\n> > >search seemed a faster proof of concept). If you happen to have any\n> > >pointers to similar usages I should look at, please let me know.\n> > >\n> >\n> > I think the hash join implementation is far too complicated. It has to\n> > care about work_mem, so it implements batching, etc. That's a lot of\n> > complexity we don't need here. 
IMO we could use either the usual\n> > dynahash, or maybe even the simpler simplehash.\n> >\n> > FWIW it'd be good to verify the numbers I shared, i.e. checking that the\n> > benchmarks makes sense and running it independently. I'm not aware of\n> > any issues but it was done late at night and only ran on my laptop.\n>\n> Some quick calculations (don't have the scripting in a form I can\n> attach yet; using this as an opportunity to hack on a genericized\n> performance testing framework of sorts) suggest your results are\n> correct. I was also testing on my laptop, but I showed 1.) roughly\n> equivalent results for IN (VALUES ...) and IN (<list>) for integers,\n> but when I switch to (short; average 3 characters long) text values I\n> show the hash join on VALUES is twice as fast as the binary search.\n>\n> Given that, I'm planning to implement this as a hash lookup and share\n> a revised patch.\n\nI've attached a patch series as before, but with an additional patch\nthat switches to using dynahash instead of binary search.\n\nWhereas before the benchmarking ended up with a trimodal distribution\n(i.e., master with IN <list>, patch with IN <list>, and either with IN\nVALUES), the hash implementation brings us back to an effectively\nbimodal distribution -- though the hash scalar array op expression\nimplementation for text is about 5% faster than the hash join.\n\nCurrent outstanding thoughts (besides comment/naming cleanup):\n\n- The saop costing needs to be updated to match, as Tomas pointed out.\n\n- Should we be concerned about single execution cases? For example, is\nthe regression of speed on a simple SELECT x IN something we should\ntry to defeat by only kicking in the optimization if we execute in a\nloop at least twice? 
That might be of particular interest to pl/pgsql.\n\n- Should we have a test for an operator with a non-strict function?\nI'm not aware of any built-in ops that have that characteristic; would\nyou suggest just creating a fake one for the test?\n\nJames", "msg_date": "Mon, 27 Apr 2020 21:04:09 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Mon, Apr 27, 2020 at 09:04:09PM -0400, James Coleman wrote:\n>On Sun, Apr 26, 2020 at 7:41 PM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> On Sun, Apr 26, 2020 at 4:49 PM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com> wrote:\n>> >\n>> > On Sun, Apr 26, 2020 at 02:46:19PM -0400, James Coleman wrote:\n>> > >On Sat, Apr 25, 2020 at 8:31 PM Tomas Vondra\n>> > ><tomas.vondra@2ndquadrant.com> wrote:\n>> > >>\n>> > >> On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n>> > >> >On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>> > >> >>\n>> > >> >> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> > >> >> > This reminds me our attempts to add bloom filters to hash joins, which I\n>> > >> >> > think ran into mostly the same challenge of deciding when the bloom\n>> > >> >> > filter can be useful and is worth the extra work.\n>> > >> >>\n>> > >> >> Speaking of that, it would be interesting to see how a test where you\n>> > >> >> write the query as IN(VALUES(...)) instead of IN() compares. It would\n>> > >> >> be interesting to know if the planner is able to make a more suitable\n>> > >> >> choice and also to see how all the work over the years to improve Hash\n>> > >> >> Joins compares to the bsearch with and without the bloom filter.\n>> > >> >\n>> > >> >It would be interesting.\n>> > >> >\n>> > >> >It also makes one wonder about optimizing these into to hash\n>> > >> >joins...which I'd thought about over at [1]. 
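[Editor's note] The "bloom" timings quoted in this message come from pre-filtering lookups with cheap bit probes so that the expensive comparison (the binary search) only runs for probable matches. A minimal model of that idea, in Python for illustration only — the PoC itself is C inside the executor, and all names here are invented:

```python
import hashlib

class Bloom:
    """Tiny Bloom filter: k bit positions per item, no false negatives.

    A miss in the filter lets the caller skip the expensive element
    comparison entirely, which is where the quoted speedup comes from.
    """
    def __init__(self, nbits=1 << 16, nhashes=3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = bytearray(nbits // 8)

    def _positions(self, item):
        # Derive k positions from one strong hash of the item.
        h = hashlib.sha256(repr(item).encode()).digest()
        for i in range(self.nhashes):
            yield int.from_bytes(h[4 * i:4 * i + 4], "big") % self.nbits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

elems = ["foo", "bar", "baz"]
bf = Bloom()
for e in elems:
    bf.add(e)

assert all(bf.might_contain(e) for e in elems)  # no false negatives
# Filter miss => skip the comparison; filter hit => fall back to it.
matched = bf.might_contain("qux") and "qux" in elems
assert matched is False
```

With a 10% match probability, roughly 90% of lookups take the cheap "filter miss" path, which matches the explanation given above for the 65 ms result.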
I think it'd be a very\n>> > >> >significant effort though.\n>> > >> >\n>> > >>\n>> > >> I modified the script to also do the join version of the query. I can\n>> > >> only run it on my laptop at the moment, so the results may be a bit\n>> > >> different from those I shared before, but it's interesting I think.\n>> > >>\n>> > >> In most cases it's comparable to the binsearch/bloom approach, and in\n>> > >> some cases it actually beats them quite significantly. It seems to\n>> > >> depend on how expensive the comparison is - for \"int\" the comparison is\n>> > >> very cheap and there's almost no difference. For \"text\" the comparison\n>> > >> is much more expensive, and there are significant speedups.\n>> > >>\n>> > >> For example the test with 100k lookups in array of 10k elements and 10%\n>> > >> match probability, the timings are these\n>> > >>\n>> > >> master: 62362 ms\n>> > >> binsearch: 201 ms\n>> > >> bloom: 65 ms\n>> > >> hashjoin: 36 ms\n>> > >>\n>> > >> I do think the explanation is fairly simple - the bloom filter\n>> > >> eliminates about 90% of the expensive comparisons, so it's 20ms plus\n>> > >> some overhead to build and check the bits. The hash join probably\n>> > >> eliminates a lot of the remaining comparisons, because the hash table\n>> > >> is sized to have one tuple per bucket.\n>> > >>\n>> > >> Note: I also don't claim the PoC has the most efficient bloom filter\n>> > >> implementation possible. I'm sure it could be made faster.\n>> > >>\n>> > >> Anyway, I'm not sure transforming this to a hash join is worth the\n>> > >> effort - I agree that seems quite complex. 
But perhaps this suggest we\n>> > >> should not be doing binary search and instead just build a simple hash\n>> > >> table - that seems much simpler, and it'll probably give us about the\n>> > >> same benefits.\n>> > >\n>> > >That's actually what I originally thought about doing, but I chose\n>> > >binary search since it seemed a lot easier to get off the ground.\n>> > >\n>> >\n>> > OK, that makes perfect sense.\n>> >\n>> > >If we instead build a hash is there anything else we need to be\n>> > >concerned about? For example, work mem? I suppose for the binary\n>> > >search we already have to expand the array, so perhaps it's not all\n>> > >that meaningful relative to that...\n>> > >\n>> >\n>> > I don't think we need to be particularly concerned about work_mem. We\n>> > don't care about it now, and it's not clear to me what we could do about\n>> > it - we already have the array in memory anyway, so it's a bit futile.\n>> > Furthermore, if we need to care about it, it probably applies to the\n>> > binary search too.\n>> >\n>> > >I was looking earlier at what our standard hash implementation was,\n>> > >and it seemed less obvious what was needed to set that up (so binary\n>> > >search seemed a faster proof of concept). If you happen to have any\n>> > >pointers to similar usages I should look at, please let me know.\n>> > >\n>> >\n>> > I think the hash join implementation is far too complicated. It has to\n>> > care about work_mem, so it implements batching, etc. That's a lot of\n>> > complexity we don't need here. IMO we could use either the usual\n>> > dynahash, or maybe even the simpler simplehash.\n>> >\n>> > FWIW it'd be good to verify the numbers I shared, i.e. checking that the\n>> > benchmarks makes sense and running it independently. 
I'm not aware of\n>> > any issues but it was done late at night and only ran on my laptop.\n>>\n>> Some quick calculations (don't have the scripting in a form I can\n>> attach yet; using this as an opportunity to hack on a genericized\n>> performance testing framework of sorts) suggest your results are\n>> correct. I was also testing on my laptop, but I showed 1.) roughly\n>> equivalent results for IN (VALUES ...) and IN (<list>) for integers,\n>> but when I switch to (short; average 3 characters long) text values I\n>> show the hash join on VALUES is twice as fast as the binary search.\n>>\n>> Given that, I'm planning to implement this as a hash lookup and share\n>> a revised patch.\n>\n>I've attached a patch series as before, but with an additional patch\n>that switches to using dynahash instead of binary search.\n>\n\nOK. I can't take closer look at the moment, I'll check later.\n\nAny particular reasons to pick dynahash over simplehash? ISTM we're\nusing simplehash elsewhere in the executor (grouping, tidbitmap, ...),\nwhile there are not many places using dynahash for simple short-lived\nhash tables. Of course, that alone is a weak reason to insist on using\nsimplehash here, but I suppose there were reasons for not using dynahash\nand we'll end up facing the same issues here.\n\n\n>Whereas before the benchmarking ended up with a trimodal distribution\n>(i.e., master with IN <list>, patch with IN <list>, and either with IN\n>VALUES), the hash implementation brings us back to an effectively\n>bimodal distribution -- though the hash scalar array op expression\n>implementation for text is about 5% faster than the hash join.\n>\n\nNice. I'm not surprised this is a bit faster than hash join, which has\nto worry about additional stuff.\n\n>Current outstanding thoughts (besides comment/naming cleanup):\n>\n>- The saop costing needs to be updated to match, as Tomas pointed out.\n>\n>- Should we be concerned about single execution cases? 
For example, is\n>the regression of speed on a simple SELECT x IN something we should\n>try to defeat by only kicking in the optimization if we execute in a\n>loop at least twice? That might be of particular interest to pl/pgsql.\n>\n\nI don't follow. How is this related to the number of executions and/or\nplpgsql? I suspect you might be talking about prepared statements, but\nsurely the hash table is built for each execution anyway, even if the\nplan is reused, right?\n\nI think the issue we've been talking about is considering the number of\nlookups we expect to do in the array/hash table. But that has nothing to\ndo with plpgsql and/or multiple executions ...\n\n>- Should we have a test for an operator with a non-strict function?\n>I'm not aware of any built-in ops that have that characteristic; would\n>you suggest just creating a fake one for the test?\n>\n\nDunno, I haven't thought about this very much. In general I think the\nnew code should simply behave the same as the current code, i.e. 
if it\ndoes not check for strictness, we don't need either.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 28 Apr 2020 14:25:32 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, Apr 28, 2020 at 8:25 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Mon, Apr 27, 2020 at 09:04:09PM -0400, James Coleman wrote:\n> >On Sun, Apr 26, 2020 at 7:41 PM James Coleman <jtc331@gmail.com> wrote:\n> >>\n> >> On Sun, Apr 26, 2020 at 4:49 PM Tomas Vondra\n> >> <tomas.vondra@2ndquadrant.com> wrote:\n> >> >\n> >> > On Sun, Apr 26, 2020 at 02:46:19PM -0400, James Coleman wrote:\n> >> > >On Sat, Apr 25, 2020 at 8:31 PM Tomas Vondra\n> >> > ><tomas.vondra@2ndquadrant.com> wrote:\n> >> > >>\n> >> > >> On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n> >> > >> >On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >> > >> >>\n> >> > >> >> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >> > >> >> > This reminds me our attempts to add bloom filters to hash joins, which I\n> >> > >> >> > think ran into mostly the same challenge of deciding when the bloom\n> >> > >> >> > filter can be useful and is worth the extra work.\n> >> > >> >>\n> >> > >> >> Speaking of that, it would be interesting to see how a test where you\n> >> > >> >> write the query as IN(VALUES(...)) instead of IN() compares. 
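[Editor's note] The three lookup strategies being compared in this exchange — master's linear scan, the patch's binary search, and the proposed hash table — can be modeled compactly. This is a Python sketch of the semantics only; the actual implementation would use PostgreSQL's dynahash or simplehash in C, and the function names here are invented:

```python
from bisect import bisect_left

def saop_linear(value, elems):
    """Master behavior: O(n) scan, one element comparison per step."""
    return any(value == e for e in elems)

def saop_bsearch(value, sorted_elems):
    """Patched behavior: O(log n) comparisons against a pre-sorted array."""
    i = bisect_left(sorted_elems, value)
    return i < len(sorted_elems) and sorted_elems[i] == value

def saop_hash(value, elem_set):
    """Proposed behavior: one-time table build, then O(1) probes.
    The build is itself a linear pass over the array, so it only pays
    off across many lookups."""
    return value in elem_set

elems = [5, 1, 9, 7]
sorted_elems = sorted(elems)     # one-time setup for binary search
elem_set = set(elems)            # one-time setup for hashing

for probe, expected in [(7, True), (2, False)]:
    assert saop_linear(probe, elems) is expected
    assert saop_bsearch(probe, sorted_elems) is expected
    assert saop_hash(probe, elem_set) is expected
```

The timings quoted upthread (62362 ms / 201 ms / 36 ms for master / binsearch / hashjoin) line up with the per-lookup costs of these three shapes once the comparison operator is expensive, as with text.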
It would\n> >> > >> >> be interesting to know if the planner is able to make a more suitable\n> >> > >> >> choice and also to see how all the work over the years to improve Hash\n> >> > >> >> Joins compares to the bsearch with and without the bloom filter.\n> >> > >> >\n> >> > >> >It would be interesting.\n> >> > >> >\n> >> > >> >It also makes one wonder about optimizing these into to hash\n> >> > >> >joins...which I'd thought about over at [1]. I think it'd be a very\n> >> > >> >significant effort though.\n> >> > >> >\n> >> > >>\n> >> > >> I modified the script to also do the join version of the query. I can\n> >> > >> only run it on my laptop at the moment, so the results may be a bit\n> >> > >> different from those I shared before, but it's interesting I think.\n> >> > >>\n> >> > >> In most cases it's comparable to the binsearch/bloom approach, and in\n> >> > >> some cases it actually beats them quite significantly. It seems to\n> >> > >> depend on how expensive the comparison is - for \"int\" the comparison is\n> >> > >> very cheap and there's almost no difference. For \"text\" the comparison\n> >> > >> is much more expensive, and there are significant speedups.\n> >> > >>\n> >> > >> For example the test with 100k lookups in array of 10k elements and 10%\n> >> > >> match probability, the timings are these\n> >> > >>\n> >> > >> master: 62362 ms\n> >> > >> binsearch: 201 ms\n> >> > >> bloom: 65 ms\n> >> > >> hashjoin: 36 ms\n> >> > >>\n> >> > >> I do think the explanation is fairly simple - the bloom filter\n> >> > >> eliminates about 90% of the expensive comparisons, so it's 20ms plus\n> >> > >> some overhead to build and check the bits. The hash join probably\n> >> > >> eliminates a lot of the remaining comparisons, because the hash table\n> >> > >> is sized to have one tuple per bucket.\n> >> > >>\n> >> > >> Note: I also don't claim the PoC has the most efficient bloom filter\n> >> > >> implementation possible. 
I'm sure it could be made faster.\n> >> > >>\n> >> > >> Anyway, I'm not sure transforming this to a hash join is worth the\n> >> > >> effort - I agree that seems quite complex. But perhaps this suggest we\n> >> > >> should not be doing binary search and instead just build a simple hash\n> >> > >> table - that seems much simpler, and it'll probably give us about the\n> >> > >> same benefits.\n> >> > >\n> >> > >That's actually what I originally thought about doing, but I chose\n> >> > >binary search since it seemed a lot easier to get off the ground.\n> >> > >\n> >> >\n> >> > OK, that makes perfect sense.\n> >> >\n> >> > >If we instead build a hash is there anything else we need to be\n> >> > >concerned about? For example, work mem? I suppose for the binary\n> >> > >search we already have to expand the array, so perhaps it's not all\n> >> > >that meaningful relative to that...\n> >> > >\n> >> >\n> >> > I don't think we need to be particularly concerned about work_mem. We\n> >> > don't care about it now, and it's not clear to me what we could do about\n> >> > it - we already have the array in memory anyway, so it's a bit futile.\n> >> > Furthermore, if we need to care about it, it probably applies to the\n> >> > binary search too.\n> >> >\n> >> > >I was looking earlier at what our standard hash implementation was,\n> >> > >and it seemed less obvious what was needed to set that up (so binary\n> >> > >search seemed a faster proof of concept). If you happen to have any\n> >> > >pointers to similar usages I should look at, please let me know.\n> >> > >\n> >> >\n> >> > I think the hash join implementation is far too complicated. It has to\n> >> > care about work_mem, so it implements batching, etc. That's a lot of\n> >> > complexity we don't need here. IMO we could use either the usual\n> >> > dynahash, or maybe even the simpler simplehash.\n> >> >\n> >> > FWIW it'd be good to verify the numbers I shared, i.e. 
checking that the\n> >> > benchmarks makes sense and running it independently. I'm not aware of\n> >> > any issues but it was done late at night and only ran on my laptop.\n> >>\n> >> Some quick calculations (don't have the scripting in a form I can\n> >> attach yet; using this as an opportunity to hack on a genericized\n> >> performance testing framework of sorts) suggest your results are\n> >> correct. I was also testing on my laptop, but I showed 1.) roughly\n> >> equivalent results for IN (VALUES ...) and IN (<list>) for integers,\n> >> but when I switch to (short; average 3 characters long) text values I\n> >> show the hash join on VALUES is twice as fast as the binary search.\n> >>\n> >> Given that, I'm planning to implement this as a hash lookup and share\n> >> a revised patch.\n> >\n> >I've attached a patch series as before, but with an additional patch\n> >that switches to using dynahash instead of binary search.\n> >\n>\n> OK. I can't take closer look at the moment, I'll check later.\n>\n> Any particular reasons to pick dynahash over simplehash? ISTM we're\n> using simplehash elsewhere in the executor (grouping, tidbitmap, ...),\n> while there are not many places using dynahash for simple short-lived\n> hash tables. 
Of course, that alone is a weak reason to insist on using\n> simplehash here, but I suppose there were reasons for not using dynahash\n> and we'll end up facing the same issues here.\n\nNo particular reason; it wasn't clear to me that there was a reason to\nprefer one or the other (and I'm not acquainted with the codebase\nenough to know the differences), so I chose dynahash because it was\neasier to find examples to follow for implementation.\n\n> >Whereas before the benchmarking ended up with a trimodal distribution\n> >(i.e., master with IN <list>, patch with IN <list>, and either with IN\n> >VALUES), the hash implementation brings us back to an effectively\n> >bimodal distribution -- though the hash scalar array op expression\n> >implementation for text is about 5% faster than the hash join.\n> >\n>\n> Nice. I'm not surprised this is a bit faster than hash join, which has\n> to worry about additional stuff.\n>\n> >Current outstanding thoughts (besides comment/naming cleanup):\n> >\n> >- The saop costing needs to be updated to match, as Tomas pointed out.\n> >\n> >- Should we be concerned about single execution cases? For example, is\n> >the regression of speed on a simple SELECT x IN something we should\n> >try to defeat by only kicking in the optimization if we execute in a\n> >loop at least twice? That might be of particular interest to pl/pgsql.\n> >\n>\n> I don't follow. How is this related to the number of executions and/or\n> plpgsql? I suspect you might be talking about prepared statements, but\n> surely the hash table is built for each execution anyway, even if the\n> plan is reused, right?\n>\n> I think the issue we've been talking about is considering the number of\n> lookups we expect to do in the array/hash table. But that has nothing to\n> do with plpgsql and/or multiple executions ...\n\nSuppose I do \"SELECT 1 IN <100 item list>\" (whether just as a\nstandalone query or in pl/pgsql). 
Then it doesn't make sense to use\nthe optimization, because it can't possibly win over a naive linear\nscan through the array since to build the hash we have to do that\nlinear scan anyway (I suppose in theory the hashing function could be\ndramatically faster than the equality op, so maybe it could win\noverall, but it seems unlikely to me). I'm not so concerned about this\nin any query where we have a real FROM clause because even if we end\nup with only one row, the relative penalty is low, and the potential\ngain is very high. But simple expressions in pl/pgsql, for example,\nare a case where we can know for certain (correct me if I'm wrong on\nthis) that we'll only execute the expression once, which means there's\nprobably always a penalty for choosing the implementation with setup\ncosts over the default linear scan through the array.\n\n> >- Should we have a test for an operator with a non-strict function?\n> >I'm not aware of any built-in ops that have that characteristic; would\n> >you suggest just creating a fake one for the test?\n> >\n>\n> Dunno, I haven't thought about this very much. In general I think the\n> new code should simply behave the same as the current code, i.e. 
if it\n> does not check for strictness, we don't need either.\n\nBoth the original implementation and this optimized version have the\nfollowing short circuit condition:\n\nif (fcinfo->args[0].isnull && strictfunc)\n\nBut I'm not sure there are any existing tests to show that the \"&&\nstrictfunc\" is required.\n\nJames\n\n\n", "msg_date": "Tue, 28 Apr 2020 08:39:18 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, Apr 28, 2020 at 08:39:18AM -0400, James Coleman wrote:\n>On Tue, Apr 28, 2020 at 8:25 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Mon, Apr 27, 2020 at 09:04:09PM -0400, James Coleman wrote:\n>> >On Sun, Apr 26, 2020 at 7:41 PM James Coleman <jtc331@gmail.com> wrote:\n>> >>\n>> >> On Sun, Apr 26, 2020 at 4:49 PM Tomas Vondra\n>> >> <tomas.vondra@2ndquadrant.com> wrote:\n>> >> >\n>> >> > On Sun, Apr 26, 2020 at 02:46:19PM -0400, James Coleman wrote:\n>> >> > >On Sat, Apr 25, 2020 at 8:31 PM Tomas Vondra\n>> >> > ><tomas.vondra@2ndquadrant.com> wrote:\n>> >> > >>\n>> >> > >> On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n>> >> > >> >On Sat, Apr 25, 2020 at 5:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>> >> > >> >>\n>> >> > >> >> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> >> > >> >> > This reminds me our attempts to add bloom filters to hash joins, which I\n>> >> > >> >> > think ran into mostly the same challenge of deciding when the bloom\n>> >> > >> >> > filter can be useful and is worth the extra work.\n>> >> > >> >>\n>> >> > >> >> Speaking of that, it would be interesting to see how a test where you\n>> >> > >> >> write the query as IN(VALUES(...)) instead of IN() compares. 
It would\n>> >> > >> >> be interesting to know if the planner is able to make a more suitable\n>> >> > >> >> choice and also to see how all the work over the years to improve Hash\n>> >> > >> >> Joins compares to the bsearch with and without the bloom filter.\n>> >> > >> >\n>> >> > >> >It would be interesting.\n>> >> > >> >\n>> >> > >> >It also makes one wonder about optimizing these into to hash\n>> >> > >> >joins...which I'd thought about over at [1]. I think it'd be a very\n>> >> > >> >significant effort though.\n>> >> > >> >\n>> >> > >>\n>> >> > >> I modified the script to also do the join version of the query. I can\n>> >> > >> only run it on my laptop at the moment, so the results may be a bit\n>> >> > >> different from those I shared before, but it's interesting I think.\n>> >> > >>\n>> >> > >> In most cases it's comparable to the binsearch/bloom approach, and in\n>> >> > >> some cases it actually beats them quite significantly. It seems to\n>> >> > >> depend on how expensive the comparison is - for \"int\" the comparison is\n>> >> > >> very cheap and there's almost no difference. For \"text\" the comparison\n>> >> > >> is much more expensive, and there are significant speedups.\n>> >> > >>\n>> >> > >> For example the test with 100k lookups in array of 10k elements and 10%\n>> >> > >> match probability, the timings are these\n>> >> > >>\n>> >> > >> master: 62362 ms\n>> >> > >> binsearch: 201 ms\n>> >> > >> bloom: 65 ms\n>> >> > >> hashjoin: 36 ms\n>> >> > >>\n>> >> > >> I do think the explanation is fairly simple - the bloom filter\n>> >> > >> eliminates about 90% of the expensive comparisons, so it's 20ms plus\n>> >> > >> some overhead to build and check the bits. The hash join probably\n>> >> > >> eliminates a lot of the remaining comparisons, because the hash table\n>> >> > >> is sized to have one tuple per bucket.\n>> >> > >>\n>> >> > >> Note: I also don't claim the PoC has the most efficient bloom filter\n>> >> > >> implementation possible. 
I'm sure it could be made faster.\n>> >> > >>\n>> >> > >> Anyway, I'm not sure transforming this to a hash join is worth the\n>> >> > >> effort - I agree that seems quite complex. But perhaps this suggest we\n>> >> > >> should not be doing binary search and instead just build a simple hash\n>> >> > >> table - that seems much simpler, and it'll probably give us about the\n>> >> > >> same benefits.\n>> >> > >\n>> >> > >That's actually what I originally thought about doing, but I chose\n>> >> > >binary search since it seemed a lot easier to get off the ground.\n>> >> > >\n>> >> >\n>> >> > OK, that makes perfect sense.\n>> >> >\n>> >> > >If we instead build a hash is there anything else we need to be\n>> >> > >concerned about? For example, work mem? I suppose for the binary\n>> >> > >search we already have to expand the array, so perhaps it's not all\n>> >> > >that meaningful relative to that...\n>> >> > >\n>> >> >\n>> >> > I don't think we need to be particularly concerned about work_mem. We\n>> >> > don't care about it now, and it's not clear to me what we could do about\n>> >> > it - we already have the array in memory anyway, so it's a bit futile.\n>> >> > Furthermore, if we need to care about it, it probably applies to the\n>> >> > binary search too.\n>> >> >\n>> >> > >I was looking earlier at what our standard hash implementation was,\n>> >> > >and it seemed less obvious what was needed to set that up (so binary\n>> >> > >search seemed a faster proof of concept). If you happen to have any\n>> >> > >pointers to similar usages I should look at, please let me know.\n>> >> > >\n>> >> >\n>> >> > I think the hash join implementation is far too complicated. It has to\n>> >> > care about work_mem, so it implements batching, etc. That's a lot of\n>> >> > complexity we don't need here. IMO we could use either the usual\n>> >> > dynahash, or maybe even the simpler simplehash.\n>> >> >\n>> >> > FWIW it'd be good to verify the numbers I shared, i.e. 
checking that the\n>> >> > benchmarks makes sense and running it independently. I'm not aware of\n>> >> > any issues but it was done late at night and only ran on my laptop.\n>> >>\n>> >> Some quick calculations (don't have the scripting in a form I can\n>> >> attach yet; using this as an opportunity to hack on a genericized\n>> >> performance testing framework of sorts) suggest your results are\n>> >> correct. I was also testing on my laptop, but I showed 1.) roughly\n>> >> equivalent results for IN (VALUES ...) and IN (<list>) for integers,\n>> >> but when I switch to (short; average 3 characters long) text values I\n>> >> show the hash join on VALUES is twice as fast as the binary search.\n>> >>\n>> >> Given that, I'm planning to implement this as a hash lookup and share\n>> >> a revised patch.\n>> >\n>> >I've attached a patch series as before, but with an additional patch\n>> >that switches to using dynahash instead of binary search.\n>> >\n>>\n>> OK. I can't take closer look at the moment, I'll check later.\n>>\n>> Any particular reasons to pick dynahash over simplehash? ISTM we're\n>> using simplehash elsewhere in the executor (grouping, tidbitmap, ...),\n>> while there are not many places using dynahash for simple short-lived\n>> hash tables. 
Of course, that alone is a weak reason to insist on using\n>> simplehash here, but I suppose there were reasons for not using dynahash\n>> and we'll end up facing the same issues here.\n>\n>No particular reason; it wasn't clear to me that there was a reason to\n>prefer one or the other (and I'm not acquainted with the codebase\n>enough to know the differences), so I chose dynahash because it was\n>easier to find examples to follow for implementation.\n>\n\nOK, understood.\n\n>> >Whereas before the benchmarking ended up with a trimodal distribution\n>> >(i.e., master with IN <list>, patch with IN <list>, and either with IN\n>> >VALUES), the hash implementation brings us back to an effectively\n>> >bimodal distribution -- though the hash scalar array op expression\n>> >implementation for text is about 5% faster than the hash join.\n>> >\n>>\n>> Nice. I'm not surprised this is a bit faster than hash join, which has\n>> to worry about additional stuff.\n>>\n>> >Current outstanding thoughts (besides comment/naming cleanup):\n>> >\n>> >- The saop costing needs to be updated to match, as Tomas pointed out.\n>> >\n>> >- Should we be concerned about single execution cases? For example, is\n>> >the regression of speed on a simple SELECT x IN something we should\n>> >try to defeat by only kicking in the optimization if we execute in a\n>> >loop at least twice? That might be of particular interest to pl/pgsql.\n>> >\n>>\n>> I don't follow. How is this related to the number of executions and/or\n>> plpgsql? I suspect you might be talking about prepared statements, but\n>> surely the hash table is built for each execution anyway, even if the\n>> plan is reused, right?\n>>\n>> I think the issue we've been talking about is considering the number of\n>> lookups we expect to do in the array/hash table. But that has nothing to\n>> do with plpgsql and/or multiple executions ...\n>\n>Suppose I do \"SELECT 1 IN <100 item list>\" (whether just as a\n>standalone query or in pl/pgsql). 
Then it doesn't make sense to use\n>the optimization, because it can't possibly win over a naive linear\n>scan through the array since to build the hash we have to do that\n>linear scan anyway (I suppose in theory the hashing function could be\n>dramatically faster than the equality op, so maybe it could win\n>overall, but it seems unlikely to me).\n\nTrue. I'm sure we could construct such cases, but this is hardly the\nonly place where that would be an issue. We could create some rough\ncosting model and make a decision based on that.\n\n>I'm not so concerned about this in any query where we have a real FROM\n>clause because even if we end up with only one row, the relative\n>penalty is low, and the potential gain is very high. But simple\n>expressions in pl/pgsql, for example, are a case where we can know for\n>certain (correct me if I've wrong on this) that we'll only execute the\n>expression once, which means there's probably always a penalty for\n>choosing the implementation with setup costs over the default linear\n>scan through the array.\n>\n\nWhat do you mean by \"simple expressions\"? I'm not plpgsql expert and I\nsee it mostly as a way to glue together SQL queries, but yeah - if we\nknow a given ScalarArrayOpExpr will only be executed once, then we can\ndisable this optimization for now.\n\n>> >- Should we have a test for an operator with a non-strict function?\n>> >I'm not aware of any built-in ops that have that characteristic;\n>> >would you suggest just creating a fake one for the test?\n>> >\n>>\n>> Dunno, I haven't thought about this very much. In general I think the\n>> new code should simply behave the same as the current code, i.e. 
if\n>> it does not check for strictness, we don't need either.\n>\n>Both the original implementation and this optimized version have the\n>following short circuit condition:\n>\n>if (fcinfo->args[0].isnull && strictfunc)\n>\n>But I'm not sure there are any existing tests to show that the \"&&\n>strictfunc\" is required.\n>\n\nAh! You mean a regression test, not a test in the \"if condition\" sense.\nI don't see a reason not to have such test, although it's probably not\nsomething we should require from this patch.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 28 Apr 2020 15:26:26 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "út 28. 4. 2020 v 15:26 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\nnapsal:\n\n> On Tue, Apr 28, 2020 at 08:39:18AM -0400, James Coleman wrote:\n> >On Tue, Apr 28, 2020 at 8:25 AM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> On Mon, Apr 27, 2020 at 09:04:09PM -0400, James Coleman wrote:\n> >> >On Sun, Apr 26, 2020 at 7:41 PM James Coleman <jtc331@gmail.com>\n> wrote:\n> >> >>\n> >> >> On Sun, Apr 26, 2020 at 4:49 PM Tomas Vondra\n> >> >> <tomas.vondra@2ndquadrant.com> wrote:\n> >> >> >\n> >> >> > On Sun, Apr 26, 2020 at 02:46:19PM -0400, James Coleman wrote:\n> >> >> > >On Sat, Apr 25, 2020 at 8:31 PM Tomas Vondra\n> >> >> > ><tomas.vondra@2ndquadrant.com> wrote:\n> >> >> > >>\n> >> >> > >> On Sat, Apr 25, 2020 at 06:47:41PM -0400, James Coleman wrote:\n> >> >> > >> >On Sat, Apr 25, 2020 at 5:41 PM David Rowley <\n> dgrowleyml@gmail.com> wrote:\n> >> >> > >> >>\n> >> >> > >> >> On Sun, 26 Apr 2020 at 00:40, Tomas Vondra <\n> tomas.vondra@2ndquadrant.com> wrote:\n> >> >> > >> >> > This reminds me our attempts to add bloom filters to hash\n> joins, which I\n> 
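[Editor's note] The "rough costing model" Tomas mentions for deciding when to build the hash table could be as simple as a break-even comparison: the linear scan pays about n/2 comparisons per lookup, while hashing pays a one-time build of n insertions plus one probe per lookup. A sketch under those assumptions — the cost constants and function name are invented, and real planner costing in cost_qual_eval would need actual per-operator numbers:

```python
def use_hashed_saop(n_elems, est_lookups, cmp_cost=1.0, hash_cost=1.5):
    """Rough break-even test for enabling the hashed lookup.

    linear: each lookup examines n_elems/2 elements on average.
    hashed: build pays hash_cost per element once, then one probe
            per lookup.  Illustrative constants only.
    """
    linear = est_lookups * (n_elems / 2.0) * cmp_cost
    hashed = n_elems * hash_cost + est_lookups * hash_cost
    return hashed < linear

# A single execution against a 100-element list: the build loses,
# which is the 'SELECT 1 IN <100 item list>' case discussed above.
assert not use_hashed_saop(100, est_lookups=1)

# Many lookups against the same list: hashing wins easily.
assert use_hashed_saop(100, est_lookups=1000)
```

This captures the execute-once concern without needing plpgsql-specific knowledge: whenever the estimated number of lookups is near one, the model falls back to the linear scan on its own.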
>> >> > >> >> > think ran into mostly the same challenge of deciding when\n> the bloom\n> >> >> > >> >> > filter can be useful and is worth the extra work.\n> >> >> > >> >>\n> >> >> > >> >> Speaking of that, it would be interesting to see how a test\n> where you\n> >> >> > >> >> write the query as IN(VALUES(...)) instead of IN() compares.\n> It would\n> >> >> > >> >> be interesting to know if the planner is able to make a more\n> suitable\n> >> >> > >> >> choice and also to see how all the work over the years to\n> improve Hash\n> >> >> > >> >> Joins compares to the bsearch with and without the bloom\n> filter.\n> >> >> > >> >\n> >> >> > >> >It would be interesting.\n> >> >> > >> >\n> >> >> > >> >It also makes one wonder about optimizing these into to hash\n> >> >> > >> >joins...which I'd thought about over at [1]. I think it'd be a\n> very\n> >> >> > >> >significant effort though.\n> >> >> > >> >\n> >> >> > >>\n> >> >> > >> I modified the script to also do the join version of the query.\n> I can\n> >> >> > >> only run it on my laptop at the moment, so the results may be a\n> bit\n> >> >> > >> different from those I shared before, but it's interesting I\n> think.\n> >> >> > >>\n> >> >> > >> In most cases it's comparable to the binsearch/bloom approach,\n> and in\n> >> >> > >> some cases it actually beats them quite significantly. It seems\n> to\n> >> >> > >> depend on how expensive the comparison is - for \"int\" the\n> comparison is\n> >> >> > >> very cheap and there's almost no difference. 
For \"text\" the\n> comparison\n> >> >> > >> is much more expensive, and there are significant speedups.\n> >> >> > >>\n> >> >> > >> For example the test with 100k lookups in array of 10k elements\n> and 10%\n> >> >> > >> match probability, the timings are these\n> >> >> > >>\n> >> >> > >> master: 62362 ms\n> >> >> > >> binsearch: 201 ms\n> >> >> > >> bloom: 65 ms\n> >> >> > >> hashjoin: 36 ms\n> >> >> > >>\n> >> >> > >> I do think the explanation is fairly simple - the bloom filter\n> >> >> > >> eliminates about 90% of the expensive comparisons, so it's 20ms\n> plus\n> >> >> > >> some overhead to build and check the bits. The hash join\n> probably\n> >> >> > >> eliminates a lot of the remaining comparisons, because the hash\n> table\n> >> >> > >> is sized to have one tuple per bucket.\n> >> >> > >>\n> >> >> > >> Note: I also don't claim the PoC has the most efficient bloom\n> filter\n> >> >> > >> implementation possible. I'm sure it could be made faster.\n> >> >> > >>\n> >> >> > >> Anyway, I'm not sure transforming this to a hash join is worth\n> the\n> >> >> > >> effort - I agree that seems quite complex. But perhaps this\n> suggest we\n> >> >> > >> should not be doing binary search and instead just build a\n> simple hash\n> >> >> > >> table - that seems much simpler, and it'll probably give us\n> about the\n> >> >> > >> same benefits.\n> >> >> > >\n> >> >> > >That's actually what I originally thought about doing, but I chose\n> >> >> > >binary search since it seemed a lot easier to get off the ground.\n> >> >> > >\n> >> >> >\n> >> >> > OK, that makes perfect sense.\n> >> >> >\n> >> >> > >If we instead build a hash is there anything else we need to be\n> >> >> > >concerned about? For example, work mem? 
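

[Editorial sketch] The trade-off benchmarked above (one-time hash build amortized over many lookups, versus a fresh linear scan of the constant array per row) can be illustrated outside the executor. The Python below is purely illustrative, not PostgreSQL code: the list stands in for the IN-list array, the set for the proposed hash table, and the sizes loosely mirror the thread's benchmark (10k-element array, many probes, roughly half matching).

```python
# Illustration only: list = constant IN-list array, set = proposed hash table.

def linear_lookups(values, probes):
    # naive ScalarArrayOpExpr evaluation: rescan the whole array per probe
    return [v in values for v in probes]

def hashed_lookups(values, probes):
    # pay the build cost once, then each probe is an O(1) hash lookup
    table = set(values)
    return [v in table for v in probes]

values = list(range(10_000))
probes = [(i * 37) % 20_000 for i in range(1_000)]  # roughly half match

# both strategies must agree on the result; only the cost profile differs
assert linear_lookups(values, probes) == hashed_lookups(values, probes)
```

Timing the two functions (e.g. with `timeit`) shows the hashed version pulling ahead as the probe count grows, consistent with the ordering of the timings quoted above.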
I suppose for the binary\n> >> >> > >search we already have to expand the array, so perhaps it's not\n> all\n> >> >> > >that meaningful relative to that...\n> >> >> > >\n> >> >> >\n> >> >> > I don't think we need to be particularly concerned about work_mem.\n> We\n> >> >> > don't care about it now, and it's not clear to me what we could do\n> about\n> >> >> > it - we already have the array in memory anyway, so it's a bit\n> futile.\n> >> >> > Furthermore, if we need to care about it, it probably applies to\n> the\n> >> >> > binary search too.\n> >> >> >\n> >> >> > >I was looking earlier at what our standard hash implementation\n> was,\n> >> >> > >and it seemed less obvious what was needed to set that up (so\n> binary\n> >> >> > >search seemed a faster proof of concept). If you happen to have\n> any\n> >> >> > >pointers to similar usages I should look at, please let me know.\n> >> >> > >\n> >> >> >\n> >> >> > I think the hash join implementation is far too complicated. It\n> has to\n> >> >> > care about work_mem, so it implements batching, etc. That's a lot\n> of\n> >> >> > complexity we don't need here. IMO we could use either the usual\n> >> >> > dynahash, or maybe even the simpler simplehash.\n> >> >> >\n> >> >> > FWIW it'd be good to verify the numbers I shared, i.e. checking\n> that the\n> >> >> > benchmarks makes sense and running it independently. I'm not aware\n> of\n> >> >> > any issues but it was done late at night and only ran on my laptop.\n> >> >>\n> >> >> Some quick calculations (don't have the scripting in a form I can\n> >> >> attach yet; using this as an opportunity to hack on a genericized\n> >> >> performance testing framework of sorts) suggest your results are\n> >> >> correct. I was also testing on my laptop, but I showed 1.) roughly\n> >> >> equivalent results for IN (VALUES ...) 
and IN (<list>) for integers,\n> >> >> but when I switch to (short; average 3 characters long) text values I\n> >> >> show the hash join on VALUES is twice as fast as the binary search.\n> >> >>\n> >> >> Given that, I'm planning to implement this as a hash lookup and share\n> >> >> a revised patch.\n> >> >\n> >> >I've attached a patch series as before, but with an additional patch\n> >> >that switches to using dynahash instead of binary search.\n> >> >\n> >>\n> >> OK. I can't take closer look at the moment, I'll check later.\n> >>\n> >> Any particular reasons to pick dynahash over simplehash? ISTM we're\n> >> using simplehash elsewhere in the executor (grouping, tidbitmap, ...),\n> >> while there are not many places using dynahash for simple short-lived\n> >> hash tables. Of course, that alone is a weak reason to insist on using\n> >> simplehash here, but I suppose there were reasons for not using dynahash\n> >> and we'll end up facing the same issues here.\n> >\n> >No particular reason; it wasn't clear to me that there was a reason to\n> >prefer one or the other (and I'm not acquainted with the codebase\n> >enough to know the differences), so I chose dynahash because it was\n> >easier to find examples to follow for implementation.\n> >\n>\n> OK, understood.\n>\n> >> >Whereas before the benchmarking ended up with a trimodal distribution\n> >> >(i.e., master with IN <list>, patch with IN <list>, and either with IN\n> >> >VALUES), the hash implementation brings us back to an effectively\n> >> >bimodal distribution -- though the hash scalar array op expression\n> >> >implementation for text is about 5% faster than the hash join.\n> >> >\n> >>\n> >> Nice. 
I'm not surprised this is a bit faster than hash join, which has\n> >> to worry about additional stuff.\n> >>\n> >> >Current outstanding thoughts (besides comment/naming cleanup):\n> >> >\n> >> >- The saop costing needs to be updated to match, as Tomas pointed out.\n> >> >\n> >> >- Should we be concerned about single execution cases? For example, is\n> >> >the regression of speed on a simple SELECT x IN something we should\n> >> >try to defeat by only kicking in the optimization if we execute in a\n> >> >loop at least twice? That might be of particular interest to pl/pgsql.\n> >> >\n> >>\n> >> I don't follow. How is this related to the number of executions and/or\n> >> plpgsql? I suspect you might be talking about prepared statements, but\n> >> surely the hash table is built for each execution anyway, even if the\n> >> plan is reused, right?\n> >>\n> >> I think the issue we've been talking about is considering the number of\n> >> lookups we expect to do in the array/hash table. But that has nothing to\n> >> do with plpgsql and/or multiple executions ...\n> >\n> >Suppose I do \"SELECT 1 IN <100 item list>\" (whether just as a\n> >standalone query or in pl/pgsql). Then it doesn't make sense to use\n> >the optimization, because it can't possibly win over a naive linear\n> >scan through the array since to build the hash we have to do that\n> >linear scan anyway (I suppose in theory the hashing function could be\n> >dramatically faster than the equality op, so maybe it could win\n> >overall, but it seems unlikely to me).\n>\n> True. I'm sure we could construct such cases, but this is hardly the\n> only place where that would be an issue. We could create some rough\n> costing model and make a decision based on that.\n>\n> >I'm not so concerned about this in any query where we have a real FROM\n> >clause because even if we end up with only one row, the relative\n> >penalty is low, and the potential gain is very high. 
But simple\n> >expressions in pl/pgsql, for example, are a case where we can know for\n> >certain (correct me if I've wrong on this) that we'll only execute the\n> >expression once, which means there's probably always a penalty for\n> >choosing the implementation with setup costs over the default linear\n> >scan through the array.\n> >\n>\n> What do you mean by \"simple expressions\"? I'm not plpgsql expert and I\n> see it mostly as a way to glue together SQL queries, but yeah - if we\n> know a given ScalarArrayOpExpr will only be executed once, then we can\n> disable this optimization for now.\n>\n\na := a + 1\n\nis translated to\n\nSELECT $1 + 1 and save result to var a\n\nThe queries like this \"SELECT $1 + 1\" are simple expressions. They are\nevaluated just on executor level - it skip SPI\n\nthe simple expression has not FROM clause, and have to return just one row.\nI am not sure if it is required, it has to return just one column.\n\nI am not sure if executor knows so expression is executed as simply\nexpressions. But probably it can be deduced from context\n\nPavel\n\n\n\n> >> >- Should we have a test for an operator with a non-strict function?\n> >> >I'm not aware of any built-in ops that have that characteristic;\n> >> >would you suggest just creating a fake one for the test?\n> >> >\n> >>\n> >> Dunno, I haven't thought about this very much. In general I think the\n> >> new code should simply behave the same as the current code, i.e. if\n> >> it does not check for strictness, we don't need either.\n> >\n> >Both the original implementation and this optimized version have the\n> >following short circuit condition:\n> >\n> >if (fcinfo->args[0].isnull && strictfunc)\n> >\n> >But I'm not sure there are any existing tests to show that the \"&&\n> >strictfunc\" is required.\n> >\n>\n> Ah! 
You mean a regression test, not a test in the \"if condition\" sense.\nI don't see a reason not to have such test, although it's probably not\nsomething we should require from this patch.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n", "msg_date": "Tue, 28 Apr 2020 15:43:43 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, Apr 28, 2020 at 03:43:43PM +0200, Pavel Stehule wrote:\n>út 28. 4. 2020 v 15:26 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>napsal:\n>\n>> ...\n>>\n>> >I'm not so concerned about this in any query where we have a real FROM\n>> >clause because even if we end up with only one row, the relative\n>> >penalty is low, and the potential gain is very high. 
But simple\n>> >expressions in pl/pgsql, for example, are a case where we can know for\n>> >certain (correct me if I've wrong on this) that we'll only execute the\n>> >expression once, which means there's probably always a penalty for\n>> >choosing the implementation with setup costs over the default linear\n>> >scan through the array.\n>> >\n>>\n>> What do you mean by \"simple expressions\"? I'm not plpgsql expert and I\n>> see it mostly as a way to glue together SQL queries, but yeah - if we\n>> know a given ScalarArrayOpExpr will only be executed once, then we can\n>> disable this optimization for now.\n>>\n>\n>a := a + 1\n>\n>is translated to\n>\n>SELECT $1 + 1 and save result to var a\n>\n>The queries like this \"SELECT $1 + 1\" are simple expressions. They are\n>evaluated just on executor level - it skip SPI\n>\n>the simple expression has not FROM clause, and have to return just one row.\n>I am not sure if it is required, it has to return just one column.\n>\n>I am not sure if executor knows so expression is executed as simply\n>expressions. But probably it can be deduced from context\n>\n\nNot sure. The executor state is created by exec_eval_simple_expr, which\ncalls ExecInitExprWithParams (and it's the only caller). And that in\nturn is the only place that leaves (state->parent == NULL). So maybe\nthat's a way to identify simple (standalone) expressions? Otherwise we\nmight add a new EEO_FLAG_* to identify these expressions explicitly.\n\nI wonder if it would be possible to identify cases when the expression\nis executed in a loop, e.g. like this:\n\n FOR i IN 1..1000 LOOP\n x := y IN (1, 2, ..., 999);\n END LOOP;\n\nin which case we only build the ScalarArrayOpExpr once, so maybe we\ncould keep the hash table for all executions. But maybe that's not\npossible or maybe it's pointless for other reasons. 
It sure looks a bit\nlike trying to build a query engine from FOR loop.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 28 Apr 2020 16:48:28 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "út 28. 4. 2020 v 16:48 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\nnapsal:\n\n> On Tue, Apr 28, 2020 at 03:43:43PM +0200, Pavel Stehule wrote:\n> >út 28. 4. 2020 v 15:26 odesílatel Tomas Vondra <\n> tomas.vondra@2ndquadrant.com>\n> >napsal:\n> >\n> >> ...\n> >>\n> >> >I'm not so concerned about this in any query where we have a real FROM\n> >> >clause because even if we end up with only one row, the relative\n> >> >penalty is low, and the potential gain is very high. But simple\n> >> >expressions in pl/pgsql, for example, are a case where we can know for\n> >> >certain (correct me if I've wrong on this) that we'll only execute the\n> >> >expression once, which means there's probably always a penalty for\n> >> >choosing the implementation with setup costs over the default linear\n> >> >scan through the array.\n> >> >\n> >>\n> >> What do you mean by \"simple expressions\"? I'm not plpgsql expert and I\n> >> see it mostly as a way to glue together SQL queries, but yeah - if we\n> >> know a given ScalarArrayOpExpr will only be executed once, then we can\n> >> disable this optimization for now.\n> >>\n> >\n> >a := a + 1\n> >\n> >is translated to\n> >\n> >SELECT $1 + 1 and save result to var a\n> >\n> >The queries like this \"SELECT $1 + 1\" are simple expressions. 
They are\n> >evaluated just on executor level - it skip SPI\n> >\n> >the simple expression has not FROM clause, and have to return just one\n> row.\n> >I am not sure if it is required, it has to return just one column.\n> >\n> >I am not sure if executor knows so expression is executed as simply\n> >expressions. But probably it can be deduced from context\n> >\n>\n> Not sure. The executor state is created by exec_eval_simple_expr, which\n> calls ExecInitExprWithParams (and it's the only caller). And that in\n> turn is the only place that leaves (state->parent == NULL). So maybe\n> that's a way to identify simple (standalone) expressions? Otherwise we\n> might add a new EEO_FLAG_* to identify these expressions explicitly.\n>\n> I wonder if it would be possible to identify cases when the expression\n> is executed in a loop, e.g. like this:\n>\n>      FOR i IN 1..1000 LOOP\n>          x := y IN (1, 2, ..., 999);\n>      END LOOP;\n>\n> in which case we only build the ScalarArrayOpExpr once, so maybe we\n> could keep the hash table for all executions. But maybe that's not\n> possible or maybe it's pointless for other reasons. It sure looks a bit\n> like trying to build a query engine from FOR loop.\n>\n\nTheoretically it is possible, not now. But I don't think so it is\nnecessary. I cannot to remember this pattern in any plpgsql code and I\nnever seen any request on this feature.\n\nI don't think so this is task for plpgsql engine. Maybe for JIT sometimes.\n\n\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n", "msg_date": "Tue, 28 Apr 2020 17:17:33 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, Apr 28, 2020 at 11:18 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> út 28. 4. 2020 v 16:48 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com> napsal:\n>>\n>> On Tue, Apr 28, 2020 at 03:43:43PM +0200, Pavel Stehule wrote:\n>> >út 28. 4. 2020 v 15:26 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> >napsal:\n>> >\n>> >> ...\n>> >>\n>> >> >I'm not so concerned about this in any query where we have a real FROM\n>> >> >clause because even if we end up with only one row, the relative\n>> >> >penalty is low, and the potential gain is very high. But simple\n>> >> >expressions in pl/pgsql, for example, are a case where we can know for\n>> >> >certain (correct me if I've wrong on this) that we'll only execute the\n>> >> >expression once, which means there's probably always a penalty for\n>> >> >choosing the implementation with setup costs over the default linear\n>> >> >scan through the array.\n>> >> >\n>> >>\n>> >> What do you mean by \"simple expressions\"? 
I'm not plpgsql expert and I\n>> >> see it mostly as a way to glue together SQL queries, but yeah - if we\n>> >> know a given ScalarArrayOpExpr will only be executed once, then we can\n>> >> disable this optimization for now.\n>> >>\n>> >\n>> >a := a + 1\n>> >\n>> >is translated to\n>> >\n>> >SELECT $1 + 1 and save result to var a\n>> >\n>> >The queries like this \"SELECT $1 + 1\" are simple expressions. They are\n>> >evaluated just on executor level - it skip SPI\n>> >\n>> >the simple expression has not FROM clause, and have to return just one row.\n>> >I am not sure if it is required, it has to return just one column.\n\nYes, this is what I meant by simple expressions.\n\n>> >I am not sure if executor knows so expression is executed as simply\n>> >expressions. But probably it can be deduced from context\n>> >\n>>\n>> Not sure. The executor state is created by exec_eval_simple_expr, which\n>> calls ExecInitExprWithParams (and it's the only caller). And that in\n>> turn is the only place that leaves (state->parent == NULL). So maybe\n>> that's a way to identify simple (standalone) expressions? Otherwise we\n>> might add a new EEO_FLAG_* to identify these expressions explicitly.\n\nI'll look into doing one of these.\n\n>> I wonder if it would be possible to identify cases when the expression\n>> is executed in a loop, e.g. like this:\n>>\n>> FOR i IN 1..1000 LOOP\n>> x := y IN (1, 2, ..., 999);\n>> END LOOP;\n>>\n>> in which case we only build the ScalarArrayOpExpr once, so maybe we\n>> could keep the hash table for all executions. But maybe that's not\n>> possible or maybe it's pointless for other reasons. It sure looks a bit\n>> like trying to build a query engine from FOR loop.\n>\n>\n> Theoretically it is possible, not now. But I don't think so it is necessary. I cannot to remember this pattern in any plpgsql code and I never seen any request on this feature.\n>\n> I don't think so this is task for plpgsql engine. Maybe for JIT sometimes.\n\nAgreed. 
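The setup-cost trade-off under discussion can be made concrete with a standalone sketch — not PostgreSQL executor code, and the function names are invented for illustration — of the default linear scan versus a one-time sorted copy probed with binary search:

```c
#include <stdlib.h>
#include <string.h>

/* comparator shared by qsort() and bsearch() */
int
cmp_int(const void *a, const void *b)
{
    int     ia = *(const int *) a;
    int     ib = *(const int *) b;

    return (ia > ib) - (ia < ib);
}

/* Default strategy: O(n) scan per evaluation, zero setup cost. */
int
in_list_linear(int x, const int *consts, size_t n)
{
    for (size_t i = 0; i < n; i++)
    {
        if (consts[i] == x)
            return 1;
    }
    return 0;
}

/* Alternative strategy: pay a one-time O(n log n) sort at first use... */
typedef struct in_list_cache
{
    int    *sorted;
    size_t  n;
} in_list_cache;

void
in_list_setup(in_list_cache *cache, const int *consts, size_t n)
{
    cache->sorted = malloc(n * sizeof(int));
    memcpy(cache->sorted, consts, n * sizeof(int));
    qsort(cache->sorted, n, sizeof(int), cmp_int);
    cache->n = n;
}

/* ...to get O(log n) per subsequent evaluation. */
int
in_list_binsearch(const in_list_cache *cache, int x)
{
    return bsearch(&x, cache->sorted, cache->n, sizeof(int), cmp_int) != NULL;
}
```

For a single evaluation the qsort() can only lose; once the same expression state is evaluated for many rows, the cheaper probes amortize the setup cost — which is why detecting the execute-once simple-expression case matters.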
I'd thought about this kind of scenario when I brought this\nup, but I think solving it would be the responsibility of the pl/pgsql\ncompiler rather than the expression evaluation code, because it'd have\nto recognize the situation and set up a shared expression evaluation\ncontext to be reused each time through the loop.\n\nJames\n\n\n", "msg_date": "Tue, 28 Apr 2020 12:16:47 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "út 28. 4. 2020 v 18:17 odesílatel James Coleman <jtc331@gmail.com> napsal:\n\n> On Tue, Apr 28, 2020 at 11:18 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > út 28. 4. 2020 v 16:48 odesílatel Tomas Vondra <\n> tomas.vondra@2ndquadrant.com> napsal:\n> >>\n> >> On Tue, Apr 28, 2020 at 03:43:43PM +0200, Pavel Stehule wrote:\n> >> >út 28. 4. 2020 v 15:26 odesílatel Tomas Vondra <\n> tomas.vondra@2ndquadrant.com>\n> >> >napsal:\n> >> >\n> >> >> ...\n> >> >>\n> >> >> >I'm not so concerned about this in any query where we have a real\n> FROM\n> >> >> >clause because even if we end up with only one row, the relative\n> >> >> >penalty is low, and the potential gain is very high. But simple\n> >> >> >expressions in pl/pgsql, for example, are a case where we can know\n> for\n> >> >> >certain (correct me if I've wrong on this) that we'll only execute\n> the\n> >> >> >expression once, which means there's probably always a penalty for\n> >> >> >choosing the implementation with setup costs over the default linear\n> >> >> >scan through the array.\n> >> >> >\n> >> >>\n> >> >> What do you mean by \"simple expressions\"? 
I'm not plpgsql expert and\n> I\n> >> >> see it mostly as a way to glue together SQL queries, but yeah - if we\n> >> >> know a given ScalarArrayOpExpr will only be executed once, then we\n> can\n> >> >> disable this optimization for now.\n> >> >>\n> >> >\n> >> >a := a + 1\n> >> >\n> >> >is translated to\n> >> >\n> >> >SELECT $1 + 1 and save result to var a\n> >> >\n> >> >The queries like this \"SELECT $1 + 1\" are simple expressions. They are\n> >> >evaluated just on executor level - it skip SPI\n> >> >\n> >> >the simple expression has not FROM clause, and have to return just one\n> row.\n> >> >I am not sure if it is required, it has to return just one column.\n>\n> Yes, this is what I meant by simple expressions.\n>\n> >> >I am not sure if executor knows so expression is executed as simply\n> >> >expressions. But probably it can be deduced from context\n> >> >\n> >>\n> >> Not sure. The executor state is created by exec_eval_simple_expr, which\n> >> calls ExecInitExprWithParams (and it's the only caller). And that in\n> >> turn is the only place that leaves (state->parent == NULL). So maybe\n> >> that's a way to identify simple (standalone) expressions? Otherwise we\n> >> might add a new EEO_FLAG_* to identify these expressions explicitly.\n>\n> I'll look into doing one of these.\n>\n> >> I wonder if it would be possible to identify cases when the expression\n> >> is executed in a loop, e.g. like this:\n> >>\n> >> FOR i IN 1..1000 LOOP\n> >> x := y IN (1, 2, ..., 999);\n> >> END LOOP;\n> >>\n> >> in which case we only build the ScalarArrayOpExpr once, so maybe we\n> >> could keep the hash table for all executions. But maybe that's not\n> >> possible or maybe it's pointless for other reasons. It sure looks a bit\n> >> like trying to build a query engine from FOR loop.\n> >\n> >\n> > Theoretically it is possible, not now. But I don't think so it is\n> necessary. 
I cannot to remember this pattern in any plpgsql code and I\n> never seen any request on this feature.\n> >\n> > I don't think so this is task for plpgsql engine. Maybe for JIT\n> sometimes.\n>\n> Agreed. I'd thought about this kind of scenario when I brought this\n> up, but I think solving it would the responsibility of the pg/pgsql\n> compiler rather than the expression evaluation code, because it'd have\n> to recognize the situation and setup a shared expression evaluation\n> context to be reused each time through the loop.\n>\n\ncan be nice if new implementation was not slower then older in all\nenvironments and context (including plpgsql expressions)\n\nRegards\n\nPavel\n\n\n> James\n>", "msg_date": "Tue, 28 Apr 2020 19:39:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, Apr 28, 2020 at 1:40 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> út 28. 4. 2020 v 18:17 odesílatel James Coleman <jtc331@gmail.com> napsal:\n>>\n>> On Tue, Apr 28, 2020 at 11:18 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> >\n>> >\n>> >\n>> > út 28. 4. 2020 v 16:48 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com> napsal:\n>> >>\n>> >> On Tue, Apr 28, 2020 at 03:43:43PM +0200, Pavel Stehule wrote:\n>> >> >út 28. 4. 2020 v 15:26 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> >> >napsal:\n>> >> >\n>> >> >> ...\n>> >> >>\n>> >> >> >I'm not so concerned about this in any query where we have a real FROM\n>> >> >> >clause because even if we end up with only one row, the relative\n>> >> >> >penalty is low, and the potential gain is very high. But simple\n>> >> >> >expressions in pl/pgsql, for example, are a case where we can know for\n>> >> >> >certain (correct me if I've wrong on this) that we'll only execute the\n>> >> >> >expression once, which means there's probably always a penalty for\n>> >> >> >choosing the implementation with setup costs over the default linear\n>> >> >> >scan through the array.\n>> >> >> >\n>> >> >>\n>> >> >> What do you mean by \"simple expressions\"? 
I'm not plpgsql expert and I\n>> >> >> see it mostly as a way to glue together SQL queries, but yeah - if we\n>> >> >> know a given ScalarArrayOpExpr will only be executed once, then we can\n>> >> >> disable this optimization for now.\n>> >> >>\n>> >> >\n>> >> >a := a + 1\n>> >> >\n>> >> >is translated to\n>> >> >\n>> >> >SELECT $1 + 1 and save result to var a\n>> >> >\n>> >> >The queries like this \"SELECT $1 + 1\" are simple expressions. They are\n>> >> >evaluated just on executor level - it skip SPI\n>> >> >\n>> >> >the simple expression has not FROM clause, and have to return just one row.\n>> >> >I am not sure if it is required, it has to return just one column.\n>>\n>> Yes, this is what I meant by simple expressions.\n>>\n>> >> >I am not sure if executor knows so expression is executed as simply\n>> >> >expressions. But probably it can be deduced from context\n>> >> >\n>> >>\n>> >> Not sure. The executor state is created by exec_eval_simple_expr, which\n>> >> calls ExecInitExprWithParams (and it's the only caller). And that in\n>> >> turn is the only place that leaves (state->parent == NULL). So maybe\n>> >> that's a way to identify simple (standalone) expressions? Otherwise we\n>> >> might add a new EEO_FLAG_* to identify these expressions explicitly.\n>>\n>> I'll look into doing one of these.\n>>\n>> >> I wonder if it would be possible to identify cases when the expression\n>> >> is executed in a loop, e.g. like this:\n>> >>\n>> >> FOR i IN 1..1000 LOOP\n>> >> x := y IN (1, 2, ..., 999);\n>> >> END LOOP;\n>> >>\n>> >> in which case we only build the ScalarArrayOpExpr once, so maybe we\n>> >> could keep the hash table for all executions. But maybe that's not\n>> >> possible or maybe it's pointless for other reasons. It sure looks a bit\n>> >> like trying to build a query engine from FOR loop.\n>> >\n>> >\n>> > Theoretically it is possible, not now. But I don't think so it is necessary. 
I cannot to remember this pattern in any plpgsql code and I never seen any request on this feature.\n>> >\n>> > I don't think so this is task for plpgsql engine. Maybe for JIT sometimes.\n>>\n>> Agreed. I'd thought about this kind of scenario when I brought this\n>> up, but I think solving it would the responsibility of the pg/pgsql\n>> compiler rather than the expression evaluation code, because it'd have\n>> to recognize the situation and setup a shared expression evaluation\n>> context to be reused each time through the loop.\n>\n>\n> can be nice if new implementation was not slower then older in all environments and context (including plpgsql expressions)\n\nAgreed, which is why I'm going to look into preventing using the new\ncode path for simple expressions.\n\nJames\n\n\n", "msg_date": "Tue, 28 Apr 2020 14:24:54 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "I cc'd Andres given his commit introduced simplehash, so I figured\nhe'd probably have a few pointers on when each one might be useful.\n\nOn Tue, Apr 28, 2020 at 8:39 AM James Coleman <jtc331@gmail.com> wrote:\n...\n> > Any particular reasons to pick dynahash over simplehash? ISTM we're\n> > using simplehash elsewhere in the executor (grouping, tidbitmap, ...),\n> > while there are not many places using dynahash for simple short-lived\n> > hash tables. Of course, that alone is a weak reason to insist on using\n> > simplehash here, but I suppose there were reasons for not using dynahash\n> > and we'll end up facing the same issues here.\n>\n> No particular reason; it wasn't clear to me that there was a reason to\n> prefer one or the other (and I'm not acquainted with the codebase\n> enough to know the differences), so I chose dynahash because it was\n> easier to find examples to follow for implementation.\n\nDo you have any thoughts on what the trade-offs/use-cases etc. 
are for\ndynahash versus simple hash?\n\n From reading the commit message in b30d3ea824c it seems like simple\nhash is faster and optimized for CPU cache benefits. The comments at\nthe top of simplehash.h also discourage its use in non\nperformance/space sensitive uses, but there isn't anything I can see\nthat explicitly tries to discuss when dynahash is useful, etc.\n\nGiven the performance notes in that commit message, I'm thinking\nswitching to simple hash is worth it.\n\nBut I also wonder if there might be some value in a README or comments\naddition that would be a guide to what the various hash\nimplementations are useful for. If there's interest, I could try to\ntype something short up so that we have something to make the code\nbase a bit more discoverable.\n\nJames\n\n\n", "msg_date": "Tue, 28 Apr 2020 18:22:20 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, Apr 28, 2020 at 06:22:20PM -0400, James Coleman wrote:\n>I cc'd Andres given his commit introduced simplehash, so I figured\n>he'd probably have a few pointers on when each one might be useful.\n>\n>On Tue, Apr 28, 2020 at 8:39 AM James Coleman <jtc331@gmail.com> wrote:\n>...\n>> > Any particular reasons to pick dynahash over simplehash? ISTM we're\n>> > using simplehash elsewhere in the executor (grouping, tidbitmap, ...),\n>> > while there are not many places using dynahash for simple short-lived\n>> > hash tables. 
Of course, that alone is a weak reason to insist on using\n>> > simplehash here, but I suppose there were reasons for not using dynahash\n>> > and we'll end up facing the same issues here.\n>>\n>> No particular reason; it wasn't clear to me that there was a reason to\n>> prefer one or the other (and I'm not acquainted with the codebase\n>> enough to know the differences), so I chose dynahash because it was\n>> easier to find examples to follow for implementation.\n>\n>Do you have any thoughts on what the trade-offs/use-cases etc. are for\n>dynahash versus simple hash?\n>\n>From reading the commit message in b30d3ea824c it seems like simple\n>hash is faster and optimized for CPU cache benefits. The comments at\n>the top of simplehash.h also discourage it's use in non\n>performance/space sensitive uses, but there isn't anything I can see\n>that explicitly tries to discuss when dynahash is useful, etc.\n>\n>Given the performance notes in that commit message, I thinking\n>switching to simple hash is worth it.\n>\n\nI recall doing some benchmarks for that patch, but it's so long I don't\nreally remember the details. But in general, I agree simplehash is a bit\nmore efficient in terms of CPU / caching.\n\nI think the changes required to switch from dynahash to simplehash are\nfairly small, so I think the best thing we can do is just try do some\nmeasurement and then decide.\n\n>But I also wonder if there might be some value in a README or comments\n>addition that would be a guide to what the various hash\n>implementations are useful for. If there's interest, I could try to\n>type something short up so that we have something to make the code\n>base a bit more discoverable.\n>\n\nI wouldn't object to that. 
Although maybe we should simply add some\nbasic recommendations to the comments in dynahash/simplehash.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Apr 2020 00:52:33 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "Hi,\n\nOn 2020-04-28 18:22:20 -0400, James Coleman wrote:\n> I cc'd Andres given his commit introduced simplehash, so I figured\n> he'd probably have a few pointers on when each one might be useful.\n> [...]\n> Do you have any thoughts on what the trade-offs/use-cases etc. are for\n> dynahash versus simple hash?\n> \n> From reading the commit message in b30d3ea824c it seems like simple\n> hash is faster and optimized for CPU cache benefits. The comments at\n> the top of simplehash.h also discourage it's use in non\n> performance/space sensitive uses, but there isn't anything I can see\n> that explicitly tries to discuss when dynahash is useful, etc.\n\nBenefits of dynahash (chained hashtable):\n- supports partitioning, useful for shared memory accessed under locks\n- better performance for large entries, as they don't need to be moved\n around in case of hash conflicts\n- stable pointers to hash table entries\n\nBenefits of simplehash (open addressing hash table):\n- no indirect function calls, known structure sizes, due to \"templated\"\n code generation (these show up substantially in profiles for dynahash)\n- considerably faster for small entries due to previous point, and due\n open addressing hash tables having better cache behaviour than chained\n hashtables\n- once set-up the interface is type safe and easier to use\n- no overhead of a separate memory context etc\n\n\n> Given the performance notes in that commit message, I thinking\n> switching to simple hash is worth it.\n\nSeems plausible to 
me.\n\n\n> But I also wonder if there might be some value in a README or comments\n> addition that would be a guide to what the various hash\n> implementations are useful for. If there's interest, I could try to\n> type something short up so that we have something to make the code\n> base a bit more discoverable.\n\nThat'd make sense to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 28 Apr 2020 16:05:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, Apr 28, 2020 at 7:05 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-28 18:22:20 -0400, James Coleman wrote:\n> > I cc'd Andres given his commit introduced simplehash, so I figured\n> > he'd probably have a few pointers on when each one might be useful.\n> > [...]\n> > Do you have any thoughts on what the trade-offs/use-cases etc. are for\n> > dynahash versus simple hash?\n> >\n> > From reading the commit message in b30d3ea824c it seems like simple\n> > hash is faster and optimized for CPU cache benefits. 
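As a rough illustration of the open-addressing behaviour in the comparison above, here is a minimal integer hash set — a toy sketch with invented names, not simplehash.h; it assumes non-negative keys and a fixed power-of-two table that never fills up:

```c
#define TBL_SIZE 64         /* power of two; assumed larger than the key count */
#define SLOT_EMPTY (-1)     /* sentinel: keys are assumed non-negative */

typedef struct int_set
{
    int slots[TBL_SIZE];
} int_set;

/* cheap integer mix; illustration only */
unsigned
hash_int(int key)
{
    unsigned h = (unsigned) key;

    h ^= h >> 16;
    h *= 0x45d9f3bu;
    h ^= h >> 16;
    return h;
}

void
int_set_init(int_set *set)
{
    for (int i = 0; i < TBL_SIZE; i++)
        set->slots[i] = SLOT_EMPTY;
}

void
int_set_add(int_set *set, int key)
{
    unsigned    i = hash_int(key) & (TBL_SIZE - 1);

    /*
     * Linear probing: a collision walks adjacent slots in the same array,
     * so probes touch contiguous memory instead of chasing chained list
     * nodes through separately allocated entries.
     */
    while (set->slots[i] != SLOT_EMPTY && set->slots[i] != key)
        i = (i + 1) & (TBL_SIZE - 1);
    set->slots[i] = key;
}

int
int_set_contains(const int_set *set, int key)
{
    unsigned    i = hash_int(key) & (TBL_SIZE - 1);

    while (set->slots[i] != SLOT_EMPTY)
    {
        if (set->slots[i] == key)
            return 1;
        i = (i + 1) & (TBL_SIZE - 1);
    }
    return 0;
}
```

The contiguous probe sequence is the cache-behaviour advantage of open addressing; the flip side, as noted for dynahash, is that entries may be moved on conflict, so pointers into the table are not stable.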
The comments at\n> > the top of simplehash.h also discourage it's use in non\n> > performance/space sensitive uses, but there isn't anything I can see\n> > that explicitly tries to discuss when dynahash is useful, etc.\n>\n> Benefits of dynahash (chained hashtable):\n> - supports partitioning, useful for shared memory accessed under locks\n> - better performance for large entries, as they don't need to be moved\n> around in case of hash conflicts\n> - stable pointers to hash table entries\n>\n> Benefits of simplehash (open addressing hash table):\n> - no indirect function calls, known structure sizes, due to \"templated\"\n> code generation (these show up substantially in profiles for dynahash)\n> - considerably faster for small entries due to previous point, and due\n> open addressing hash tables having better cache behaviour than chained\n> hashtables\n> - once set-up the interface is type safe and easier to use\n> - no overhead of a separate memory context etc\n>\n>\n> > Given the performance notes in that commit message, I thinking\n> > switching to simple hash is worth it.\n>\n> Seems plausible to me.\n>\n>\n> > But I also wonder if there might be some value in a README or comments\n> > addition that would be a guide to what the various hash\n> > implementations are useful for. If there's interest, I could try to\n> > type something short up so that we have something to make the code\n> > base a bit more discoverable.\n>\n> That'd make sense to me.\n\nCool, I'll work on that as I have time then.\n\nOne question: what is the reasoning behind having SH_STORE_HASH? 
The\nonly things I could imagine would be a case where you have external\npointers to some set of values or need to be able to use the hash for\nother reasons besides the hash table (and so can avoid calculating it\ntwice), but maybe I'm missing something.\n\nThanks,\nJames\n\n\n", "msg_date": "Wed, 29 Apr 2020 10:26:12 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Wed, Apr 29, 2020 at 10:26:12AM -0400, James Coleman wrote:\n>On Tue, Apr 28, 2020 at 7:05 PM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Hi,\n>>\n>> On 2020-04-28 18:22:20 -0400, James Coleman wrote:\n>> > I cc'd Andres given his commit introduced simplehash, so I figured\n>> > he'd probably have a few pointers on when each one might be useful.\n>> > [...]\n>> > Do you have any thoughts on what the trade-offs/use-cases etc. are for\n>> > dynahash versus simple hash?\n>> >\n>> > From reading the commit message in b30d3ea824c it seems like simple\n>> > hash is faster and optimized for CPU cache benefits. 
The comments at\n>> > the top of simplehash.h also discourage it's use in non\n>> > performance/space sensitive uses, but there isn't anything I can see\n>> > that explicitly tries to discuss when dynahash is useful, etc.\n>>\n>> Benefits of dynahash (chained hashtable):\n>> - supports partitioning, useful for shared memory accessed under locks\n>> - better performance for large entries, as they don't need to be moved\n>> around in case of hash conflicts\n>> - stable pointers to hash table entries\n>>\n>> Benefits of simplehash (open addressing hash table):\n>> - no indirect function calls, known structure sizes, due to \"templated\"\n>> code generation (these show up substantially in profiles for dynahash)\n>> - considerably faster for small entries due to previous point, and due\n>> open addressing hash tables having better cache behaviour than chained\n>> hashtables\n>> - once set-up the interface is type safe and easier to use\n>> - no overhead of a separate memory context etc\n>>\n>>\n>> > Given the performance notes in that commit message, I thinking\n>> > switching to simple hash is worth it.\n>>\n>> Seems plausible to me.\n>>\n>>\n>> > But I also wonder if there might be some value in a README or comments\n>> > addition that would be a guide to what the various hash\n>> > implementations are useful for. If there's interest, I could try to\n>> > type something short up so that we have something to make the code\n>> > base a bit more discoverable.\n>>\n>> That'd make sense to me.\n>\n>Cool, I'll work on that as I have time then.\n>\n>One question: what is the reasoning behind having SH_STORE_HASH? 
The\n>only things I could imagine would be a case where you have external\n>pointers to some set of values or need to be able to use the hash for\n>other reasons besides the hash table (and so can avoid calculating it\n>twice), but maybe I'm missing something.\n>\n\nI believe it's because computing the hash may be fairly expensive for\nsome data types, in which case it may be better to just store it for\nfuture use.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Apr 2020 17:17:35 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Wed, Apr 29, 2020 at 11:17 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Apr 29, 2020 at 10:26:12AM -0400, James Coleman wrote:\n> >On Tue, Apr 28, 2020 at 7:05 PM Andres Freund <andres@anarazel.de> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On 2020-04-28 18:22:20 -0400, James Coleman wrote:\n> >> > I cc'd Andres given his commit introduced simplehash, so I figured\n> >> > he'd probably have a few pointers on when each one might be useful.\n> >> > [...]\n> >> > Do you have any thoughts on what the trade-offs/use-cases etc. are for\n> >> > dynahash versus simple hash?\n> >> >\n> >> > From reading the commit message in b30d3ea824c it seems like simple\n> >> > hash is faster and optimized for CPU cache benefits. 
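Tomas's point about expensive-to-compute hashes can be sketched as follows — invented names, assuming string keys whose hashing walks the whole key: compute the hash once at insert time, store it in the entry, and let later probes reject non-matches with a cheap integer comparison before any full key comparison:

```c
#include <string.h>

typedef struct entry
{
    const char *key;
    unsigned    hash;       /* cached at insert time, never recomputed */
} entry;

/* FNV-1a: cost proportional to key length, so worth caching */
unsigned
hash_string(const char *s)
{
    unsigned h = 2166136261u;

    for (; *s; s++)
    {
        h ^= (unsigned char) *s;
        h *= 16777619u;
    }
    return h;
}

void
entry_init(entry *e, const char *key)
{
    e->key = key;
    e->hash = hash_string(key);     /* the one and only hash computation */
}

/*
 * Probe-time comparison: the cached hash rejects most mismatches with a
 * single integer compare; strcmp() runs only on a hash match (or a rare
 * collision).
 */
int
entry_matches(const entry *e, const char *probe_key, unsigned probe_hash)
{
    return e->hash == probe_hash && strcmp(e->key, probe_key) == 0;
}
```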
The comments at\n> >> > the top of simplehash.h also discourage it's use in non\n> >> > performance/space sensitive uses, but there isn't anything I can see\n> >> > that explicitly tries to discuss when dynahash is useful, etc.\n> >>\n> >> Benefits of dynahash (chained hashtable):\n> >> - supports partitioning, useful for shared memory accessed under locks\n> >> - better performance for large entries, as they don't need to be moved\n> >> around in case of hash conflicts\n> >> - stable pointers to hash table entries\n> >>\n> >> Benefits of simplehash (open addressing hash table):\n> >> - no indirect function calls, known structure sizes, due to \"templated\"\n> >> code generation (these show up substantially in profiles for dynahash)\n> >> - considerably faster for small entries due to previous point, and due\n> >> open addressing hash tables having better cache behaviour than chained\n> >> hashtables\n> >> - once set-up the interface is type safe and easier to use\n> >> - no overhead of a separate memory context etc\n> >>\n> >>\n> >> > Given the performance notes in that commit message, I thinking\n> >> > switching to simple hash is worth it.\n> >>\n> >> Seems plausible to me.\n> >>\n> >>\n> >> > But I also wonder if there might be some value in a README or comments\n> >> > addition that would be a guide to what the various hash\n> >> > implementations are useful for. If there's interest, I could try to\n> >> > type something short up so that we have something to make the code\n> >> > base a bit more discoverable.\n> >>\n> >> That'd make sense to me.\n> >\n> >Cool, I'll work on that as I have time then.\n> >\n> >One question: what is the reasoning behind having SH_STORE_HASH? 
The\n> >only things I could imagine would be a case where you have external\n> >pointers to some set of values or need to be able to use the hash for\n> >other reasons besides the hash table (and so can avoid calculating it\n> >twice), but maybe I'm missing something.\n> >\n>\n> I believe it's because computing the hash may be fairly expensive for\n> some data types, in which case it may be better to just store it for\n> future use.\n\nBut is it storing it for use primarily by the hash table\nimplementation (i.e., does it need the hash stored this way to avoid\nrepeated recalculation) or for caller's use?\n\nJames\n\n\n", "msg_date": "Wed, 29 Apr 2020 11:34:24 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Wed, Apr 29, 2020 at 11:34:24AM -0400, James Coleman wrote:\n>On Wed, Apr 29, 2020 at 11:17 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Wed, Apr 29, 2020 at 10:26:12AM -0400, James Coleman wrote:\n>> >On Tue, Apr 28, 2020 at 7:05 PM Andres Freund <andres@anarazel.de> wrote:\n>> >>\n>> >> Hi,\n>> >>\n>> >> On 2020-04-28 18:22:20 -0400, James Coleman wrote:\n>> >> > I cc'd Andres given his commit introduced simplehash, so I figured\n>> >> > he'd probably have a few pointers on when each one might be useful.\n>> >> > [...]\n>> >> > Do you have any thoughts on what the trade-offs/use-cases etc. are for\n>> >> > dynahash versus simple hash?\n>> >> >\n>> >> > From reading the commit message in b30d3ea824c it seems like simple\n>> >> > hash is faster and optimized for CPU cache benefits. 
The comments at\n>> >> > the top of simplehash.h also discourage it's use in non\n>> >> > performance/space sensitive uses, but there isn't anything I can see\n>> >> > that explicitly tries to discuss when dynahash is useful, etc.\n>> >>\n>> >> Benefits of dynahash (chained hashtable):\n>> >> - supports partitioning, useful for shared memory accessed under locks\n>> >> - better performance for large entries, as they don't need to be moved\n>> >> around in case of hash conflicts\n>> >> - stable pointers to hash table entries\n>> >>\n>> >> Benefits of simplehash (open addressing hash table):\n>> >> - no indirect function calls, known structure sizes, due to \"templated\"\n>> >> code generation (these show up substantially in profiles for dynahash)\n>> >> - considerably faster for small entries due to previous point, and due\n>> >> open addressing hash tables having better cache behaviour than chained\n>> >> hashtables\n>> >> - once set-up the interface is type safe and easier to use\n>> >> - no overhead of a separate memory context etc\n>> >>\n>> >>\n>> >> > Given the performance notes in that commit message, I thinking\n>> >> > switching to simple hash is worth it.\n>> >>\n>> >> Seems plausible to me.\n>> >>\n>> >>\n>> >> > But I also wonder if there might be some value in a README or comments\n>> >> > addition that would be a guide to what the various hash\n>> >> > implementations are useful for. If there's interest, I could try to\n>> >> > type something short up so that we have something to make the code\n>> >> > base a bit more discoverable.\n>> >>\n>> >> That'd make sense to me.\n>> >\n>> >Cool, I'll work on that as I have time then.\n>> >\n>> >One question: what is the reasoning behind having SH_STORE_HASH? 
The\n>> >only things I could imagine would be a case where you have external\n>> >pointers to some set of values or need to be able to use the hash for\n>> >other reasons besides the hash table (and so can avoid calculating it\n>> >twice), but maybe I'm missing something.\n>> >\n>>\n>> I believe it's because computing the hash may be fairly expensive for\n>> some data types, in which case it may be better to just store it for\n>> future use.\n>\n>But is it storing it for use primarily by the hash table\n>implementation (i.e., does it need the hash stored this way to avoid\n>repeated recalculation) or for caller's use?\n>\n\nFor the hash table implementation, I think. Simplehash is using \"robin\nhood\" hashing, which needs to decide which entry to move in case of\ncollision. AFAIC that requires the knowledge of hash value for both the\nnew and existing entry, and having to compute that over and over would\nbe fairly expensive. But that's my understanding, I might be wrong and\nit's useful for external use too.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Apr 2020 17:50:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, Apr 28, 2020 at 8:25 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n...\n> Any particular reasons to pick dynahash over simplehash? ISTM we're\n> using simplehash elsewhere in the executor (grouping, tidbitmap, ...),\n> while there are not many places using dynahash for simple short-lived\n> hash tables. 
Of course, that alone is a weak reason to insist on using\n> simplehash here, but I suppose there were reasons for not using dynahash\n> and we'll end up facing the same issues here.\n\nI've attached a patch series that includes switching to simplehash.\nObviously we'd really just collapse all of these patches, but it's\nperhaps interesting to see the changes required to use each style\n(binary search, dynahash, simplehash).\n\nAs before, there are clearly comments and naming things to be\naddressed, but the implementation should be reasonably clean.\n\nJames", "msg_date": "Thu, 30 Apr 2020 22:20:39 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On 01/05/2020 05:20, James Coleman wrote:\n> On Tue, Apr 28, 2020 at 8:25 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> ...\n>> Any particular reasons to pick dynahash over simplehash? ISTM we're\n>> using simplehash elsewhere in the executor (grouping, tidbitmap, ...),\n>> while there are not many places using dynahash for simple short-lived\n>> hash tables. Of course, that alone is a weak reason to insist on using\n>> simplehash here, but I suppose there were reasons for not using dynahash\n>> and we'll end up facing the same issues here.\n> \n> I've attached a patch series that includes switching to simplehash.\n> Obviously we'd really just collapse all of these patches, but it's\n> perhaps interesting to see the changes required to use each style\n> (binary search, dynahash, simplehash).\n> \n> As before, there are clearly comments and naming things to be\n> addressed, but the implementation should be reasonably clean.\n\nLooks good, aside from the cleanup work that you mentioned. There are a \nfew more cases that I think you could easily handle with very little \nextra code:\n\nYou could also apply the optimization for non-Const expressions, as long \nas they're stable (i.e. 
!contain_volatile_functions(expr)).\n\nDeconstructing the array Datum into a simple C array on first call would \nbe a win even for very small arrays and for AND semantics, even if you \ndon't use a hash table.\n\n- Heikki\n\n\n", "msg_date": "Wed, 19 Aug 2020 10:16:17 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Wed, Aug 19, 2020 at 3:16 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 01/05/2020 05:20, James Coleman wrote:\n> > On Tue, Apr 28, 2020 at 8:25 AM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n> > ...\n> >> Any particular reasons to pick dynahash over simplehash? ISTM we're\n> >> using simplehash elsewhere in the executor (grouping, tidbitmap, ...),\n> >> while there are not many places using dynahash for simple short-lived\n> >> hash tables. Of course, that alone is a weak reason to insist on using\n> >> simplehash here, but I suppose there were reasons for not using dynahash\n> >> and we'll end up facing the same issues here.\n> >\n> > I've attached a patch series that includes switching to simplehash.\n> > Obviously we'd really just collapse all of these patches, but it's\n> > perhaps interesting to see the changes required to use each style\n> > (binary search, dynahash, simplehash).\n> >\n> > As before, there are clearly comments and naming things to be\n> > addressed, but the implementation should be reasonably clean.\n>\n> Looks good, aside from the cleanup work that you mentioned. There are a\n> few more cases that I think you could easily handle with very little\n> extra code:\n>\n> You could also apply the optimization for non-Const expressions, as long\n> as they're stable (i.e. !contain_volatile_functions(expr)).\n\nIs that true? Don't we also have to worry about whether or not the\nvalue is stable (i.e., know when a param has changed)? 
There have been\ndiscussions about being able to cache stable subexpressions, and my\nunderstanding was that we'd need to have that infrastructure (along\nwith the necessary invalidation when the param changes) to be able\nto use this for non-Const expressions.\n\n> Deconstructing the array Datum into a simple C array on first call would\n> be a win even for very small arrays and for AND semantics, even if you\n> don't use a hash table.\n\nBecause you wouldn't have to repeatedly detoast it? Or some other\nreason I'm not thinking of? My intuition would have been that (aside\nfrom detoasting if necessary) there wouldn't be much real overhead in,\nfor example, an array storing integers.\n\nJames\n\n\n", "msg_date": "Tue, 8 Sep 2020 15:25:41 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On 08/09/2020 22:25, James Coleman wrote:\n> On Wed, Aug 19, 2020 at 3:16 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> You could also apply the optimization for non-Const expressions, as long\n>> as they're stable (i.e. !contain_volatile_functions(expr)).\n> \n> Is that true? Don't we also have to worry about whether or not the\n> value is stable (i.e., know when a param has changed)? There have been\n> discussions about being able to cache stable subexpressions, and my\n> understanding was that we'd need to have that infrastructure (along\n> with the necessary invalidation when the param changes) to be able\n> to use this for non-Const expressions.\n\nYeah, you're right, you'd have to also check for PARAM_EXEC Params. And \nVars. I think the conditions are the same as those checked in \nmatch_clause_to_partition_key() in partprune.c (it's a long function, \nsearch for \"if (!IsA(rightop, Const))\"). Not sure it's worth the \ntrouble, then. 
But it would be nice to handle queries like \"WHERE column \n= ANY ($1)\"\n\n>> Deconstructing the array Datum into a simple C array on first call would\n>> be a win even for very small arrays and for AND semantics, even if you\n>> don't use a hash table.\n> \n> Because you wouldn't have to repeatedly detoast it? Or some other\n> reason I'm not thinking of? My intuition would have been that (aside\n> from detoasting if necessary) there wouldn't be much real overhead in,\n> for example, an array storing integers.\n\nDealing with NULLs and different element sizes in the array is pretty \ncomplicated. Looping through the array currently looks like this:\n\n\t/* Loop over the array elements */\n\ts = (char *) ARR_DATA_PTR(arr);\n\tbitmap = ARR_NULLBITMAP(arr);\n\tbitmask = 1;\n\n\tfor (int i = 0; i < nitems; i++)\n\t{\n\t\tDatum\t\telt;\n\t\tDatum\t\tthisresult;\n\n\t\t/* Get array element, checking for NULL */\n\t\tif (bitmap && (*bitmap & bitmask) == 0)\n\t\t{\n\t\t\tfcinfo->args[1].value = (Datum) 0;\n\t\t\tfcinfo->args[1].isnull = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\telt = fetch_att(s, typbyval, typlen);\n\t\t\ts = att_addlength_pointer(s, typlen, s);\n\t\t\ts = (char *) att_align_nominal(s, typalign);\n\t\t\tfcinfo->args[1].value = elt;\n\t\t\tfcinfo->args[1].isnull = false;\n\t\t}\n\n\t\t[do stuff with Datum/isnull]\n\n\t\t/* advance bitmap pointer if any */\n\t\tif (bitmap)\n\t\t{\n\t\t\tbitmask <<= 1;\n\t\t\tif (bitmask == 0x100)\n\t\t\t{\n\t\t\t\tbitmap++;\n\t\t\t\tbitmask = 1;\n\t\t\t}\n\t\t}\n\t}\n\nCompared with just:\n\n\tfor (int i = 0; i < nitems; i++)\n\t{\n\t\tDatum\t\telt = datums[i];\n\n\t\t[do stuff with the Datum]\n\t}\n\nI'm not sure how much difference that makes, but I presume it's not \nzero, and it seems like an easy win when you have the code to deal with \nthe Datum array representation anyway.\n\n- Heikki\n\n\n", "msg_date": "Tue, 8 Sep 2020 23:37:06 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, 
"msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, Sep 8, 2020 at 4:37 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 08/09/2020 22:25, James Coleman wrote:\n> > On Wed, Aug 19, 2020 at 3:16 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >>\n> >> You could also apply the optimization for non-Const expressions, as long\n> >> as they're stable (i.e. !contain_volatile_functions(expr)).\n> >\n> > Is that true? Don't we also have to worry about whether or not the\n> > value is stable (i.e., know when a param has changed)? There have been\n> > discussions about being able to cache stable subexpressions, and my\n> > understanding was that we'd need to have that infrastructure (along\n> > with the necessarily invalidation when the param changes) to be able\n> > to use this for non-Const expressions.\n>\n> Yeah, you're right, you'd have to also check for PARAM_EXEC Params. And\n> Vars. I think the conditions are the same as those checked in\n> match_clause_to_partition_key() in partprune.c (it's a long function,\n> search for \"if (!IsA(rightop, Const))\"). Not sure it's worth the\n> trouble, then. But it would be nice to handle queries like \"WHERE column\n> = ANY ($1)\"\n\nIf I'm understanding properly you're imagining something in the form of:\n\nwith x as (select '{1,2,3,4,5,6,7,8,9,10}'::int[])\nselect * from t where i = any ((select * from x)::int[]);\n\nI've been playing around with this and currently have these checks:\n\ncontain_var_clause((Node *) arrayarg)\ncontain_volatile_functions((Node *) arrayarg)\ncontain_exec_param((Node *) arrayarg, NIL)\n\n(the last one I had to modify the function to handle empty lists)\n\nIf any are true, then have to disable the optimization. 
But for\nqueries in the form above the test contain_exec_param((Node *)\narrayarg, NIL) evaluates to true, even though we know from looking at\nthe query that the array subexpression is stable for the length of the\nquery.\n\nAm I misunderstanding what you're going for? Or is there a way to\nconfirm the expr, although an exec param, won't change?\n\nAnother interesting thing that this would imply is that we'd either have to:\n\n1. Remove the array length check altogether,\n2. Always use the hash when have a non-Const, but when a Const only if\nthe array length check passes, or\n3. Make this new expr op more fully featured by teaching it how to use\neither a straight loop through a deconstructed array or use the hash.\n\nThat last option feeds into further discussion in the below:\n\n> >> Deconstructing the array Datum into a simple C array on first call would\n> >> be a win even for very small arrays and for AND semantics, even if you\n> >> don't use a hash table.\n> >\n> > Because you wouldn't have to repeatedly detoast it? Or some other\n> > reason I'm not thinking of? My intuition would have been that (aside\n> > from detoasting if necessary) there wouldn't be much real overhead in,\n> > for example, an array storing integers.\n>\n> Dealing with NULLs and different element sizes in the array is pretty\n> complicated. 
Looping through the array currently looks like this:\n>\n> /* Loop over the array elements */\n> s = (char *) ARR_DATA_PTR(arr);\n> bitmap = ARR_NULLBITMAP(arr);\n> bitmask = 1;\n>\n> for (int i = 0; i < nitems; i++)\n> {\n> Datum elt;\n> Datum thisresult;\n>\n> /* Get array element, checking for NULL */\n> if (bitmap && (*bitmap & bitmask) == 0)\n> {\n> fcinfo->args[1].value = (Datum) 0;\n> fcinfo->args[1].isnull = true;\n> }\n> else\n> {\n> elt = fetch_att(s, typbyval, typlen);\n> s = att_addlength_pointer(s, typlen, s);\n> s = (char *) att_align_nominal(s, typalign);\n> fcinfo->args[1].value = elt;\n> fcinfo->args[1].isnull = false;\n> }\n>\n> [do stuff with Datum/isnull]\n>\n> /* advance bitmap pointer if any */\n> if (bitmap)\n> {\n> bitmask <<= 1;\n> if (bitmask == 0x100)\n> {\n> bitmap++;\n> bitmask = 1;\n> }\n> }\n> }\n>\n> Compared with just:\n>\n> for (int i = 0; i < nitems; i++)\n> {\n> Datum elt = datums[i];\n>\n> [do stuff with the Datum]\n> }\n>\n> I'm not sure how much difference that makes, but I presume it's not\n> zero, and it seems like an easy win when you have the code to deal with\n> the Datum array representation anyway.\n\nDoing this would necessitate option 3 above: we'd have to have this\nnew expr op be able both to use a hash or alternatively do a normal\nloop.\n\nBeing able to use this in more cases than just a Const array expr is\ncertainly interesting, but I'm not sure yet about the feasibility or\ndesirability of that at this point given the above restrictions.\n\nOne other point in favor of the additional complexity here is that\nit's likely that the above described runtime switching between hash\nand loop would be necessary (for this optimization to come into play)\nif caching of stable subexpressions ever lands. 
I have some interest\nin working on that...but it's also a large project.\n\nJames\n\n\n", "msg_date": "Fri, 11 Sep 2020 17:11:34 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Fri, Sep 11, 2020 at 5:11 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Tue, Sep 8, 2020 at 4:37 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 08/09/2020 22:25, James Coleman wrote:\n> > > On Wed, Aug 19, 2020 at 3:16 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > >>\n> > >> You could also apply the optimization for non-Const expressions, as long\n> > >> as they're stable (i.e. !contain_volatile_functions(expr)).\n> > >\n> > > Is that true? Don't we also have to worry about whether or not the\n> > > value is stable (i.e., know when a param has changed)? There have been\n> > > discussions about being able to cache stable subexpressions, and my\n> > > understanding was that we'd need to have that infrastructure (along\n> > > with the necessarily invalidation when the param changes) to be able\n> > > to use this for non-Const expressions.\n> >\n> > Yeah, you're right, you'd have to also check for PARAM_EXEC Params. And\n> > Vars. I think the conditions are the same as those checked in\n> > match_clause_to_partition_key() in partprune.c (it's a long function,\n> > search for \"if (!IsA(rightop, Const))\"). Not sure it's worth the\n> > trouble, then. 
But it would be nice to handle queries like \"WHERE column\n> > = ANY ($1)\"\n>\n> If I'm understanding properly you're imagining something in the form of:\n>\n> with x as (select '{1,2,3,4,5,6,7,8,9,10}'::int[])\n> select * from t where i = any ((select * from x)::int[]);\n>\n> I've been playing around with this and currently have these checks:\n>\n> contain_var_clause((Node *) arrayarg)\n> contain_volatile_functions((Node *) arrayarg)\n> contain_exec_param((Node *) arrayarg, NIL)\n>\n> (the last one I had to modify the function to handle empty lists)\n>\n> If any are true, then have to disable the optimization. But for\n> queries in the form above the test contain_exec_param((Node *)\n> arrayarg, NIL) evaluates to true, even though we know from looking at\n> the query that the array subexpression is stable for the length of the\n> query.\n>\n> Am I misunderstanding what you're going for? Or is there a way to\n> confirm the expr, although an exec param, won't change?\n>\n> Another interesting thing that this would imply is that we'd either have to:\n>\n> 1. Remove the array length check altogether,\n> 2. Always use the hash when have a non-Const, but when a Const only if\n> the array length check passes, or\n> 3. Make this new expr op more fully featured by teaching it how to use\n> either a straight loop through a deconstructed array or use the hash.\n>\n> That last option feeds into further discussion in the below:\n>\n> > >> Deconstructing the array Datum into a simple C array on first call would\n> > >> be a win even for very small arrays and for AND semantics, even if you\n> > >> don't use a hash table.\n> > >\n> > > Because you wouldn't have to repeatedly detoast it? Or some other\n> > > reason I'm not thinking of? 
My intuition would have been that (aside\n> > > from detoasting if necessary) there wouldn't be much real overhead in,\n> > > for example, an array storing integers.\n> >\n> > Dealing with NULLs and different element sizes in the array is pretty\n> > complicated. Looping through the array currently looks like this:\n> >\n> > /* Loop over the array elements */\n> > s = (char *) ARR_DATA_PTR(arr);\n> > bitmap = ARR_NULLBITMAP(arr);\n> > bitmask = 1;\n> >\n> > for (int i = 0; i < nitems; i++)\n> > {\n> > Datum elt;\n> > Datum thisresult;\n> >\n> > /* Get array element, checking for NULL */\n> > if (bitmap && (*bitmap & bitmask) == 0)\n> > {\n> > fcinfo->args[1].value = (Datum) 0;\n> > fcinfo->args[1].isnull = true;\n> > }\n> > else\n> > {\n> > elt = fetch_att(s, typbyval, typlen);\n> > s = att_addlength_pointer(s, typlen, s);\n> > s = (char *) att_align_nominal(s, typalign);\n> > fcinfo->args[1].value = elt;\n> > fcinfo->args[1].isnull = false;\n> > }\n> >\n> > [do stuff with Datum/isnull]\n> >\n> > /* advance bitmap pointer if any */\n> > if (bitmap)\n> > {\n> > bitmask <<= 1;\n> > if (bitmask == 0x100)\n> > {\n> > bitmap++;\n> > bitmask = 1;\n> > }\n> > }\n> > }\n> >\n> > Compared with just:\n> >\n> > for (int i = 0; i < nitems; i++)\n> > {\n> > Datum elt = datums[i];\n> >\n> > [do stuff with the Datum]\n> > }\n> >\n> > I'm not sure how much difference that makes, but I presume it's not\n> > zero, and it seems like an easy win when you have the code to deal with\n> > the Datum array representation anyway.\n>\n> Doing this would necessitate option 3 above: we'd have to have this\n> new expr op be able both to use a hash or alternatively do a normal\n> loop.\n>\n> Being able to use this in more cases than just a Const array expr is\n> certainly interesting, but I'm not sure yet about the feasibility or\n> desirability of that at this point given the above restrictions.\n>\n> One other point in favor of the additional complexity here is that\n> it's likely that the 
above described runtime switching between hash\n> and loop would be necessary (for this optimization to come into play)\n> if caching of stable subexpressions ever lands. I have some interest\n> in working on that...but it's also a large project.\n\nI've attached a cleaned up patch. Last CF it was listed in is\nhttps://commitfest.postgresql.org/29/2542/ -- what's the appropriate\nstep to take here given it's an already existing patch, but not yet\nmoved into recent CFs?\n\nThanks,\nJames", "msg_date": "Fri, 19 Mar 2021 16:40:59 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, 20 Mar 2021 at 09:41, James Coleman <jtc331@gmail.com> wrote:\n> I've attached a cleaned up patch. Last CF it was listed in is\n> https://commitfest.postgresql.org/29/2542/ -- what's the appropriate\n> step to take here given it's an already existing patch, but not yet\n> moved into recent CFs?\n\nI had a look at this patch. I like the idea of using a simplehash.h\nhash table to hash the constant values so that repeat lookups can be\nperformed much more quickly, however, I'm a bit concerned that there\nare quite a few places in the code where we often just execute a\nScalarArrayOpExpr once and I'm a bit worried that we'll slow down\nexpression evaluation of those cases.\n\nThe two cases that I have in mind are:\n\n1. eval_const_expressions() where we use the executor to evaluate the\nScalarArrayOpExpr to see if the result is Const.\n2. CHECK constraints with IN clauses and single-row INSERTs.\n\nI tried to benchmark both of these but I'm struggling to get stable\nenough performance for #2, even with fsync=off. 
Sometimes I'm getting\nresults 2.5x slower than other runs.\n\nFor benchmarking #1 I'm also not too sure I'm getting stable enough\nresults for them to mean anything.\n\nI was running:\n\ncreate table a (a int);\n\nbench.sql: explain select * from a where a\nin(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16);\n\ndrowley@amd3990x:~$ pgbench -n -T 60 -j 64 -c 64 -f bench.sql -P 10 postgres\nMaster (6d41dd045):\ntps = 992586.991045 (without initial connection time)\ntps = 987964.990483 (without initial connection time)\ntps = 994309.670918 (without initial connection time)\n\nMaster + v4-0001-Hash-lookup-const-arrays-in-OR-d-ScalarArrayOps.patch\ntps = 956022.557626 (without initial connection time)\ntps = 963043.352823 (without initial connection time)\ntps = 968582.600100 (without initial connection time)\n\nThis puts the patched version about 3% slower. I'm not sure how much\nof that is changes in the binary and noise and how much is the\nneedless hashtable build done for eval_const_expressions().\n\nI wondered if we should make it the query planner's job of deciding if\nthe ScalarArrayOpExpr should be hashed or not. I ended up with the\nattached rough-cut patch that introduces HashedScalarArrayOpExpr and\nhas the query planner decide if it's going to replace\nScalarArrayOpExpr with these HashedScalarArrayOpExpr during\npreprocess_expression(). I do think that we might want to consider\nbeing a bit selective about when we do these replacements. 
It seems\nlikely that we'd want to do this for EXPRKIND_QUAL and maybe\nEXPRKIND_TARGET, but I imagine that converting ScalarArrayOpExpr to\nHashedScalarArrayOpExpr for EXPRKIND_VALUES would be a waste of time\nsince those will just be executed once.\n\nI tried the same above test with the\nv4-0001-Hash-lookup-const-arrays-in-OR-d-ScalarArrayOps.patch plus the\nattached rough-cut patch and got:\n\nmaster + v4-0001-Hash-lookup-const-arrays-in-OR-d-ScalarArrayOps.patch\n+ v5-0002-Rough-cut-patch-for-HashedScalarArrayOpExpr.patch\ntps = 1167969.983173 (without initial connection time)\ntps = 1199636.793314 (without initial connection time)\ntps = 1190690.939963 (without initial connection time)\n\nI can't really explain why this became faster. I was expecting it just\nto reduce that slowdown of the v4 patch a little. I don't really see\nany reason why it would become faster. It's almost 20% faster which\nseems like too much to just be fluctuations in code alignment in the\nbinary.\n\nThe attached patch is still missing the required changes to\nllvmjit_expr.c. I think that was also missing from the original patch\ntoo, however.\n\nAlso, I added HashedScalarArrayOpExpr to plannodes.h. All other Expr\ntype nodes are in primnodes.h. However, I put HashedScalarArrayOpExpr\nin plannodes.h because the parser does not generate this and it's not\ngoing to be stored in the catalogue files anywhere. 
I'm not so sure\ninventing a new Expr type node that only can be generated by the\nplanner is a good thing to do.\n\nAnyway, wondering what you think of the idea of allowing the planner\nto choose if it's going to hash or not?\n\nIt might also be good if someone else can check if they can get a bit\nmore stable performance results from benchmarking the patches.\n\n(Also attached your v4 patch again just so anyone following along at\nhome does not need to hunt around for the correct set of patches to\napply to test this)\n\nDavid", "msg_date": "Tue, 6 Apr 2021 15:58:08 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Mon, Apr 5, 2021 at 11:58 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 20 Mar 2021 at 09:41, James Coleman <jtc331@gmail.com> wrote:\n> > I've attached a cleaned up patch. Last CF it was listed in is\n> > https://commitfest.postgresql.org/29/2542/ -- what's the appropriate\n> > step to take here given it's an already existing patch, but not yet\n> > moved into recent CFs?\n>\n> I had a look at this patch. I like the idea of using a simplehash.h\n> hash table to hash the constant values so that repeat lookups can be\n> performed much more quickly, however, I'm a bit concerned that there\n> are quite a few places in the code where we often just execute a\n> ScalarArrayOpExpr once and I'm a bit worried that we'll slow down\n> expression evaluation of those cases.\n>\n> The two cases that I have in mind are:\n>\n> 1. eval_const_expressions() where we use the executor to evaluate the\n> ScalarArrayOpExpr to see if the result is Const.\n> 2. 
CHECK constraints with IN clauses and single-row INSERTs.\n\nThis is a good point I hadn't considered; now that you mention it, I\nthink another case would be expression evaluation in pl/pgsql.\n\n> I tried to benchmark both of these but I'm struggling to get stable\n> enough performance for #2, even with fsync=off. Sometimes I'm getting\n> results 2.5x slower than other runs.\n>\n> For benchmarking #1 I'm also not too sure I'm getting stable enough\n> results for them to mean anything.\n>\n> I was running:\n>\n> create table a (a int);\n>\n> bench.sql: explain select * from a where a\n> in(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16);\n>\n> drowley@amd3990x:~$ pgbench -n -T 60 -j 64 -c 64 -f bench.sql -P 10 postgres\n> Master (6d41dd045):\n> tps = 992586.991045 (without initial connection time)\n> tps = 987964.990483 (without initial connection time)\n> tps = 994309.670918 (without initial connection time)\n>\n> Master + v4-0001-Hash-lookup-const-arrays-in-OR-d-ScalarArrayOps.patch\n> tps = 956022.557626 (without initial connection time)\n> tps = 963043.352823 (without initial connection time)\n> tps = 968582.600100 (without initial connection time)\n>\n> This puts the patched version about 3% slower. I'm not sure how much\n> of that is changes in the binary and noise and how much is the\n> needless hashtable build done for eval_const_expressions().\n>\n> I wondered if we should make it the query planner's job of deciding if\n> the ScalarArrayOpExpr should be hashed or not. I ended up with the\n> attached rough-cut patch that introduces HashedScalarArrayOpExpr and\n> has the query planner decide if it's going to replace\n> ScalarArrayOpExpr with these HashedScalarArrayOpExpr during\n> preprocess_expression(). I do think that we might want to consider\n> being a bit selective about when we do these replacements. 
It seems\n> likely that we'd want to do this for EXPRKIND_QUAL and maybe\n> EXPRKIND_TARGET, but I imagine that converting ScalarArrayOpExpr to\n> HashedScalarArrayOpExpr for EXPRKIND_VALUES would be a waste of time\n> since those will just be executed once.\n\nIn theory we might want to cost them differently as well, though I'm\nslightly hesitant to do so at this point to avoid causing plan changes\n(I'm not sure how we would balance that concern with the potential\nthat the best plan isn't chosen).\n\n> I tried the same above test with the\n> v4-0001-Hash-lookup-const-arrays-in-OR-d-ScalarArrayOps.patch plus the\n> attached rough-cut patch and got:\n>\n> master + v4-0001-Hash-lookup-const-arrays-in-OR-d-ScalarArrayOps.patch\n> + v5-0002-Rough-cut-patch-for-HashedScalarArrayOpExpr.patch\n> tps = 1167969.983173 (without initial connection time)\n> tps = 1199636.793314 (without initial connection time)\n> tps = 1190690.939963 (without initial connection time)\n>\n> I can't really explain why this became faster. I was expecting it just\n> to reduce that slowdown of the v4 patch a little. I don't really see\n> any reason why it would become faster. It's almost 20% faster which\n> seems like too much to just be fluctuations in code alignment in the\n> binary.\n\nI'm not at a place where I can do good perf testing right now (just on\nmy laptop for the moment), unfortunately, so I can't confirm one way\nor the other.\n\n> The attached patch is still missing the required changes to\n> llvmjit_expr.c. I think that was also missing from the original patch\n> too, however.\n\nAh, I didn't realize that needed to be changed as well. I'll take a\nlook at that.\n\n> Also, I added HashedScalarArrayOpExpr to plannodes.h. All other Expr\n> type nodes are in primnodes.h. However, I put HashedScalarArrayOpExpr\n> in plannodes.h because the parser does not generate this and it's not\n> going to be stored in the catalogue files anywhere. 
I'm not so sure\n> inventing a new Expr type node that only can be generated by the\n> planner is a good thing to do.\n\nI don't know what the positives and negatives are of this.\n\n> Anyway, wondering what you think of the idea of allowing the planner\n> to choose if it's going to hash or not?\n\nIn general I think it's very reasonable. I kinda wonder if\nHashedScalarArrayOpExpr should have the ScalarArrayOpExp inlined\ninstead of maintaining a pointer, but it's not a big deal to me either\nway. It certainly adds additional code, but probably also makes the\nexecExpr code clearer.\n\n> It might also be good if someone else can check if they can get a bit\n> more stable performance results from benchmarking the patches.\n>\n> (Also attached your v4 patch again just so anyone following along at\n> home does not need to hunt around for the correct set of patches to\n> apply to test this)\n\nA few other comments:\n\n- It looks like several of the \"result is always InvalidOid\" changes\nshould get committed separately (and now)?\n- Two comment tweaks:\n\n+ * 1. The 2nd argument of the array does not contain any Vars, Params or\n\ns/array/array op/\n\n+ * worthwhile using the hashed version of ScalarArrayOpExprs rather than\n\ns/ScalarArrayOpExprs/ScalarArrayOpExpr/\n\n- Using op_hashjoinable is an improvement over my initial patch.\n- I like the name change to put HASHED/Hashed first.\n\nThanks,\nJames\n\n\n", "msg_date": "Tue, 6 Apr 2021 20:39:57 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "\nOn 4/6/21 5:58 AM, David Rowley wrote:\n> On Sat, 20 Mar 2021 at 09:41, James Coleman <jtc331@gmail.com> wrote:\n>> I've attached a cleaned up patch. 
Last CF it was listed in is\n>> https://commitfest.postgresql.org/29/2542/ -- what's the appropriate\n>> step to take here given it's an already existing patch, but not yet\n>> moved into recent CFs?\n> \n> I had a look at this patch. I like the idea of using a simplehash.h\n> hash table to hash the constant values so that repeat lookups can be\n> performed much more quickly, however, I'm a bit concerned that there\n> are quite a few places in the code where we often just execute a\n> ScalarArrayOpExpr once and I'm a bit worried that we'll slow down\n> expression evaluation of those cases.\n> \n> The two cases that I have in mind are:\n> \n> 1. eval_const_expressions() where we use the executor to evaluate the\n> ScalarArrayOpExpr to see if the result is Const.\n> 2. CHECK constraints with IN clauses and single-row INSERTs.\n> \n> I tried to benchmark both of these but I'm struggling to get stable\n> enough performance for #2, even with fsync=off. Sometimes I'm getting\n> results 2.5x slower than other runs.\n> \n> For benchmarking #1 I'm also not too sure I'm getting stable enough\n> results for them to mean anything.\n> \n> I was running:\n> \n> create table a (a int);\n> \n> bench.sql: explain select * from a where a\n> in(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16);\n> \n> drowley@amd3990x:~$ pgbench -n -T 60 -j 64 -c 64 -f bench.sql -P 10 postgres\n> Master (6d41dd045):\n> tps = 992586.991045 (without initial connection time)\n> tps = 987964.990483 (without initial connection time)\n> tps = 994309.670918 (without initial connection time)\n> \n> Master + v4-0001-Hash-lookup-const-arrays-in-OR-d-ScalarArrayOps.patch\n> tps = 956022.557626 (without initial connection time)\n> tps = 963043.352823 (without initial connection time)\n> tps = 968582.600100 (without initial connection time)\n> \n> This puts the patched version about 3% slower. 
I'm not sure how much\n> of that is changes in the binary and noise and how much is the\n> needless hashtable build done for eval_const_expressions().\n> \n> I wondered if we should make it the query planner's job of deciding if\n> the ScalarArrayOpExpr should be hashed or not. I ended up with the\n> attached rough-cut patch that introduces HashedScalarArrayOpExpr and\n> has the query planner decide if it's going to replace\n> ScalarArrayOpExpr with these HashedScalarArrayOpExpr during\n> preprocess_expression(). I do think that we might want to consider\n> being a bit selective about when we do these replacements. It seems\n> likely that we'd want to do this for EXPRKIND_QUAL and maybe\n> EXPRKIND_TARGET, but I imagine that converting ScalarArrayOpExpr to\n> HashedScalarArrayOpExpr for EXPRKIND_VALUES would be a waste of time\n> since those will just be executed once.\n> \n> I tried the same above test with the\n> v4-0001-Hash-lookup-const-arrays-in-OR-d-ScalarArrayOps.patch plus the\n> attached rough-cut patch and got:\n> \n> master + v4-0001-Hash-lookup-const-arrays-in-OR-d-ScalarArrayOps.patch\n> + v5-0002-Rough-cut-patch-for-HashedScalarArrayOpExpr.patch\n> tps = 1167969.983173 (without initial connection time)\n> tps = 1199636.793314 (without initial connection time)\n> tps = 1190690.939963 (without initial connection time)\n> \n> I can't really explain why this became faster. I was expecting it just\n> to reduce that slowdown of the v4 patch a little. I don't really see\n> any reason why it would become faster. It's almost 20% faster which\n> seems like too much to just be fluctuations in code alignment in the\n> binary.\n> \n\nInteresting. 
I tried this on the \"small\" machine I use for benchmarking,\nwith the same SQL script you used, and also with IN() containing 10 and\n100 values - so less/more than your script, which used 16 values.\n\nI only ran that with a single client, the machine only has 4 cores and\nthis should not be related to concurrency, so 1 client seems fine. The\naverage of 10 runs, 15 seconds each look like this:\n\n simple prepared 10/s 10/p 100/s 100/p\n -------------------------------------------------------------\n master 21847 59476 23343 59380 11757 56488\n v4 21546 57757 22864 57704 11572 57350\n v4+v5 23374 56089 24410 56140 14765 55302\n\nThe first two columns are your bench.sql, with -M simple or prepared.\nThe other columns are 10 or 100 values, /s is simple, /p is prepared.\n\nCompared to master:\n\n simple prepared 10/s 10/p 100/s 100/p\n -------------------------------------------------------------\n v4 98.62% 97.11% 97.95% 97.18% 98.43% 101.52%\n v4+v5 106.99% 94.31% 104.57% 94.54% 125.59% 97.90%\n\nThat seems to mostly match your observation - there's a small\nperformance hit (~2%), although that might be due to changes in the\nlayout of the binary. And v4+v5 improves that a bit (even compared to\nmaster), although I don't see the same 20% speedup.\n\nI see +25% improvement, but only with 100 values.\n\nIt's a bit strange that in prepared mode, the v5 actually hurts the\nperformance a bit.\n\nThat being said, this is a pretty extreme test case. I'm pretty sure\nthat once the table is not empty, the results will probably show a clear\nimprovement. 
I'll collect some of those results.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 7 Apr 2021 19:54:11 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Thu, 8 Apr 2021 at 05:54, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> I only ran that with a single client, the machine only has 4 cores and\n> this should not be related to concurrency, so 1 client seems fine. The\n> average of 10 runs, 15 seconds each look like this:\n\nThanks for running these tests. The reason I added so much\nconcurrency was that this AMD machine has some weird behaviour in\nregards to power management. Every bios update I seem to get changes\nthe power management but it still is very unstable despite me having\nit on the most optimal settings in the bios. Working it a bit harder\nseems to help make it realise that there might be some urgency.\n\n\n> simple prepared 10/s 10/p 100/s 100/p\n> -------------------------------------------------------------\n> master 21847 59476 23343 59380 11757 56488\n> v4 21546 57757 22864 57704 11572 57350\n> v4+v5 23374 56089 24410 56140 14765 55302\n>\n> The first two columns are your bench.sql, with -M simple or prepared.\n> The other columns are 10 or 100 values, /s is simple, /p is prepared.\n>\n> Compared to master:\n>\n> simple prepared 10/s 10/p 100/s 100/p\n> -------------------------------------------------------------\n> v4 98.62% 97.11% 97.95% 97.18% 98.43% 101.52%\n> v4+v5 106.99% 94.31% 104.57% 94.54% 125.59% 97.90%\n>\n> That seems to mostly match your observation - there's a small\n> performance hit (~2%), although that might be due to changes in the\n> layout of the binary. 
And v4+v5 improves that a bit (even compared to\n> master), although I don't see the same 20% speedup.\n\nI've spent more time hacking at this patch. I had a bit of a change\nof heart earlier about having this new HashedScalarArrayOpExpr node\ntype. There were more places that I imagined that I needed to add\nhandling for it. For example, partprune.c needed to know about it to\nallow partition pruning on them. While supporting that is just a few\nlines to make a recursive call passing in the underlying\nScalarArrayOpExpr, I just didn't like the idea.\n\nInstead, I think it'll be better just to add a new field to\nScalarArrayOpExpr and have the planner set that to tell the executor\nthat it should use a hash table to perform the lookups rather than a\nlinear search. This can just be the hash function oid, which also\nsaves the executor from having to look that up.\n\nAfter quite a bit of hacking, I've ended up with the attached. I\nadded the required JIT code to teach the jit code about\nEEOP_HASHED_SCALARARRAYOP.\n\nI also wrote the missing regression test for non-strict equality ops\nand moved the declaration for the simplehash.h code into\nexecExprInterp.c and forward declared ScalarArrayOpExprHashTable in\nexecExpr.h. 
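The executor-side idea described here — build a hash table over the IN-list constants once, then probe it for each row instead of scanning the array linearly — can be sketched in Python (purely illustrative: the names below are invented, and the real implementation is C code in the executor, in execExprInterp.c):

```python
# Illustrative sketch only; names are invented, the real implementation
# is C code in PostgreSQL's executor.

def saop_linear(value, constants):
    # Pre-patch behaviour: compare against every constant, O(n) per row.
    for c in constants:
        if value == c:
            return True
    return False

def saop_hashed_factory(constants):
    # Pay the build cost once (analogous to building the simplehash table
    # on first evaluation); each probe is then O(1) on average.
    table = set(constants)
    return lambda value: value in table

in_list = list(range(100))
saop_hashed = saop_hashed_factory(in_list)
for v in (0, 42, 99, 100, -1):
    assert saop_linear(v, in_list) == saop_hashed(v)
```

The benchmarks in this thread are essentially measuring the trade-off between that one-off build cost and the cheaper per-row probe.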
I also rewrote a large number of comments and fixed a few\nthings like missing permission checks for the hash function.\n\nI've not done any further performance tests yet but will start those now.\n\nDavid", "msg_date": "Thu, 8 Apr 2021 18:50:29 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Thu, 8 Apr 2021 at 18:50, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've not done any further performance tests yet but will start those now.\n\nI ran a set of tests on this:\n\nselect * from a where a in( < 1 to 10 > );\nand\nselect * from a where a in( < 1 to 100 > );\n\nthe table \"a\" is just an empty table with a single int column.\n\nI ran \"pgbench -T 15 -c 16 -t 16\" ten times each and the resulting tps\nis averaged over the 10 runs.\n\nWith 10 items in the IN clause:\nmaster: 99887.9098314 tps\npatched: 103235.7616416 tps (3.35% faster)\n\nWith 100 items:\nmaster: 62442.4838792 tps\npatched:62275.4955754 tps (0.27% slower)\n\nThese tests are just designed to test the overhead of the additional\nplanning and expression initialisation. Testing the actual\nperformance of the patch vs master with large IN lists shows the\nexpected significant speedups.\n\nThese results show that there's not much in the way of a measurable\nslowdown in planning or executor startup from the additional code\nwhich decides if we should hash the ScalarArrayOpExpr.\n\nI think the changes in the patch are fairly isolated and the test\ncoverage is now pretty good. 
I'm planning on looking at the patch\nagain now and will consider pushing it for PG14.\n\nDavid\n\n\n", "msg_date": "Thu, 8 Apr 2021 22:54:47 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Thu, 8 Apr 2021 at 22:54, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I think the changes in the patch are fairly isolated and the test\n> coverage is now pretty good. I'm planning on looking at the patch\n> again now and will consider pushing it for PG14.\n\nI push this with some minor cleanup from the v6 patch I posted earlier.\n\nDavid\n\n\n", "msg_date": "Fri, 9 Apr 2021 00:00:59 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Thu, Apr 8, 2021 at 8:01 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 8 Apr 2021 at 22:54, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > I think the changes in the patch are fairly isolated and the test\n> > coverage is now pretty good. I'm planning on looking at the patch\n> > again now and will consider pushing it for PG14.\n>\n> I push this with some minor cleanup from the v6 patch I posted earlier.\n>\n> David\n\nThank you!\n\nI assume proper procedure for the CF entry is to move it into the\ncurrent CF and then mark it as committed, however I don't know how (or\ndon't have permissions?) to move it into the current CF. 
How does one\ngo about doing that?\n\nHere's the entry: https://commitfest.postgresql.org/29/2542/\n\nThanks,\nJames\n\n\n", "msg_date": "Thu, 8 Apr 2021 12:57:28 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On 2021-Apr-08, James Coleman wrote:\n\n> I assume proper procedure for the CF entry is to move it into the\n> current CF and then mark it as committed, however I don't know how (or\n> don't have permissions?) to move it into the current CF. How does one\n> go about doing that?\n> \n> Here's the entry: https://commitfest.postgresql.org/29/2542/\n\nDone, thanks.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Thu, 8 Apr 2021 13:03:58 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Thu, Apr 8, 2021 at 1:04 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-08, James Coleman wrote:\n>\n> > I assume proper procedure for the CF entry is to move it into the\n> > current CF and then mark it as committed, however I don't know how (or\n> > don't have permissions?) to move it into the current CF. How does one\n> > go about doing that?\n> >\n> > Here's the entry: https://commitfest.postgresql.org/29/2542/\n>\n> Done, thanks.\n>\n> --\n> Álvaro Herrera 39°49'30\"S 73°17'W\n\nThanks. Is that something I should be able to do myself (should I be\nasking someone for getting privileges in the app to do so)? 
I'm not\nsure what the project policy is on that.\n\nJames\n\n\n", "msg_date": "Thu, 8 Apr 2021 13:11:13 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On 2021-Apr-08, James Coleman wrote:\n\n> On Thu, Apr 8, 2021 at 1:04 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Apr-08, James Coleman wrote:\n> >\n> > > I assume proper procedure for the CF entry is to move it into the\n> > > current CF and then mark it as committed, however I don't know how (or\n> > > don't have permissions?) to move it into the current CF. How does one\n> > > go about doing that?\n> > >\n> > > Here's the entry: https://commitfest.postgresql.org/29/2542/\n> >\n> > Done, thanks.\n\n> Thanks. Is that something I should be able to do myself\n\nNo, sorry.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"La vida es para el que se aventura\"\n\n\n", "msg_date": "Thu, 8 Apr 2021 13:16:21 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On 4/8/21 2:00 PM, David Rowley wrote:\n> On Thu, 8 Apr 2021 at 22:54, David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> I think the changes in the patch are fairly isolated and the test\n>> coverage is now pretty good. I'm planning on looking at the patch\n>> again now and will consider pushing it for PG14.\n> \n> I push this with some minor cleanup from the v6 patch I posted earlier.\n> \n\nI ran the same set of benchmarks on the v6, which I think should be\nmostly identical to what was committed. I extended the test a bit to\ntest table with 0, 1, 5 and 1000 rows, and also with int and text\nvalues, to see how it works with more expensive comparators.\n\nI built binaries with gcc 9.2.0 and clang 11.0.0, the full results are\nattached. 
There's a bit of difference between gcc and clang, but the\ngeneral behavior is about the same, so I'll only present gcc results to\nkeep this simple. I'll only throughput comparison to master, so >1.0\nmeans good, <1.0 means bad. If you're interested in actual tps, see the\nfull results.\n\nFor the v5 patch (actually v4-0001 + v5-0002) and v6, the results are:\n\ninteger column / v5\n===================\n\n rows 10/p 100/p 16/p 10/s 100/s 16/s\n -----------------------------------------------------------\n 0 97% 97% 97% 107% 126% 108%\n 1 95% 82% 94% 108% 132% 110%\n 5 95% 83% 95% 108% 132% 110%\n 1000 129% 481% 171% 131% 382% 165%\n\n\ninteger column / v6\n===================\n\n rows 10/p 100/p 16/p 10/s 100/s 16/s\n -----------------------------------------------------------\n 0 97% 97% 97% 98% 98% 98%\n 1 96% 84% 95% 97% 97% 98%\n 5 97% 85% 96% 98% 97% 97%\n 1000 129% 489% 172% 128% 330% 162%\n\n\ntext column / v5\n================\n\n rows 10/p 100/p 16/p 10/s 100/s 16/s\n -----------------------------------------------------------\n 0 100% 100% 100% 106% 119% 108%\n 1 96% 81% 95% 107% 120% 109%\n 5 97% 82% 96% 107% 121% 109%\n 1000 291% 1622% 402% 255% 1092% 337%\n\n\ntext column / v6\n================\n\n rows 10/p 100/p 16/p 10/s 100/s 16/s\n -----------------------------------------------------------\n 0 101% 101% 101% 98% 99% 99%\n 1 98% 82% 96% 98% 96% 97%\n 5 100% 84% 98% 98% 96% 98%\n 1000 297% 1645% 408% 255% 1000% 336%\n\n\nOverall, the behavior for integer and text columns is the same, for both\npatches. 
There's a couple interesting observations:\n\n1) For the \"simple\" query mode, v5 helped quite a bit (20-30% speedup),\nbut v6 does not seem to help at all - it's either same or slower than\nunpatched master.\n\nI wonder why is that, and if we could get some of the speedup with v6?\nAt first I thought that maybe v5 is not building the hash table in cases\nwhere v6 does, but that shouldn't be faster than master.\n\n\n2) For the \"prepared\" mode, there's a clear performance hit the longer\nthe array is (for both v5 and v6). For 100 elements it's about 15%,\nwhich is not great.\n\nI think the reason is fairly simple - building the hash table is not\nfree, and with few rows it's not worth it - it'd be faster to just\nsearch the array directly. Unfortunately, the logic that makes the\ndecision to switch to hashing only looks at the array length only, and\nignores the number of rows entirely. So I think if we want to address\nthis, convert_saop_to_hashed_saop needs to compare\n\n has_build_cost + nrows * hash_lookup_cost\n\nand\n\n nrows * linear_lookup_cost\n\nto make reasonable decision.\n\nI was thinking that maybe we can ignore this, because people probably\nhave much larger tables in practice. But I'm not sure that's really\ntrue, because there may be other quals and it's possible the preceding\nones are quite selective, filtering most of the rows.\n\nI'm not sure how much of the necessary information we have available in\nconvert_saop_to_hashed_saop_walker, though :-( I suppose we know the\nnumber of input rows for that plan node, not sure about selectivity of\nthe other quals, though.\n\nIt's also a bit strange that we get speedup for \"simple\" protocol, while\nfor \"prepared\" it gets slower. That seems counter-intuitive, because why\nshould we see opposite outcomes in those cases? 
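The break-even comparison proposed above could be prototyped along these lines (the cost constants are invented purely for illustration — they are not actual planner costs):

```python
# Hypothetical break-even check; the relative costs are made-up numbers.
HASH_BUILD_COST_PER_ELEM = 1.5    # amortized cost of inserting one element
HASH_LOOKUP_COST = 1.0            # one hash probe
LINEAR_COMPARE_COST = 0.5         # one equality comparison

def should_hash(array_len, nrows):
    hash_cost = array_len * HASH_BUILD_COST_PER_ELEM + nrows * HASH_LOOKUP_COST
    # A linear search over n constants does about n/2 comparisons on average.
    linear_cost = nrows * (array_len / 2) * LINEAR_COMPARE_COST
    return hash_cost < linear_cost

# Building the table doesn't pay off for a single row...
assert not should_hash(array_len=100, nrows=1)
# ...but clearly does once many rows flow through the qual.
assert should_hash(array_len=100, nrows=1000)
```

Whatever the real constants turn out to be, the shape of the decision is the same: a fixed build cost amortized over the expected number of lookups.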
I'd assume that we'll\nsee either speedup or slowdown in both cases, with the relative change\nbeing more significant in the \"prepared\" mode.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 8 Apr 2021 23:32:13 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Fri, 9 Apr 2021 at 09:32, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> I ran the same set of benchmarks on the v6, which I think should be\n> mostly identical to what was committed. I extended the test a bit to\n> test table with 0, 1, 5 and 1000 rows, and also with int and text\n> values, to see how it works with more expensive comparators.\n>\n> I built binaries with gcc 9.2.0 and clang 11.0.0, the full results are\n> attached. There's a bit of difference between gcc and clang, but the\n> general behavior is about the same, so I'll only present gcc results to\n> keep this simple. I'll only throughput comparison to master, so >1.0\n> means good, <1.0 means bad. If you're interested in actual tps, see the\n> full results.\n>\n> For the v5 patch (actually v4-0001 + v5-0002) and v6, the results are:\n>\n> integer column / v5\n> ===================\n>\n> rows 10/p 100/p 16/p 10/s 100/s 16/s\n> -----------------------------------------------------------\n> 0 97% 97% 97% 107% 126% 108%\n> 1 95% 82% 94% 108% 132% 110%\n> 5 95% 83% 95% 108% 132% 110%\n> 1000 129% 481% 171% 131% 382% 165%\n\nI think we should likely ignore the v5 patch now. The reason is that\nit was pretty much unfinished and there were many places that I'd not\nyet added support for the HashedScalarArrayOpExpr node type yet. 
This\ncould cause these nodes to be skipped during node mutations or node\nwalking which would certainly make planning faster, just not in a way\nthat's correct.\n\n> integer column / v6\n> ===================\n>\n> rows 10/p 100/p 16/p 10/s 100/s 16/s\n> -----------------------------------------------------------\n> 0 97% 97% 97% 98% 98% 98%\n> 1 96% 84% 95% 97% 97% 98%\n> 5 97% 85% 96% 98% 97% 97%\n> 1000 129% 489% 172% 128% 330% 162%\n\nThis is a really informative set of results. I can only guess that the\nslowdown of the 100/prepared query is down to building the hash table.\nI think that because the 0 rows test does not show the slowdown and we\nonly build the table when evaluating for the first time. There's a\nslightly larger hit on 1 row vs 5 rows, which makes sense since the\nrewards of the hash lookup start paying off more with more rows.\n\nLooking at your tps numbers, I think I can see why we get the drop in\nperformance with prepared statements but not simple statements. This\nseems to just be down to the fact that the planning time dominates in\nthe simple statement case. For example, the \"1 row\" test for 100/s\nfor v6 is 10023.3 tps, whereas the 100/p result is 44093.8 tps. With\nmaster, prepared gets 52400.0 tps. So we could say the hash table\nbuild costs us 8306.2 tps, or 3.59 microseconds per execution, per:\n\npostgres=# select 1000000 / 52400.0 - 1000000 / 44093.8;\n ?column?\n---------------------\n -3.5949559161508538\n(1 row)\n\nIf we look at the tps for the simple query version of the same test.\nMaster did 10309.6 tps, v6 did 10023.3 tps. 
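The per-execution overhead derived above can be re-checked outside psql with the same arithmetic (numbers taken from the quoted tps figures):

```python
# Convert tps to microseconds per transaction and take the difference
# between master (52400.0 tps) and v6 (44093.8 tps) in prepared mode.
def us_per_xact(tps):
    return 1_000_000 / tps

overhead_us = us_per_xact(44093.8) - us_per_xact(52400.0)
assert abs(overhead_us - 3.5949559161508538) < 1e-9  # ~3.59 us per execution
```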
If we apply that 3.59\nmicrosecond slowdown to master's tps, then we get pretty close to\nwithin 1% of the v6 tps:\n\npostgres=# select 1000000 / (1000000 / 10309.6 + 3.59);\n ?column?\n-----------------------\n 9941.6451581291294165\n(1 row)\n\n> text column / v6\n> ================\n>\n> rows 10/p 100/p 16/p 10/s 100/s 16/s\n> -----------------------------------------------------------\n> 0 101% 101% 101% 98% 99% 99%\n> 1 98% 82% 96% 98% 96% 97%\n> 5 100% 84% 98% 98% 96% 98%\n> 1000 297% 1645% 408% 255% 1000% 336%\n>\n>\n> Overall, the behavior for integer and text columns is the same, for both\n> patches. There's a couple interesting observations:\n>\n> 1) For the \"simple\" query mode, v5 helped quite a bit (20-30% speedup),\n> but v6 does not seem to help at all - it's either same or slower than\n> unpatched master.\n\nI think that's related to the fact that I didn't finish adding\nHashedScalarArrayOpExpr processing to all places that needed it.\n\n> I wonder why is that, and if we could get some of the speedup with v6?\n> At first I thought that maybe v5 is not building the hash table in cases\n> where v6 does, but that shouldn't be faster than master.\n\nI don't think v5 and v6 really do anything much differently in the\nexecutor. The only difference is really during ExecInitExprRec() when\nwe initialize the expression. With v5 we had a case\nT_HashedScalarArrayOpExpr: to handle the new node type, but in v6 we\nhave if (OidIsValid(opexpr->hashfuncid)). Oh, wait. I did add the\nmissing permissions check on the hash function, so that will account\nfor something. As far as I can see, that's required.\n\n> 2) For the \"prepared\" mode, there's a clear performance hit the longer\n> the array is (for both v5 and v6). For 100 elements it's about 15%,\n> which is not great.\n>\n> I think the reason is fairly simple - building the hash table is not\n> free, and with few rows it's not worth it - it'd be faster to just\n> search the array directly. 
Unfortunately, the logic that makes the\n> decision to switch to hashing only looks at the array length only, and\n> ignores the number of rows entirely. So I think if we want to address\n> this, convert_saop_to_hashed_saop needs to compare\n>\n> has_build_cost + nrows * hash_lookup_cost\n>\n> and\n>\n> nrows * linear_lookup_cost\n>\n> to make reasonable decision.\n\nI thought about that but I was really worried that the performance of\nScalarArrayOpExpr would just become too annoyingly unpredictable. You\nknow fairly well that we can often get massive row underestimations in\nthe planner. (I guess you worked on ext stats mainly because of that)\nThe problem I want to avoid is the ones where we get a big row\nunderestimation but don't really get a bad plan as a result. For\nexample a query like:\n\nSELECT * FROM big_table WHERE col1 = ... AND col2 = ... AND col3 = ...\nAND col4 IN( ... big list of values ...);\n\nIf col1, col2 and col3 are highly correlated but individually fairly\nselective, then we could massively underestimate how many rows the IN\nclause will see (assuming no rearranging was done here).\n\nI'm not completely opposed to the idea of taking the estimated rows\ninto account during planning. It might just mean having to move the\nconvert_saop_to_hashed_saop() call somewhere else. I imagine that's\nfairly trivial to do. I just have concerns about doing so.\n\n> I was thinking that maybe we can ignore this, because people probably\n> have much larger tables in practice. 
But I'm not sure that's really\n> true, because there may be other quals and it's possible the preceding\n> ones are quite selective, filtering most of the rows.\n>\n> I'm not sure how much of the necessary information we have available in\n> convert_saop_to_hashed_saop_walker, though :-( I suppose we know the\n> number of input rows for that plan node, not sure about selectivity of\n> the other quals, though.\n>\n> It's also a bit strange that we get speedup for \"simple\" protocol, while\n> for \"prepared\" it gets slower. That seems counter-intuitive, because why\n> should we see opposite outcomes in those cases? I'd assume that we'll\n> see either speedup or slowdown in both cases, with the relative change\n> being more significant in the \"prepared\" mode.\n\nI hope my theory above about the planner time dominating the overall\ntime shows why that is.\n\nAnother way to look at these result is by taking your tps value and\ncalculating how long it takes to do N number of transactions then\ntotalling up the time it takes. If I do that to calculate how long it\ntook each test to perform 1000 transactions and sum each test grouping\nby mode and version with rollup on mode, I get:\n\ntime values are in seconds:\n\npg-master 28.07\nprepared 11.28\nsimple 16.79\n\npg-v6 15.86\nprepared 5.23\nsimple 10.63\n\nSo, the overall result when applying the total time is 177%.\n\nDavid\n\n\n", "msg_date": "Fri, 9 Apr 2021 11:21:23 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "\n\nOn 4/9/21 1:21 AM, David Rowley wrote:\n> On Fri, 9 Apr 2021 at 09:32, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I ran the same set of benchmarks on the v6, which I think should be\n>> mostly identical to what was committed. 
I extended the test a bit to\n>> test table with 0, 1, 5 and 1000 rows, and also with int and text\n>> values, to see how it works with more expensive comparators.\n>>\n>> I built binaries with gcc 9.2.0 and clang 11.0.0, the full results are\n>> attached. There's a bit of difference between gcc and clang, but the\n>> general behavior is about the same, so I'll only present gcc results to\n>> keep this simple. I'll only throughput comparison to master, so >1.0\n>> means good, <1.0 means bad. If you're interested in actual tps, see the\n>> full results.\n>>\n>> For the v5 patch (actually v4-0001 + v5-0002) and v6, the results are:\n>>\n>> integer column / v5\n>> ===================\n>>\n>> rows 10/p 100/p 16/p 10/s 100/s 16/s\n>> -----------------------------------------------------------\n>> 0 97% 97% 97% 107% 126% 108%\n>> 1 95% 82% 94% 108% 132% 110%\n>> 5 95% 83% 95% 108% 132% 110%\n>> 1000 129% 481% 171% 131% 382% 165%\n> \n> I think we should likely ignore the v5 patch now. The reason is that\n> it was pretty much unfinished and there were many places that I'd not\n> yet added support for the HashedScalarArrayOpExpr node type yet. This\n> could cause these nodes to be skipped during node mutations or node\n> walking which would certainly make planning faster, just not in a way\n> that's correct.\n> \n>> integer column / v6\n>> ===================\n>>\n>> rows 10/p 100/p 16/p 10/s 100/s 16/s\n>> -----------------------------------------------------------\n>> 0 97% 97% 97% 98% 98% 98%\n>> 1 96% 84% 95% 97% 97% 98%\n>> 5 97% 85% 96% 98% 97% 97%\n>> 1000 129% 489% 172% 128% 330% 162%\n> \n> This is a really informative set of results. I can only guess that the\n> slowdown of the 100/prepared query is down to building the hash table.\n> I think that because the 0 rows test does not show the slowdown and we\n> only build the table when evaluating for the first time. 
There's a\n> slightly larger hit on 1 row vs 5 rows, which makes sense since the\n> rewards of the hash lookup start paying off more with more rows.\n> \n\nAgreed. I think that's essentially what I wrote too.\n\n> Looking at your tps numbers, I think I can see why we get the drop in\n> performance with prepared statements but not simple statements. This\n> seems to just be down to the fact that the planning time dominates in\n> the simple statement case. For example, the \"1 row\" test for 100/s\n> for v6 is 10023.3 tps, whereas the 100/p result is 44093.8 tps. With\n> master, prepared gets 52400.0 tps. So we could say the hash table\n> build costs us 8306.2 tps, or 3.59 microseconds per execution, per:\n> \n> postgres=# select 1000000 / 52400.0 - 1000000 / 44093.8;\n> ?column?\n> ---------------------\n> -3.5949559161508538\n> (1 row)\n> \n> If we look at the tps for the simple query version of the same test.\n> Master did 10309.6 tps, v6 did 10023.3 tps. If we apply that 3.59\n> microsecond slowdown to master's tps, then we get pretty close to\n> within 1% of the v6 tps:\n> \n> postgres=# select 1000000 / (1000000 / 10309.6 + 3.59);\n> ?column?\n> -----------------------\n> 9941.6451581291294165\n> (1 row)\n> \n\nRight, that makes perfect sense.\n\n>> text column / v6\n>> ================\n>>\n>> rows 10/p 100/p 16/p 10/s 100/s 16/s\n>> -----------------------------------------------------------\n>> 0 101% 101% 101% 98% 99% 99%\n>> 1 98% 82% 96% 98% 96% 97%\n>> 5 100% 84% 98% 98% 96% 98%\n>> 1000 297% 1645% 408% 255% 1000% 336%\n>>\n>>\n>> Overall, the behavior for integer and text columns is the same, for both\n>> patches. 
There's a couple interesting observations:\n>>\n>> 1) For the \"simple\" query mode, v5 helped quite a bit (20-30% speedup),\n>> but v6 does not seem to help at all - it's either same or slower than\n>> unpatched master.\n> \n> I think that's related to the fact that I didn't finish adding\n> HashedScalarArrayOpExpr processing to all places that needed it.\n> \n\nNot sure I understand. Shouldn't that effectively default to the\nunpatched behavior? How could that result in code that is *faster* than\nmaster?\n\n\n>> I wonder why is that, and if we could get some of the speedup with v6?\n>> At first I thought that maybe v5 is not building the hash table in cases\n>> where v6 does, but that shouldn't be faster than master.\n> \n> I don't think v5 and v6 really do anything much differently in the\n> executor. The only difference is really during ExecInitExprRec() when\n> we initialize the expression. With v5 we had a case\n> T_HashedScalarArrayOpExpr: to handle the new node type, but in v6 we\n> have if (OidIsValid(opexpr->hashfuncid)). Oh, wait. I did add the\n> missing permissions check on the hash function, so that will account\n> for something. As far as I can see, that's required.\n> \n\nI think the check is required, but I don't think that should be so very\nexpensive - it's likely one of many other permission checks during.\n\nBut I think the puzzle is not so much about v5 vs v6, but more about v5\nvs. master. I still don't understand how v5 managed to be faster than\nmaster, but maybe I'm missing something.\n\n>> 2) For the \"prepared\" mode, there's a clear performance hit the longer\n>> the array is (for both v5 and v6). For 100 elements it's about 15%,\n>> which is not great.\n>>\n>> I think the reason is fairly simple - building the hash table is not\n>> free, and with few rows it's not worth it - it'd be faster to just\n>> search the array directly. 
Unfortunately, the logic that makes the\n>> decision to switch to hashing only looks at the array length only, and\n>> ignores the number of rows entirely. So I think if we want to address\n>> this, convert_saop_to_hashed_saop needs to compare\n>>\n>> has_build_cost + nrows * hash_lookup_cost\n>>\n>> and\n>>\n>> nrows * linear_lookup_cost\n>>\n>> to make reasonable decision.\n> \n> I thought about that but I was really worried that the performance of\n> ScalarArrayOpExpr would just become too annoyingly unpredictable. You\n> know fairly well that we can often get massive row underestimations in\n> the planner. (I guess you worked on ext stats mainly because of that)\n> The problem I want to avoid is the ones where we get a big row\n> underestimation but don't really get a bad plan as a result. For\n> example a query like:\n> \n> SELECT * FROM big_table WHERE col1 = ... AND col2 = ... AND col3 = ...\n> AND col4 IN( ... big list of values ...);\n> \n> If col1, col2 and col3 are highly correlated but individually fairly\n> selective, then we could massively underestimate how many rows the IN\n> clause will see (assuming no rearranging was done here).\n> \n> I'm not completely opposed to the idea of taking the estimated rows\n> into account during planning. It might just mean having to move the\n> convert_saop_to_hashed_saop() call somewhere else. I imagine that's\n> fairly trivial to do. I just have concerns about doing so.\n> \n\nHmm, yeah. I understand your concerns, but the way it works now it kinda\npenalizes correct estimates. Imagine you have workload with simple OLTP\nqueries, the queries always hit only a couple rows - but we make it run\n15% slower because we're afraid there might be under-estimate.\n\nIt's sensible to make the planning resilient to under-estimates, but I'm\nnot sure just ignoring the cardinality estimate with the justification\nit might be wrong is good strategy. 
In a way, all the other planning\ndecisions assume it's correct, so why should this be any different?\nWe're not using seqscan exclusively just because the selectivity might\nbe wrong, making index scan ineffective, for example.\n\nMaybe the right solution is to rely on the estimates, but then also\nenable the hashing if we significantly cross the threshold during\nexecution. So for example we might get estimate 10 rows, and calculate\nthat the hashing would start winning at 100 rows, so we start without\nhashing. But then at execution if we get 200 rows, we build the hash\ntable and start using it.\n\nYes, there's a risk that there are only 200 rows, and the time spent\nbuilding the hash table is wasted. But it's much more likely that there\nare many more rows.\n\n>> I was thinking that maybe we can ignore this, because people probably\n>> have much larger tables in practice. But I'm not sure that's really\n>> true, because there may be other quals and it's possible the preceding\n>> ones are quite selective, filtering most of the rows.\n>>\n>> I'm not sure how much of the necessary information we have available in\n>> convert_saop_to_hashed_saop_walker, though :-( I suppose we know the\n>> number of input rows for that plan node, not sure about selectivity of\n>> the other quals, though.\n>>\n>> It's also a bit strange that we get speedup for \"simple\" protocol, while\n>> for \"prepared\" it gets slower. That seems counter-intuitive, because why\n>> should we see opposite outcomes in those cases? 
I'd assume that we'll\n>> see either speedup or slowdown in both cases, with the relative change\n>> being more significant in the \"prepared\" mode.\n> \n> I hope my theory above about the planner time dominating the overall\n> time shows why that is.\n> \n\nI think it does explain the v6 behavior, where prepared gets slower\nwhile simple is about the same (with low row counts).\n\nBut I'm not sure I understand the v5, where simple got faster than master.\n\n> Another way to look at these result is by taking your tps value and\n> calculating how long it takes to do N number of transactions then\n> totalling up the time it takes. If I do that to calculate how long it\n> took each test to perform 1000 transactions and sum each test grouping\n> by mode and version with rollup on mode, I get:\n> \n> time values are in seconds:\n> \n> pg-master 28.07\n> prepared 11.28\n> simple 16.79\n> \n> pg-v6 15.86\n> prepared 5.23\n> simple 10.63\n> \n> So, the overall result when applying the total time is 177%.\n> \n\nNot sure how you got those numbers, or how it explains the results.\n\nE.g. on v5, the results for 100 int values / 1 row look like this:\n\n 100/p 100/s\n master 52400 10310\n v5 43446 13610\n\nI understand why the prepared mode got slower. I don't understand how\nthe simple mode got faster.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 10 Apr 2021 00:31:54 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, 10 Apr 2021 at 10:32, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> But I think the puzzle is not so much about v5 vs v6, but more about v5\n> vs. master. 
I still don't understand how v5 managed to be faster than\n> master, but maybe I'm missing something.\n\nWell, v5 wrapped ScalarArrayOpExpr inside HashedScalarArrayOpExpr, but\ndidn't add cases for HashedScalarArrayOpExpr in all locations where it\nshould have. For example, fix_expr_common() has a case for\nScalarArrayOpExpr, but if we gave it a HashedScalarArrayOpExpr it\nwould have very little work to do.\n\nI've not gone and proved that's the exact reason why the planner\nbecame faster, but I really don't see any other reason.\n\n> Maybe the right solution is to rely on the estimates, but then also\n> enable the hashing if we significantly cross the threshold during\n> execution. So for example we might get estimate 10 rows, and calculate\n> that the hashing would start winning at 100 rows, so we start without\n> hashing. But then at execution if we get 200 rows, we build the hash\n> table and start using it.\n\nTo do that, we'd need to store the number of evaluations of the\nfunction somewhere. I'm not really sure that would be a good idea as I\nimagine we'd need to store that in ExprEvalStep. I imagine if that\nwas a good idea then we'd have done the same for JIT.\n\n\n> On 4/9/21 1:21 AM, David Rowley wrote:\n> > time values are in seconds:\n> >\n> > pg-master 28.07\n> > prepared 11.28\n> > simple 16.79\n> >\n> > pg-v6 15.86\n> > prepared 5.23\n> > simple 10.63\n> >\n> > So, the overall result when applying the total time is 177%.\n> >\n>\n> Not sure how you got those numbers, or how it explains the results.\n\nI got it by doing \"1 / tps * 1000\" to get the time it would take to\nexecute 1000 transactions of each of your tests. I then grouped by\npatch, mode and took the sum of the calculated number. My point was\nthat overall the patch is significantly faster. 
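The arithmetic behind those totals is simple enough to check in a couple of lines (the per-mode totals are the ones quoted above):

```python
def seconds_per_1000_txns(tps):
    """pgbench reports transactions per second; invert it to get the
    time needed for 1000 transactions, so results can be summed."""
    return 1000.0 / tps

# Totals quoted above (seconds to run 1000 transactions, summed per mode):
master_total = 11.28 + 16.79   # prepared + simple
v6_total = 5.23 + 10.63
print(round(master_total / v6_total, 2))   # -> 1.77, i.e. the 177% figure
```

Inverting tps makes the figures additive, which is why summing per group gives a meaningful overall time.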
I was trying to\nhighlight that the 0 and 1 row test take up very little time and the\noverhead of building the hash table is only showing up because the\nquery executes so quickly.\n\nFWIW, I think executing a large IN clause on a table that has 0 rows\nis likely not that interesting a case to optimise for. That's not the\nsame as a query that just returns 0 rows due to filtering out hundreds\nor thousands of rows during execution. The overhead of building the\nhash table is not going to show up very easily in that sort of case.\n\n> I understand why the prepared mode got slower. I don't understand how\n> the simple mode got faster.\n\nI very much imagine v5 was faster at planning due to the unfinished\nnature of the patch. I'd not added support for HashedScalarArrayOpExpr\nin all the places I should have. That would result in the planner\nskipping lots of work that it needs to do. The way I got it to work\nwas to add it, then just add enough cases in the planner to handle\nHashedScalarArrayOpExpr so I didn't get any errors. I stopped after\nthat just to show the idea. Lack of errors does not mean it was\ncorrect. At least setrefs.c was not properly handling\nHashedScalarArrayOpExpr.\n\nI really think it would be best if we just ignore the performance of\nv5. Looking at the performance of a patch that was incorrectly\nskipping a bunch of required work does not seem that fair.\n\nDavid\n\n\n", "msg_date": "Sun, 11 Apr 2021 10:03:23 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "\n\nOn 4/11/21 12:03 AM, David Rowley wrote:\n> On Sat, 10 Apr 2021 at 10:32, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> But I think the puzzle is not so much about v5 vs v6, but more about v5\n>> vs. master. 
I still don't understand how v5 managed to be faster than\n>> master, but maybe I'm missing something.\n> \n> Well, v5 wrapped ScalarArrayOpExpr inside HashedScalarArrayOpExpr, but\n> didn't add cases for HashedScalarArrayOpExpr in all locations where it\n> should have. For example, fix_expr_common() has a case for\n> ScalarArrayOpExpr, but if we gave it a HashedScalarArrayOpExpr it\n> would have very little work to do.\n> \n> I've not gone and proved that's the exact reason why the planner\n> became faster, but I really don't see any other reason.\n> \n>> Maybe the right solution is to rely on the estimates, but then also\n>> enable the hashing if we significantly cross the threshold during\n>> execution. So for example we might get estimate 10 rows, and calculate\n>> that the hashing would start winning at 100 rows, so we start without\n>> hashing. But then at execution if we get 200 rows, we build the hash\n>> table and start using it.\n> \n> To do that, we'd need to store the number of evaluations of the\n> function somewhere. I'm not really sure that would be a good idea as I\n> imagine we'd need to store that in ExprEvalStep. I imagine if that\n> was a good idea then we'd have done the same for JIT.\n> \n\nSure, we'd need to track the number of lookups, but I'd imagine that's\nfairly cheap and we can stop once we switch to hash mode.\n\nI'm not sure \"JIT does not do that\" is really a proof it's a bad idea.\nMy guess is it wasn't considered back then, and the current heuristic\nis the simplest possible. So maybe it's the other way and we should\nconsider doing the same thing for JIT?\n\nFWIW if we look at what JIT does, I'd argue it supports the approach to\ntrust the estimates. 
Because if we under-estimate stuff, the cost won't\nexceed the \"jit_above_cost\" threshold, and we won't use JIT.\n\n> \n>> On 4/9/21 1:21 AM, David Rowley wrote:\n>>> time values are in seconds:\n>>>\n>>> pg-master 28.07\n>>> prepared 11.28\n>>> simple 16.79\n>>>\n>>> pg-v6 15.86\n>>> prepared 5.23\n>>> simple 10.63\n>>>\n>>> So, the overall result when applying the total time is 177%.\n>>>\n>>\n>> Not sure how you got those numbers, or how it explains the results.\n> \n> I got it by doing \"1 / tps * 1000\" to get the time it would take to\n> execute 1000 transactions of each of your tests. I then grouped by\n> patch, mode and took the sum of the calculated number. My point was\n> that overall the patch is significantly faster. I was trying to\n> highlight that the 0 and 1 row test take up very little time and the\n> overhead of building the hash table is only showing up because the\n> query executes so quickly.\n> \n\nAh, I see. TBH I don't think combining the results gives us a very\nmeaningful value - those cases were quite arbitrary, but summing them\ntogether like this assumes the workload has about 25% of each. But if\nyour workload is exclusively 0/1/5 rows it's going to be hit.\n\n> FWIW, I think executing a large IN clause on a table that has 0 rows\n> is likely not that interesting a case to optimise for. That's not the\n> same as a query that just returns 0 rows due to filtering out hundreds\n> or thousands of rows during execution. The overhead of building the\n> hash table is not going to show up very easily in that sort of case.\n> \n\nYeah, it's probably true that queries with long IN lists are probably\ndealing with many input rows. 
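The switch-at-execution idea discussed above, i.e. start with a linear search and build the hash table only once the probe count passes the break-even point, can be modelled outside the executor. This is a sketch of the concept only; doing it for real would mean tracking a probe counter in the expression state, as noted above:

```python
class AdaptiveInList:
    """Evaluates `x IN (...)` linearly at first, then switches to a hash
    table once enough probes have been made to amortize the build cost.
    A model of the idea only -- not how the PostgreSQL executor works."""

    def __init__(self, values, switch_after=3):
        self.values = list(values)
        self.switch_after = switch_after  # assumed break-even probe count
        self.probes = 0
        self.table = None                 # hash table, built lazily

    def contains(self, x):
        self.probes += 1
        if self.table is None and self.probes > self.switch_after:
            self.table = frozenset(self.values)  # one-off build
        if self.table is not None:
            return x in self.table               # O(1) probe
        return any(x == v for v in self.values)  # linear search

clause = AdaptiveInList(range(1000), switch_after=3)
results = [clause.contains(v) for v in (1, 2000, 500, 999, -1)]
print(results, clause.table is not None)
```

Here the break-even probe count is assumed to be known up front; in the planner it would come from the same costing that decides whether to hash at all.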
And you're right we don't really care\nabout how many rows are ultimately produced by the query (or even the\nstep with the IN list) - if we spent a lot of time to filter the rows\nbefore applying the IN list, the time to initialize the hash table is\nprobably just noise.\n\nI wonder what's the relationship between the length of the IN list and\nthe minimum number of rows needed for the hash to start winning.\n\n>> I understand why the prepared mode got slower. I don't understand how\n>> the simple mode got faster.\n> \n> I very much imagine v5 was faster at planning due to the unfinished\n> nature of the patch. I'd not added support for HashedScalarArrayOpExpr\n> in all the places I should have. That would result in the planner\n> skipping lots of work that it needs to do. The way I got it to work\n> was to add it, then just add enough cases in the planner to handle\n> HashedScalarArrayOpExpr so I didn't get any errors. I stopped after\n> that just to show the idea. Lack of errors does not mean it was\n> correct. At least setrefs.c was not properly handling\n> HashedScalarArrayOpExpr.\n> \n> I really think it would be best if we just ignore the performance of\n> v5. Looking at the performance of a patch that was incorrectly\n> skipping a bunch of required work does not seem that fair.\n> \n\nAha! I was assuming v5 was correct, but if that assumption is incorrect\nthen the whole \"v5 speedup\" is just an illusion, and you're right we\nshould simply ignore that. 
Thanks for the explanation!\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 11 Apr 2021 00:38:53 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sun, 11 Apr 2021 at 10:38, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I wonder what's the relationship between the length of the IN list and\n> the minimum number of rows needed for the hash to start winning.\n\nI made the attached spreadsheet which demonstrates the crossover point\nusing the costs that I coded into cost_qual_eval_walker().\n\nIt basically shows, for large arrays, that there are fairly\nsignificant benefits to hashing for just 2 lookups and not hashing\nonly just wins for 1 lookup. However, the cost model does not account\nfor allocating memory for the hash table, which is far from free.\n\nYou can adjust the number of items in the IN clause by changing the\nvalue in cell B1. The values in B2 and B3 are what I saw the planner\nset when I tested with both INT and TEXT types.\n\nDavid", "msg_date": "Tue, 13 Apr 2021 11:23:07 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Fri, 9 Apr 2021 at 00:00, David Rowley <dgrowleyml@gmail.com> wrote:\n> I push this with some minor cleanup from the v6 patch I posted earlier.\n\nI realised when working on something unrelated last night that we can\nalso do hash lookups for NOT IN too.\n\nWe'd just need to check if the operator's negator operator is\nhashable. No new fields would need to be added to ScalarArrayOpExpr.\nWe'd just set the hashfuncid to the correct value and then set the\nopfuncid to the negator function. 
In the executor, we'd know to check\nif the value is in the table or not in the table based on the useOr\nvalue.\n\nI'm not really sure whether lack of NOT IN support is going to be a\nsource of bug reports for PG14 or not. If it was, then it might be\nworth doing something about that for PG14. Otherwise, we can just\nleave it for future work for PG15 and beyond. I personally don't have\nany strong feelings either way, but I'm leaning towards just writing a\npatch and thinking of pushing it sometime after we branch for PG15.\n\nI've included the RMT, just in case they want to voice an opinion on that.\n\nDavid\n\n\n", "msg_date": "Tue, 13 Apr 2021 11:35:21 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I realised when working on something unrelated last night that we can\n> also do hash lookups for NOT IN too.\n\n... and still get the behavior right for nulls?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Apr 2021 19:42:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, 13 Apr 2021 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I realised when working on something unrelated last night that we can\n> > also do hash lookups for NOT IN too.\n>\n> ... and still get the behavior right for nulls?\n\nYeah, it will. There are already some special cases for NULLs in the\nIN version. 
Those would need to be adapted for NOT IN.\n\nDavid\n\n\n", "msg_date": "Tue, 13 Apr 2021 11:49:08 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Mon, Apr 12, 2021 at 7:49 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 13 Apr 2021 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> > > I realised when working on something unrelated last night that we can\n> > > also do hash lookups for NOT IN too.\n> >\n> > ... and still get the behavior right for nulls?\n>\n> Yeah, it will. There are already some special cases for NULLs in the\n> IN version. Those would need to be adapted for NOT IN.\n\nI hadn't thought about using the negator operator directly that way\nwhen I initially wrote the patch.\n\nBut also I didn't think a whole lot about the NOT IN case at all --\nand there's no mention of such that I see in this thread or the\nprecursor thread. It's pretty obvious that it wasn't part of my\nimmediate need, but obviously it'd be nice to have the consistency.\n\nAll that to say this: my vote would be to put it into PG15 also.\n\nJames\n\n\n", "msg_date": "Mon, 12 Apr 2021 22:07:16 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Mon, Apr 12, 2021 at 10:07 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 7:49 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Tue, 13 Apr 2021 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > David Rowley <dgrowleyml@gmail.com> writes:\n> > > > I realised when working on something unrelated last night that we can\n> > > > also do hash lookups for NOT IN too.\n> > >\n> > > ... and still get the behavior right for nulls?\n> >\n> > Yeah, it will. 
There are already some special cases for NULLs in the\n> > IN version. Those would need to be adapted for NOT IN.\n>\n> I hadn't thought about using the negator operator directly that way\n> when I initially wrote the patch.\n>\n> But also I didn't think a whole lot about the NOT IN case at all --\n> and there's no mention of such that I see in this thread or the\n> precursor thread. It's pretty obvious that it wasn't part of my\n> immediate need, but obviously it'd be nice to have the consistency.\n>\n> All that to say this: my vote would be to put it into PG15 also.\n\n...and here's a draft patch. I can take this to a new thread if you'd\nprefer; the one here already got committed, on the other hand this is\npretty strongly linked to this discussion, so I figured it made sense\nto post it here.\n\nJames", "msg_date": "Tue, 13 Apr 2021 13:40:01 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Wed, 14 Apr 2021 at 05:40, James Coleman <jtc331@gmail.com> wrote:\n> ...and here's a draft patch. I can take this to a new thread if you'd\n> prefer; the one here already got committed, on the other hand this is\n> pretty strongly linked to this discussion, so I figured it made sense\n> to post it here.\n\nI only glanced at this when you sent it and I was confused about how\nit works. The patch didn't look like how I imagined it should and I\ncouldn't see how the executor part worked without any changes.\n\nAnyway, I decided to clear up my confusion tonight and apply the patch\nto figure all this out... unfortunately, I see why I was confused\nnow. 
It actually does not work at all :-(\n\nYou're still passing the <> operator to get_op_hash_functions(), which\nof course is not hashable, so we just never do hashing for NOT IN.\n\nAll your tests pass just fine because the standard non-hashed code path is used.\n\nMy idea was that you'd not add any fields to ScalarArrayOpExpr and for\nsoaps with useOr == false, check if the negator of the operator is\nhashable. If so set the opfuncid to the negator operator's function.\n\nI'm a bit undecided if it's safe to set the opfuncid to the negator\nfunction. If anything were to set that again based on the opno then\nit would likely set it to the wrong thing. We can't go changing the\nopno either because EXPLAIN would display the wrong thing.\n\nAnyway, I've attached what I ended up with after spending a few hours\nlooking at this.\n\nI pretty much used all your tests as is with the exception of removing\none that looked duplicated.\n\nDavid", "msg_date": "Sat, 24 Apr 2021 22:25:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, Apr 24, 2021 at 6:25 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 14 Apr 2021 at 05:40, James Coleman <jtc331@gmail.com> wrote:\n> > ...and here's a draft patch. I can take this to a new thread if you'd\n> > prefer; the one here already got committed, on the other hand this is\n> > pretty strongly linked to this discussion, so I figured it made sense\n> > to post it here.\n>\n> I only glanced at this when you sent it and I was confused about how\n> it works. The patch didn't look like how I imagined it should and I\n> couldn't see how the executor part worked without any changes.\n>\n> Anyway, I decided to clear up my confusion tonight and apply the patch\n> to figure all this out... unfortunately, I see why I was confused\n> now. 
It actually does not work at all :-(\n>\n> You're still passing the <> operator to get_op_hash_functions(), which\n> of course is not hashable, so we just never do hashing for NOT IN.\n>\n> All your tests pass just fine because the standard non-hashed code path is used.\n\nI was surprised when it \"just worked\" too; I should have stopped to\nverify the path was being taken. Egg on my face for not doing so. :(\n\n> My idea was that you'd not add any fields to ScalarArrayOpExpr and for\n> soaps with useOr == false, check if the negator of the operator is\n> hashable. If so set the opfuncid to the negator operator's function.\n>\n> I'm a bit undecided if it's safe to set the opfuncid to the negator\n> function. If anything were to set that again based on the opno then\n> it would likely set it to the wrong thing. We can't go changing the\n> opno either because EXPLAIN would display the wrong thing.\n\nI don't personally see a reason why this is a problem. But I also\ndon't know that I have enough knowledge of the codebase to say that\ndefinitively.\n\n> Anyway, I've attached what I ended up with after spending a few hours\n> looking at this.\n\nOverall I like this approach.\n\nOne thing I think we could clean up:\n\n+ bool useOr; /* use OR or AND semantics? */\n...\n+ /* useOr == true means an IN clause, useOr == false is NOT IN */\n\nI'm wondering if the intersection of these two lines implies that\nuseOr isn't quite the right name here. Perhaps something like\n\"negated\"?\n\nOn the other hand (to make the counterargument) useOr would keep it\nconsistent with the other ones.\n\nThe other thing I got to thinking about was = ALL. It doesn't get\nturned into a hash op because the negator of = isn't hashable. I think\nit's correct that that's the determining factor, because I can't\nimagine what it would mean to hash <>. But...I wanted to confirm I\nwasn't missing something. 
We don't have explicit tests for that case,\n> but I'm not sure it's necessary either.\n\nSounds good.\n\nJames\n\n\n", "msg_date": "Fri, 7 May 2021 17:14:52 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, 8 May 2021 at 09:15, James Coleman <jtc331@gmail.com> wrote:\n>\n> On Sat, Apr 24, 2021 at 6:25 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I'm a bit undecided if it's safe to set the opfuncid to the negator\n> > function. If anything were to set that again based on the opno then\n> > it would likely set it to the wrong thing. We can't go changing the\n> > opno either because EXPLAIN would display the wrong thing.\n>\n> I don't personally see a reason why this is a problem. But I also\n> don't know that I have enough knowledge of the codebase to say that\n> definitively.\n\nThe reason for my concern is that if the opfuncid is set to\nInvalidOid, set_sa_opfuncid() always sets the ScalarArrayOpExpr's\nopfuncid to get_opcode(opexpr->opno). I'm effectively setting the\nopfuncid to get_opcode(get_negator(opexpr->opno)); if anything were to\nreset the ScalarArrayOpExpr's opfuncid to InvalidOid, then\nset_sa_opfuncid() would repopulate it with the wrong value.\n\nMaybe the solution there is to teach set_sa_opfuncid() about our\nhashing NOT IN trick and have it check if (!opexpr->useOr &&\nOidIsValid(opexpr->hashfuncid)) and if that's true then do\nopexpr->opfuncid = get_opcode(get_negator(opexpr->opno)). Then we\ncould just not bother setting opfuncid in\nconvert_saop_to_hashed_saop_walker().\n\n\n> > Anyway, I've attached what I ended up with after spending a few hours\n> > looking at this.\n>\n> Overall I like this approach.\n>\n> One thing I think we could clean up:\n>\n> + bool useOr; /* use OR or AND semantics? 
*/\n> ...\n> + /* useOr == true means an IN clause, useOr == false is NOT IN */\n>\n> I'm wondering if the intersection of these two lines implies that\n> useOr isn't quite the right name here. Perhaps something like\n> \"negated\"?\n\nI'm not sure I want to go changing that. The whole IN() / NOT IN()\nbehaviour regarding NULLs all seems pretty weird until you mentally\nreplace a IN (1,2,3) with a = 1 OR a = 2 OR a = 3. And for the a NOT\nIN(1,2,3) case, a <> 1 AND a <> 2 AND a <> 3. People can make a bit\nmore sense of the weirdness of NULLs with NOT IN when they mentally\nconvert their expression like that. I think having that in code is\nuseful too. Any optimisations that are added must match those\nsemantics.\n\n> The other thing I got to thinking about was = ALL. It doesn't get\n> turned into a hash op because the negator of = isn't hashable. I think\n> it's correct that that's the determining factor, because I can't\n> imagine what it would mean to hash <>. But...I wanted to confirm I\n> wasn't missing something. We don't have explicit tests for that case,\n> but I'm not sure it's necessary either.\n\nIt's important to think of other cases, I just don't think there's any\nneed to do anything for that one. Remember that we have the\nrestriction of requiring a set of Consts, so for that case to be met,\nsomeone would have to write something like: col =\nALL('{1,1,1,1,1,1,1,1}'::int[]); I think if anyone comes along\ncomplaining that a query containing that is not as fast as they'd like\nthen we might tell them that they should just use: col = 1. 
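The mental rewrite David describes (IN as a chain of ORs, NOT IN as a chain of ANDs) is exactly what the hashed path's NULL special cases have to reproduce. Below is a sketch of the three-valued semantics, with Python's None standing in for SQL NULL; this models the SQL rules, it is not the executor code:

```python
def scalar_in(a, consts):
    """Three-valued `a IN (c1, c2, ...)`: hash the non-NULL constants and
    remember separately whether any constant was NULL.
    Returns True, False, or None (None plays the role of SQL NULL)."""
    table = frozenset(c for c in consts if c is not None)
    has_null = any(c is None for c in consts)
    if a is None:
        return None if consts else False  # OR over zero comparisons is false
    if a in table:
        return True                       # a definite match wins
    return None if has_null else False    # unmatched NULL element => unknown

def scalar_not_in(a, consts):
    """`a NOT IN (...)` is the AND of the <> comparisons, i.e. NOT (a IN ...)."""
    r = scalar_in(a, consts)
    return None if r is None else not r

for lhs, arr in [(2, [1, 2, 3]), (4, [1, 2, 3]), (4, [1, 2, None]), (None, [1, 2, 3])]:
    print(lhs, scalar_in(lhs, arr), scalar_not_in(lhs, arr))
```

This is why 4 NOT IN (1, 2, NULL) yields NULL rather than true: 4 <> NULL is unknown, and AND-ing true with unknown stays unknown.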
A sanity\ncheckup might not go amiss either.\n\nDavid\n\n\n", "msg_date": "Sat, 8 May 2021 12:38:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Sat, 8 May 2021 at 09:15, James Coleman <jtc331@gmail.com> wrote:\n>> On Sat, Apr 24, 2021 at 6:25 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>>> I'm a bit undecided if it's safe to set the opfuncid to the negator\n>>> function. If anything were to set that again based on the opno then\n>>> it would likely set it to the wrong thing. We can't go changing the\n>>> opno either because EXPLAIN would display the wrong thing.\n\n>> I don't personally see a reason why this is a problem. But I also\n>> don't know that I have enough knowledge of the codebase to say that\n>> definitively.\n\n> The reason for my concern is that if the opfuncid is set to\n> InvalidOid, set_sa_opfuncid() always sets the ScalarArrayOpExpr's\n> opfuncid to get_opcode(opexpr->opno).\n\nI will personally veto any design that involves setting opfuncid to\nsomething that doesn't match the opno. That's just horrid, and it\nwill break something somewhere, either immediately or down the road.\n\nI don't immediately see why you can't add an \"invert\" boolean flag to\nScalarArrayOpExpr and let the executor machinery deal with this. 
That'd\nhave the advantage of not having to depend on there being a negator.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 21:16:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Fri, May 7, 2021 at 9:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Sat, 8 May 2021 at 09:15, James Coleman <jtc331@gmail.com> wrote:\n> >> On Sat, Apr 24, 2021 at 6:25 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >>> I'm a bit undecided if it's safe to set the opfuncid to the negator\n> >>> function. If anything were to set that again based on the opno then\n> >>> it would likely set it to the wrong thing. We can't go changing the\n> >>> opno either because EXPLAIN would display the wrong thing.\n>\n> >> I don't personally see a reason why this is a problem. But I also\n> >> don't know that I have enough knowledge of the codebase to say that\n> >> definitively.\n>\n> > The reason for my concern is that if the opfuncid is set to\n> > InvalidOid, set_sa_opfuncid() always sets the ScalarArrayOpExpr's\n> > opfuncid to get_opcode(opexpr->opno).\n>\n> I will personally veto any design that involves setting opfuncid to\n> something that doesn't match the opno. That's just horrid, and it\n> will break something somewhere, either immediately or down the road.\n\nThis is the kind of \"project design\" style/policy knowledge I don't have. Thanks.\n\n> I don't immediately see why you can't add an \"invert\" boolean flag to\n> ScalarArrayOpExpr and let the executor machinery deal with this. That'd\n> have the advantage of not having to depend on there being a negator.\n\nDon't we need to have a negator to be able to look up the proper hash\nfunction? 
At least somewhere in the process you'd have to convert from\nlooking up the <> op to looking up the = op and then setting the\n\"invert\" flag.\n\nJames\n\n\n", "msg_date": "Fri, 7 May 2021 21:36:58 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Fri, May 7, 2021 at 8:38 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> It's important to think of other cases, I just don't think there's any\n> need to do anything for that one. Remember that we have the\n> restriction of requiring a set of Consts, so for that case to be met,\n> someone would have to write something like: col =\n> ALL('{1,1,1,1,1,1,1,1}'::int[]); I think if anyone comes along\n> complaining that a query containing that is not as fast as they'd like\n> then we might tell them that they should just use: col = 1. A sanity\n> checkup might not go amiss either.\n\nI wasn't concerned with trying to optimize this case (I don't think we\ncan anyway, at least not without adding new work, like de-duplicating\nthe array first). Though I do hope that someday I'll/we'll get around\nto getting the stable subexpressions caching patch finished, and then\nthis will be able to work for more than constant arrays.\n\nI just wanted to confirm we'd thought through the cases we can't\nhandle to ensure we're not accidentally covering them.\n\nJames\n\n\n", "msg_date": "Fri, 7 May 2021 21:49:50 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, 8 May 2021 at 13:37, James Coleman <jtc331@gmail.com> wrote:\n>\n> On Fri, May 7, 2021 at 9:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I don't immediately see why you can't add an \"invert\" boolean flag to\n> > ScalarArrayOpExpr and let the executor machinery deal with this. 
That'd\n> > have the advantage of not having to depend on there being a negator.\n>\n> Don't we need to have a negator to be able to look up the proper hash\n> function? At least somewhere in the process you'd have to convert from\n> looking up the <> op to looking up the = op and then setting the\n> \"invert\" flag.\n\nYeah, we *do* need to ensure there's a negator in the planner as we\nneed to use it during hash probes. It's no good checking whether the hash\nbucket we landed on matches using the <> operator's function. We\nwon't find many matches that way!\n\nI'm not opposed to adding some new field if that's what it takes. I'd\nimagine the new field will be something like negfuncid which will be\nInvalidOid unless the hash function is set and useOr == false\n\nDavid\n\n\n", "msg_date": "Sat, 8 May 2021 14:04:24 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, 8 May 2021 at 14:04, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm not opposed to adding some new field if that's what it takes. 
I'd\n> > imagine the new field will be something like negfuncid which will be\n> > InvalidOid unless the hash function is set and useOr == false\n>\n> Just while this is still swapped into main memory, I've attached a\n> patch that adds a new field to ScalarArrayOpExpr rather than\n> repurposing the existing field.\n>\n> David\n>\n\nHi,\n\n+ if (!OidIsValid(saop->negfuncid))\n+ record_plan_function_dependency(root, saop->hashfuncid);\n\nIs there a typo in the second line ? (root, saop->negfuncid)\n\nCheers\n\nOn Fri, May 7, 2021 at 9:50 PM David Rowley <dgrowleyml@gmail.com> wrote:On Sat, 8 May 2021 at 14:04, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm not opposed to adding some new field if that's what it takes.  I'd\n> imagine the new field will be something like negfuncid which will be\n> InvalidOid unless the hash function is set and useOr == false\n\nJust while this is still swapped into main memory, I've attached a\npatch that adds a new field to ScalarArrayOpExpr rather than\nrepurposing the existing field.\n\nDavidHi,+       if (!OidIsValid(saop->negfuncid))+           record_plan_function_dependency(root, saop->hashfuncid);Is there a typo in the second line ? (root, saop->negfuncid)Cheers", "msg_date": "Sat, 8 May 2021 01:21:44 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, 8 May 2021 at 20:17, Zhihong Yu <zyu@yugabyte.com> wrote:\n> + if (!OidIsValid(saop->negfuncid))\n> + record_plan_function_dependency(root, saop->hashfuncid);\n>\n> Is there a typo in the second line ? (root, saop->negfuncid)\n\nYeah, that's a mistake. 
Thanks for checking it.\n\nDavid\n\n\n", "msg_date": "Sat, 8 May 2021 20:29:30 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Sat, 8 May 2021 at 20:29, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 8 May 2021 at 20:17, Zhihong Yu <zyu@yugabyte.com> wrote:\n> > + if (!OidIsValid(saop->negfuncid))\n> > + record_plan_function_dependency(root, saop->hashfuncid);\n> >\n> > Is there a typo in the second line ? (root, saop->negfuncid)\n>\n> Yeah, that's a mistake. Thanks for checking it.\n\nI've attached a patch which fixes the mistake mentioned above.\n\nAlso, dropped the RMT from the thread. I only introduced them when I\nwanted some input about if hashing NOT IN should be included in PG14.\nNobody seems to think that should be done.\n\nDavid", "msg_date": "Sun, 23 May 2021 18:36:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "I've been looking at the NOT IN hashing patch again and after making a\nfew minor tweaks I think it's pretty much ready to go.\n\nIf anyone feels differently, please let me know in the next couple of\ndays. Otherwise, I plan on taking a final look and pushing it soon.\n\nDavid", "msg_date": "Tue, 6 Jul 2021 22:39:12 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" }, { "msg_contents": "On Tue, 6 Jul 2021 at 22:39, David Rowley <dgrowleyml@gmail.com> wrote:\n> If anyone feels differently, please let me know in the next couple of\n> days. Otherwise, I plan on taking a final look and pushing it soon.\n\nAfter doing some very minor adjustments, I pushed this. 
(29f45e299).\n\nThanks to James and Zhihong for reviewing.\n\nDavid\n\n\n", "msg_date": "Wed, 7 Jul 2021 16:32:19 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays" } ]
[ { "msg_contents": "While revising the docs for the geometric operators, I came across\nthese entries:\n\n<^\tIs below (allows touching)?\tcircle '((0,0),1)' <^ circle '((0,5),1)'\n>^\tIs above (allows touching)?\tcircle '((0,5),1)' >^ circle '((0,0),1)'\n\nThese have got more than a few problems:\n\n1. There are no such operators for circles, so the examples are pure\nfantasy.\n\n2. What these operators do exist for is points (point_below, point_above\nrespectively) and boxes (box_below_eq, box_above_eq). However, the\nstated behavior is not what the point functions actually do:\n\npoint_below(PG_FUNCTION_ARGS)\n...\n\tPG_RETURN_BOOL(FPlt(pt1->y, pt2->y));\n\npoint_above(PG_FUNCTION_ARGS)\n...\n\tPG_RETURN_BOOL(FPgt(pt1->y, pt2->y));\n\nSo point_below would be more accurately described as \"is strictly below\",\nso its operator name really ought to be <<|. And point_above is \"is\nstrictly above\", so its operator name ought to be |>>.\n\n3. The box functions do seem to be correctly documented:\n\nbox_below_eq(PG_FUNCTION_ARGS)\n...\n\tPG_RETURN_BOOL(FPle(box1->high.y, box2->low.y));\n\nbox_above_eq(PG_FUNCTION_ARGS)\n...\n\tPG_RETURN_BOOL(FPge(box1->low.y, box2->high.y));\n\nBut there are comments in the source code to the effect of\n\n * box_below_eq and box_above_eq are obsolete versions that (probably\n * erroneously) accept the equal-boundaries case. Since these are not\n * in sync with the box_left and box_right code, they are deprecated and\n * not supported in the PG 8.1 rtree operator class extension.\n\nI'm not sure how seriously to take this deprecation comment, but it\nis true that box_below (<<|) and box_above (|>>) have analogs for\nother data types while these functions don't.\n\n4. Just for extra fun, these point operators are listed in some\nGIST and SP-GIST opclasses; though the box ones are not, as per\nthat code comment.\n\nPerhaps it's too late in the v13 cycle to actually do anything\nabout this code-wise, but what should I do documentation-wise?\nI'm certainly not eager to document that these operators behave\ninconsistently depending on which type you're talking about.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Apr 2020 00:42:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Bogus documentation for bogus geometric operators" }, { "msg_contents": "> Perhaps it's too late in the v13 cycle to actually do anything\n> about this code-wise, but what should I do documentation-wise?\n> I'm certainly not eager to document that these operators behave\n> inconsistently depending on which type you're talking about.\n\nI don't think we need to worry too much about doing something in the\nv13 cycle. The geometric operators had and evidently still have so\nmany bugs. Nobody complains about them other than the developers who\nread the code.\n\nI am happy to prepare a patch for the next release to fix the current\noperators and add the missing ones.\n\n\n", "msg_date": "Tue, 28 Apr 2020 17:33:53 +0100", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "Emre Hasegeli <emre@hasegeli.com> writes:\n>> Perhaps it's too late in the v13 cycle to actually do anything\n>> about this code-wise, but what should I do documentation-wise?\n>> I'm certainly not eager to document that these operators behave\n>> inconsistently depending on which type you're talking about.\n\n> I don't think we need to worry too much about doing something in the\n> v13 cycle. The geometric operators had and evidently still have so\n> many bugs. Nobody complains about them other than the developers who\n> read the code.\n\nYeah, I ended up just documenting the current state of affairs.\n\n> I am happy to prepare a patch for the next release to fix the current\n> operators and add the missing ones.\n\nSounds great!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Apr 2020 13:33:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "> While revising the docs for the geometric operators, I came across\n> these entries:\n>\n> <^ Is below (allows touching)? circle '((0,0),1)' <^ circle '((0,5),1)'\n> >^ Is above (allows touching)? circle '((0,5),1)' >^ circle '((0,0),1)'\n>\n> These have got more than a few problems:\n>\n> 1. There are no such operators for circles, so the examples are pure\n> fantasy.\n>\n> 2. What these operators do exist for is points (point_below, point_above\n> respectively) and boxes (box_below_eq, box_above_eq). However, the\n> stated behavior is not what the point functions actually do:\n>\n> point_below(PG_FUNCTION_ARGS)\n> ...\n> PG_RETURN_BOOL(FPlt(pt1->y, pt2->y));\n>\n> point_above(PG_FUNCTION_ARGS)\n> ...\n> PG_RETURN_BOOL(FPgt(pt1->y, pt2->y));\n>\n> So point_below would be more accurately described as \"is strictly below\",\n> so its operator name really ought to be <<|. And point_above is \"is\n> strictly above\", so its operator name ought to be |>>.\n\nI prepared a patch to add <<| and |>> operators for points to\ndeprecate the previous ones.\n\n> 3. The box functions do seem to be correctly documented:\n>\n> box_below_eq(PG_FUNCTION_ARGS)\n> ...\n> PG_RETURN_BOOL(FPle(box1->high.y, box2->low.y));\n>\n> box_above_eq(PG_FUNCTION_ARGS)\n> ...\n> PG_RETURN_BOOL(FPge(box1->low.y, box2->high.y));\n>\n> But there are comments in the source code to the effect of\n>\n> * box_below_eq and box_above_eq are obsolete versions that (probably\n> * erroneously) accept the equal-boundaries case. Since these are not\n> * in sync with the box_left and box_right code, they are deprecated and\n> * not supported in the PG 8.1 rtree operator class extension.\n>\n> I'm not sure how seriously to take this deprecation comment, but it\n> is true that box_below (<<|) and box_above (|>>) have analogs for\n> other data types while these functions don't.\n\nI think we should take this comment seriously and deprecate those\noperators, so the patch removes them from the documentation.\n\n> 4. Just for extra fun, these point operators are listed in some\n> GIST and SP-GIST opclasses; though the box ones are not, as per\n> that code comment.\n\nIt also updates the operator classes to support the new operators\ninstead of former ones. I don't think there are many users of them to\nnotice the change.\n\nI am adding this to the next commitfest.", "msg_date": "Fri, 21 Aug 2020 12:00:45 +0100", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "On Fri, Aug 21, 2020 at 12:00:45PM +0100, Emre Hasegeli wrote:\n> I prepared a patch to add <<| and |>> operators for points to\n> deprecate the previous ones.\n\nEmre, the CF bot complains that this does not apply anymore, so please\nprovide a rebase. By the way, I am a bit confused to see a patch\nthat adds new operators on a thread whose subject is about\ndocumentation.\n--\nMichael", "msg_date": "Sat, 5 Sep 2020 10:45:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "> Emre, the CF bot complains that this does not apply anymore, so please\n> provide a rebase. By the way, I am a bit confused to see a patch\n> that adds new operators on a thread whose subject is about\n> documentation.\n\nRebased version is attached.\n\nThe subject is about the documentation, but the post reveals\ninconsistencies of the operators. Tom Lane fixed the documentation on\nthe back branches. The new patch is to fix the operators on the\nmaster only.", "msg_date": "Mon, 7 Sep 2020 11:50:17 +0100", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "> The subject is about the documentation, but the post reveals\n> inconsistencies of the operators. Tom Lane fixed the documentation on\n> the back branches. The new patch is to fix the operators on the\n> master only.\n>\n\nNice catch, thanks!\nI agree that different operators should not have the same name and I'm\nplanning to review the patch soon. What are your ideas on the possibility\nto backpatch it also? It seems a little bit weird that the operator can\nchange its name between versions of PG.\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\n The subject is about the documentation, but the post reveals\ninconsistencies of the operators.  Tom Lane fixed the documentation on\nthe back branches.  The new patch is to fix the operators on the\nmaster only.\nNice catch, thanks!I agree that different operators should not have the same name and I'm planning to review the patch soon. What are your ideas on the possibility to backpatch it also? It seems a little bit weird that the operator can change its name between versions of PG. -- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Tue, 3 Nov 2020 13:30:41 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "Emre, could you please again rebase your patch on top of\n2f70fdb0644c32c4154236c2b5c241bec92eac5e\n?\n It is not applied anymore.\n\n>\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\n Emre, could you please again rebase your patch on top of 2f70fdb0644c32c4154236c2b5c241bec92eac5e ? It is not applied anymore.\n-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Tue, 3 Nov 2020 14:53:12 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "Emre,\n\nI've rebased and tested your proposed patch. It seems fine and sensible to\nme.\nI have only one thing to note: as this patch doesn't disable <^ and >^\noperator for boxes the existing state of documentation seem consistent to\nme:\n\nselect '((0,0),(1,1))'::box <<| '((0,1),(1,2))'::box;\n----------\n f\n\nselect '((0,0),(1,1))'::box <^ '((0,1),(1,2))'::box;\n----------\n t\n\nSo I've only reverted the changes in the documentation on geometric\nfunctions in your patch.\nPFA v3 of your patch. I'd mark it ready to commit if you agree.\n\nThank you!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 4 Nov 2020 13:02:52 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "> I've rebased and tested your proposed patch. It seems fine and sensible to me.\n\nThanks\n\n> I have only one thing to note: as this patch doesn't disable <^ and >^ operator for boxes the existing state of documentation seem consistent to me:\n>\n> select '((0,0),(1,1))'::box <<| '((0,1),(1,2))'::box;\n> ----------\n> f\n>\n> select '((0,0),(1,1))'::box <^ '((0,1),(1,2))'::box;\n> ----------\n> t\n>\n> So I've only reverted the changes in the documentation on geometric functions in your patch.\n\nYou are right. We need to keep the documentation for box operators,\nbut remove the lines for the point operators.\n\n\n", "msg_date": "Wed, 4 Nov 2020 09:32:19 +0000", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": ">\n> > I have only one thing to note: as this patch doesn't disable <^ and >^\n> operator for boxes the existing state of documentation seem consistent to\n> me:\n> >\n> > select '((0,0),(1,1))'::box <<| '((0,1),(1,2))'::box;\n> > ----------\n> > f\n> >\n> > select '((0,0),(1,1))'::box <^ '((0,1),(1,2))'::box;\n> > ----------\n> > t\n> >\n> > So I've only reverted the changes in the documentation on geometric\n> functions in your patch.\n>\n> You are right. We need to keep the documentation for box operators,\n> but remove the lines for the point operators.\n>\n\nIndeed you are right. PFA v4 with documentation removed for <^ and >^ for\npoint\nThanks!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 4 Nov 2020 13:43:18 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> [ v4-0001-Deprecate-and-replace-and-operators-for-points.patch ]\n\nI made a cursory pass over this, and two things stood out to me:\n\n1. The patch removes <^ and >^ from func.sgml, which is fine, but\nshouldn't there be an addition for the new operators? (I think\nprobably this need only be an addition of \"point\" as a possible\ninput type for <<| and |>>.) Actually, as long we're not completely\nremoving these operators, I'm not sure it's okay to make them completely\nundocumented. Maybe instead of removing, change the text to be\n\"Deprecated, use the equivalent XXX operator instead.\" Or we could\nadd a footnote similar to what was there for a previous renaming:\n\n\tNote\n\n\tBefore PostgreSQL 8.2, the containment operators @> and <@ were\n\trespectively called ~ and @. These names are still available, but\n\tare deprecated and will eventually be removed.\n\n2. I'm a bit queasy about removing these operators from the opclasses.\nI'm not sure anyone will thank us for \"the operator is still there, so\nyour query is still accepted, but it runs 1000X slower than before\".\nIt seems like more plausible answers are either \"nuke the operators\nentirely, force people to change their queries now\" or else \"support\nboth old and new names in the opclasses for awhile to come\". In\nprevious operator renamings we've generally followed the second path,\nso that's what I'm inclined to think should happen here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Nov 2020 17:16:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": ">\n> 1. The patch removes <^ and >^ from func.sgml, which is fine, but\n\nshouldn't there be an addition for the new operators? (I think\n>\nI fully agree and added \"point\" as a possible input type for <<| and |>> in\nmanual. PFA v5\n\n\n> undocumented. Maybe instead of removing, change the text to be\n> \"Deprecated, use the equivalent XXX operator instead.\" Or we could\n> add a footnote similar to what was there for a previous renaming:\n>\nThe problem that this new <<| is equivalent to <^ only for points (To\nrecap: the source of a problem is the same name of <^ operator for points\nand boxes with different meaning for these types).\n\n point\n box\n<<| |>> strictly above/below (new)\n strictly above/below\n<^ >^ strictly above/below (deprecated, but available)\n above/below\n\nSo it seems to me that trying to mention the subtle difference of\ndeprecated operator to same-named one for different data type inevitably\nmake things much worse for reader. On this reason I'd vote for complete\nnuking <^ for point type (but this is not the only way so I haven't done\nthis in v5).\n\nWhat do you think?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 13 Nov 2020 12:26:53 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n>> undocumented. Maybe instead of removing, change the text to be\n>> \"Deprecated, use the equivalent XXX operator instead.\" Or we could\n>> add a footnote similar to what was there for a previous renaming:\n\n> The problem that this new <<| is equivalent to <^ only for points (To\n> recap: the source of a problem is the same name of <^ operator for points\n> and boxes with different meaning for these types).\n\nI don't think it's that hard to be clear; see proposed wording below.\n\nThe other loose end is that I don't think we can take away the opclass\nentries for the old spellings, unless we're willing to visibly break\npeople's queries by removing those operator names altogether. That\ndoesn't seem like it'll fly when we haven't even deprecated the old\nnames yet. So for now, we have to support both names in the opclasses.\nI extended the patch to do that.\n\nThis version seems committable to me --- any thoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 22 Nov 2020 18:52:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": ">\n> >> undocumented. Maybe instead of removing, change the text to be\n> >> \"Deprecated, use the equivalent XXX operator instead.\" Or we could\n> >> add a footnote similar to what was there for a previous renaming:\n>\n> > The problem that this new <<| is equivalent to <^ only for points (To\n> > recap: the source of a problem is the same name of <^ operator for\n> points\n> > and boxes with different meaning for these types).\n>\n> I don't think it's that hard to be clear; see proposed wording below.\n>\n> The other loose end is that I don't think we can take away the opclass\n> entries for the old spellings, unless we're willing to visibly break\n> people's queries by removing those operator names altogether. That\n> doesn't seem like it'll fly when we haven't even deprecated the old\n> names yet. So for now, we have to support both names in the opclasses.\n> I extended the patch to do that.\n>\n> This version seems committable to me --- any thoughts?\n>\nThe wording seems no problem to me. I looked into a patch and changes also\nseem sensible but I can not apply this patch because of really many\nrejects. Which commit should I use to apply it onto?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\n>> undocumented.  Maybe instead of removing, change the text to be\n>> \"Deprecated, use the equivalent XXX operator instead.\"  Or we could\n>> add a footnote similar to what was there for a previous renaming:\n\n> The problem that this new <<| is equivalent to <^ only for points (To\n> recap: the source of a problem is the same name of <^  operator for points\n> and boxes with different meaning for these types).\n\nI don't think it's that hard to be clear; see proposed wording below.\n\nThe other loose end is that I don't think we can take away the opclass\nentries for the old spellings, unless we're willing to visibly break\npeople's queries by removing those operator names altogether.  That\ndoesn't seem like it'll fly when we haven't even deprecated the old\nnames yet.  So for now, we have to support both names in the opclasses.\nI extended the patch to do that.\n\nThis version seems committable to me --- any thoughts?The wording seems no problem to me. I  looked into a patch and changes also seem sensible but I can not apply this patch because of really many rejects. Which commit should I use to apply it onto?-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Mon, 23 Nov 2020 13:09:34 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": ">\n> The wording seems no problem to me. I looked into a patch and changes\n>> also seem sensible but I can not apply this patch because of really many\n>> rejects. Which commit should I use to apply it onto?\n>\n> Sorry, the rejects were due to my git configuration. I will apply and make\nthe final checks soon.\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nThe wording seems no problem to me. I  looked into a patch and changes also seem sensible but I can not apply this patch because of really many rejects. Which commit should I use to apply it onto?Sorry, the rejects were due to my git configuration. I will apply and make the final checks soon.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Mon, 23 Nov 2020 13:17:37 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nI've made another check of the final state and suppose it is ready to be pushed.\r\n\r\nPavel Borisov", "msg_date": "Mon, 23 Nov 2020 11:44:06 +0000", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus documentation for bogus geometric operators" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> I've made another check of the final state and suppose it is ready to be pushed.\n\nSounds good, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Nov 2020 11:39:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus documentation for bogus geometric operators" } ]
[ { "msg_contents": "Subject changed. Earlier it was : spin_delay() for ARM\n\nOn Fri, 17 Apr 2020 at 22:54, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Apr 16, 2020 at 3:18 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > Not relevant to the PAUSE stuff .... Note that when the parallel\n> > clients reach from 24 to 32 (which equals the machine CPUs), the TPS\n> > shoots from 454189 to 1097592 which is more than double speed gain\n> > with just a 30% increase in parallel sessions.\nFor referencem the TPS can be seen here :\nhttps://www.postgresql.org/message-id/CAJ3gD9e86GY%3DQfyfZQkb11Z%2BCVWowDiGgGThzKKwHDGU9uA2yA%40mail.gmail.com\n>\n> I've seen stuff like this too. For instance, check out the graph from\n> this 2012 blog post:\n>\n> http://rhaas.blogspot.com/2012/04/did-i-say-32-cores-how-about-64.html\n>\n> You can see that the performance growth is basically on a straight\n> line up to about 16 cores, but then it kinks downward until about 28,\n> after which it kinks sharply upward until about 36 cores.\n>\n> I think this has something to do with the process scheduling behavior\n> of Linux, because I vaguely recall some discussion where somebody did\n> benchmarking on the same hardware on both Linux and one of the BSD\n> systems, and the effect didn't appear on BSD. They had other problems,\n> like a huge drop-off at higher core counts, but they didn't have that\n> effect.\n\nAh I see.\n\nBy the way, I have observed this behaviour in both x86 and ARM,\nregardless of whether the CPUs are as low as 8, or as high as 32.\n\nBut for me, I suspect it's a combination of linux scheduler and\ninteractions between backends and pgbench clients.\n\nSo I used a custom script. I used the same point query that is\nused with the -S option, but that query is run again and again on the\nserver side without the client having to send it again and again, so\nthe pgbench clients are idle most of the time.\n\nQuery used :\nselect foo(300000);\nwhere foo(int) is defined as :\ncreate or replace function foo(iterations int) returns int as $$\ndeclare\n id int; ret int ; counter int = 0;\nbegin\n WHILE counter < iterations\n LOOP\n counter = counter + 1;\n id = random() * 3000000;\n select into ret aid from pgbench_accounts where aid = id;\n END LOOP;\n return ret;\nend $$ language plpgsql;\n\nBelow are results for 30 scale factor, with 8 CPUs :\n\nClients TPS\n2 1.255327\n4 2.414139\n6 3.532937\n8 4.586583\n10 4.557575\n12 4.517226\n14 4.551455\n18 4.593271\n\nYou can see that the tps rise is almost linearly proportional to\nincrease in clients, with no deviation in between, upto 8 clients\nwhere it does not rise because CPUs are fully utilized,\n\nIn this custom case as well, the behaviour is same for both x86 and\nARM, regardless of 8 CPUs or 32 CPUs.\n\n\n--\nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n", "msg_date": "Tue, 21 Apr 2020 11:07:21 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "pgbench testing with contention scenarios" } ]
[ { "msg_contents": "Hello.\n\nThe commit a7e8ece41c adds a new member restoreCommand to\nXLogPageReadPrivate. readOneRecord doesn't make use of it but forgets\nto set NULL. That can lead to illegal pointer access.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 21 Apr 2020 15:08:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "forgotten initalization of a variable" }, { "msg_contents": "On Tue, Apr 21, 2020 at 03:08:30PM +0900, Kyotaro Horiguchi wrote:\n> The commit a7e8ece41c adds a new member restoreCommand to\n> XLogPageReadPrivate. readOneRecord doesn't make use of it but forgets\n> to set NULL. That can lead to illegal pointer access.\n\nThat's an oversight of the original commit. Now, instead of failing\neven if there is a restore command, wouldn't it be better to pass down\nthe restore_command to readOneRecord() so as we can optionally\nimprove the stability of a single record lookup? This only applies to\na checkpoint record now, but this routine could be called elsewhere in\nthe future. Please see the attached.\n--\nMichael", "msg_date": "Tue, 21 Apr 2020 17:34:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: forgotten initalization of a variable" }, { "msg_contents": "At Tue, 21 Apr 2020 17:34:26 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Apr 21, 2020 at 03:08:30PM +0900, Kyotaro Horiguchi wrote:\n> > The commit a7e8ece41c adds a new member restoreCommand to\n> > XLogPageReadPrivate. readOneRecord doesn't make use of it but forgets\n> > to set NULL. That can lead to illegal pointer access.\n> \n> That's an oversight of the original commit. Now, instead of failing\n> even if there is a restore command, wouldn't it be better to pass down\n> the restore_command to readOneRecord() so as we can optionally\n> improve the stability of a single record lookup? This only applies to\n\nOops! You're right.\n\n> a checkpoint record now, but this routine could be called elsewhere in\n> the future. Please see the attached.\n\nIt looks fine to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 21 Apr 2020 18:09:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: forgotten initalization of a variable" }, { "msg_contents": "On Tue, Apr 21, 2020 at 06:09:30PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 21 Apr 2020 17:34:26 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> a checkpoint record now, but this routine could be called elsewhere in\n>> the future. Please see the attached.\n> \n> It looks fine to me.\n\nFixed this way, then. Thanks for the report!\n--\nMichael", "msg_date": "Wed, 22 Apr 2020 08:13:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: forgotten initalization of a variable" }, { "msg_contents": "At Wed, 22 Apr 2020 08:13:02 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Apr 21, 2020 at 06:09:30PM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 21 Apr 2020 17:34:26 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> >> a checkpoint record now, but this routine could be called elsewhere in\n> >> the future. Please see the attached.\n> > \n> > It looks fine to me.\n> \n> Fixed this way, then. Thanks for the report!\n\nThans for fixing this!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 22 Apr 2020 09:13:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: forgotten initalization of a variable" } ]
[ { "msg_contents": "Hi!\n\nI found concurrency bug in amcheck running on replica. When\nbtree_xlog_unlink_page() replays changes to replica, deleted page is\nleft with no items. But if amcheck steps on such deleted page\npalloc_btree_page() expects it would have items.\n\n(lldb_on_primary) b btbulkdelete\n\nprimary=# drop table test;\nprimary=# create table test as (select random() x from\ngenerate_series(1,1000000) i);\nprimary=# create index test_x_idx on test(x);\nprimary=# delete from test;\nprimary=# vacuum test;\n\n(lldb_on_replica) b bt_check_level_from_leftmost\n\nreplica=# select bt_index_check('test_x_idx');\n\n# skip to internal level\n(lldb_on_replica) c\n(lldb_on_replica) b palloc_btree_page\n# skip to non-leftmost page\n(lldb_on_replica) c\n(lldb_on_replica) c\n# concurrently delete btree pages\n(lldb_on_primary) c\n# continue with pages\n(lldb_on_replica) c\n\nFinally replica gets error.\nERROR: internal block 289 in index \"test_x_idx\" lacks high key and/or\nat least one downlink\n\nProposed fix is attached. Spotted by Konstantin Knizhnik,\nreproduction case and fix from me.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 21 Apr 2020 12:54:27 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Concurrency bug in amcheck" }, { "msg_contents": "On Tue, Apr 21, 2020 at 12:54 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> I found concurrency bug in amcheck running on replica. When\n> btree_xlog_unlink_page() replays changes to replica, deleted page is\n> left with no items. But if amcheck steps on such deleted page\n> palloc_btree_page() expects it would have items.\n\nI forgot to mention that I've reproduced it on master.
Quick check\nshows bug should exist since 11.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 21 Apr 2020 15:31:13 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "Hi,\n\nPeter, Just thought you might want to see this one...\n\nOn 2020-04-21 15:31:13 +0300, Alexander Korotkov wrote:\n> On Tue, Apr 21, 2020 at 12:54 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > I found concurrency bug in amcheck running on replica. When\n> > btree_xlog_unlink_page() replays changes to replica, deleted page is\n> > left with no items. But if amcheck steps on such deleted page\n> > palloc_btree_page() expects it would have items.\n> \n> I forgot to mention that I've reproduced it on master. Quick check\n> shows bug should exist since 11.\n\n- Andres\n\n\n", "msg_date": "Wed, 22 Apr 2020 08:57:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "On Tue, Apr 21, 2020 at 2:54 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Proposed fix is attached.
Spotted by Konstantin Knizhnik,\n> reproduction case and fix from me.\n\nI wonder if we should fix btree_xlog_unlink_page() instead of amcheck.\nWe already know that its failure to be totally consistent with the\nprimary causes problems for backwards scans -- this problem can be\nfixed at the same time:\n\nhttps://postgr.es/m/CANtu0ohkR-evAWbpzJu54V8eCOtqjJyYp3PQ_SGoBTRGXWhWRw@mail.gmail.com\n\nWe'd probably still use your patch for the backbranches if we went this way.\n\nWhat do you think?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 22 Apr 2020 09:22:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "On Wed, Apr 22, 2020 at 7:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Apr 21, 2020 at 2:54 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Proposed fix is attached. Spotted by Konstantin Knizhnik,\n> > reproduction case and fix from me.\n>\n> I wonder if we should fix btree_xlog_unlink_page() instead of amcheck.\n> We already know that its failure to be totally consistent with the\n> primary causes problems for backwards scans -- this problem can be\n> fixed at the same time:\n>\n> https://postgr.es/m/CANtu0ohkR-evAWbpzJu54V8eCOtqjJyYp3PQ_SGoBTRGXWhWRw@mail.gmail.com\n>\n> We'd probably still use your patch for the backbranches if we went this way.\n>\n> What do you think?\n\nI've skip through the thread. It seems to be quite independent issue\nfrom this one. This issue is related to the fact that we leave some\nitems on deleted pages on primary, and on the same time we have no\nitems on deleted pages on standby. This inconsistency cause amcheck\npass normally on primary, but fail on standby. BackwardScan on\nstandby issue seems to be related solely on locking protocol and\nbtpo_prev, btpo_next pointers.
It wasn't mention on that thread that\nwe might need hikeys on deleted pages.\n\nAssuming it doesn't seem we actually need any items on deleted pages,\nI can propose to delete them on primary as well. That would make\ncontents of primary and standby more consistent. What do you think?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 27 Apr 2020 11:51:51 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "On Mon, Apr 27, 2020 at 11:51 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Wed, Apr 22, 2020 at 7:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Tue, Apr 21, 2020 at 2:54 AM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > > Proposed fix is attached. Spotted by Konstantin Knizhnik,\n> > > reproduction case and fix from me.\n> >\n> > I wonder if we should fix btree_xlog_unlink_page() instead of amcheck.\n> > We already know that its failure to be totally consistent with the\n> > primary causes problems for backwards scans -- this problem can be\n> > fixed at the same time:\n> >\n> > https://postgr.es/m/CANtu0ohkR-evAWbpzJu54V8eCOtqjJyYp3PQ_SGoBTRGXWhWRw@mail.gmail.com\n> >\n> > We'd probably still use your patch for the backbranches if we went this way.\n> >\n> > What do you think?\n>\n> I've skip through the thread. It seems to be quite independent issue\n> from this one. This issue is related to the fact that we leave some\n> items on deleted pages on primary, and on the same time we have no\n> items on deleted pages on standby. This inconsistency cause amcheck\n> pass normally on primary, but fail on standby. BackwardScan on\n> standby issue seems to be related solely on locking protocol and\n> btpo_prev, btpo_next pointers.
It wasn't mention on that thread that\n> we might need hikeys on deleted pages.\n>\n> Assuming it doesn't seem we actually need any items on deleted pages,\n> I can propose to delete them on primary as well. That would make\n> contents of primary and standby more consistent. What do you think?\n\nSo, my proposal is following. Backpatch my fix upthread to 11. In\nmaster additionally make _bt_unlink_halfdead_page() remove page items\non primary as well. Do you have any objections?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 27 Apr 2020 14:17:41 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "On Mon, Apr 27, 2020 at 4:17 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> > Assuming it doesn't seem we actually need any items on deleted pages,\n> > I can propose to delete them on primary as well. That would make\n> > contents of primary and standby more consistent. What do you think?\n>\n> So, my proposal is following. Backpatch my fix upthread to 11. In\n> master additionally make _bt_unlink_halfdead_page() remove page items\n> on primary as well. Do you have any objections?\n\nWhat I meant was that we might as well review the behavior of\n_bt_unlink_halfdead_page() here, since we have to change it anyway.\nBut I think you're right: No matter what happens or doesn't happen to\n_bt_unlink_halfdead_page(), the fact is that deleted pages may or may\nnot have a single remaining item (which was originally the \"top\nparent\" item from the page at offset number P_HIKEY), now and forever.\nWe have to conservatively assume that it could be either state, now\nand forever. That means that we definitely have to give up on the\ncheck, per your patch.
So, +1 for backpatching that back to 11.\n\n(BTW, I don't think that this is a concurrency issue, except in the\nsense that a test case that recreates the false positive is sensitive\nto concurrency -- I imagine you agree with this.)\n\nI like your idea of making the primary consistent with the REDO\nroutine on the master branch only. I wonder if that will make it\npossible to change btree_mask() so that wal_consistency_checking can\ncheck deleted pages as well. The contents of a deleted page's special\narea do matter, and yet we don't currently verify that it matches (we\nuse mask_page_content() within btree_mask() for deleted pages, which\nseems inappropriately broad). In particular, the left and right\nsibling links should be consistent with the primary on a deleted page.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 27 Apr 2020 18:04:15 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "On Tue, Apr 28, 2020 at 4:05 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Apr 27, 2020 at 4:17 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > > Assuming it doesn't seem we actually need any items on deleted pages,\n> > > I can propose to delete them on primary as well. That would make\n> > > contents of primary and standby more consistent. What do you think?\n> >\n> > So, my proposal is following. Backpatch my fix upthread to 11. In\n> > master additionally make _bt_unlink_halfdead_page() remove page items\n> > on primary as well.
Do you have any objections?\n>\n> What I meant was that we might as well review the behavior of\n> _bt_unlink_halfdead_page() here, since we have to change it anyway.\n> But I think you're right: No matter what happens or doesn't happen to\n> _bt_unlink_halfdead_page(), the fact is that deleted pages may or may\n> not have a single remaining item (which was originally the \"top\n> parent\" item from the page at offset number P_HIKEY), now and forever.\n> We have to conservatively assume that it could be either state, now\n> and forever. That means that we definitely have to give up on the\n> check, per your patch. So, +1 for backpatching that back to 11.\n\nThank you. I've worked a bit on comments and commit message. I would\nappreciate you review.\n\n> (BTW, I don't think that this is a concurrency issue, except in the\n> sense that a test case that recreates the false positive is sensitive\n> to concurrency -- I imagine you agree with this.)\n\nYes, I agree it's related to concurrency, but not exactly concurrency issue.\n\n> I like your idea of making the primary consistent with the REDO\n> routine on the master branch only. I wonder if that will make it\n> possible to change btree_mask() so that wal_consistency_checking can\n> check deleted pages as well. The contents of a deleted page's special\n> area do matter, and yet we don't currently verify that it matches (we\n> use mask_page_content() within btree_mask() for deleted pages, which\n> seems inappropriately broad). In particular, the left and right\n> sibling links should be consistent with the primary on a deleted page.\n\nThank you.
2nd patch is proposed for master and makes btree page\nunlink remove all the items from the page being deleted.\n\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 11 May 2020 15:56:03 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "On Mon, May 11, 2020 at 5:56 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Thank you. I've worked a bit on comments and commit message. I would\n> appreciate you review.\n\nThis looks good to me.\n\n> > I like your idea of making the primary consistent with the REDO\n> > routine on the master branch only. I wonder if that will make it\n> > possible to change btree_mask() so that wal_consistency_checking can\n> > check deleted pages as well. The contents of a deleted page's special\n> > area do matter, and yet we don't currently verify that it matches (we\n> > use mask_page_content() within btree_mask() for deleted pages, which\n> > seems inappropriately broad). In particular, the left and right\n> > sibling links should be consistent with the primary on a deleted page.\n>\n> Thank you. 2nd patch is proposed for master and makes btree page\n> unlink remove all the items from the page being deleted.\n\nThis looks good, but can we do the\nwal_consistency_checking/btree_mask() improvement, too?\n\nThere is no reason why it can't work with fully deleted pages. It\nalready works with half-dead pages.
It would be nice to be able to\ntest this patch in that way, and it would be nice to have it in\ngeneral.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 May 2020 16:06:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "On Wed, May 13, 2020 at 4:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, May 11, 2020 at 5:56 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Thank you. 2nd patch is proposed for master and makes btree page\n> > unlink remove all the items from the page being deleted.\n>\n> This looks good, but can we do the\n> wal_consistency_checking/btree_mask() improvement, too?\n\nYou never got around to committing the second patch (or the\nwal_consistency_checking stuff). Are you planning on picking it up\nagain?\n\nI'm currently working on this bug fix from Michail Nikolaev:\n\nhttps://postgr.es/m/CANtu0ohkR-evAWbpzJu54V8eCOtqjJyYp3PQ_SGoBTRGXWhWRw@mail.gmail.com\n\nIt would be nice if you could commit your second patch at around the\nsame time. It's related IMV.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 31 Jul 2020 17:23:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "Hi, Peter!\n\nOn Sat, Aug 1, 2020 at 3:23 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Wed, May 13, 2020 at 4:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Mon, May 11, 2020 at 5:56 AM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > > Thank you. 2nd patch is proposed for master and makes btree page\n> > > unlink remove all the items from the page being deleted.\n> >\n> > This looks good, but can we do the\n> > wal_consistency_checking/btree_mask() improvement, too?\n>\n> You never got around to committing the second patch (or the\n> wal_consistency_checking stuff).
Are you planning on picking it up\n> again?\n>\n\nThank you for your reminder. Revised patch is attached. Now, the\ncontents of deleted btree pages isn't masked. I've checked that\ninstallcheck passes with wal_consistency_checking='Btree'. I'm going to\npush this if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 4 Aug 2020 17:27:09 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "Hi Alexander,\n\nOn Tue, Aug 4, 2020 at 7:27 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Thank you for your reminder. Revised patch is attached. Now, the contents of deleted btree pages isn't masked. I've checked that installcheck passes with wal_consistency_checking='Btree'. I'm going to push this if no objections.\n\nThis looks good to me. One small thing, though: maybe the comments\nshould not say anything about the REDO routine -- that seems like a\ncase of \"the tail wagging the dog\" to me. Perhaps say something like:\n\n\"Remove the last pivot tuple on the page. This keeps things simple\nfor WAL consistency checking.\"\n\n(Just a suggestion.)\n\nThanks!\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 4 Aug 2020 15:58:30 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "On Wed, Aug 5, 2020 at 1:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Aug 4, 2020 at 7:27 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > Thank you for your reminder. Revised patch is attached. Now, the contents of deleted btree pages isn't masked. I've checked that installcheck passes with wal_consistency_checking='Btree'. I'm going to push this if no objections.\n>\n> This looks good to me. One small thing, though: maybe the comments\n> should not say anything about the REDO routine -- that seems like a\n> case of \"the tail wagging the dog\" to me.
Perhaps say something like:\n>\n> \"Remove the last pivot tuple on the page. This keeps things simple\n> for WAL consistency checking.\"\n\nPushed. Comment is changed as you suggested. But I've replaced \"last\npivot tuple\" with \"remaining tuples\", because the page can also have a\nhigh key, which is also a tuple.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 5 Aug 2020 02:17:48 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Concurrency bug in amcheck" }, { "msg_contents": "On Tue, Aug 4, 2020 at 4:18 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Pushed. Comment is changed as you suggested. But I've replaced \"last\n> pivot tuple\" with \"remaining tuples\", because the page can also have a\n> high key, which is also a tuple.\n\nYou're right, of course.\n\nThanks again\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 4 Aug 2020 16:19:03 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Concurrency bug in amcheck" } ]
[ { "msg_contents": "Hi,\n\nI have some code that I've been using in production that supports adding\nand authenticating Windows groups via the pg_ident file. It has a new\nindicator (+), that signifies the identifier is a Windows group, as in the\nfollowing example:\n\n# MAPNAME SYSTEM-USERNAME PG-USERNAME\n\"Users\" \"+User group\" postgres\n\nA new function was added to test if a user token is in the windows group:\n\n/*\n * Check if the user (sspiToken) is a member of the specified group\n */\nstatic BOOL\nsspi_user_is_in_group(HANDLE sspiToken, LPCTSTR groupName)\n\nI wanted to share this as a patch for the latest, as soon as I port it to\nv12. Does this sound reasonable?\n\nthanks,\nRussell\n\nHi,I have some code that I've been using in production that supports adding and authenticating Windows groups via the pg_ident file.  It has a new indicator (+), that signifies the identifier is a Windows group, as in the following example:# MAPNAME   SYSTEM-USERNAME       PG-USERNAME    \"Users\" \"+User group\"   postgresA new function was added to test if a user token is in the windows group:/* * Check if the user (sspiToken) is a member of the specified group */static BOOLsspi_user_is_in_group(HANDLE sspiToken, LPCTSTR groupName)I wanted to share this as a patch for the latest, as soon as I port it to v12.  Does this sound reasonable?thanks,Russell", "msg_date": "Tue, 21 Apr 2020 09:48:09 -0400", "msg_from": "The Dude <russman7474@gmail.com>", "msg_from_op": true, "msg_subject": "[SSPI] Windows group support" } ]
[ { "msg_contents": "Hi,\n\nWe are  getting a server crash on zlinux machine  if we set \njit_above_cost=0 in postgresql.conf file after configuring  PG v12 \nserver  with --with-llvm ( llvm-ttoolset-6.0)\n\nWe configured  PG v12 sources with switch --with-llvm  ( after setting \nthese variables on command prompt )\n\n  export \nLD_LIBRARY_PATH=/opt/rh/llvm-toolset-6.0/root/usr/lib64:$LD_LIBRARY_PATH\n  export LLVM_CONFIG=/opt/rh/llvm-toolset-6.0/root/usr/bin/llvm-config\n  export CLANG=/opt/rh/llvm-toolset-6.0/root/usr/bin/clang\n  export LDFLAGS=\"-Wl,-rpath,/opt/rh/llvm-toolset-6.0/root/lib64 \n${LDFLAGS}\"; export LDFLAGS\n\npostgresql.conf file -\n\n\"\nshared_preload_libraries=$libdir/llvmjit' ,\njit_provider = 'llvmjit'  ,\njit_above_cost = 0\njit=on,\n\"\n\nable to see the crash  against any sql query\n\npsql (12.2)\nType \"help\" for help.\n\npostgres=# select 5;\n2020-04-21 07:33:15.980 CDT [48149] DEBUG:  StartTransaction(1) name: \nunnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0\n2020-04-21 07:33:15.980 CDT [48149] DEBUG:  probing availability of JIT \nprovider at /home/edb/pg/edb/edbpsql/lib/postgresql/llvmjit.so\n2020-04-21 07:33:15.980 CDT [48149] DEBUG:  successfully loaded JIT \nprovider in current session\n2020-04-21 07:33:15.981 CDT [48149] DEBUG:  LLVMJIT detected CPU \"z13\", \nwith features \"\"\nterminate called after throwing an instance of 'std::bad_function_call'\n   what():  bad_function_call\nserver closed the connection unexpectedly\n     This probably means the server terminated abnormally\n     before or while processing the request.\nThe connection to the server was lost.
Attempting reset: 2020-04-21 \n07:33:16.476 CDT [48137] DEBUG:  reaping dead processes\n\n*Stack trace*\n\n[edb@etpgabc bin]$ gdb -q -c data/core.31542 postgres\nReading symbols from /home/edb/pg/edb/edbpsql/bin/postgres...done.\n[New LWP 31542]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: edb postgres [local] SELECT '.\nProgram terminated with signal 6, Aborted.\n#0  0x000003ffa9841220 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install \nglibc-2.17-260.el7.s390x libedit-3.0-12.20121213cvs.el7.s390x \nlibffi-3.0.13-18.el7.s390x libgcc-4.8.5-39.el7.s390x \nlibstdc++-4.8.5-39.el7.s390x \nllvm-toolset-6.0-llvm-libs-6.0.1-5.el7.s390x \nncurses-libs-5.9-14.20130511.el7_4.s390x zlib-1.2.7-18.el7.s390x\n(gdb) bt\n#0  0x000003ffa9841220 in raise () from /lib64/libc.so.6\n#1  0x000003ffa9842aa8 in abort () from /lib64/libc.so.6\n#2  0x000003ff9f7881b4 in __gnu_cxx::__verbose_terminate_handler() () \nfrom /lib64/libstdc++.so.6\n#3  0x000003ff9f785c7e in ??
() from /lib64/libstdc++.so.6\n#4  0x000003ff9f785cb6 in std::terminate() () from /lib64/libstdc++.so.6\n#5  0x000003ff9f785f60 in __cxa_throw () from /lib64/libstdc++.so.6\n#6  0x000003ff9f7e4468 in std::__throw_bad_function_call() () from \n/lib64/libstdc++.so.6\n#7  0x000003ffa139e5c4 in \nstd::function<std::unique_ptr<llvm::orc::IndirectStubsManager, \nstd::default_delete<llvm::orc::IndirectStubsManager> > ()>::operator()() \nconst () from /opt/rh/llvm-toolset-6.0/root/usr/lib64/libLLVM-6.0.so\n#8  0x000003ffa139f2a8 in LLVMOrcCreateInstance () from \n/opt/rh/llvm-toolset-6.0/root/usr/lib64/libLLVM-6.0.so\n#9  0x000003ffa9c8a984 in llvm_session_initialize () at llvmjit.c:670\n#10 llvm_create_context (jitFlags=<optimized out>) at llvmjit.c:146\n#11 0x000003ffa9c98992 in llvm_compile_expr (state=0xa8c52218) at \nllvmjit_expr.c:131\n#12 0x0000000080219986 in ExecReadyExpr (state=state@entry=0xa8c52218) \nat execExpr.c:628\n#13 0x000000008021cd6e in ExecBuildProjectionInfo (targetList=<optimized \nout>, econtext=<optimized out>, slot=<optimized out>, \nparent=parent@entry=0xa8c51e30, inputDesc=inputDesc@entry=0x0) at \nexecExpr.c:472\n#14 0x0000000080232ed6 in ExecAssignProjectionInfo \n(planstate=planstate@entry=0xa8c51e30, inputDesc=inputDesc@entry=0x0) at \nexecUtils.c:504\n#15 0x0000000080250178 in ExecInitResult (node=node@entry=0xa8c4fb98, \nestate=estate@entry=0xa8c51bf0, eflags=eflags@entry=16) at nodeResult.c:221\n#16 0x000000008022c72c in ExecInitNode (node=node@entry=0xa8c4fb98, \nestate=estate@entry=0xa8c51bf0, eflags=eflags@entry=16) at \nexecProcnode.c:164\n#17 0x000000008022675e in InitPlan (eflags=16, queryDesc=0xa8c4f7d0) at \nexecMain.c:1020\n#18 standard_ExecutorStart (queryDesc=0xa8c4f7d0, eflags=16) at \nexecMain.c:266\n#19 0x0000000080388868 in PortalStart (portal=portal@entry=0xa8c91c80, \nparams=params@entry=0x0, eflags=eflags@entry=0, \nsnapshot=snapshot@entry=0x0) at pquery.c:518\n#20 0x0000000080384b2e in exec_simple_query
\n(query_string=query_string@entry=0xa8c06170 \"select 5;\") at postgres.c:1176\n#21 0x00000000803852e0 in PostgresMain (argc=<optimized out>, \nargv=argv@entry=0xa8c55db8, dbname=0xa8c55c80 \"postgres\", \nusername=<optimized out>) at postgres.c:4247\n#22 0x000000008008007e in BackendRun (port=<optimized out>, \nport=<optimized out>) at postmaster.c:4437\n#23 BackendStartup (port=0xa8c4dc10) at postmaster.c:4128\n#24 ServerLoop () at postmaster.c:1704\n#25 0x000000008030c89e in PostmasterMain (argc=argc@entry=3, \nargv=argv@entry=0xa8c00cc0) at postmaster.c:1377\n#26 0x00000000800811f4 in main (argc=<optimized out>, argv=0xa8c00cc0) \nat main.c:228\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 21 Apr 2020 20:04:15 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[IBM z Systems] Getting server crash when jit_above_cost =0" }, { "msg_contents": "On Wed, Apr 22, 2020 at 2:34 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> (gdb) bt\n> #0 0x000003ffa9841220 in raise () from /lib64/libc.so.6\n> #1 0x000003ffa9842aa8 in abort () from /lib64/libc.so.6\n> #2 0x000003ff9f7881b4 in __gnu_cxx::__verbose_terminate_handler() () from /lib64/libstdc++.so.6\n> #3 0x000003ff9f785c7e in ??
() from /lib64/libstdc++.so.6\n> #4 0x000003ff9f785cb6 in std::terminate() () from /lib64/libstdc++.so.6\n> #5 0x000003ff9f785f60 in __cxa_throw () from /lib64/libstdc++.so.6\n> #6 0x000003ff9f7e4468 in std::__throw_bad_function_call() () from /lib64/libstdc++.so.6\n> #7 0x000003ffa139e5c4 in std::function<std::unique_ptr<llvm::orc::IndirectStubsManager, std::default_delete<llvm::orc::IndirectStubsManager> > ()>::operator()() const () from /opt/rh/llvm-toolset-6.0/root/usr/lib64/libLLVM-6.0.so\n> #8 0x000003ffa139f2a8 in LLVMOrcCreateInstance () from /opt/rh/llvm-toolset-6.0/root/usr/lib64/libLLVM-6.0.so\n> #9 0x000003ffa9c8a984 in llvm_session_initialize () at llvmjit.c:670\n> #10 llvm_create_context (jitFlags=<optimized out>) at llvmjit.c:146\n> #11 0x000003ffa9c98992 in llvm_compile_expr (state=0xa8c52218) at llvmjit_expr.c:131\n> #12 0x0000000080219986 in ExecReadyExpr (state=state@entry=0xa8c52218) at execExpr.c:628\n> #13 0x000000008021cd6e in ExecBuildProjectionInfo (targetList=<optimized out>, econtext=<optimized out>, slot=<optimized out>, parent=parent@entry=0xa8c51e30, inputDesc=inputDesc@entry=0x0) at execExpr.c:472\n> #14 0x0000000080232ed6 in ExecAssignProjectionInfo (planstate=planstate@entry=0xa8c51e30, inputDesc=inputDesc@entry=0x0) at execUtils.c:504\n> #15 0x0000000080250178 in ExecInitResult (node=node@entry=0xa8c4fb98, estate=estate@entry=0xa8c51bf0, eflags=eflags@entry=16) at nodeResult.c:221\n> #16 0x000000008022c72c in ExecInitNode (node=node@entry=0xa8c4fb98, estate=estate@entry=0xa8c51bf0, eflags=eflags@entry=16) at execProcnode.c:164\n> #17 0x000000008022675e in InitPlan (eflags=16, queryDesc=0xa8c4f7d0) at execMain.c:1020\n> #18 standard_ExecutorStart (queryDesc=0xa8c4f7d0, eflags=16) at execMain.c:266\n> #19 0x0000000080388868 in PortalStart (portal=portal@entry=0xa8c91c80, params=params@entry=0x0, eflags=eflags@entry=0, snapshot=snapshot@entry=0x0) at pquery.c:518\n> #20 0x0000000080384b2e in exec_simple_query
(query_string=query_string@entry=0xa8c06170 \"select 5;\") at postgres.c:1176\n> #21 0x00000000803852e0 in PostgresMain (argc=<optimized out>, argv=argv@entry=0xa8c55db8, dbname=0xa8c55c80 \"postgres\", username=<optimized out>) at postgres.c:4247\n> #22 0x000000008008007e in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4437\n> #23 BackendStartup (port=0xa8c4dc10) at postmaster.c:4128\n> #24 ServerLoop () at postmaster.c:1704\n> #25 0x000000008030c89e in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0xa8c00cc0) at postmaster.c:1377\n> #26 0x00000000800811f4 in main (argc=<optimized out>, argv=0xa8c00cc0) at main.c:228\n\nHi Tushar,\n\nWhen testing this stuff on a few different platforms, I ran into a\nswitch statement in llvm that returned an empty std::function<> that\nwould throw std::bad_function_call like that, on architectures other\nthan (IIRC) x86 and ARM:\n\nhttps://www.postgresql.org/message-id/CAEepm%3D39F_B3Ou8S3OrUw%2BhJEUP3p%3DwCu0ug-TTW67qKN53g3w%40mail.gmail.com\n\nI'm not sure if you're seeing the same problem or another similar one,\nbut I know that Andres got a patch along those lines into llvm. Maybe\nyou could try on a more recent llvm release?\n\n\n", "msg_date": "Wed, 22 Apr 2020 09:10:25 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [IBM z Systems] Getting server crash when jit_above_cost =0" }, { "msg_contents": "On 4/22/20 2:40 AM, Thomas Munro wrote:\n> I'm not sure if you're seeing the same problem or another similar one,\n> but I know that Andres got a patch along those lines into llvm. Maybe\n> you could try on a more recent llvm release?\nThanks a lot Thomas. it is working fine with llvm-toolset-7.0. 
It looks \nlike the issue is with llvm-toolset-6.0.\nYesterday, when we installed llvm-toolset-7 (yum install \nllvm-toolset-7.0), there was no llvm-config available under \n/opt/rh/llvm-toolset-7.0/root/usr/bin/,\nso we chose llvm-toolset-6 with PG v12.\nToday we fired the same yum command again using an asterisk; now all \nthe required files have been placed under the llvm-toolset-7 directory and \nthings look fine.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 22 Apr 2020 12:15:31 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [IBM z Systems] Getting server crash when jit_above_cost =0" } ]
[ { "msg_contents": "Hi. I read the thread. \n\nProbably this fiddle will be helpful for testing:\n\nhttps://dbfiddle.uk/?rdbms=postgres_12&fiddle=abe845142a5099d921d3729043fb8491\n\nI recently encountered a problem:\nWhy Window-specific functions do not allow DISTINCT to be used within the function argument list?\n\nsum( DISTINCT order_cost ) OVER ( PARTITION BY invoice_id ORDER BY invoice_id, group_id RANGE unbound preceeding and unbound following )\n\nbehavior is quite deterministic:\n\nORDER BY will create peers in partition\nDISTINCT will get only one peer\n\nI resolve my problem via two subqueries, but it seems this logic may\nbe applied to window functions (did not check this for other functions thought)\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Tue, 21 Apr 2020 18:06:11 +0300", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "[PATCH] distinct aggregates within a window function WIP" }, { "msg_contents": "On 4/21/20 5:06 PM, Eugen Konkov wrote:\n> Hi. I read the thread.\n> \n> Probably this fiddle will be helpful for testing:\n> \n> https://dbfiddle.uk/?rdbms=postgres_12&fiddle=abe845142a5099d921d3729043fb8491\n> \n> I recently encountered a problem:\n> Why Window-specific functions do not allow DISTINCT to be used within the function argument list?\n> \n> sum( DISTINCT order_cost ) OVER ( PARTITION BY invoice_id ORDER BY invoice_id, group_id RANGE unbound preceeding and unbound following )\n> \n> behavior is quite deterministic:\n> \n> ORDER BY will create peers in partition\n> DISTINCT will get only one peer\n> \n> I resolve my problem via two subqueries, but it seems this logic may\n> be applied to window functions (did not check this for other functions thought)\n\nSorry, I do not follow. 
What problem did you encounter?\n\nAndreas\n\n\n\n", "msg_date": "Tue, 21 Apr 2020 17:17:00 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP" }, { "msg_contents": "Hello Andreas,\n\nTuesday, April 21, 2020, 6:17:00 PM, you wrote:\n\n> On 4/21/20 5:06 PM, Eugen Konkov wrote:\n>> Hi. I read the thread.\n>> \n>> Probably this fiddle will be helpful for testing:\n>> \n>> https://dbfiddle.uk/?rdbms=postgres_12&fiddle=abe845142a5099d921d3729043fb8491\n>> \n>> I recently encountered a problem:\n>> Why Window-specific functions do not allow DISTINCT to be used within the function argument list?\n>> \n>> sum( DISTINCT order_cost ) OVER ( PARTITION BY invoice_id ORDER BY invoice_id, group_id RANGE unbound preceeding and unbound following )\n>> \n>> behavior is quite deterministic:\n>> \n>> ORDER BY will create peers in partition\n>> DISTINCT will get only one peer\n>> \n>> I resolve my problem via two subqueries, but it seems this logic may\n>> be applied to window functions (did not check this for other functions thought)\n\n> Sorry, I do not follow. 
What problem did you encounter?\n\nLack of DISTINCT for window function SUM\n\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Wed, 22 Apr 2020 10:05:19 +0300", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP" }, { "msg_contents": "I resolve my problem https://stackoverflow.com/a/67167595/4632019:\n\nCould it be possible PG will use `filter` trick when DISTINCT is used: `sum (distinct suma)`?\nThis will benefit to not write second SELECT\n\n\nhttps://www.postgresql.org/message-id/CAN1PwonqojSAP_N91zO5Hm7Ta4Mdib-2YuUaEd0NP6Fn6XutzQ%40mail.gmail.com\n>About yours additional note, I think that it is not possible to get easy\nthe same result with appropriate use of window framing options, \n\nCan you try next approach?\n\n\n\nMy approach is https://dbfiddle.uk/?rdbms=postgres_13&fiddle=01c699f3f47ca9fca8215f8cbf556218:\nAssign row_number at each order: row_number() over (partition by agreement_id, order_id ) as nrow\nTake only first suma: filter nrow = 1\n\n\n\nwith data as (\n select * from (values\n ( 1, 1, 1, 1.0049 ), (2, 1,1,1.0049), ( 3, 1,1,1.0049 ) ,\n ( 4, 1, 2, 1.0049 ), (5, 1,2,1.0057),\n ( 6, 2, 1, 1.53 ), ( 7,2,1,2.18), ( 8,2,2,3.48 )\n ) t (id, agreement_id, order_id, suma)\n),\nintermediate as (select\n *,\n row_number() over (partition by agreement_id, order_id ) as nrow,\n (sum( suma ) over ( partition by agreement_id, order_id ))::numeric( 10, 2) as order_suma,\nfrom data)\n\nselect\n *,\n sum( order_suma ) filter (where nrow = 1) over (partition by agreement_id)\nfrom intermediate\n\n\nWednesday, April 22, 2020, 10:05:19 AM, you wrote:\n\n> Hello Andreas,\n\n> Tuesday, April 21, 2020, 6:17:00 PM, you wrote:\n\n>> On 4/21/20 5:06 PM, Eugen Konkov wrote:\n>>> Hi. 
I read the thread.\n\n>>> Probably this fiddle will be helpful for testing:\n\n>>> https://dbfiddle.uk/?rdbms=postgres_12&fiddle=abe845142a5099d921d3729043fb8491\n\n>>> I recently encountered a problem:\n>>> Why Window-specific functions do not allow DISTINCT to be used within the function argument list?\n\n>>> sum( DISTINCT order_cost ) OVER ( PARTITION BY invoice_id ORDER BY invoice_id, group_id RANGE unbound preceeding and unbound following )\n\n>>> behavior is quite deterministic:\n\n>>> ORDER BY will create peers in partition\n>>> DISTINCT will get only one peer\n\n>>> I resolve my problem via two subqueries, but it seems this logic may\n>>> be applied to window functions (did not check this for other functions thought)\n\n>> Sorry, I do not follow. What problem did you encounter?\n\n> Lack of DISTINCT for window function SUM\n\n\n\n\n\n\n--\nBest regards,\nEugen Konkov", "msg_date": "Thu, 13 May 2021 12:18:41 +0300", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP" } ]
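The two-step workaround Eugen describes above — compute each order's rounded total once, then add those totals exactly once per agreement — can be sketched outside SQL as well. The following pure-Python sketch (sample data taken from the fiddle in the thread; the function name `agreement_totals` is ours for illustration, not anything from PostgreSQL) mimics the effect of `sum( order_suma ) filter (where nrow = 1) over (partition by agreement_id)`:

```python
# Rows of (id, agreement_id, order_id, suma), same sample data as the fiddle.
rows = [
    (1, 1, 1, 1.0049), (2, 1, 1, 1.0049), (3, 1, 1, 1.0049),
    (4, 1, 2, 1.0049), (5, 1, 2, 1.0057),
    (6, 2, 1, 1.53), (7, 2, 1, 2.18), (8, 2, 2, 3.48),
]

def agreement_totals(rows):
    """Sum suma per (agreement_id, order_id), round like ::numeric(10,2),
    then add each order's total exactly once per agreement_id -- the effect
    of sum(order_suma) FILTER (WHERE nrow = 1) OVER (PARTITION BY agreement_id)."""
    order_sums = {}          # (agreement_id, order_id) -> raw order total
    for _id, agr, order, suma in rows:
        order_sums[(agr, order)] = order_sums.get((agr, order), 0.0) + suma
    totals = {}              # agreement_id -> sum of its rounded order totals
    for (agr, _order), s in order_sums.items():
        totals[agr] = totals.get(agr, 0.0) + round(s, 2)
    return totals

# Agreement 1 totals 5.02, agreement 2 totals 7.19 (modulo float rounding).
print(agreement_totals(rows))
```

Each order's total is counted once regardless of how many rows it spans, which is exactly what the `nrow = 1` filter achieves in the SQL version.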
[ { "msg_contents": "Hi All,\n I was pleasantly surprised to see that triggers can be created on FDW tables. I'm running into a problem.\n\nI create a trigger on an imported foreign table. In the procedure, I change the value of a column that is not in the triggering update statement. This change does not make it to the mysql side.\n\nCREATE OR REPLACE FUNCTION aatrigger_up() returns trigger\nAS $$\nDECLARE\nBEGIN\n\t\n\tIF NOT(row_to_json(NEW)->'pgrti' is NULL) THEN\n\t\tNEW.pgrti = 2000000000*random();\n\tEND IF;\n RAISE NOTICE 'aarigger_up %', row_to_json(NEW)::text;\n return NEW;\n\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE TRIGGER aarigger_up BEFORE UPDATE ON mysql.users FOR EACH ROW EXECUTE PROCEDURE aarigger_up();\nupdate mysql.users set email = 'admin@example.com' where id = 1;\t\nI can see that the value for pgrti is updated in the NOTICE in postgres. In mysql the value is not updated. If I add the target col to the statement it does go through\n\nupdate mysql.users set email = 'admin@example.com', pgrti=0 where id = 1;\t\n I need this to work to be able to detect CRUD coming from PG in a little deamon that calls pg_triggers for updates coming from mysqld; without a means to detect changes originating from pg the triggers would fire twice. Any idea where I'd change MYSQL_FDW to do this (also add fields that are updated in the trigger before firing off to mysql)?\n\nI’m seeing in https://github.com/EnterpriseDB/mysql_fdw/blob/master/deparse.c <https://github.com/EnterpriseDB/mysql_fdw/blob/master/deparse.c> in \nmysql_deparse_update\n\nThat the actual update statement is used to generate the mapping, so any col referred to in triggers would be ignored…\n\n\nTIA, stay safe!\nFrancois Payette", "msg_date": "Tue, 21 Apr 2020 19:09:03 -0400", "msg_from": "Francois Payette <francoisp@netmosphere.net>", "msg_from_op": true, "msg_subject": "MYSQL_FDW trigger BEFORE UPDATE changes to NEW on a col not in the\n update statement don't go through" }, { "msg_contents": "Hi Francois,\n\nOn Wed, Apr 22, 2020 at 8:09 AM Francois Payette\n<francoisp@netmosphere.net> wrote:\n> I create a trigger on an imported foreign table. In the procedure, I change the value of a column that is not in the triggering update statement. 
This change does not make it to the mysql side.\n\nI'm not an expert on mysql_fdw, so maybe I'm missing something, but I\nthink we had the same issue in postgres_fdw. See this:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8b6da83d162cb0ac9f6d21082727bbd45c972c53;hp=7dc6ae37def50b5344c157eee5e029a09359f8ee\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 22 Apr 2020 10:25:14 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MYSQL_FDW trigger BEFORE UPDATE changes to NEW on a col not in\n the update statement don't go through" } ]
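The postgres_fdw commit Etsuro links above fixed essentially this class of bug: the deparser built the remote SET list only from the columns assigned in the local UPDATE, so a column modified on NEW by a BEFORE UPDATE trigger never reached the remote statement. The failure mode is easy to simulate in isolation — the sketch below is plain Python, not FDW code, and every name in it is made up for illustration:

```python
def deparse_update(table, target_cols, new_row):
    """Build a remote UPDATE from the NEW row, but only for the columns the
    deparser believes were assigned (loosely mirroring the mapping step
    described for mysql_deparse_update in the message above)."""
    sets = ", ".join("{} = {!r}".format(col, new_row[col]) for col in target_cols)
    return "UPDATE {} SET {}".format(table, sets)

# Local statement: UPDATE users SET email = '...'.  A BEFORE UPDATE row
# trigger then also sets NEW.pgrti, so the final NEW row carries both changes.
new_row = {"email": "admin@example.com", "pgrti": 1234567890}

# Buggy behaviour: the column mapping comes from the original statement alone,
# so the trigger's change to pgrti is silently dropped on the remote side.
buggy = deparse_update("users", ["email"], new_row)

# Fixed behaviour (roughly what the referenced postgres_fdw commit does):
# include every column a BEFORE row trigger may have modified.
fixed = deparse_update("users", ["email", "pgrti"], new_row)

print(buggy)   # no pgrti assignment in the remote statement
print(fixed)   # pgrti assignment included
```

The fix, conceptually, is to widen the set of deparsed columns whenever row-level BEFORE triggers exist, since the trigger may touch any column of NEW.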
[ { "msg_contents": "Hi,\n\nI tried to install PG v11 and v12 on IBM z/OS using the YUM command, \nfollowing https://wiki.postgresql.org/wiki/YUM_Installation_on_z_Systems, \nand found two issues:\n\n1) RPM packages are failing due to \"Package ..... is not signed\":\n\nPG v12 -\nPackage postgresql12-contrib-12.1-2PGDG.rhel7.s390x.rpm is not signed\nPG v11 -\nPackage postgresql11-libs-11.6-2PGDG.rhel7.s390x.rpm is not signed\n\n2) RPM packages are NOT updated: they still show versions 11.6 and 12.1, \nwhereas the latest released versions are 11.7 and 12.2.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 22 Apr 2020 12:03:01 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[IBM z Systems] Rpm package issues." } ]
[ { "msg_contents": "Hi,\n\nGetting a server crash while creating partition table which have\nself-referencing foreign key\n\npostgres=# CREATE TABLE part1 (c1 int PRIMARY KEY, c2 int REFERENCES part1)\nPARTITION BY LIST (c1);\nCREATE TABLE\npostgres=# CREATE TABLE part1_p1 PARTITION OF part1 FOR VALUES IN (1);\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\n--stack-trace\n[edb@localhost bin]$ gdb -q -c data/core.16883 postgres\nCore was generated by `postgres: edb postgres [local] CREATE TABLE\n '.\nProgram terminated with signal 6, Aborted.\n#0 0x00000039212324f5 in raise (sig=6) at\n../nptl/sysdeps/unix/sysv/linux/raise.c:64\n64 return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);\nMissing separate debuginfos, use: debuginfo-install\nkeyutils-libs-1.4-5.el6.x86_64 krb5-libs-1.10.3-65.el6.x86_64\nlibcom_err-1.41.12-24.el6.x86_64 libgcc-4.4.7-23.el6.x86_64\nlibselinux-2.0.94-7.el6.x86_64 openssl-1.0.1e-58.el6_10.x86_64\nzlib-1.2.3-29.el6.x86_64\n(gdb) bt\n#0 0x00000039212324f5 in raise (sig=6) at\n../nptl/sysdeps/unix/sysv/linux/raise.c:64\n#1 0x0000003921233cd5 in abort () at abort.c:92\n#2 0x0000000000acd16a in ExceptionalCondition (conditionName=0xc32310\n\"numfks == attmap->maplen\", errorType=0xc2ea23 \"FailedAssertion\",\nfileName=0xc2f0bf \"tablecmds.c\", lineNumber=9046) at assert.c:67\n#3 0x00000000006d1b6c in CloneFkReferenced (parentRel=0x7f3c80be2400,\npartitionRel=0x7f3c80be2a50) at tablecmds.c:9046\n#4 0x00000000006d189b in CloneForeignKeyConstraints (wqueue=0x0,\nparentRel=0x7f3c80be2400, partitionRel=0x7f3c80be2a50) at tablecmds.c:8939\n#5 0x00000000006c09a8 in DefineRelation (stmt=0x2ff25b8, relkind=114 'r',\nownerId=10, typaddress=0x0, queryString=0x2f19810 \"CREATE TABLE part1_p1\nPARTITION OF part1 FOR VALUES IN (1);\")\n at tablecmds.c:1151\n#6 0x0000000000953021 in ProcessUtilitySlow 
(pstate=0x2ff24a0,\npstmt=0x2f1a588, queryString=0x2f19810 \"CREATE TABLE part1_p1 PARTITION OF\npart1 FOR VALUES IN (1);\", context=PROCESS_UTILITY_TOPLEVEL,\n params=0x0, queryEnv=0x0, dest=0x2f1a868, qc=0x7ffffc1faa10) at\nutility.c:1154\n#7 0x0000000000952dfe in standard_ProcessUtility (pstmt=0x2f1a588,\nqueryString=0x2f19810 \"CREATE TABLE part1_p1 PARTITION OF part1 FOR VALUES\nIN (1);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n queryEnv=0x0, dest=0x2f1a868, qc=0x7ffffc1faa10) at utility.c:1067\n#8 0x0000000000951d18 in ProcessUtility (pstmt=0x2f1a588,\nqueryString=0x2f19810 \"CREATE TABLE part1_p1 PARTITION OF part1 FOR VALUES\nIN (1);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n dest=0x2f1a868, qc=0x7ffffc1faa10) at utility.c:522\n#9 0x0000000000950b48 in PortalRunUtility (portal=0x2f808c0,\npstmt=0x2f1a588, isTopLevel=true, setHoldSnapshot=false, dest=0x2f1a868,\nqc=0x7ffffc1faa10) at pquery.c:1157\n#10 0x0000000000950d6e in PortalRunMulti (portal=0x2f808c0,\nisTopLevel=true, setHoldSnapshot=false, dest=0x2f1a868, altdest=0x2f1a868,\nqc=0x7ffffc1faa10) at pquery.c:1303\n#11 0x000000000095023a in PortalRun (portal=0x2f808c0,\ncount=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2f1a868,\naltdest=0x2f1a868, qc=0x7ffffc1faa10) at pquery.c:779\n#12 0x000000000094a2a3 in exec_simple_query (query_string=0x2f19810 \"CREATE\nTABLE part1_p1 PARTITION OF part1 FOR VALUES IN (1);\") at postgres.c:1239\n#13 0x000000000094e38e in PostgresMain (argc=1, argv=0x2f44998,\ndbname=0x2f448b0 \"postgres\", username=0x2f44890 \"edb\") at postgres.c:4315\n#14 0x000000000089ba5d in BackendRun (port=0x2f3c7f0) at postmaster.c:4510\n#15 0x000000000089b24c in BackendStartup (port=0x2f3c7f0) at\npostmaster.c:4202\n#16 0x00000000008975be in ServerLoop () at postmaster.c:1727\n#17 0x0000000000896f07 in PostmasterMain (argc=3, argv=0x2f14240) at\npostmaster.c:1400\n#18 0x00000000007999cc in main (argc=3, argv=0x2f14240) at main.c:210\n\nThanks & 
Regards,\nRajkumar Raghuwanshi", "msg_date": "Wed, 22 Apr 
2020 13:20:55 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": true, "msg_subject": "create partition table caused server crashed with self-referencing\n foreign key" }, { "msg_contents": "On Wed, Apr 22, 2020 at 1:21 PM Rajkumar Raghuwanshi <\nrajkumar.raghuwanshi@enterprisedb.com> wrote:\n\n> Hi,\n>\n> Getting a server crash while creating partition table which have\n> self-referencing foreign key\n>\n> postgres=# CREATE TABLE part1 (c1 int PRIMARY KEY, c2 int REFERENCES\n> part1) PARTITION BY LIST (c1);\n> CREATE TABLE\n> postgres=# CREATE TABLE part1_p1 PARTITION OF part1 FOR VALUES IN (1);\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n>\n> --stack-trace\n> [edb@localhost bin]$ gdb -q -c data/core.16883 postgres\n> Core was generated by `postgres: edb postgres [local] CREATE TABLE\n> '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00000039212324f5 in raise (sig=6) at\n> ../nptl/sysdeps/unix/sysv/linux/raise.c:64\n> 64 return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);\n> Missing separate debuginfos, use: debuginfo-install\n> keyutils-libs-1.4-5.el6.x86_64 krb5-libs-1.10.3-65.el6.x86_64\n> libcom_err-1.41.12-24.el6.x86_64 libgcc-4.4.7-23.el6.x86_64\n> libselinux-2.0.94-7.el6.x86_64 openssl-1.0.1e-58.el6_10.x86_64\n> zlib-1.2.3-29.el6.x86_64\n> (gdb) bt\n> #0 0x00000039212324f5 in raise (sig=6) at\n> ../nptl/sysdeps/unix/sysv/linux/raise.c:64\n> #1 0x0000003921233cd5 in abort () at abort.c:92\n> #2 0x0000000000acd16a in ExceptionalCondition (conditionName=0xc32310\n> \"numfks == attmap->maplen\", errorType=0xc2ea23 \"FailedAssertion\",\n> fileName=0xc2f0bf \"tablecmds.c\", lineNumber=9046) at assert.c:67\n>\n\nLooks like this assertion is incorrect, I guess it should have check\nnumfks <= attmap->maplen instead.\n\nRegards,\nAmul", "msg_date": "Wed, 22 Apr 2020 13:40:50 +0530", "msg_from": "amul sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create partition table caused server crashed with\n self-referencing foreign key" }, { "msg_contents": "On Wed, 22 Apr 2020 at 20:11, amul sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Apr 22, 2020 at 1:21 PM Rajkumar Raghuwanshi 
<rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>> #2 0x0000000000acd16a in ExceptionalCondition (conditionName=0xc32310 \"numfks == attmap->maplen\", errorType=0xc2ea23 \"FailedAssertion\", fileName=0xc2f0bf \"tablecmds.c\", lineNumber=9046) at assert.c:67\n>\n>\n> Looks like this assertion is incorrect, I guess it should have check\n> numfks <= attmap->maplen instead.\n\nEven that seems like a very strange thing to Assert. Basically it's\nsaying, make sure the number of columns in the foreign key constraint\nis less than or equal to the number of attributes in parentRel.\n\nIt's true we do disallow duplicate column names in the foreign key\nconstraint (at least since 9da867537), but why do we want an Assert to\nsay that? I don't see anything about that code that would break if we\ndid happen to allow duplicate columns in the foreign key. I'd say the\nAssert should just be removed completely.\n\nDavid\n\n\n", "msg_date": "Wed, 22 Apr 2020 20:57:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create partition table caused server crashed with\n self-referencing foreign key" }, { "msg_contents": "On Wed, Apr 22, 2020 at 2:27 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 22 Apr 2020 at 20:11, amul sul <sulamul@gmail.com> wrote:\n> >\n> > On Wed, Apr 22, 2020 at 1:21 PM Rajkumar Raghuwanshi <\n> rajkumar.raghuwanshi@enterprisedb.com> wrote:\n> >> #2 0x0000000000acd16a in ExceptionalCondition (conditionName=0xc32310\n> \"numfks == attmap->maplen\", errorType=0xc2ea23 \"FailedAssertion\",\n> fileName=0xc2f0bf \"tablecmds.c\", lineNumber=9046) at assert.c:67\n> >\n> >\n> > Looks like this assertion is incorrect, I guess it should have check\n> > numfks <= attmap->maplen instead.\n>\n> Even that seems like a very strange thing to Assert. 
Basically it's\n> saying, make sure the number of columns in the foreign key constraint\n> is less than or equal to the number of attributes in parentRel.\n>\n> It's true we do disallow duplicate column names in the foreign key\n> constraint (at least since 9da867537), but why do we want an Assert to\n> say that? I don't see anything about that code that would break if we\n> did happen to allow duplicate columns in the foreign key. I'd say the\n> Assert should just be removed completely.\n>\n\nUnderstood and agree with you.\n\nAttached patch removes this assertion and does a slight tweak to regression\ntest\nto generate case where numfks != attmap->maplen, IMO, we should have this\neven if there is nothing that checks it. Thoughts?\n\nRegards,\nAmul", "msg_date": "Wed, 22 Apr 2020 14:59:38 +0530", "msg_from": "amul sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create partition table caused server crashed with\n self-referencing foreign key" }, { "msg_contents": "On Wed, Apr 22, 2020 at 2:59 PM amul sul <sulamul@gmail.com> wrote:\n\n>\n>\n> On Wed, Apr 22, 2020 at 2:27 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n>> On Wed, 22 Apr 2020 at 20:11, amul sul <sulamul@gmail.com> wrote:\n>> >\n>> > On Wed, Apr 22, 2020 at 1:21 PM Rajkumar Raghuwanshi <\n>> rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>> >> #2 0x0000000000acd16a in ExceptionalCondition (conditionName=0xc32310\n>> \"numfks == attmap->maplen\", errorType=0xc2ea23 \"FailedAssertion\",\n>> fileName=0xc2f0bf \"tablecmds.c\", lineNumber=9046) at assert.c:67\n>> >\n>> >\n>> > Looks like this assertion is incorrect, I guess it should have check\n>> > numfks <= attmap->maplen instead.\n>>\n>> Even that seems like a very strange thing to Assert. 
Basically it's\n>> saying, make sure the number of columns in the foreign key constraint\n>> is less than or equal to the number of attributes in parentRel.\n>>\n>> It's true we do disallow duplicate column names in the foreign key\n>> constraint (at least since 9da867537), but why do we want an Assert to\n>> say that? I don't see anything about that code that would break if we\n>> did happen to allow duplicate columns in the foreign key. I'd say the\n>> Assert should just be removed completely.\n>>\n>\n> Understood and agree with you.\n>\n> Attached patch removes this assertion and does a slight tweak to\n> regression test\n> to generate case where numfks != attmap->maplen, IMO, we should have this\n> even if there is nothing that checks it. Thoughts?\n>\n\nKindly ignore the previously attached patch, correct patch attached here.\n\nRegards,\nAmul", "msg_date": "Wed, 22 Apr 2020 15:14:27 +0530", "msg_from": "amul sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create partition table caused server crashed with\n self-referencing foreign key" }, { "msg_contents": "On Wed, 22 Apr 2020 at 21:30, amul sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Apr 22, 2020 at 2:27 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> On Wed, 22 Apr 2020 at 20:11, amul sul <sulamul@gmail.com> wrote:\n>> >\n>> > On Wed, Apr 22, 2020 at 1:21 PM Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>> >> #2 0x0000000000acd16a in ExceptionalCondition (conditionName=0xc32310 \"numfks == attmap->maplen\", errorType=0xc2ea23 \"FailedAssertion\", fileName=0xc2f0bf \"tablecmds.c\", lineNumber=9046) at assert.c:67\n>> >\n>> >\n>> > Looks like this assertion is incorrect, I guess it should have check\n>> > numfks <= attmap->maplen instead.\n>>\n>> Even that seems like a very strange thing to Assert. 
Basically it's\n>> saying, make sure the number of columns in the foreign key constraint\n>> is less than or equal to the number of attributes in parentRel.\n>>\n>> It's true we do disallow duplicate column names in the foreign key\n>> constraint (at least since 9da867537), but why do we want an Assert to\n>> say that? I don't see anything about that code that would break if we\n>> did happen to allow duplicate columns in the foreign key. I'd say the\n>> Assert should just be removed completely.\n>\n>\n> Understood and agree with you.\n\nI pushed a patch to remove the Assert. I didn't really feel a need to\nmake any adjustments to the regression tests for this. The Assert was\nclearly out of place, it's hard to imagine that this could ever get\nbroken again.\n\nDavid\n\n\n", "msg_date": "Wed, 22 Apr 2020 22:21:21 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create partition table caused server crashed with\n self-referencing foreign key" }, { "msg_contents": "On Wed, Apr 22, 2020 at 2:45 PM amul sul <sulamul@gmail.com> wrote:\n\n>\n>\n> On Wed, Apr 22, 2020 at 2:59 PM amul sul <sulamul@gmail.com> wrote:\n>\n>>\n>>\n>> On Wed, Apr 22, 2020 at 2:27 PM David Rowley <dgrowleyml@gmail.com>\n>> wrote:\n>>\n>>> On Wed, 22 Apr 2020 at 20:11, amul sul <sulamul@gmail.com> wrote:\n>>> >\n>>> > On Wed, Apr 22, 2020 at 1:21 PM Rajkumar Raghuwanshi <\n>>> rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>>> >> #2 0x0000000000acd16a in ExceptionalCondition\n>>> (conditionName=0xc32310 \"numfks == attmap->maplen\", errorType=0xc2ea23\n>>> \"FailedAssertion\", fileName=0xc2f0bf \"tablecmds.c\", lineNumber=9046) at\n>>> assert.c:67\n>>> >\n>>> >\n>>> > Looks like this assertion is incorrect, I guess it should have check\n>>> > numfks <= attmap->maplen instead.\n>>>\n>>> Even that seems like a very strange thing to Assert. 
Basically it's\n>>> saying, make sure the number of columns in the foreign key constraint\n>>> is less than or equal to the number of attributes in parentRel.\n>>>\n>>> It's true we do disallow duplicate column names in the foreign key\n>>> constraint (at least since 9da867537), but why do we want an Assert to\n>>> say that? I don't see anything about that code that would break if we\n>>> did happen to allow duplicate columns in the foreign key. I'd say the\n>>> Assert should just be removed completely.\n>>>\n>>\n>> Understood and agree with you.\n>>\n>> Attached patch removes this assertion and does a slight tweak to\n>> regression test\n>> to generate case where numfks != attmap->maplen, IMO, we should have this\n>> even if there is nothing that checks it. Thoughts?\n>>\n>\n> Kindly ignore the previously attached patch, correct patch attached here.\n>\n\nI did a quick test of the fix, the assertion failure is fixed and\nregression is not reporting any failures.\n\n\n>\n> Regards,\n> Amul\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca", "msg_date": "Wed, 22 Apr 2020 15:44:07 +0500", "msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create partition table caused server crashed with\n self-referencing foreign key" }, { "msg_contents": "Thanks all for quick fix and push.\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\n\nOn Wed, Apr 22, 2020 at 4:14 PM Ahsan Hadi <ahsan.hadi@gmail.com> wrote:\n\n>\n>\n> On Wed, Apr 22, 2020 at 2:45 PM amul sul <sulamul@gmail.com> wrote:\n>\n>>\n>>\n>> On Wed, Apr 22, 2020 at 2:59 PM amul sul <sulamul@gmail.com> wrote:\n>>\n>>>\n>>>\n>>> On Wed, Apr 22, 2020 at 2:27 PM David Rowley <dgrowleyml@gmail.com>\n>>> wrote:\n>>>\n>>>> On Wed, 22 Apr 2020 at 20:11, amul sul <sulamul@gmail.com> wrote:\n>>>> >\n>>>> > On Wed, Apr 22, 2020 at 1:21 PM Rajkumar Raghuwanshi <\n>>>> rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>>>> >> #2  0x0000000000acd16a in ExceptionalCondition\n>>>> (conditionName=0xc32310
\"numfks == attmap->maplen\", errorType=0xc2ea23\n>>>> \"FailedAssertion\", fileName=0xc2f0bf \"tablecmds.c\", lineNumber=9046) at\n>>>> assert.c:67\n>>>> >\n>>>> >\n>>>> > Looks like this assertion is incorrect, I guess it should have check\n>>>> > numfks <= attmap->maplen instead.\n>>>>\n>>>> Even that seems like a very strange thing to Assert. Basically it's\n>>>> saying, make sure the number of columns in the foreign key constraint\n>>>> is less than or equal to the number of attributes in parentRel.\n>>>>\n>>>> It's true we do disallow duplicate column names in the foreign key\n>>>> constraint (at least since 9da867537), but why do we want an Assert to\n>>>> say that? I don't see anything about that code that would break if we\n>>>> did happen to allow duplicate columns in the foreign key. I'd say the\n>>>> Assert should just be removed completely.\n>>>>\n>>>\n>>> Understood and agree with you.\n>>>\n>>> Attached patch removes this assertion and does a slight tweak to\n>>> regression test\n>>> to generate case where numfks != attmap->maplen, IMO, we should have\n>>> this\n>>> even if there is nothing that checks it. 
Thoughts?\n>>>\n>>\n>> Kindly ignore the previously attached patch, correct patch attached here.\n>>\n>\n> I did a quick test of the fix, the assertion failure is fixed and\n> regression is not reporting any failures.\n>\n>\n>>\n>> Regards,\n>> Amul\n>>\n>\n>\n> --\n> Highgo Software (Canada/China/Pakistan)\n> URL : http://www.highgo.ca\n> ADDR: 10318 WHALLEY BLVD, Surrey, BC\n> EMAIL: mailto: ahsan.hadi@highgo.ca\n>", "msg_date": "Wed, 22 Apr 2020 17:04:04 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: create partition table caused server crashed with\n self-referencing foreign key" }, { "msg_contents": "On Wed, Apr 22, 2020 at 10:21:21PM +1200, David Rowley wrote:\n> I pushed a patch to remove the Assert. I didn't really feel a need to\n> make any adjustments to the regression tests for this.
The Assert was\n> > clearly out of place, it's hard to imagine that this could ever get\n> > broken again.\n>\n> Still, it seems to me that there could be a point in having a test for\n> partitioned tables with FKs referencing themselves. We have such\n> tests for plain tables for example.\n\nThe reason I didn't take the additional tests that were proposed by\nAmul was that I didn't think that they really added any additional\ncoverage to the code that remained. They would only have served to\nensure that nobody went and added the same Assert back again. Since\nthe Assert was clearly out of place, I didn't think it was worthy of\nburdening our build farm with the additional overhead of the modified\ntest from now until the end of time.\n\nI'd put it akin to fixing a spelling mistake in an error message and\nadding a special test specifically to ensure the spelling is correct.\nWould we add a test for that? Likely not. Would someone reintroduce\nthe spelling mistake again? Likely not.\n\nI think discussions for any tests beyond the scope of this fix are\nprobably for another thread. I'm happy to join any discussions about\nit there.\n\nDavid\n\n\n", "msg_date": "Thu, 23 Apr 2020 14:50:22 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create partition table caused server crashed with\n self-referencing foreign key" } ]
[ { "msg_contents": "Hi!\nI wrote extension for postgresql that is collecting statistics about errors in logfile.\n\nThis extension counts the number of messages of each type and code.\nIt's designed to enable monitoring tools. I'm going to use it as a data source for plot of number of errors, warnings and fatals.\nSource code is here: https://github.com/munakoiso/logerrors\n\nDesign considerations.\nThere is a hash table in shared memory that contains counters of each type (currently only fatal, error, warning) and every possible error code. \nIn emit_log_hook messages are updating this info in hash table. Then bgworker every n seconds collects and prepares this info.\npg_log_errors_stats() function is using this table in shared memory can show stats at last n*k seconds.\n\nHere is an example of usage:\npostgres=# select * from pg_log_errors_stats();\n time_interval | type | message | count\n ---------------+---------+----------------------+-------\n | WARNING | TOTAL | 0\n | ERROR | TOTAL | 3\n 600 | ERROR | ERRCODE_SYNTAX_ERROR | 3\n 5 | ERROR | ERRCODE_SYNTAX_ERROR | 2\n | FATAL | TOTAL | 0\n\nIt would be very cool if someone gave me some feedback.\n\n-- \nSviatoslav Ermilin\nYandex\n\n\n", "msg_date": "Wed, 22 Apr 2020 16:02:00 +0500", "msg_from": "=?utf-8?B?0KHQstGP0YLQvtGB0LvQsNCyINCV0YDQvNC40LvQuNC9?=\n <munakoiso@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Extension to monitor errors in log" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile poking at the build system I stumbled upon some trivial trailer\ncomment inconsistencies in config/c-compiler.m4. They can be fixed\neither way: either changing the macro names or changing the comment. PFA\na patch that keeps the macro names.\n\nIn hindsight though, it seems that PGAC_PRINTF_ARCHETYPE was meant to be\nPGAC_C_PRINTF_ARCHETYPE, judging by a half-second squint of surrounding,\nsimilar macros. Thoughts?\n\nAlso in hindsight: it seems that, as suggested in the trailer typo,\nPGAC_PROG_CXX_VAR_OPT (a la the C version PGAC_PROG_CC_VAR_OPT) would be\na good addition if we ever want to add the negative warning flags (for\nstarter, Wno-unused-command-line-argument for clang++) to CXXFLAGS, but\nI assume it wasn't there in the final patch because we didn't use it\n(presumably because the patch was minimized?). Thoughts?\n\nCheers,\nJesse", "msg_date": "Wed, 22 Apr 2020 07:17:05 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Header / Trailer Comment Typos for M4 macros" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> While poking at the build system I stumbled upon some trivial trailer\n> comment inconsistencies in config/c-compiler.m4. They can be fixed\n> either way: either changing the macro names or changing the comment. PFA\n> a patch that keeps the macro names.\n\nPushed, thanks.\n\n> In hindsight though, it seems that PGAC_PRINTF_ARCHETYPE was meant to be\n> PGAC_C_PRINTF_ARCHETYPE, judging by a half-second squint of surrounding,\n> similar macros. 
Thoughts?\n\nMaybe, but I doubt it's worth renaming.\n\n> Also in hindsight: it seems that, as suggested in the trailer typo,\n> PGAC_PROG_CXX_VAR_OPT (a la the C version PGAC_PROG_CC_VAR_OPT) would be\n> a good addition if we ever want to add the negative warning flags (for\n> starter, Wno-unused-command-line-argument for clang++) to CXXFLAGS, but\n> I assume it wasn't there in the final patch because we didn't use it\n> (presumably because the patch was minimized?). Thoughts?\n\nI'd be inclined not to add it till we have an actual use for it.\nDead code tends to break silently.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Apr 2020 15:29:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Header / Trailer Comment Typos for M4 macros" }, { "msg_contents": "On Wed, Apr 22, 2020 at 12:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Jesse Zhang <sbjesse@gmail.com> writes:\n> > either way: either changing the macro names or changing the comment. PFA\n> > a patch that keeps the macro names.\n>\n> Pushed, thanks.\n>\nThanks!\n\n> > Also in hindsight: it seems that, as suggested in the trailer typo,\n> > PGAC_PROG_CXX_VAR_OPT (a la the C version PGAC_PROG_CC_VAR_OPT) would be\n> > a good addition if we ever want to add the negative warning flags (for\n> > starter, Wno-unused-command-line-argument for clang++) to CXXFLAGS, but\n> > I assume it wasn't there in the final patch because we didn't use it\n> > (presumably because the patch was minimized?). Thoughts?\n>\n> I'd be inclined not to add it till we have an actual use for it.\n> Dead code tends to break silently.\n>\nFor sure. 
I feel the same about dead code.\n\nI didn't make my question clear though: I'm curious what motivated the\noriginal addition of -Wno-unused-command-line-argument in commit\n73b416b2e412, and how that problem did't quite manifest itself with Clang++.\nThe commit mentioned pthread flags, but I tried taking out\nWno-unused-command-line-argument from configure.in and it produces no\nwarnings on my laptop (I know, that's a bad excuse). For context, I'm\nrunning Clang 11 on Ubuntu amd64.\n\nCheers,\nJesse\n\n\n", "msg_date": "Wed, 22 Apr 2020 17:12:14 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Header / Trailer Comment Typos for M4 macros" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> I didn't make my question clear though: I'm curious what motivated the\n> original addition of -Wno-unused-command-line-argument in commit\n> 73b416b2e412, and how that problem did't quite manifest itself with Clang++.\n\nWe didn't then have the convention of mentioning relevant mailing list\nthreads in the commit log, but some excavation finds\n\nhttps://www.postgresql.org/message-id/flat/CALkS6B8Ei3yffHTnUsAovCPmO9kPTpgCArwyod7Ju2eWBm6%3DBA%40mail.gmail.com\n\nSo it seems to have been specific to clang circa version 6.0. Maybe\nthe clang boys thought better of this behavior more recently?\n\n[ experiments... ] I see no warnings on current macOS (Apple clang\nversion 11.0.3) after removing the switch. So I guess they did fix it.\n\nWe're pretty conservative about dropping support for old toolchains,\nthough, so I doubt that we'd want to remove this configure check.\nEspecially if we don't know how long ago clang changed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Apr 2020 20:33:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Header / Trailer Comment Typos for M4 macros" } ]
[ { "msg_contents": "I looked into the cause of several recent buildfarm failures:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=handfish&dt=2020-04-20%2020%3A32%3A23\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2020-04-18%2018%3A20%3A12\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=devario&dt=2020-02-27%2014%3A18%3A34\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ayu&dt=2020-01-17%2022%3A56%3A13\n\nall of which look like this:\n\ndiff -U3 /home/filiperosset/dev/build-farm-11/HEAD/pgsql.build/src/test/regress/expected/triggers.out /home/filiperosset/dev/build-farm-11/HEAD/pgsql.build/src/test/regress/results/triggers.out\n--- /home/filiperosset/dev/build-farm-11/HEAD/pgsql.build/src/test/regress/expected/triggers.out\t2020-03-19 17:54:52.037720127 -0300\n+++ /home/filiperosset/dev/build-farm-11/HEAD/pgsql.build/src/test/regress/results/triggers.out\t2020-04-20 17:44:02.024230072 -0300\n@@ -2559,22 +2559,7 @@\n FROM information_schema.triggers\n WHERE event_object_table IN ('parent', 'child1', 'child2', 'child3')\n ORDER BY trigger_name COLLATE \"C\", 2;\n- trigger_name | event_manipulation | event_object_schema | event_object_table | action_order | action_condition | action_orientation | action_timing | action_reference_old_table | action_reference_new_table \n---------------------+--------------------+---------------------+--------------------+--------------+------------------+--------------------+---------------+----------------------------+----------------------------\n- child1_delete_trig | DELETE | public | child1 | 1 | | STATEMENT | AFTER | old_table | \n- child1_insert_trig | INSERT | public | child1 | 1 | | STATEMENT | AFTER | | new_table\n- child1_update_trig | UPDATE | public | child1 | 1 | | STATEMENT | AFTER | old_table | new_table\n- child2_delete_trig | DELETE | public | child2 | 1 | | STATEMENT | AFTER | old_table | \n- child2_insert_trig | INSERT | public | child2 | 1 | | STATEMENT | AFTER | | 
new_table\n- child2_update_trig | UPDATE | public | child2 | 1 | | STATEMENT | AFTER | old_table | new_table\n- child3_delete_trig | DELETE | public | child3 | 1 | | STATEMENT | AFTER | old_table | \n- child3_insert_trig | INSERT | public | child3 | 1 | | STATEMENT | AFTER | | new_table\n- child3_update_trig | UPDATE | public | child3 | 1 | | STATEMENT | AFTER | old_table | new_table\n- parent_delete_trig | DELETE | public | parent | 1 | | STATEMENT | AFTER | old_table | \n- parent_insert_trig | INSERT | public | parent | 1 | | STATEMENT | AFTER | | new_table\n- parent_update_trig | UPDATE | public | parent | 1 | | STATEMENT | AFTER | old_table | new_table\n-(12 rows)\n-\n+ERROR: cache lookup failed for function 22540\n -- insert directly into children sees respective child-format tuples\n insert into child1 values ('AAA', 42);\n NOTICE: trigger = child1_insert_trig, new table = (AAA,42)\n\nWhat appears to be happening is that the concurrent updatable_views\ntest creates/deletes some triggers and trigger functions, and with\nbad timing luck that results in a cache lookup failure within\npg_get_triggerdef(). This isn't a new problem (note that crake's\nfailure is on v12 not HEAD) but it seems that it's gotten much more\nprobable of late, probably because of unrelated test changes\naltering the timing of these tests.\n\nIt's fairly annoying that the triggers view could be prone to failing due\nto concurrent DDL, but I don't see any near-term fix for that general\nproblem --- as long as the ruleutils infrastructure relies on syscache\nlookups in any way, we're going to have some problems of that sort.\nBut what's *really* annoying is that the view can fail due to DDL\nchanges that aren't even on any of the tables being looked at. 
That's\nbecause the planner is failing to push down the WHERE condition, meaning\nthat this isn't only a regression instability issue but a rather severe\nperformance problem: there is no way for a query on the triggers view\nto not evaluate the entire view.\n\nThe reason for the pushdown failure is that the triggers view uses\na window function to compute its action_order output:\n\n rank() OVER (PARTITION BY n.oid, c.oid, em.num, t.tgtype & 1, t.tgtype & 66 ORDER BY t.tgname)\n\nand the planner can't push the restriction on relname below that for fear\nof changing the window function's results. It does know that restrictions\non the PARTITION BY columns can be pushed down, but the query restriction\nis not on any of those columns.\n\nHowever, it's not like the query restriction is unrelated to that\npartition condition, it's just not quite the same thing.\nI experimented with\n\n- rank() OVER (PARTITION BY n.oid, c.oid, em.num, t.tgtype & 1, t.tgtype & 66 ORDER BY t.tgname)\n+ rank() OVER (PARTITION BY n.nspname::sql_identifier, c.relname::sql_identifier, em.num, t.tgtype & 1, t.tgtype & 66 ORDER BY t.tgname)\n\nso as to make the first two partitioning columns actually match the\nrelevant view output columns, and lo, the planner now successfully\npushes the table-name constraint down to the pg_class relation scan.\n(With a little more work, we could make the other partitioning columns\nalso match view output columns exactly, but I'm not sure that any of\nthose are worth messing with. Restrictions on the table and schema\nnames seem like the only things likely to be interesting for\nperformance.)\n\nThis should not only fix the regression instability but offer a pretty\nsignificant speedup for real-world use of the triggers view, so I\npropose squeezing it in even though we're past feature freeze.\n\nHowever ... what about the back branches? As noted, we are seeing\nthis now in the back branches, at least v12. 
Are we willing to make\na change in the information_schema in back branches to make the\ninstability go away? (And if not, how else could we fix it?)\n\nI note that this is actually a performance regression in the triggers\nview: before v11, when this rank() call was added, the planner had\nno problem pushing down restrictions on the table name.\n\nI think that a reasonable case could be made for back-patching this\nchange as far as v11, though of course without a catversion bump.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Apr 2020 18:27:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Regression instability + performance issue in TRIGGERS view" }, { "msg_contents": "On 2020-04-23 00:27, Tom Lane wrote:\n> This should not only fix the regression instability but offer a pretty\n> significant speedup for real-world use of the triggers view, so I\n> propose squeezing it in even though we're past feature freeze.\n\nI think that's fine.\n\n> However ... what about the back branches? As noted, we are seeing\n> this now in the back branches, at least v12. Are we willing to make\n> a change in the information_schema in back branches to make the\n> instability go away? (And if not, how else could we fix it?)\n\nThat would be okay with me.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Apr 2020 08:53:11 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Regression instability + performance issue in TRIGGERS view" } ]
[ { "msg_contents": "Hi,\nper Coverity.\n\n1. assign_zero: Assigning: opaque = NULL.\n2. Condition buf < 0, taking true branch.\n3. Condition !(((PageHeader)page)->pd_upper == 0), taking false branch.\n4. Condition blkno != orig_blkno, taking true branch.\n5. Condition _bt_page_recyclable(page), taking false branch.\nCID 1314742 (#2 of 2): Explicit null dereferenced (FORWARD_NULL)\n6. var_deref_op: Dereferencing null pointer opaque.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 22 Apr 2020 19:54:59 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] FIx explicit null dereference pointer (nbtree.c)" }, { "msg_contents": "On Wed, Apr 22, 2020 at 3:55 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> per Coverity.\n\nSome Postgres hackers have access to a dedicated coverity\ninstallation, and this issue has probably already been dismissed.\n\n> 1. assign_zero: Assigning: opaque = NULL.\n> 2. Condition buf < 0, taking true branch.\n> 3. Condition !(((PageHeader)page)->pd_upper == 0), taking false branch.\n> 4. Condition blkno != orig_blkno, taking true branch.\n> 5. Condition _bt_page_recyclable(page), taking false branch.\n> CID 1314742 (#2 of 2): Explicit null dereferenced (FORWARD_NULL)\n> 6. var_deref_op: Dereferencing null pointer opaque.\n\nThis is a false positive. btvacuumpage() is supposed to be a recursive\nfunction, but in practice the only caller always uses the same block\nnumber for both blkno and orig_blkno -- the tail recursion is actually\nimplemented using goto/a loop.\n\nMaybe we should make the btvacuumpage() orig_blkno argument into a\nlocal variable, though. 
It doesn't feel particularly natural to think\nof btvacuumpage() as a recursive function.\n\nI don't like this comment:\n\n /*\n * This is really tail recursion, but if the compiler is too stupid to\n * optimize it as such, we'd eat an uncomfortably large amount of stack\n * space per recursion level (due to the arrays used to track details of\n * deletable/updatable items). A failure is improbable since the number\n * of levels isn't likely to be large ... but just in case, let's\n * hand-optimize into a loop.\n */\n if (recurse_to != P_NONE)\n {\n blkno = recurse_to;\n goto restart;\n }\n\nThis almost sounds like it's talking about the number of levels of the\ntree, where having more than 5 levels is highly unlikely, and having\nmore than 10 levels is probably absolutely impossible with standard\nBLCKSZ, even with the largest possible tuples (I have tried). We\nprocess levels of the tree recursively during page splits (and page\ndeletions, which are quite unrelated to this code), but this is very\ndifferent. Roughly speaking, it's bound in size by the number of page\nsplits that happen while the VACUUM begins. I'm willing to believe\nthat that's quite rare, but I also think that there might be workloads\nwhere the physical scan has to \"backtrack\" thousands of times.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 22 Apr 2020 17:24:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] FIx explicit null dereference pointer (nbtree.c)" }, { "msg_contents": "Em qua., 22 de abr. de 2020 às 21:24, Peter Geoghegan <pg@bowt.ie> escreveu:\n\n> On Wed, Apr 22, 2020 at 3:55 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > per Coverity.\n>\n> Some Postgres hackers have access to a dedicated coverity\n> installation, and this issue has probably already been dismissed.\n>\nI will take a note.\n\n>\n> > 1. assign_zero: Assigning: opaque = NULL.\n> > 2. Condition buf < 0, taking true branch.\n> > 3. 
Condition !(((PageHeader)page)->pd_upper == 0), taking false branch.\n> > 4. Condition blkno != orig_blkno, taking true branch.\n> > 5. Condition _bt_page_recyclable(page), taking false branch.\n> > CID 1314742 (#2 of 2): Explicit null dereferenced (FORWARD_NULL)\n> > 6. var_deref_op: Dereferencing null pointer opaque.\n>\n> This is a false positive. btvacuumpage() is supposed to be a recursive\n> function, but in practice the only caller always uses the same block\n> number for both blkno and orig_blkno -- the tail recursion is actually\n> implemented using goto/a loop.\n>\nThis means that it is impossible for these conditions described by Coverity\nto happen on the first call, when the var opaque is NULL.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 22 Apr 2020 22:28:11 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] FIx explicit null dereference pointer (nbtree.c)" } ]
[ { "msg_contents": "Patch 0001 fixes this issue with vpath postgres builds:\n\n$ make -C src/test/regress install-tests\n/usr/bin/install: cannot create regular file\n'PGPREFIX/lib/postgresql/regress/PGPREFIX/src/test/regress/expected/errors.out':\nNo such file or directory\nmake: *** [GNUmakefile:90: install-tests] Error 1\n\n(where PGPREFIX is your --prefix)\n\nIt also makes the install-tests target a toplevel target for convenience.\n\nThree related bonus patches are attached in case anyone thinks they're a\ngood idea:\n\n- 0002 changes the install location of src/test/regress's install-tests\noutput files (sql/, expected/ etc) to $(pkglibdir)/pgxs/src/test/regress so\nthat PGXS resolves it as $(top_srcdir)/src/test/regress, same as for\nin-tree builds. Presently it installs in $(pkglibdir)/regress/ for some\nreason. This patch applies on top of 0001. It will affect packagers.\n\n- 0003 makes the toplevel install-tests target also install\nsrc/test/isolation test resources and the test modules. This patch applies\non top of either 0001 or 0002, depending on whether you want to include\n0002.\n\n- 0004 makes the dummy 'check' target in pgxs.mk optional for extensions\nthat define the new PGXS variable NO_DUMMY_CHECK_TARGET . This lets\nextensions that want to define a 'check' target do so without having make\ncomplain at them about redefined targets. 
This patch is independent of the\nothers and can apply on master directly.\n\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Thu, 23 Apr 2020 12:55:19 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix install-tests target for vpath builds" }, { "msg_contents": "On Thu, 23 Apr 2020 at 12:55, Craig Ringer <craig@2ndquadrant.com> wrote:\n\n> Patch 0001 fixes this issue with vpath postgres builds:\n>\n> $ make -C src/test/regress install-tests\n> /usr/bin/install: cannot create regular file\n> 'PGPREFIX/lib/postgresql/regress/PGPREFIX/src/test/regress/expected/errors.out':\n> No such file or directory\n> make: *** [GNUmakefile:90: install-tests] Error 1\n>\n> (where PGPREFIX is your --prefix)\n>\n> It also makes the install-tests target a toplevel target for convenience.\n>\n>\nPoke?\n\nAnybody willing to pick up a vpath build fix?\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Tue, 26 May 2020 10:06:53 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix install-tests target for vpath builds" }, { "msg_contents": "On 4/23/20 12:55 AM, Craig Ringer wrote:\n> Patch 0001 
fixes this issue with vpath postgres builds:\n>\n> $ make -C src/test/regress install-tests\n> /usr/bin/install: cannot create regular file\n> 'PGPREFIX/lib/postgresql/regress/PGPREFIX/src/test/regress/expected/errors.out':\n> No such file or directory\n> make: *** [GNUmakefile:90: install-tests] Error 1\n>\n> (where PGPREFIX is your --prefix)\n>\n> It also makes the install-tests target a toplevel target for convenience.\n>\n> Three related bonus patches are attached in case anyone thinks they're\n> a good idea:\n>\n> - 0002 changes the install location of src/test/regress's\n> install-tests output files (sql/, expected/ etc) to\n> $(pkglibdir)/pgxs/src/test/regress so that PGXS resolves it as\n> $(top_srcdir)/src/test/regress, same as for in-tree builds. Presently\n> it installs in $(pkglibdir)/regress/ for some reason. This patch\n> applies on top of 0001. It will affect packagers.\n>\n> - 0003 makes the toplevel install-tests target also install\n> src/test/isolation test resources and the test modules. This patch\n> applies on top of either 0001 or 0002, depending on whether you want\n> to include 0002.\n>\n> - 0004 makes the dummy 'check' target in pgxs.mk\n> optional for extensions that define the new PGXS\n> variable NO_DUMMY_CHECK_TARGET . This lets extensions that want to\n> define a 'check' target do so without having make complain at them\n> about redefined targets. This patch is independent of the others and\n> can apply on master directly.\n>\n>\n>\n\n\nI've come up with a slightly nicer version of your patch 1, which I\npropose to commit and backpatch before long.\n\n\nI'll leave the others for another day. 
Let's revisit after we get\nthrough the release.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 29 May 2020 17:58:25 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix install-tests target for vpath builds" }, { "msg_contents": "On 2020-May-29, Andrew Dunstan wrote:\n\n> I've come up with a slightly nicer version of your patch 1, which I\n> propose to commit and backpatch before long.\n\nLooks good to me.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 May 2020 18:37:20 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix install-tests target for vpath builds" } ]
[ { "msg_contents": "Hi,\n\nI'd like to propose to introduce the +(pg_lsn, int8) and -(pg_lsn, int8)\noperators. The + operator allows us to add the number of bytes into pg_lsn,\nresulting new pg_lsn. The - operator allows us to substract the number\nof bytes from pg_lsn, resulting new pg_lsn. Thought?\nI sometimes need these features for debuging purpose.\n\nAttached is the patch implementing those operators.\nOf course, this is the dev item for v14.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 23 Apr 2020 18:21:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "+(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On Thu, Apr 23, 2020 at 5:21 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> I'd like to propose to introduce the +(pg_lsn, int8) and -(pg_lsn, int8)\n> operators. The + operator allows us to add the number of bytes into pg_lsn,\n> resulting new pg_lsn. The - operator allows us to substract the number\n> of bytes from pg_lsn, resulting new pg_lsn. Thought?\n> I sometimes need these features for debuging purpose.\n\nFor anyone who missed it, this idea was popular on Twitter:\n\nhttps://twitter.com/fujii_masao/status/1252652020487487488\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Apr 2020 08:09:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On Thu, Apr 23, 2020 at 2:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> I'd like to propose to introduce the +(pg_lsn, int8) and -(pg_lsn, int8)\n> operators. The + operator allows us to add the number of bytes into pg_lsn,\n> resulting new pg_lsn. 
The - operator allows us to substract the number\n> of bytes from pg_lsn, resulting new pg_lsn. Thought?\n> I sometimes need these features for debuging purpose.\n\nAs it's presented in the patch I don't see much value in calling it as\nLSN arithmetic. If we could do something like LSN of Nth WAL record\n+/- <number of WAL records, n> = LSN of N+/- n th log record that\nwould be interesting. :)\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 23 Apr 2020 21:57:43 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On Thu, Apr 23, 2020 at 12:28 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> As it's presented in the patch I don't see much value in calling it as\n> LSN arithmetic. If we could do something like LSN of Nth WAL record\n> +/- <number of WAL records, n> = LSN of N+/- n th log record that\n> would be interesting. :)\n\nWell, that would mean that the value of x + 1 would depend not only on\nx but on the contents of WAL, and that it would be uncomputable\nwithout having the WAL available, and that adding large values would\nbe quite expensive.\n\nI much prefer Fujii Masao's proposal.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Apr 2020 15:21:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On Thu, Apr 23, 2020 at 08:09:22AM -0400, Robert Haas wrote:\n> For anyone who missed it, this idea was popular on Twitter:\n> \n> https://twitter.com/fujii_masao/status/1252652020487487488\n\n(For the sake of the archives)\nTo which Alvaro, Robert, Fabrízio de Royes Mello, Julien Rouhaud and I\nanswered positively to.\n--\nMichael", "msg_date": "Fri, 24 Apr 2020 16:24:14 +0900", "msg_from": "Michael Paquier 
<michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On Fri, 24 Apr 2020 16:24:14 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Apr 23, 2020 at 08:09:22AM -0400, Robert Haas wrote:\n> > For anyone who missed it, this idea was popular on Twitter:\n> > \n> > https://twitter.com/fujii_masao/status/1252652020487487488 \n> \n> (For the sake of the archives)\n> To which Alvaro, Robert, Fabrízio de Royes Mello, Julien Rouhaud and I\n> answered positively to.\n\nAnd me, discretely, with a little heart.\n\n\n", "msg_date": "Fri, 24 Apr 2020 12:15:26 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "At Fri, 24 Apr 2020 12:15:26 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> On Fri, 24 Apr 2020 16:24:14 +0900\n> Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > On Thu, Apr 23, 2020 at 08:09:22AM -0400, Robert Haas wrote:\n> > > For anyone who missed it, this idea was popular on Twitter:\n> > > \n> > > https://twitter.com/fujii_masao/status/1252652020487487488 \n> > \n> > (For the sake of the archives)\n> > To which Alvaro, Robert, Fabrízio de Royes Mello, Julien Rouhaud and I\n> > answered positively to.\n> \n> And me, discretely, with a little heart.\n\n+1. I actually sometimes need it.\n\ny the way, -(pg_lsn, pg_lsn) yields a numeric. I feel that it could be\nconfusing that the new operators takes a bigint. We need to cast the\nsecond term to bigint in the following expression.\n\n'2/20'::pg_lsn + ('1/10'::pg_lsn - '1/5'::pg_lsn)\n\nThe new + operator is not commutative. I'm not sure it is the right\ndesign to make it commutative, but it would be irritating if it is\nnot. 
(Or maybe we should implement them as functions rather than\noperators..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 27 Apr 2020 10:41:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On Sun, Apr 26, 2020 at 9:41 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> +1. I actually sometimes need it.\n>\n> y the way, -(pg_lsn, pg_lsn) yields a numeric.\n\nIt might be a good idea to use numeric here, too. Because int8 is\nsigned, it's not big enough to cover the whole range of LSNs.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 27 Apr 2020 12:24:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On 2020/04/28 1:24, Robert Haas wrote:\n> On Sun, Apr 26, 2020 at 9:41 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> +1. I actually sometimes need it.\n>>\n>> y the way, -(pg_lsn, pg_lsn) yields a numeric.\n> \n> It might be a good idea to use numeric here, too. Because int8 is\n> signed, it's not big enough to cover the whole range of LSNs.\n\nYes. Attached is the updated version of the patch, which introduces\n+(pg_lsn, numeric) and -(pg_lsn, numeric) operators.\nTo implement them, I added also numeric_pg_lsn() function that converts\nnumeric to pg_lsn.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 28 Apr 2020 12:56:19 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On Tue, Apr 28, 2020 at 12:56:19PM +0900, Fujii Masao wrote:\n> Yes. 
Attached is the updated version of the patch, which introduces\n> +(pg_lsn, numeric) and -(pg_lsn, numeric) operators.\n> To implement them, I added also numeric_pg_lsn() function that converts\n> numeric to pg_lsn.\n\n- those write-ahead log locations.\n+ those write-ahead log locations. Also the number of bytes can be added\n+ into and substracted from LSN using the <literal>+</literal> and\n+ <literal>-</literal> operators, respectively.\nThat's short. Should this mention the restriction with numeric (or\njust recommend its use) because we don't have a 64b unsigned type\ninternally, basically Robert's point? \n\n+ /* XXX would it be better to return NULL? */\n+ if (NUMERIC_IS_NAN(num))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot convert NaN to pg_lsn\")));\nThat would be good to test, and an error sounds fine to me.\n--\nMichael", "msg_date": "Tue, 28 Apr 2020 15:03:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "\n\nOn 2020/04/28 15:03, Michael Paquier wrote:\n> On Tue, Apr 28, 2020 at 12:56:19PM +0900, Fujii Masao wrote:\n>> Yes. Attached is the updated version of the patch, which introduces\n>> +(pg_lsn, numeric) and -(pg_lsn, numeric) operators.\n>> To implement them, I added also numeric_pg_lsn() function that converts\n>> numeric to pg_lsn.\n> \n> - those write-ahead log locations.\n> + those write-ahead log locations. Also the number of bytes can be added\n> + into and substracted from LSN using the <literal>+</literal> and\n> + <literal>-</literal> operators, respectively.\n> That's short. Should this mention the restriction with numeric (or\n> just recommend its use) because we don't have a 64b unsigned type\n> internally, basically Robert's point?\n\nThanks for the review! 
What about the following description?\n\n-----------------\nAlso the number of bytes can be added into and substracted from LSN using the\n<literal>+(pg_lsn,numeric)</literal> and <literal>-(pg_lsn,numeric)</literal>\noperators, respectively. Note that the calculated LSN should be in the range\nof <type>pg_lsn</type> type, i.e., between <literal>0/0</literal> and\n<literal>FFFFFFFF/FFFFFFFF</literal>.\n-----------------\n\n> \n> + /* XXX would it be better to return NULL? */\n> + if (NUMERIC_IS_NAN(num))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot convert NaN to pg_lsn\")));\n> That would be good to test, and an error sounds fine to me.\n\nYou mean that we should add the test that goes through this code block,\ninto the regression test?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 30 Apr 2020 23:40:59 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On Thu, Apr 30, 2020 at 11:40:59PM +0900, Fujii Masao wrote:\n> Also the number of bytes can be added into and substracted from LSN using the\n> <literal>+(pg_lsn,numeric)</literal> and <literal>-(pg_lsn,numeric)</literal>\n> operators, respectively. Note that the calculated LSN should be in the range\n> of <type>pg_lsn</type> type, i.e., between <literal>0/0</literal> and\n> <literal>FFFFFFFF/FFFFFFFF</literal>.\n> -----------------\n\nThat reads fine.\n\n>> + /* XXX would it be better to return NULL? 
*/\n>> + if (NUMERIC_IS_NAN(num))\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> + errmsg(\"cannot convert NaN to pg_lsn\")));\n>> That would be good to test, and an error sounds fine to me.\n> \n> You mean that we should add the test that goes through this code block,\n> into the regression test?\n\nYes, that looks worth making sure to track, especially if the behavior\nof this code changes in the future.\n--\nMichael", "msg_date": "Sat, 2 May 2020 11:29:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "At Tue, 28 Apr 2020 12:56:19 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Yes. Attached is the updated version of the patch, which introduces\n> +(pg_lsn, numeric) and -(pg_lsn, numeric) operators.\n> To implement them, I added also numeric_pg_lsn() function that\n> converts numeric to pg_lsn.\n\n+ into and substracted from LSN using the <literal>+</literal> and\n\ns/substracted/subtracted/\n(This still remains in the latest version)\n\n+static bool\n+numericvar_to_uint64(const NumericVar *var, uint64 *result)\n\nOther numricvar_to_xxx() functions return an integer value that means\nsuccess by 0 and failure by -1, which is one of standard signature of\nthis kind of functions. I don't see a reason for this function to\nhave different signatures from them.\n\n+\t/* XXX would it be better to return NULL? */\n+\tif (NUMERIC_IS_NAN(num))\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+\t\t\t\t errmsg(\"cannot convert NaN to pg_lsn\")));\n\nThe ERROR seems perfect to me since NaN is out of the domain of\nLSN. 
log(-1) results in a similar error.\n\nOn the other hand, the code above makes the + operator behave as the\nfollows.\n\n=# SELECT '1/1'::pg_lsn + 'NaN'::numeric;\nERROR: cannot convert NaN to pg_lsn\n\nThis looks somewhat different from what actually wrong is.\n\n+\tchar\t\tbuf[256];\n+\n+\t/* Convert to numeric */\n+\tsnprintf(buf, sizeof(buf), UINT64_FORMAT, lsn);\n\nThe values larger than 2^64 is useless. So 32 (or any value larger\nthan 21) is enough for the buffer length.\n\nBy the way coudln't we use int128 instead for internal arithmetic? I\nthink that makes the code simpler.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 07 May 2020 11:21:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "\n\nOn 2020/05/02 11:29, Michael Paquier wrote:\n> On Thu, Apr 30, 2020 at 11:40:59PM +0900, Fujii Masao wrote:\n>> Also the number of bytes can be added into and substracted from LSN using the\n>> <literal>+(pg_lsn,numeric)</literal> and <literal>-(pg_lsn,numeric)</literal>\n>> operators, respectively. Note that the calculated LSN should be in the range\n>> of <type>pg_lsn</type> type, i.e., between <literal>0/0</literal> and\n>> <literal>FFFFFFFF/FFFFFFFF</literal>.\n>> -----------------\n> \n> That reads fine.\n\nOk, I will update the docs in that way.\n\n> \n>>> + /* XXX would it be better to return NULL? 
*/\n>>> + if (NUMERIC_IS_NAN(num))\n>>> + ereport(ERROR,\n>>> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>>> + errmsg(\"cannot convert NaN to pg_lsn\")));\n>>> That would be good to test, and an error sounds fine to me.\n>>\n>> You mean that we should add the test that goes through this code block,\n>> into the regression test?\n> \n> Yes, that looks worth making sure to track, especially if the behavior\n> of this code changes in the future.\n\nOk, I will add that regression test.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 7 May 2020 13:15:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "\n\nOn 2020/05/07 11:21, Kyotaro Horiguchi wrote:\n> At Tue, 28 Apr 2020 12:56:19 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Yes. Attached is the updated version of the patch, which introduces\n>> +(pg_lsn, numeric) and -(pg_lsn, numeric) operators.\n>> To implement them, I added also numeric_pg_lsn() function that\n>> converts numeric to pg_lsn.\n> \n> + into and substracted from LSN using the <literal>+</literal> and\n> \n> s/substracted/subtracted/\n> (This still remains in the latest version)\n\nThanks! Will fix this.\n\n> \n> +static bool\n> +numericvar_to_uint64(const NumericVar *var, uint64 *result)\n> \n> Other numricvar_to_xxx() functions return an integer value that means\n> success by 0 and failure by -1, which is one of standard signature of\n> this kind of functions. 
I don't see a reason for this function to\n> have different signatures from them.\n\nUnless I'm missing something, other functions also return boolean.\nFor example,\n\nstatic bool numericvar_to_int32(const NumericVar *var, int32 *result);\nstatic bool numericvar_to_int64(const NumericVar *var, int64 *result);\n\n\n> \n> +\t/* XXX would it be better to return NULL? */\n> +\tif (NUMERIC_IS_NAN(num))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t errmsg(\"cannot convert NaN to pg_lsn\")));\n> \n> The ERROR seems perfect to me since NaN is out of the domain of\n> LSN. log(-1) results in a similar error.\n> \n> On the other hand, the code above makes the + operator behave as the\n> follows.\n> \n> =# SELECT '1/1'::pg_lsn + 'NaN'::numeric;\n> ERROR: cannot convert NaN to pg_lsn\n> \n> This looks somewhat different from what actually wrong is.\n\nYou mean that pg_lsn_pli() and pg_lsn_mii() should emit an error like\n\"the number of bytes to add/subtract cannnot be NaN\" when NaN is specified?\n\n> \n> +\tchar\t\tbuf[256];\n> +\n> +\t/* Convert to numeric */\n> +\tsnprintf(buf, sizeof(buf), UINT64_FORMAT, lsn);\n> \n> The values larger than 2^64 is useless. So 32 (or any value larger\n> than 21) is enough for the buffer length.\n\nCould you tell me what the actual problem is when buf[256] is used?\n\n> \n> By the way coudln't we use int128 instead for internal arithmetic? 
I\n> think that makes the code simpler.\n\nI'm not sure if int128 is available in every environments.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 7 May 2020 13:17:01 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On 2020/05/07 13:15, Fujii Masao wrote:\n> \n> \n> On 2020/05/02 11:29, Michael Paquier wrote:\n>> On Thu, Apr 30, 2020 at 11:40:59PM +0900, Fujii Masao wrote:\n>>> Also the number of bytes can be added into and substracted from LSN using the\n>>> <literal>+(pg_lsn,numeric)</literal> and <literal>-(pg_lsn,numeric)</literal>\n>>> operators, respectively. Note that the calculated LSN should be in the range\n>>> of <type>pg_lsn</type> type, i.e., between <literal>0/0</literal> and\n>>> <literal>FFFFFFFF/FFFFFFFF</literal>.\n>>> -----------------\n>>\n>> That reads fine.\n> \n> Ok, I will update the docs in that way.\n\nDone.\n\n> \n>>\n>>>> +�� /* XXX would it be better to return NULL? */\n>>>> +�� if (NUMERIC_IS_NAN(num))\n>>>> +������ ereport(ERROR,\n>>>> +�������������� (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>>>> +��������������� errmsg(\"cannot convert NaN to pg_lsn\")));\n>>>> That would be good to test, and an error sounds fine to me.\n>>>\n>>> You mean that we should add the test that goes through this code block,\n>>> into the regression test?\n>>\n>> Yes, that looks worth making sure to track, especially if the behavior\n>> of this code changes in the future.\n> \n> Ok, I will add that regression test.\n\nDone. 
Attached is the updated version of the patch!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 7 May 2020 15:25:24 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "At Thu, 7 May 2020 13:17:01 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/05/07 11:21, Kyotaro Horiguchi wrote:\n> > +static bool\n> > +numericvar_to_uint64(const NumericVar *var, uint64 *result)\n> > Other numricvar_to_xxx() functions return an integer value that means\n> > success by 0 and failure by -1, which is one of standard signature of\n> > this kind of functions. I don't see a reason for this function to\n> > have different signatures from them.\n> \n> Unless I'm missing something, other functions also return boolean.\n> For example,\n> \n> static bool numericvar_to_int32(const NumericVar *var, int32 *result);\n> static bool numericvar_to_int64(const NumericVar *var, int64 *result);\n\nMmm. \n\n> \n> > +\t/* XXX would it be better to return NULL? */\n> > +\tif (NUMERIC_IS_NAN(num))\n> > +\t\tereport(ERROR,\n> > +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > +\t\t\t\t errmsg(\"cannot convert NaN to pg_lsn\")));\n> > The ERROR seems perfect to me since NaN is out of the domain of\n> > LSN. 
log(-1) results in a similar error.\n> > On the other hand, the code above makes the + operator behave as the\n> > follows.\n> > =# SELECT '1/1'::pg_lsn + 'NaN'::numeric;\n> > ERROR: cannot convert NaN to pg_lsn\n> > This looks somewhat different from what actually wrong is.\n> \n> You mean that pg_lsn_pli() and pg_lsn_mii() should emit an error like\n> \"the number of bytes to add/subtract cannnot be NaN\" when NaN is\n> specified?\n\nThe function is called while executing an expression, so \"NaN cannot\nbe used in this expression\" or something like that would work.\n\n\n> > +\tchar\t\tbuf[256];\n> > +\n> > +\t/* Convert to numeric */\n> > +\tsnprintf(buf, sizeof(buf), UINT64_FORMAT, lsn);\n> > The values larger than 2^64 is useless. So 32 (or any value larger\n> > than 21) is enough for the buffer length.\n> \n> Could you tell me what the actual problem is when buf[256] is used?\n\nIt's just a waste of stack depth by over 200 bytes. I doesn't lead to\nan actual problem but it is evidently useless.\n\n> > By the way coudln't we use int128 instead for internal arithmetic? I\n> > think that makes the code simpler.\n> \n> I'm not sure if int128 is available in every environments.\n\nIn second thought, I found that we don't have enough substitute\nfunctions for the platforms without a native implement. Instead,\nthere are some overflow-safe uint64 math functions, that is,\npg_add/sub_u64_overflow. This patch defines numeric_pg_lsn which is\nsubstantially numeric_uint64. 
By using them, for example, we can make\npg_lsn_pli mainly with integer arithmetic as follows.\n\nDatum\npg_lsn_pli(..)\n{\n XLogRecPtr lsn = PG_GETARG_LSN(0);\n Datum num_nbytes = PG_GETARG_DATUM(1);\n Datum u64_nbytes =\n DatumGetInt64(DirectFunctionCall1(numeric_pg_lsn, num_nbytes));\n XLogRecPtr result;\n\n if (pg_add_u64_overflow(lsn, u64_nbytes, &result))\n elog(ERROR, \"result out of range\");\n\nPG_RETURN_LSN(result);\n}\n\nIf invalid values are given as the addend, the following message would\nmake sense.\n\n=# select '1/1::pg_lsn + 'NaN'::numeric;\nERROR: cannot use NaN in this expression\n=# select '1/1::pg_lsn + '-1'::numeric;\nERROR: numeric value out of range for this expression\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 May 2020 10:00:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "\n\nOn 2020/05/08 10:00, Kyotaro Horiguchi wrote:\n> At Thu, 7 May 2020 13:17:01 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/05/07 11:21, Kyotaro Horiguchi wrote:\n>>> +static bool\n>>> +numericvar_to_uint64(const NumericVar *var, uint64 *result)\n>>> Other numricvar_to_xxx() functions return an integer value that means\n>>> success by 0 and failure by -1, which is one of standard signature of\n>>> this kind of functions. I don't see a reason for this function to\n>>> have different signatures from them.\n>>\n>> Unless I'm missing something, other functions also return boolean.\n>> For example,\n>>\n>> static bool numericvar_to_int32(const NumericVar *var, int32 *result);\n>> static bool numericvar_to_int64(const NumericVar *var, int64 *result);\n> \n> Mmm.\n> \n>>\n>>> +\t/* XXX would it be better to return NULL? 
*/\n>>> +\tif (NUMERIC_IS_NAN(num))\n>>> +\t\tereport(ERROR,\n>>> +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>>> +\t\t\t\t errmsg(\"cannot convert NaN to pg_lsn\")));\n>>> The ERROR seems perfect to me since NaN is out of the domain of\n>>> LSN. log(-1) results in a similar error.\n>>> On the other hand, the code above makes the + operator behave as the\n>>> follows.\n>>> =# SELECT '1/1'::pg_lsn + 'NaN'::numeric;\n>>> ERROR: cannot convert NaN to pg_lsn\n>>> This looks somewhat different from what actually wrong is.\n>>\n>> You mean that pg_lsn_pli() and pg_lsn_mii() should emit an error like\n>> \"the number of bytes to add/subtract cannnot be NaN\" when NaN is\n>> specified?\n> \n> The function is called while executing an expression, so \"NaN cannot\n> be used in this expression\" or something like that would work.\n\nThis sounds ambiguous. I like to use clearer messages like\n\ncannot add NaN to pg_lsn\ncannot subtract NaN from pg_lsn\n\n>>> +\tchar\t\tbuf[256];\n>>> +\n>>> +\t/* Convert to numeric */\n>>> +\tsnprintf(buf, sizeof(buf), UINT64_FORMAT, lsn);\n>>> The values larger than 2^64 is useless. So 32 (or any value larger\n>>> than 21) is enough for the buffer length.\n>>\n>> Could you tell me what the actual problem is when buf[256] is used?\n> \n> It's just a waste of stack depth by over 200 bytes. I doesn't lead to\n> an actual problem but it is evidently useless.\n> \n>>> By the way coudln't we use int128 instead for internal arithmetic? I\n>>> think that makes the code simpler.\n>>\n>> I'm not sure if int128 is available in every environments.\n> \n> In second thought, I found that we don't have enough substitute\n> functions for the platforms without a native implement. Instead,\n> there are some overflow-safe uint64 math functions, that is,\n> pg_add/sub_u64_overflow. This patch defines numeric_pg_lsn which is\n> substantially numeric_uint64. 
By using them, for example, we can make\n> pg_lsn_pli mainly with integer arithmetic as follows.\n\nSorry, I'm not sure what the benefit of this approach...\n\n> \n> Datum\n> pg_lsn_pli(..)\n> {\n> XLogRecPtr lsn = PG_GETARG_LSN(0);\n> Datum num_nbytes = PG_GETARG_DATUM(1);\n> Datum u64_nbytes =\n> DatumGetInt64(DirectFunctionCall1(numeric_pg_lsn, num_nbytes));\n> XLogRecPtr result;\n> \n> if (pg_add_u64_overflow(lsn, u64_nbytes, &result))\n> elog(ERROR, \"result out of range\");\n> \n> PG_RETURN_LSN(result);\n> }\n> \n> If invalid values are given as the addend, the following message would\n> make sense.\n> \n> =# select '1/1::pg_lsn + 'NaN'::numeric;\n> ERROR: cannot use NaN in this expression\n> =# select '1/1::pg_lsn + '-1'::numeric;\n> ERROR: numeric value out of range for this expression\n\nCould you tell me why we should reject this calculation?\nIMO it's ok to add the negative number, and which is possible\nwith the latest patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 8 May 2020 11:31:42 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "At Fri, 8 May 2020 11:31:42 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> >> You mean that pg_lsn_pli() and pg_lsn_mii() should emit an error like\n> >> \"the number of bytes to add/subtract cannnot be NaN\" when NaN is\n> >> specified?\n> > The function is called while executing an expression, so \"NaN cannot\n> > be used in this expression\" or something like that would work.\n> \n> This sounds ambiguous. 
I like to use clearer messages like\n> \n> cannot add NaN to pg_lsn\n> cannot subtract NaN from pg_lsn\n\nThey work fine for me.\n\n> >> I'm not sure if int128 is available in every environment.\n> > On second thought, I found that we don't have enough substitute\n> > functions for the platforms without a native implementation. Instead,\n> > there are some overflow-safe uint64 math functions, that is,\n> > pg_add/sub_u64_overflow. This patch defines numeric_pg_lsn, which is\n> > substantially numeric_uint64. By using them, for example, we can make\n> > pg_lsn_pli mainly with integer arithmetic as follows.\n> \n> Sorry, I'm not sure what the benefit of this approach is...\n\n(If we don't allow negative nbytes,)\nWe accept numeric so that the operators can accept values out of the range\nof int64, but we don't need to perform all of the arithmetic in numeric. That\napproach does less numeric arithmetic, that is, it is faster and simpler.\nWe don't need to stringify the LSN with it, which avoids stack consumption.\n\n> > If invalid values are given as the addend, the following message would\n> > make sense.\n> > =# select '1/1'::pg_lsn + 'NaN'::numeric;\n> > ERROR: cannot use NaN in this expression\n> > =# select '1/1'::pg_lsn + '-1'::numeric;\n> > ERROR: numeric value out of range for this expression\n> \n> Could you tell me why we should reject this calculation?\n> IMO it's ok to add the negative number, which is possible\n> with the latest patch.\n\nSorry, I misread the patch as rejecting -1 for *nbytes*, from seeing\nnumeric_pg_lsn.\n\nFinally, I'm convinced that we lack the required integer arithmetic\ninfrastructure to perform the objective.\n\nThe patch looks good to me except for the size of buf[], but I don't\nstrongly object to that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 May 2020 12:10:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, 
int8) operators" }, { "msg_contents": "On 2020/05/08 12:10, Kyotaro Horiguchi wrote:\n> At Fri, 8 May 2020 11:31:42 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>> You mean that pg_lsn_pli() and pg_lsn_mii() should emit an error like\n>>>> \"the number of bytes to add/subtract cannnot be NaN\" when NaN is\n>>>> specified?\n>>> The function is called while executing an expression, so \"NaN cannot\n>>> be used in this expression\" or something like that would work.\n>>\n>> This sounds ambiguous. I like to use clearer messages like\n>>\n>> cannot add NaN to pg_lsn\n>> cannot subtract NaN from pg_lsn\n> \n> They works fine to me.\n\nOk, I updated pg_lsn_pli() and pg_lsn_mii() so that they emit an error\nwhen NaN is specified as the number of bytes.\n\n\n>>>> I'm not sure if int128 is available in every environments.\n>>> In second thought, I found that we don't have enough substitute\n>>> functions for the platforms without a native implement. Instead,\n>>> there are some overflow-safe uint64 math functions, that is,\n>>> pg_add/sub_u64_overflow. This patch defines numeric_pg_lsn which is\n>>> substantially numeric_uint64. By using them, for example, we can make\n>>> pg_lsn_pli mainly with integer arithmetic as follows.\n>>\n>> Sorry, I'm not sure what the benefit of this approach...\n> \n> (If we don't allow negative nbytes,)\n> We accept numeric so that the operators can accept values out of range\n> of int64, but we don't need to perform all arithmetic in numeric. That\n> approach does less numeric arithmetic, that is, faster and simpler.\n> We don't need to string'ify LSN with it. 
That avoid stack consumption.\n> \n>>> If invalid values are given as the addend, the following message would\n>>> make sense.\n>>> =# select '1/1::pg_lsn + 'NaN'::numeric;\n>>> ERROR: cannot use NaN in this expression\n>>> =# select '1/1::pg_lsn + '-1'::numeric;\n>>> ERROR: numeric value out of range for this expression\n>>\n>> Could you tell me why we should reject this calculation?\n>> IMO it's ok to add the negative number, and which is possible\n>> with the latest patch.\n> \n> Sorry, I misread the patch as it rejected -1 for *nbytes*, by seeing\n> numeric_pg_lsn.\n> \n> Finally, I'm convinced that we lack required integer arithmetic\n> infrastructure to perform the objective.\n> \n> The patch looks good to me except the size of buf[], but I don't\n> strongly object to that.\n\nOk, I changed the size of buf[] to 32.\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 9 May 2020 23:40:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "At Sat, 9 May 2020 23:40:15 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/05/08 12:10, Kyotaro Horiguchi wrote:\n> > At Fri, 8 May 2020 11:31:42 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in\n> >>>> You mean that pg_lsn_pli() and pg_lsn_mii() should emit an error like\n> >>>> \"the number of bytes to add/subtract cannnot be NaN\" when NaN is\n> >>>> specified?\n> >>> The function is called while executing an expression, so \"NaN cannot\n> >>> be used in this expression\" or something like that would work.\n> >>\n> >> This sounds ambiguous. 
I like to use clearer messages like\n> >>\n> >> cannot add NaN to pg_lsn\n> >> cannot subtract NaN from pg_lsn\n> > They works fine to me.\n> \n> Ok, I updated pg_lsn_pli() and pg_lsn_mii() so that they emit an error\n> when NaN is specified as the number of bytes.\n\nIt's fine with me.\n\n> > Sorry, I misread the patch as it rejected -1 for *nbytes*, by seeing\n> > numeric_pg_lsn.\n> > Finally, I'm convinced that we lack required integer arithmetic\n> > infrastructure to perform the objective.\n> > The patch looks good to me except the size of buf[], but I don't\n> > strongly object to that.\n> \n> Ok, I changed the size of buf[] to 32.\n> Attached is the updated version of the patch.\n\nThank you very much! The patch looks good to me.\n\nregard.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 11 May 2020 13:00:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "Hi,\r\n\r\nThe patch looks fine to me, however there is one hunk failing for the test case, so it needs to be rebased.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 29 Jun 2020 16:03:53 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "On 2020/06/30 1:03, Asif Rehman wrote:\n> Hi,\n> \n> The patch looks fine to me, however there is one hunk failing for the test case, so it needs to be rebased.\n\nThanks for the check! 
Attached is the updated version of the patch.\n\n> \n> The new status of this patch is: Waiting on Author\n\nI will change the status back to Needs Review.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 30 Jun 2020 01:21:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nThe patch looks good to me.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Tue, 30 Jun 2020 10:54:46 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" }, { "msg_contents": "\n\nOn 2020/06/30 19:54, Asif Rehman wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n> \n> The patch looks good to me.\n> \n> The new status of this patch is: Ready for Committer\n\nThanks for the review! Pushed.\n\nRegards, \n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Jun 2020 23:58:14 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: +(pg_lsn, int8) and -(pg_lsn, int8) operators" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nIf there are 0 full groups, \"we don't need to do anything\" and we need to go to the next one.\nOtherwise, an integer division by zero can be raised.\n\ncomments extracted from explain.c:\n /*\n* Since we never have any prefix groups unless we've first sorted\n* a full groups and transitioned modes (copying the tuples into a\n* prefix group), we don't need to do anything if there were 0 full\n* groups.\n*/\n\nregards,\nRanier Vilela", "msg_date": "Thu, 23 Apr 2020 09:37:56 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "On Thu, Apr 23, 2020 at 8:38 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Hi,\n>\n> Per Coverity.\n>\n> If there are 0 full groups, \"we don't need to do anything\" and we need to go to the next one.\n> Otherwise, an integer division by zero can be raised.\n>\n> comments extracted from explain.c:\n> /*\n> * Since we never have any prefix groups unless we've first sorted\n> * a full groups and transitioned modes (copying the tuples into a\n> * prefix group), we don't need to do anything if there were 0 full\n> * groups.\n> */\n\nThis does look like a fairly obvious thinko on my part, and the patch\nlooks correct to me.\n\nTomas: agreed?\n\nThanks,\nJames\n\n\n", "msg_date": "Thu, 23 Apr 2020 16:12:34 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "On Thu, Apr 23, 2020 at 04:12:34PM -0400, James Coleman wrote:\n>On Thu, Apr 23, 2020 at 8:38 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> Per Coverity.\n>>\n>> If there are 0 full groups, \"we don't need to do anything\" and we need to go to the next one.\n>> Otherwise, an integer division by zero can be raised.\n>>\n>> comments extracted from explain.c:\n>> /*\n>> * Since we never have any prefix groups unless we've first sorted\n>> * a full groups and transitioned modes 
(copying the tuples into a\n>> * prefix group), we don't need to do anything if there were 0 full\n>> * groups.\n>> */\n>\n>This does look like a fairly obvious thinko on my part, and the patch\n>looks correct to me.\n>\n>Tomas: agreed?\n>\n\nSo how do we actually get the division by zero? It seems to me the fix\nprevents a division by zero with 0 full groups and >0 prefix groups,\nbut can that actually happen?\n\nBut can that actually happen? Doesn't the comment quoted in the report\nactually suggest otherwise? If this\n\n (fullsortGroupInfo->groupCount == 0 &&\n prefixsortGroupInfo->groupCount == 0)\n\nevaluates to false, and\n\n (fullsortGroupInfo->groupCount == 0)\n\nthis evaluates to true, then clearly there would have to be 0 full\ngroups and >0 prefix groups. But the comment says that can't happen,\nunless I misunderstand what it's saying.\n\nI've tried to trigger the issue, but without success ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 May 2020 00:02:29 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "Em sex., 8 de mai. 
de 2020 às 19:02, Tomas Vondra <\ntomas.vondra@2ndquadrant.com> escreveu:\n\n> On Thu, Apr 23, 2020 at 04:12:34PM -0400, James Coleman wrote:\n> >On Thu, Apr 23, 2020 at 8:38 AM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> >>\n> >> Hi,\n> >>\n> >> Per Coverity.\n> >>\n> >> If has 0 full groups, \"we don't need to do anything\" and need goes to\n> next.\n> >> Otherwise a integer division by zero, can raise.\n> >>\n> >> comments extracted trom explain.c:\n> >> /*\n> >> * Since we never have any prefix groups unless we've first sorted\n> >> * a full groups and transitioned modes (copying the tuples into a\n> >> * prefix group), we don't need to do anything if there were 0 full\n> >> * groups.\n> >> */\n> >\n> >This does look like a fairly obvious thinko on my part, and the patch\n> >looks correct to me.\n> >\n> >Tomas: agreed?\n> >\n>\n> So how do we actually get the division by zero? It seems to me the fix\n> prevents a division by zero with 0 full groups and >0 prefix groups,\n> but can that actually happen?\n>\n> But can that actually happen? Doesn't the comment quoted in the report\n> actually suggest otherwise? If this\n>\n> (fullsortGroupInfo->groupCount == 0 &&\n> prefixsortGroupInfo->groupCount == 0)\n>\n\n> First this line, contradicts the comments. According to the comments,\nif ( fullsortGroupInfo->groupCount == 0) is true, there is no need to do\nanything else, next.\nSo anyway, we don't need to test anything anymore.\n\nNow, to happen the division by zero, (prefixsortGroupInfo->groupCount == 0,\nneeds to be true too,\nMaybe this is not happening, but if it happens, it divides by zero, just\nbelow, so if an unnecessary test and adds a risk, why not, remove it?\n\n\n> evaluates to false, and\n>\n> (fullsortGroupInfo->groupCount == 0)\n>\n> this evaluates to true, then clearly there would have to be 0 full\n> groups and >0 prefix groups. 
But the comment says that can't happen,\n> unless I misunderstand what it's saying.\n>\nComments says:\n\"we don't need to do anything if there were 0 full groups.\"\n\nregards,\nRanier Vilela", "msg_date": "Fri, 8 May 2020 19:25:36 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" },
{ "msg_contents": "On Fri, May 08, 2020 at 07:25:36PM -0300, Ranier Vilela wrote:\n>Em sex., 8 de mai. de 2020 às 19:02, Tomas Vondra <\n>tomas.vondra@2ndquadrant.com> escreveu:\n>\n>> On Thu, Apr 23, 2020 at 04:12:34PM -0400, James Coleman wrote:\n>> >On Thu, Apr 23, 2020 at 8:38 AM Ranier Vilela <ranier.vf@gmail.com>\n>> wrote:\n>> >>\n>> >> Hi,\n>> >>\n>> >> Per Coverity.\n>> >>\n>> >> If has 0 full groups, \"we don't need to do anything\" and need goes to\n>> next.\n>> >> Otherwise a integer division by zero, can raise.\n>> >>\n>> >> comments extracted trom explain.c:\n>> >> /*\n>> >> * Since we never have any prefix groups unless we've first sorted\n>> >> * a full groups and transitioned modes (copying the tuples into a\n>> >> * prefix group), we don't need to do anything if there were 0 full\n>> >> * groups.\n>> >> */\n>> >\n>> >This does look like a fairly obvious thinko on my part, and the patch\n>> >looks correct to me.\n>> >\n>> >Tomas: agreed?\n>> >\n>>\n>> So how do we actually get the division by zero? 
It seems to me the fix\n>> prevents a division by zero with 0 full groups and >0 prefix groups,\n>> but can that actually happen?\n>>\n>> But can that actually happen? Doesn't the comment quoted in the report\n>> actually suggest otherwise? If this\n>>\n>> (fullsortGroupInfo->groupCount == 0 &&\n>> prefixsortGroupInfo->groupCount == 0)\n>>\n>\n>> First this line, contradicts the comments. According to the comments,\n>if ( fullsortGroupInfo->groupCount == 0) is true, there is no need to do\n>anything else, next.\n>So anyway, we don't need to test anything anymore.\n>\n>Now, to happen the division by zero, (prefixsortGroupInfo->groupCount == 0,\n>needs to be true too,\n>Maybe this is not happening, but if it happens, it divides by zero, just\n>below, so if an unnecessary test and adds a risk, why not, remove it?\n>\n\nWell, I'd like to understand what the bug is. If possible, I'd like to\nadd a test case, for example.\n\n>\n>> evaluates to false, and\n>>\n>> (fullsortGroupInfo->groupCount == 0)\n>>\n>> this evaluates to true, then clearly there would have to be 0 full\n>> groups and >0 prefix groups. But the comment says that can't happen,\n>> unless I misunderstand what it's saying.\n>>\n>Comments says:\n>\"we don't need to do anything if there were 0 full groups.\"\n>\n\nTrue. But it also implies that in order to have prefix groups we need to\nhave a full group first. Which implies that\n\n (#full == 0) && (#prefix != 0)\n\nis not really possible.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 May 2020 01:20:08 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "On Fri, May 8, 2020 at 7:20 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Fri, May 08, 2020 at 07:25:36PM -0300, Ranier Vilela wrote:\n> >Em sex., 8 de mai. 
de 2020 às 19:02, Tomas Vondra <\n> >tomas.vondra@2ndquadrant.com> escreveu:\n> >\n> >> On Thu, Apr 23, 2020 at 04:12:34PM -0400, James Coleman wrote:\n> >> >On Thu, Apr 23, 2020 at 8:38 AM Ranier Vilela <ranier.vf@gmail.com>\n> >> wrote:\n> >> >>\n> >> >> Hi,\n> >> >>\n> >> >> Per Coverity.\n> >> >>\n> >> >> If has 0 full groups, \"we don't need to do anything\" and need goes to\n> >> next.\n> >> >> Otherwise a integer division by zero, can raise.\n> >> >>\n> >> >> comments extracted trom explain.c:\n> >> >> /*\n> >> >> * Since we never have any prefix groups unless we've first sorted\n> >> >> * a full groups and transitioned modes (copying the tuples into a\n> >> >> * prefix group), we don't need to do anything if there were 0 full\n> >> >> * groups.\n> >> >> */\n> >> >\n> >> >This does look like a fairly obvious thinko on my part, and the patch\n> >> >looks correct to me.\n> >> >\n> >> >Tomas: agreed?\n> >> >\n> >>\n> >> So how do we actually get the division by zero? It seems to me the fix\n> >> prevents a division by zero with 0 full groups and >0 prefix groups,\n> >> but can that actually happen?\n> >>\n> >> But can that actually happen? Doesn't the comment quoted in the report\n> >> actually suggest otherwise? If this\n> >>\n> >> (fullsortGroupInfo->groupCount == 0 &&\n> >> prefixsortGroupInfo->groupCount == 0)\n> >>\n> >\n> >> First this line, contradicts the comments. According to the comments,\n> >if ( fullsortGroupInfo->groupCount == 0) is true, there is no need to do\n> >anything else, next.\n> >So anyway, we don't need to test anything anymore.\n> >\n> >Now, to happen the division by zero, (prefixsortGroupInfo->groupCount == 0,\n> >needs to be true too,\n> >Maybe this is not happening, but if it happens, it divides by zero, just\n> >below, so if an unnecessary test and adds a risk, why not, remove it?\n> >\n>\n> Well, I'd like to understand what the bug is. 
If possible, I'd like to\n> add a test case, for example.\n>\n> >\n> >> evaluates to false, and\n> >>\n> >> (fullsortGroupInfo->groupCount == 0)\n> >>\n> >> this evaluates to true, then clearly there would have to be 0 full\n> >> groups and >0 prefix groups. But the comment says that can't happen,\n> >> unless I misunderstand what it's saying.\n> >>\n> >Comments says:\n> >\"we don't need to do anything if there were 0 full groups.\"\n> >\n>\n> True. But it also implies that in order to have prefix groups we need to\n> have a full group first. Which implies that\n>\n> (#full == 0) && (#prefix != 0)\n>\n> is not really possible.\n\nThere are always full sort groups before any prefix groups can happen,\nso we know (even though the tooling doesn't) that the 2nd test can\nnever contradict the first.\n\nSo it's not a bug per se in that we can never reach the place where\nthe divide by zero would occur, but checking prefix group count\n(always 0 if full group count is 0) is confusing at best (and wasn't\nintentional), so we should remove it.\n\nJames\n\n\n", "msg_date": "Fri, 8 May 2020 19:33:03 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "On Fri, May 08, 2020 at 07:33:03PM -0400, James Coleman wrote:\n>On Fri, May 8, 2020 at 7:20 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Fri, May 08, 2020 at 07:25:36PM -0300, Ranier Vilela wrote:\n>> >Em sex., 8 de mai. 
de 2020 �s 19:02, Tomas Vondra <\n>> >tomas.vondra@2ndquadrant.com> escreveu:\n>> >\n>> >> On Thu, Apr 23, 2020 at 04:12:34PM -0400, James Coleman wrote:\n>> >> >On Thu, Apr 23, 2020 at 8:38 AM Ranier Vilela <ranier.vf@gmail.com>\n>> >> wrote:\n>> >> >>\n>> >> >> Hi,\n>> >> >>\n>> >> >> Per Coverity.\n>> >> >>\n>> >> >> If has 0 full groups, \"we don't need to do anything\" and need goes to\n>> >> next.\n>> >> >> Otherwise a integer division by zero, can raise.\n>> >> >>\n>> >> >> comments extracted trom explain.c:\n>> >> >> /*\n>> >> >> * Since we never have any prefix groups unless we've first sorted\n>> >> >> * a full groups and transitioned modes (copying the tuples into a\n>> >> >> * prefix group), we don't need to do anything if there were 0 full\n>> >> >> * groups.\n>> >> >> */\n>> >> >\n>> >> >This does look like a fairly obvious thinko on my part, and the patch\n>> >> >looks correct to me.\n>> >> >\n>> >> >Tomas: agreed?\n>> >> >\n>> >>\n>> >> So how do we actually get the division by zero? It seems to me the fix\n>> >> prevents a division by zero with 0 full groups and >0 prefix groups,\n>> >> but can that actually happen?\n>> >>\n>> >> But can that actually happen? Doesn't the comment quoted in the report\n>> >> actually suggest otherwise? If this\n>> >>\n>> >> (fullsortGroupInfo->groupCount == 0 &&\n>> >> prefixsortGroupInfo->groupCount == 0)\n>> >>\n>> >\n>> >> First this line, contradicts the comments. According to the comments,\n>> >if ( fullsortGroupInfo->groupCount == 0) is true, there is no need to do\n>> >anything else, next.\n>> >So anyway, we don't need to test anything anymore.\n>> >\n>> >Now, to happen the division by zero, (prefixsortGroupInfo->groupCount == 0,\n>> >needs to be true too,\n>> >Maybe this is not happening, but if it happens, it divides by zero, just\n>> >below, so if an unnecessary test and adds a risk, why not, remove it?\n>> >\n>>\n>> Well, I'd like to understand what the bug is. 
If possible, I'd like to\n>> add a test case, for example.\n>>\n>> >\n>> >> evaluates to false, and\n>> >>\n>> >> (fullsortGroupInfo->groupCount == 0)\n>> >>\n>> >> this evaluates to true, then clearly there would have to be 0 full\n>> >> groups and >0 prefix groups. But the comment says that can't happen,\n>> >> unless I misunderstand what it's saying.\n>> >>\n>> >Comments says:\n>> >\"we don't need to do anything if there were 0 full groups.\"\n>> >\n>>\n>> True. But it also implies that in order to have prefix groups we need to\n>> have a full group first. Which implies that\n>>\n>> (#full == 0) && (#prefix != 0)\n>>\n>> is not really possible.\n>\n>There are always full sort groups before any prefix groups can happen,\n>so we know (even though the tooling doesn't) that the 2nd test can\n>never contradict the first.\n>\n>So it's not a bug per se in that we can never reach the place where\n>the divide by zero would occur, but checking prefix group count\n>(always 0 if full group count is 0) is confusing at best (and wasn't\n>intentional), so we should remove it.\n>\n\nOK, thanks for the clarification.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 May 2020 02:10:58 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> There are always full sort groups before any prefix groups can happen,\n> so we know (even though the tooling doesn't) that the 2nd test can\n> never contradict the first.\n\nSo maybe an assertion enforcing that would be appropriate?\nUntested, but:\n\n-\t\t\tif (fullsortGroupInfo->groupCount == 0 &&\n-\t\t\t\tprefixsortGroupInfo->groupCount == 0)\n+\t\t\tif (fullsortGroupInfo->groupCount == 0)\n+\t\t\t{\n+\t\t\t\tAssert(prefixsortGroupInfo->groupCount == 0);\n 
\t\t\t\tcontinue;\n+\t\t\t}\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 May 2020 00:45:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "Em sáb., 9 de mai. de 2020 às 01:45, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> James Coleman <jtc331@gmail.com> writes:\n> > There are always full sort groups before any prefix groups can happen,\n> > so we know (even though the tooling doesn't) that the 2nd test can\n> > never contradict the first.\n>\n> So maybe an assertion enforcing that would be appropriate?\n> Untested, but:\n>\n> - if (fullsortGroupInfo->groupCount == 0 &&\n> - prefixsortGroupInfo->groupCount == 0)\n> + if (fullsortGroupInfo->groupCount == 0)\n> + {\n> + Assert(prefixsortGroupInfo->groupCount ==\n> 0);\n> continue;\n> + }\n>\nI agree, asserts always help.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 9 May 2020 06:48:59 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "On Sat, May 09, 2020 at 06:48:59AM -0300, Ranier Vilela wrote:\n>Em s�b., 9 de mai. 
de 2020 às 01:45, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>\n>> James Coleman <jtc331@gmail.com> writes:\n>> > There are always full sort groups before any prefix groups can happen,\n>> > so we know (even though the tooling doesn't) that the 2nd test can\n>> > never contradict the first.\n>>\n>> So maybe an assertion enforcing that would be appropriate?\n>> Untested, but:\n>>\n>> - if (fullsortGroupInfo->groupCount == 0 &&\n>> - prefixsortGroupInfo->groupCount == 0)\n>> + if (fullsortGroupInfo->groupCount == 0)\n>> + {\n>> + Assert(prefixsortGroupInfo->groupCount ==\n>> 0);\n>> continue;\n>> + }\n>>\n>I agree, asserts always help.\n>\n\nThat doesn't work, because the prefixSortGroupInfo is used before\nassignment, producing compile-time warnings.\n\nI've pushed a simpler fix without the assert. If we want to make this\ncheck, perhaps doing it in incremental sort itself would be better than\ndoing it in explain.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 May 2020 19:44:40 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "Em sáb., 9 de mai. de 2020 às 14:44, Tomas Vondra <\ntomas.vondra@2ndquadrant.com> escreveu:\n\n> On Sat, May 09, 2020 at 06:48:59AM -0300, Ranier Vilela wrote:\n> >Em sáb., 9 de mai. 
de 2020 às 01:45, Tom Lane <tgl@sss.pgh.pa.us>\n> >> escreveu:\n> >\n> >> James Coleman <jtc331@gmail.com> writes:\n> >> > There are always full sort groups before any prefix groups can happen,\n> >> > so we know (even though the tooling doesn't) that the 2nd test can\n> >> > never contradict the first.\n> >>\n> >> So maybe an assertion enforcing that would be appropriate?\n> >> Untested, but:\n> >>\n> >> -                       if (fullsortGroupInfo->groupCount == 0 &&\n> >> -                               prefixsortGroupInfo->groupCount == 0)\n> >> +                       if (fullsortGroupInfo->groupCount == 0)\n> >> +                       {\n> >> +                               Assert(prefixsortGroupInfo->groupCount\n> ==\n> >> 0);\n> >>                                 continue;\n> >> +                       }\n> >>\n> >I agree, asserts always help.\n> >\n>\n> That doesn't work, because the prefixSortGroupInfo is used before\n> assignment, producing compile-time warnings.\n>\n> I've pushed a simpler fix without the assert. If we want to make this\n> check, perhaps doing it in incremental sort itself would be better than\n> doing it in explain.\n>\nThanks anyway for the commit.\nBut if you used the first version of my patch, would the author be me and\nam I as reported?\nWhat does it take to be considered the author?\n\nregards,\nRanier Vilela", "msg_date": "Sat, 9 May 2020 14:51:50 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" },
{ "msg_contents": "On Sat, May 09, 2020 at 02:51:50PM -0300, Ranier Vilela wrote:\n>Em sáb., 9 de mai. de 2020 às 14:44, Tomas Vondra <\n>tomas.vondra@2ndquadrant.com> escreveu:\n>\n>> On Sat, May 09, 2020 at 06:48:59AM -0300, Ranier Vilela wrote:\n>> >Em sáb., 9 de mai. 
de 2020 às 01:45, Tom Lane <tgl@sss.pgh.pa.us>\n>> escreveu:\n>> >\n>> >> James Coleman <jtc331@gmail.com> writes:\n>> >> > There are always full sort groups before any prefix groups can happen,\n>> >> > so we know (even though the tooling doesn't) that the 2nd test can\n>> >> > never contradict the first.\n>> >>\n>> >> So maybe an assertion enforcing that would be appropriate?\n>> >> Untested, but:\n>> >>\n>> >> - if (fullsortGroupInfo->groupCount == 0 &&\n>> >> - prefixsortGroupInfo->groupCount == 0)\n>> >> + if (fullsortGroupInfo->groupCount == 0)\n>> >> + {\n>> >> + Assert(prefixsortGroupInfo->groupCount\n>> ==\n>> >> 0);\n>> >> continue;\n>> >> + }\n>> >>\n>> >I agree, asserts always help.\n>> >\n>>\n>> That doesn't work, because the prefixSortGroupInfo is used before\n>> assignment, producing compile-time warnings.\n>>\n>> I've pushed a simpler fix without the assert. If we want to make this\n>> check, perhaps doing it in incremental sort itself would be better than\n>> doing it in explain.\n>>\n>Thanks anyway for the commit.\n>But if you used the first version of my patch, would the author be me and\n>am I as reported?\n>What does it take to be considered the author?\n>\n\nApologies. I should have listed you as an author, not just in the\nreported-by field.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 May 2020 22:48:17 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" }, { "msg_contents": "Em sáb., 9 de mai. de 2020 às 17:48, Tomas Vondra <\ntomas.vondra@2ndquadrant.com> escreveu:\n\n> On Sat, May 09, 2020 at 02:51:50PM -0300, Ranier Vilela wrote:\n> >Em sáb., 9 de mai. de 2020 às 14:44, Tomas Vondra <\n> >tomas.vondra@2ndquadrant.com> escreveu:\n> >\n> >> On Sat, May 09, 2020 at 06:48:59AM -0300, Ranier Vilela wrote:\n> >> >Em sáb., 9 de mai. 
de 2020 às 01:45, Tom Lane <tgl@sss.pgh.pa.us>\n> >> escreveu:\n> >> >\n> >> >> James Coleman <jtc331@gmail.com> writes:\n> >> >> > There are always full sort groups before any prefix groups can\n> happen,\n> >> >> > so we know (even though the tooling doesn't) that the 2nd test can\n> >> >> > never contradict the first.\n> >> >>\n> >> >> So maybe an assertion enforcing that would be appropriate?\n> >> >> Untested, but:\n> >> >>\n> >> >> - if (fullsortGroupInfo->groupCount == 0 &&\n> >> >> - prefixsortGroupInfo->groupCount == 0)\n> >> >> + if (fullsortGroupInfo->groupCount == 0)\n> >> >> + {\n> >> >> +\n> Assert(prefixsortGroupInfo->groupCount\n> >> ==\n> >> >> 0);\n> >> >> continue;\n> >> >> + }\n> >> >>\n> >> >I agree, asserts always help.\n> >> >\n> >>\n> >> That doesn't work, because the prefixSortGroupInfo is used before\n> >> assignment, producing compile-time warnings.\n> >>\n> >> I've pushed a simpler fix without the assert. If we want to make this\n> >> check, perhaps doing it in incremental sort itself would be better than\n> >> doing it in explain.\n> >>\n> >Thanks anyway for the commit.\n> >But if you used the first version of my patch, would the author be me and\n> >am I as reported?\n> >What does it take to be considered the author?\n> >\n>\n> Apologies. I should have listed you as an author, not just in the\n> reported-by field.\n>\nApologies accepted.\n\nThank you.\n\nRanier VIilela\n\nEm sáb., 9 de mai. de 2020 às 17:48, Tomas Vondra <tomas.vondra@2ndquadrant.com> escreveu:On Sat, May 09, 2020 at 02:51:50PM -0300, Ranier Vilela wrote:\n>Em sáb., 9 de mai. de 2020 às 14:44, Tomas Vondra <\n>tomas.vondra@2ndquadrant.com> escreveu:\n>\n>> On Sat, May 09, 2020 at 06:48:59AM -0300, Ranier Vilela wrote:\n>> >Em sáb., 9 de mai. 
de 2020 às 01:45, Tom Lane <tgl@sss.pgh.pa.us>\n>> escreveu:\n>> >\n>> >> James Coleman <jtc331@gmail.com> writes:\n>> >> > There are always full sort groups before any prefix groups can happen,\n>> >> > so we know (even though the tooling doesn't) that the 2nd test can\n>> >> > never contradict the first.\n>> >>\n>> >> So maybe an assertion enforcing that would be appropriate?\n>> >> Untested, but:\n>> >>\n>> >> -                       if (fullsortGroupInfo->groupCount == 0 &&\n>> >> -                               prefixsortGroupInfo->groupCount == 0)\n>> >> +                       if (fullsortGroupInfo->groupCount == 0)\n>> >> +                       {\n>> >> +                               Assert(prefixsortGroupInfo->groupCount\n>> ==\n>> >> 0);\n>> >>                                 continue;\n>> >> +                       }\n>> >>\n>> >I agree, asserts always help.\n>> >\n>>\n>> That doesn't work, because the prefixSortGroupInfo is used before\n>> assignment, producing compile-time warnings.\n>>\n>> I've pushed a simpler fix without the assert. If we want to make this\n>> check, perhaps doing it in incremental sort itself would be better than\n>> doing it in explain.\n>>\n>Thanks anyway for the commit.\n>But if you used the first version of my patch, would the author be me and\n>am I as reported?\n>What does it take to be considered the author?\n>\n\nApologies. I should have listed you as an author, not just in the\nreported-by field.Apologies accepted.Thank you.Ranier VIilela", "msg_date": "Sat, 9 May 2020 18:11:56 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix division by zero (explain.c)" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nIf test oldtuple can be NULL, I mean it can really be NULL.\nOn DELETE process, if oldtuple is NULL, log error and continue.\nSo UPDATE must have the same treatment.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 23 Apr 2020 10:47:41 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix Null pointer dereferences (pgoutput.c)" }, { "msg_contents": "On Thu, Apr 23, 2020 at 10:48 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Hi,\n>\n> Per Coverity.\n>\n> If test oldtuple can be NULL, I mean it can really be NULL.\n> On DELETE process, if oldtuple is NULL, log error and continue.\n> So UPDATE must have the same treatment.\n\nI think I too had noticed this when working on my patch to move this\ncode to a different location in that function, posted here:\nhttps://www.postgresql.org/message-id/CA%2BHiwqEeU19iQgjN6HF1HTPU0L5%2BJxyS5CmxaOVGNXBAfUY06Q%40mail.gmail.com\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Apr 2020 00:48:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix Null pointer dereferences (pgoutput.c)" } ]
[ { "msg_contents": "Hi,\nPer Coverity.\n\nread_controlfile alloc memory with pg_malloc and fail in releasing the\nmemory.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 23 Apr 2020 15:20:59 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] FIx resource leaks (pg_resetwal.c)" }, { "msg_contents": "Hi,\n\nOn 2020-04-23 15:20:59 -0300, Ranier Vilela wrote:\n> Per Coverity.\n> \n> read_controlfile alloc memory with pg_malloc and fail in releasing the\n> memory.\n\nSeriously, this is getting really ridiculous. You're posting badly\nvetted, often nearly verbatim, coverity reports. Many of them are\nobvious false positives. This is just producing noise.\n\nPlease stop.\n\n\n> diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c\n> index 233441837f..673ab0204c 100644\n> --- a/src/bin/pg_resetwal/pg_resetwal.c\n> +++ b/src/bin/pg_resetwal/pg_resetwal.c\n> @@ -608,6 +608,7 @@ read_controlfile(void)\n> \tlen = read(fd, buffer, PG_CONTROL_FILE_SIZE);\n> \tif (len < 0)\n> \t{\n> +\t\tpg_free(buffer);\t\t\n> \t\tpg_log_error(\"could not read file \\\"%s\\\": %m\", XLOG_CONTROL_FILE);\n> \t\texit(1);\n> \t}\n\nThere's an exit() two lines later, this is obviously not necessary.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Apr 2020 11:27:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] FIx resource leaks (pg_resetwal.c)" }, { "msg_contents": "Em qui., 23 de abr. de 2020 às 15:27, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2020-04-23 15:20:59 -0300, Ranier Vilela wrote:\n> > Per Coverity.\n> >\n> > read_controlfile alloc memory with pg_malloc and fail in releasing the\n> > memory.\n>\n> Seriously, this is getting really ridiculous. You're posting badly\n> vetted, often nearly verbatim, coverity reports. Many of them are\n> obvious false positives. 
This is just producing noise.\n>\nI do not agree in any way. At the very least what I am reporting is\nsuspect. And if I already propose a solution even if it is not the best, it\nis much better than being silent and missing the opportunity to fix a bug.\nRidiculous is your lack of education.\n\n\n>\n> Please stop.\n>\nI will ignore.\n\n\n> > diff --git a/src/bin/pg_resetwal/pg_resetwal.c\n> b/src/bin/pg_resetwal/pg_resetwal.c\n> > index 233441837f..673ab0204c 100644\n> > --- a/src/bin/pg_resetwal/pg_resetwal.c\n> > +++ b/src/bin/pg_resetwal/pg_resetwal.c\n> > @@ -608,6 +608,7 @@ read_controlfile(void)\n> > len = read(fd, buffer, PG_CONTROL_FILE_SIZE);\n> > if (len < 0)\n> > {\n> > + pg_free(buffer);\n> > pg_log_error(\"could not read file \\\"%s\\\": %m\",\n> XLOG_CONTROL_FILE);\n> > exit(1);\n> > }\n>\n> There's an exit() two lines later, this is obviously not necessary.\n>\nExcess.\n\nDid you read patch all over?\n\n memcpy(&ControlFile, buffer, sizeof(ControlFile));\n+ pg_free(buffer);\n\n /* return false if WAL segment size is not valid */\n if (!IsValidWalSegSize(ControlFile.xlog_seg_size))\n@@ -644,6 +646,7 @@ read_controlfile(void)\n\n return true;\n }\n+ pg_free(buffer);\n\n /* Looks like it's a mess. */\n pg_log_warning(\"pg_control exists but is broken or wrong version;\nignoring it\");\n\nReport for Coverity:\n\n*** CID 1425435: Resource leaks (RESOURCE_LEAK)\n/dll/postgres/src/bin/pg_resetwal/pg_resetwal.c: 650 in read_controlfile()\n644\n645 return true;\n646 }\n647\n648 /* Looks like it's a mess. 
*/\n649 pg_log_warning(\"pg_control exists but is broken or wrong version;\nignoring it\");\n>>> CID 1425435: Resource leaks (RESOURCE_LEAK)\n>>> Variable \"buffer\" going out of scope leaks the storage it points to.\n650 return false;\n651 }\n652\n653\n654 /*\n655 * Guess at pg_control values when we can't read\n\nregards,\nRanier Vilela", "msg_date": "Thu, 23 Apr 2020 15:40:21 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] FIx resource leaks (pg_resetwal.c)" }, { "msg_contents": "On Thu, Apr 23, 2020 at 11:41 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> And if I already propose a solution even if it is not the best, it is much better than being silent and missing the opportunity to fix a bug.\n\nThe problem with that theory is that you're not creating any value\nover simply running Coverity directly. Your patches don't seem to be\nbased on any real analysis beyond what makes Coverity stop\ncomplaining, which is not helpful.\n\nFor example, the nbtree.c/btvacuumpage() issue you reported yesterday\ninvolved a NULL pointer dereference, but if the code path in question\never dereferenced the NULL pointer then it would be fundamentally\nwrong in many other ways, probably leading to data corruption. The fix\nthat you posted obviously completely missed the point. Even when\nCoverity identifies a serious issue, it usually needs to be carefully\ninterpreted.\n\nAnybody can run Coverity. Many of us do. Maybe the approach you've\ntaken would have had a noticeable benefit if you were not dealing with\na codebase that has already been subject to lots of triage of Coverity\nissues. 
But that's not the case.\n\n> Ridiculous is your lack of education.\n\nThis isn't helping you at all.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 23 Apr 2020 12:27:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] FIx resource leaks (pg_resetwal.c)" }, { "msg_contents": "On Thu, Apr 23, 2020 at 2:41 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> I do not agree in any way. At the very least what I am reporting is suspect. And if I already propose a solution even if it is not the best, it is much better than being silent and missing the opportunity to fix a bug.\n> Ridiculous is your lack of education.\n\nThat's rather rude. I doubt that you know anything about how much\neducation Andres does nor does not have. The fact that he doesn't\nagree with you does not mean that he is poorly educated.\n\nOn the substance of the issue, I see from the commit log that you've\ngotten a few real issues fixed -- but I also agree with Andres that\nyou've reported a lot of things that are not real issues, and that\ntakes up other people's time looking at things that really don't\nmatter. Please make an effort not to report things that don't actually\nneed to be fixed.\n\npg_resetwal exits very quickly, generally in a small fraction of a\nsecond. The allocation you're at pains to free only happens once per\nexecution and allocates only 8kB. Trying to free allocations that are\ntiny and short-lived has no benefit. It's better to let the program\nexit that much quicker, at which point all the memory is freed anyway.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Apr 2020 15:43:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] FIx resource leaks (pg_resetwal.c)" }, { "msg_contents": "Em qui., 23 de abr. 
de 2020 às 16:27, Peter Geoghegan <pg@bowt.ie> escreveu:\n\n> On Thu, Apr 23, 2020 at 11:41 AM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> > And if I already propose a solution even if it is not the best, it is\n> much better than being silent and missing the opportunity to fix a bug.\n>\n> The problem with that theory is that you're not creating any value\n> over simply running Coverity directly. Your patches don't seem to be\n> based on any real analysis beyond what makes Coverity stop\n> complaining, which is not helpful.\n>\nIn some cases, this may be true. But in fact I already fixed some bugs with\nthis technique. Even you already used a patch of mine to provide a fix.\nWasn't that helpful?\n1.\nhttps://www.postgresql.org/message-id/CAEudQAqJ%3DMVqJd4MHi%3DiMLismngE4GJqdiEZa1isxF3Pem-udg%40mail.gmail.com\n\n\n> For example, the nbtree.c/btvacuumpage() issue you reported yesterday\n> involved a NULL pointer dereference, but if the code path in question\n> ever dereferenced the NULL pointer then it would be fundamentally\n> wrong in many other ways, probably leading to data corruption. The fix\n> that you posted obviously completely missed the point. Even when\n> Coverity identifies a serious issue, it usually needs to be carefully\n> interpreted.\n>\nI disagree. In case of nbtree.c/btvacuumpag().\nIf you are validating \"opaque\" pointer, in three different ways to proceed\nwith cleaning, nothing more correct than validating the most important\nfirst, if the pointer is really valid. And that is what the patch does.\n\n\n>\n> Anybody can run Coverity. Many of us do. 
Maybe the approach you've\n> taken would have had a noticeable benefit if you were not dealing with\n> a codebase that has already been subject to lots of triage of Coverity\n> issues.\n>\nSorry, but the plsql-bugs list, has many reports of segmentation faults,\nthat shouldn't exist, if everyone uses Coverity or other tools, after\nwriting the code.\n\n\n> > Ridiculous is your lack of education.\n>\n> This isn't helping you at all.\n>\nConsideration and respect first.\n\nregards,\nRanier Vilela\n\nEm qui., 23 de abr. de 2020 às 16:27, Peter Geoghegan <pg@bowt.ie> escreveu:On Thu, Apr 23, 2020 at 11:41 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> And if I already propose a solution even if it is not the best, it is much better than being silent and missing the opportunity to fix a bug.\n\nThe problem with that theory is that you're not creating any value\nover simply running Coverity directly. Your patches don't seem to be\nbased on any real analysis beyond what makes Coverity stop\ncomplaining, which is not helpful.In some cases, this may be true. But in fact I already fixed some bugs with this technique. Even you already used a patch of mine to provide a fix. Wasn't that helpful?1. https://www.postgresql.org/message-id/CAEudQAqJ%3DMVqJd4MHi%3DiMLismngE4GJqdiEZa1isxF3Pem-udg%40mail.gmail.com \n\nFor example, the nbtree.c/btvacuumpage() issue you reported yesterday\ninvolved a NULL pointer dereference, but if the code path in question\never dereferenced the NULL pointer then it would be fundamentally\nwrong in many other ways, probably leading to data corruption. The fix\nthat you posted obviously completely missed the point. Even when\nCoverity identifies a serious issue, it usually needs to be carefully\ninterpreted.I disagree.  In case of nbtree.c/btvacuumpag().If you are validating \"opaque\" pointer, in three different ways to proceed with cleaning, nothing more correct than validating the most important first, if the pointer is really valid. 
And that is what the patch does. \n\nAnybody can run Coverity. Many of us do. Maybe the approach you've\ntaken would have had a noticeable benefit if you were not dealing with\na codebase that has already been subject to lots of triage of Coverity\nissues.Sorry, but the plsql-bugs list, has many reports of segmentation faults, that shouldn't exist, if everyone uses Coverity or other tools, after writing the code. \n> Ridiculous is your lack of education.\n\nThis isn't helping you at all.Consideration and respect first.regards,Ranier Vilela", "msg_date": "Thu, 23 Apr 2020 16:47:00 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] FIx resource leaks (pg_resetwal.c)" }, { "msg_contents": "Em qui., 23 de abr. de 2020 às 16:43, Robert Haas <robertmhaas@gmail.com>\nescreveu:\n\n> On Thu, Apr 23, 2020 at 2:41 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > I do not agree in any way. At the very least what I am reporting is\n> suspect. And if I already propose a solution even if it is not the best, it\n> is much better than being silent and missing the opportunity to fix a bug.\n> > Ridiculous is your lack of education.\n>\n> That's rather rude. I doubt that you know anything about how much\n> education Andres does nor does not have. The fact that he doesn't\n> agree with you does not mean that he is poorly educated.\n>\nSorry Robert.\n\n\n>\n> On the substance of the issue, I see from the commit log that you've\n> gotten a few real issues fixed -- but I also agree with Andres that\n> you've reported a lot of things that are not real issues, and that\n> takes up other people's time looking at things that really don't\n> matter. Please make an effort not to report things that don't actually\n> need to be fixed.\n\nAll my patches don't just leave my head. 
It comes from reports of analysis\ntools, by themselves, they are already suspect.\nI confess that FATAL error log, confused me a lot and since then, I have\ntried my best not to make the same mistakes.\n\n>\n>\n> pg_resetwal exits very quickly, generally in a small fraction of a\n> second. The allocation you're at pains to free only happens once per\n> execution and allocates only 8kB. Trying to free allocations that are\n> tiny and short-lived has no benefit. It's better to let the program\n> exit that much quicker, at which point all the memory is freed anyway.\n>\nRead_controlfile is a function, as it stands, it is useless to be reused.\n\nbest regards,\nRanier Vilela\n\nEm qui., 23 de abr. de 2020 às 16:43, Robert Haas <robertmhaas@gmail.com> escreveu:On Thu, Apr 23, 2020 at 2:41 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> I do not agree in any way. At the very least what I am reporting is suspect. And if I already propose a solution even if it is not the best, it is much better than being silent and missing the opportunity to fix a bug.\n> Ridiculous is your lack of education.\n\nThat's rather rude. I doubt that you know anything about how much\neducation Andres does nor does not have. The fact that he doesn't\nagree with you does not mean that he is poorly educated.Sorry Robert. \n\nOn the substance of the issue, I see from the commit log that you've\ngotten a few real issues fixed -- but I also agree with Andres that\nyou've reported a lot of things that are not real issues, and that\ntakes up other people's time looking at things that really don't\nmatter. Please make an effort not to report things that don't actually\nneed to be fixed.All my patches don't just leave my head. It comes from reports of analysis tools, by themselves, they are already suspect. I confess that FATAL error log, confused me a lot and since then, I have tried my best not to make the same mistakes. \n\npg_resetwal exits very quickly, generally in a small fraction of a\nsecond. 
The allocation you're at pains to free only happens once per\nexecution and allocates only 8kB. Trying to free allocations that are\ntiny and short-lived has no benefit. It's better to let the program\nexit that much quicker, at which point all the memory is freed anyway.Read_controlfile is a function, as it stands, it is useless to be reused. best regards,Ranier Vilela", "msg_date": "Thu, 23 Apr 2020 16:57:14 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] FIx resource leaks (pg_resetwal.c)" } ]
[ { "msg_contents": "Hi,\nPer Coverity.\n\nverify_manifest_checksum, declare and can utilize array of uint8, without\ninitializing it.\nWhile here, I applied the quick exit technique, to avoid unnecessary\ncomputations, if it is possible to avoid them.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 23 Apr 2020 16:29:34 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix possible Uninitialized variables (parse_manifest.c)" } ]
[ { "msg_contents": "Hi, \n\nThere are two unexpected codes for me about wait events for timeline \nhistory file.\nPlease let me know your thoughts whether if we need to change.\n\n\n1. readTimeLineHistory() function in timeline.c\n\nThe readTimeLineHistory() reads a timeline history file, \nbut it doesn't report “WAIT_EVENT_TIMELINE_HISTORY_READ\".\n\nIn my understanding, sscanf() is blocking read. \nSo, it's important to report a wait event.\n\n2. writeTimeLineHistory() function in timeline.c\n\nThe writeTimeLineHistory() function may write a timeline history file \ntwice,\nbut it reports “WAIT_EVENT_TIMELINE_HISTORY_WRITE\" only once.\n\nIt makes sense to report a wait event twice, because both of them use \nwrite().\n\nI attached a patch to mention the code line number.\n\n\nI checked the commit log which \"WAIT_EVENT_TIMELINE_HISTORY_READ\" and\n\"WAIT_EVENT_TIMELINE_HISTORY_WRITE\" are committed and the discussion \nabout it.\nBut I can't find the reason.\n\nPlease give me your comments.\nIf we need to change, I can make a patch to fix them.\n\n\nBy the way, which is correct \"timeline's history file\" or \"timeline \nhistory file\"?\nThe timeline.c has both. In my understanding, the latter is correct. If \nso, I will modify together.\n\nRegards,\n\n--\nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Fri, 24 Apr 2020 11:29:43 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Why are wait events not reported even though it reads/writes a\n timeline history file?" }, { "msg_contents": "\n\nOn 2020/04/24 11:29, Masahiro Ikeda wrote:\n> Hi,\n> \n> There are two unexpected codes for me about wait events for timeline history file.\n> Please let me know your thoughts whether if we need to change.\n> \n> \n> 1. 
readTimeLineHistory() function in timeline.c\n> \n> The readTimeLineHistory() reads a timeline history file,\n> but it doesn't report “WAIT_EVENT_TIMELINE_HISTORY_READ\".\n\nYeah, this sounds strange.\n\n> In my understanding, sscanf() is blocking read.\n> So, it's important to report a wait event.\n\nShouldn't the wait event be reported during fgets() rather than sscanf()?\n\n> 2. writeTimeLineHistory() function in timeline.c\n> \n> The writeTimeLineHistory() function may write a timeline history file twice,\n> but it reports “WAIT_EVENT_TIMELINE_HISTORY_WRITE\" only once.\n> \n> It makes sense to report a wait event twice, because both of them use write().\n\nYes.\n\n> I attached a patch to mention the code line number.\n> \n> \n> I checked the commit log which \"WAIT_EVENT_TIMELINE_HISTORY_READ\" and\n> \"WAIT_EVENT_TIMELINE_HISTORY_WRITE\" are committed and the discussion about it.\n> But I can't find the reason.\n> \n> Please give me your comments.\n> If we need to change, I can make a patch to fix them.\n\nThanks! I agree to fix those issues.\n\n> By the way, which is correct \"timeline's history file\" or \"timeline history file\"?\n> The timeline.c has both. In my understanding, the latter is correct. If so, I will modify together.\n\nMaybe both are correct?? I have no strong opinion about this.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 27 Apr 2020 12:25:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" }, { "msg_contents": "On 2020-04-27 12:25, Fujii Masao wrote:\n> On 2020/04/24 11:29, Masahiro Ikeda wrote:\n>> Hi,\n>> \n>> There are two unexpected codes for me about wait events for timeline \n>> history file.\n>> Please let me know your thoughts whether if we need to change.\n>> \n>> \n>> 1. 
readTimeLineHistory() function in timeline.c\n>> \n>> The readTimeLineHistory() reads a timeline history file,\n>> but it doesn't report “WAIT_EVENT_TIMELINE_HISTORY_READ\".\n> \n> Yeah, this sounds strange.\n> \n>> In my understanding, sscanf() is blocking read.\n>> So, it's important to report a wait event.\n> \n> Shouldn't the wait event be reported during fgets() rather than \n> sscanf()?\n> \n>> 2. writeTimeLineHistory() function in timeline.c\n>> \n>> The writeTimeLineHistory() function may write a timeline history file \n>> twice,\n>> but it reports “WAIT_EVENT_TIMELINE_HISTORY_WRITE\" only once.\n>> \n>> It makes sense to report a wait event twice, because both of them use \n>> write().\n> \n> Yes.\n> \n>> I attached a patch to mention the code line number.\n>> \n>> \n>> I checked the commit log which \"WAIT_EVENT_TIMELINE_HISTORY_READ\" and\n>> \"WAIT_EVENT_TIMELINE_HISTORY_WRITE\" are committed and the discussion \n>> about it.\n>> But I can't find the reason.\n>> \n>> Please give me your comments.\n>> If we need to change, I can make a patch to fix them.\n> \n> Thanks! I agree to fix those issues.\n\nThanks for the comments. I attach a patch to fix those issues.\nPlease review it.\n\n>> By the way, which is correct \"timeline's history file\" or \"timeline \n>> history file\"?\n>> The timeline.c has both. In my understanding, the latter is correct. \n>> If so, I will modify together.\n> \n> Maybe both are correct?? I have no strong opinion about this.\n\nOK, I didn't fix it at this time.\n\nRegards,\n-- \nMasahiro Ikeda", "msg_date": "Tue, 28 Apr 2020 11:10:00 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" 
}, { "msg_contents": "\n\nOn 2020/04/28 11:10, Masahiro Ikeda wrote:\n> On 2020-04-27 12:25, Fujii Masao wrote:\n>> On 2020/04/24 11:29, Masahiro Ikeda wrote:\n>>> Hi,\n>>>\n>>> There are two unexpected codes for me about wait events for timeline history file.\n>>> Please let me know your thoughts whether if we need to change.\n>>>\n>>>\n>>> 1. readTimeLineHistory() function in timeline.c\n>>>\n>>> The readTimeLineHistory() reads a timeline history file,\n>>> but it doesn't report “WAIT_EVENT_TIMELINE_HISTORY_READ\".\n>>\n>> Yeah, this sounds strange.\n>>\n>>> In my understanding, sscanf() is blocking read.\n>>> So, it's important to report a wait event.\n>>\n>> Shouldn't the wait event be reported during fgets() rather than sscanf()?\n>>\n>>> 2. writeTimeLineHistory() function in timeline.c\n>>>\n>>> The writeTimeLineHistory() function may write a timeline history file twice,\n>>> but it reports “WAIT_EVENT_TIMELINE_HISTORY_WRITE\" only once.\n>>>\n>>> It makes sense to report a wait event twice, because both of them use write().\n>>\n>> Yes.\n>>\n>>> I attached a patch to mention the code line number.\n>>>\n>>>\n>>> I checked the commit log which \"WAIT_EVENT_TIMELINE_HISTORY_READ\" and\n>>> \"WAIT_EVENT_TIMELINE_HISTORY_WRITE\" are committed and the discussion about it.\n>>> But I can't find the reason.\n>>>\n>>> Please give me your comments.\n>>> If we need to change, I can make a patch to fix them.\n>>\n>> Thanks! I agree to fix those issues.\n> \n> Thanks for the comments. 
I attach a patch to fix those issues.\n> Please review it.\n\nThanks for the patch!\n\n prevend = InvalidXLogRecPtr;\n+ pgstat_report_wait_start(WAIT_EVENT_TIMELINE_HISTORY_READ);\n while (fgets(fline, sizeof(fline), fd) != NULL)\n {\n /* skip leading whitespace and check for # comment */\n@@ -172,6 +173,7 @@ readTimeLineHistory(TimeLineID targetTLI)\n \n /* we ignore the remainder of each line */\n }\n+ pgstat_report_wait_end();\n\nIsn't it safer to report the wait event during fgets() rather than putting\nthose calls around the whole loop, like other code does? For example,\nwriteTimeLineHistory() reports the wait event during read() rather than\nwhole loop.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 28 Apr 2020 14:49:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" }, { "msg_contents": "On Tue, Apr 28, 2020 at 02:49:00PM +0900, Fujii Masao wrote:\n> Isn't it safer to report the wait event during fgets() rather than putting\n> those calls around the whole loop, like other code does? For example,\n> writeTimeLineHistory() reports the wait event during read() rather than\n> whole loop.\n\nYeah, I/O wait events should be taken only during the duration of the\nsystem calls. Particularly here, you may finish with an elog() that\ncauses the wait event to be set longer than it should, leading to a\nrather incorrect state if a snapshot of pg_stat_activity is taken.\n--\nMichael", "msg_date": "Tue, 28 Apr 2020 15:09:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" 
}, { "msg_contents": "On 2020-04-28 15:09, Michael Paquier wrote:\n> On Tue, Apr 28, 2020 at 02:49:00PM +0900, Fujii Masao wrote:\n>> Isn't it safer to report the wait event during fgets() rather than \n>> putting\n>> those calls around the whole loop, like other code does? For example,\n>> writeTimeLineHistory() reports the wait event during read() rather \n>> than\n>> whole loop.\n> \n> Yeah, I/O wait events should be taken only during the duration of the\n> system calls. Particularly here, you may finish with an elog() that\n> causes the wait event to be set longer than it should, leading to a\n> rather incorrect state if a snapshot of pg_stat_activity is taken.\n> --\n\nThanks for your comments.\n\nI fixed it to report the wait event during fgets() only.\nPlease review the v2 patch I attached.\n\nRegard,\n-- \nMasahiro Ikeda", "msg_date": "Tue, 28 Apr 2020 17:42:32 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" }, { "msg_contents": "\n\nOn 2020/04/28 17:42, Masahiro Ikeda wrote:\n> On 2020-04-28 15:09, Michael Paquier wrote:\n>> On Tue, Apr 28, 2020 at 02:49:00PM +0900, Fujii Masao wrote:\n>>> Isn't it safer to report the wait event during fgets() rather than putting\n>>> those calls around the whole loop, like other code does? For example,\n>>> writeTimeLineHistory() reports the wait event during read() rather than\n>>> whole loop.\n>>\n>> Yeah, I/O wait events should be taken only during the duration of the\n>> system calls.� Particularly here, you may finish with an elog() that\n>> causes the wait event to be set longer than it should, leading to a\n>> rather incorrect state if a snapshot of pg_stat_activity is taken.\n>> -- \n> \n> Thanks for your comments.\n> \n> I fixed it to report the wait event during fgets() only.\n> Please review the v2 patch I attached.\n\nThanks for updating the patch! 
Here are the review comments from me.\n\n+\t\tchar\t *result;\n+\t\tpgstat_report_wait_start(WAIT_EVENT_TIMELINE_HISTORY_READ);\n+\t\tresult = fgets(fline, sizeof(fline), fd);\n+\t\tpgstat_report_wait_end();\n+\t\tif (result == NULL)\n+\t\t\tbreak;\n+\n \t\t/* skip leading whitespace and check for # comment */\n \t\tchar\t *ptr;\n\nSince the variable name \"result\" has been already used in this function,\nit should be renamed.\n\nThe code should not be inject into the variable declaration block.\n\nWhen reading this patch, I found that IO-error in fgets() has not\nbeen checked there. Though this is not the issue that you reported,\nbut it seems better to fix it together. So what about adding\nthe following code?\n\n\tif (ferror(fd))\n\t\tereport(ERROR,\n\t\t\t(errcode_for_file_access(),\n\t\t\t errmsg(\"could not read file \\\"%s\\\": %m\", path)));\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 1 May 2020 00:25:46 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" }, { "msg_contents": "On 2020-05-01 00:25, Fujii Masao wrote:\n> On 2020/04/28 17:42, Masahiro Ikeda wrote:\n>> On 2020-04-28 15:09, Michael Paquier wrote:\n>>> On Tue, Apr 28, 2020 at 02:49:00PM +0900, Fujii Masao wrote:\n>>>> Isn't it safer to report the wait event during fgets() rather than \n>>>> putting\n>>>> those calls around the whole loop, like other code does? For \n>>>> example,\n>>>> writeTimeLineHistory() reports the wait event during read() rather \n>>>> than\n>>>> whole loop.\n>>> \n>>> Yeah, I/O wait events should be taken only during the duration of the\n>>> system calls.  
Particularly here, you may finish with an elog() that\n>>> causes the wait event to be set longer than it should, leading to a\n>>> rather incorrect state if a snapshot of pg_stat_activity is taken.\n>>> --\n>> \n>> Thanks for your comments.\n>> \n>> I fixed it to report the wait event during fgets() only.\n>> Please review the v2 patch I attached.\n> \n> Thanks for updating the patch! Here are the review comments from me.\n> \n> +\t\tchar\t *result;\n> +\t\tpgstat_report_wait_start(WAIT_EVENT_TIMELINE_HISTORY_READ);\n> +\t\tresult = fgets(fline, sizeof(fline), fd);\n> +\t\tpgstat_report_wait_end();\n> +\t\tif (result == NULL)\n> +\t\t\tbreak;\n> +\n> \t\t/* skip leading whitespace and check for # comment */\n> \t\tchar\t *ptr;\n> \n> Since the variable name \"result\" has been already used in this \n> function,\n> it should be renamed.\n\nSorry for that.\n\nI thought to rename it, but I changed to use feof()\nfor clarify the difference from ferror().\n\n\n> The code should not be inject into the variable declaration block.\n\nThanks for the comment.\nI moved the code block after the variable declaration block.\n\n\n> When reading this patch, I found that IO-error in fgets() has not\n> been checked there. Though this is not the issue that you reported,\n> but it seems better to fix it together. So what about adding\n> the following code?\n> \n> \tif (ferror(fd))\n> \t\tereport(ERROR,\n> \t\t\t(errcode_for_file_access(),\n> \t\t\t errmsg(\"could not read file \\\"%s\\\": %m\", path)));\n\nThanks, I agree your comment.\nI added the above code to the v3 patch I attached.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Fri, 01 May 2020 10:07:21 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" 
}, { "msg_contents": "On 2020/05/01 10:07, Masahiro Ikeda wrote:\n> On 2020-05-01 00:25, Fujii Masao wrote:\n>> On 2020/04/28 17:42, Masahiro Ikeda wrote:\n>>> On 2020-04-28 15:09, Michael Paquier wrote:\n>>>> On Tue, Apr 28, 2020 at 02:49:00PM +0900, Fujii Masao wrote:\n>>>>> Isn't it safer to report the wait event during fgets() rather than putting\n>>>>> those calls around the whole loop, like other code does? For example,\n>>>>> writeTimeLineHistory() reports the wait event during read() rather than\n>>>>> whole loop.\n>>>>\n>>>> Yeah, I/O wait events should be taken only during the duration of the\n>>>> system calls.  Particularly here, you may finish with an elog() that\n>>>> causes the wait event to be set longer than it should, leading to a\n>>>> rather incorrect state if a snapshot of pg_stat_activity is taken.\n>>>> -- \n>>>\n>>> Thanks for your comments.\n>>>\n>>> I fixed it to report the wait event during fgets() only.\n>>> Please review the v2 patch I attached.\n>>\n>> Thanks for updating the patch! Here are the review comments from me.\n>>\n>> +        char       *result;\n>> +        pgstat_report_wait_start(WAIT_EVENT_TIMELINE_HISTORY_READ);\n>> +        result = fgets(fline, sizeof(fline), fd);\n>> +        pgstat_report_wait_end();\n>> +        if (result == NULL)\n>> +            break;\n>> +\n>>          /* skip leading whitespace and check for # comment */\n>>          char       *ptr;\n>>\n>> Since the variable name \"result\" has been already used in this function,\n>> it should be renamed.\n> \n> Sorry for that.\n> \n> I thought to rename it, but I changed to use feof()\n> for clarify the difference from ferror().\n> \n> \n>> The code should not be inject into the variable declaration block.\n> \n> Thanks for the comment.\n> I moved the code block after the variable declaration block.\n> \n> \n>> When reading this patch, I found that IO-error in fgets() has not\n>> been checked there. 
Though this is not the issue that you reported,\n>> but it seems better to fix it together. So what about adding\n>> the following code?\n>>\n>>     if (ferror(fd))\n>>         ereport(ERROR,\n>>             (errcode_for_file_access(),\n>>              errmsg(\"could not read file \\\"%s\\\": %m\", path)));\n> \n> Thanks, I agree your comment.\n> I added the above code to the v3 patch I attached.\n\nThanks for updating the patch! It looks good to me.\n\nI applied cosmetic changes to the patch (attached). Barring any objection,\nI will push this patch (also back-patch to v10 where wait-event for timeline\nfile was added).\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 1 May 2020 12:04:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" }, { "msg_contents": "On Fri, May 01, 2020 at 12:04:56PM +0900, Fujii Masao wrote:\n> I applied cosmetic changes to the patch (attached). Barring any objection,\n> I will push this patch (also back-patch to v10 where wait-event for timeline\n> file was added).\n\nSorry for arriving late to the party. 
I have one tiny comment.\n\n> +\t\tpgstat_report_wait_start(WAIT_EVENT_TIMELINE_HISTORY_READ);\n> +\t\tres = fgets(fline, sizeof(fline), fd);\n> +\t\tpgstat_report_wait_end();\n> +\t\tif (ferror(fd))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t\t errmsg(\"could not read file \\\"%s\\\": %m\", path)));\n> +\t\tif (res == NULL)\n> +\t\t\tbreak;\n\nIt seems to me that there is no point to check ferror() if fgets()\ndoes not return NULL, no?\n--\nMichael", "msg_date": "Sat, 2 May 2020 11:24:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" }, { "msg_contents": "On 2020/05/02 11:24, Michael Paquier wrote:\n> On Fri, May 01, 2020 at 12:04:56PM +0900, Fujii Masao wrote:\n>> I applied cosmetic changes to the patch (attached). Barring any objection,\n>> I will push this patch (also back-patch to v10 where wait-event for timeline\n>> file was added).\n> \n> Sorry for arriving late to the party. I have one tiny comment.\n> \n>> +\t\tpgstat_report_wait_start(WAIT_EVENT_TIMELINE_HISTORY_READ);\n>> +\t\tres = fgets(fline, sizeof(fline), fd);\n>> +\t\tpgstat_report_wait_end();\n>> +\t\tif (ferror(fd))\n>> +\t\t\tereport(ERROR,\n>> +\t\t\t\t\t(errcode_for_file_access(),\n>> +\t\t\t\t\t errmsg(\"could not read file \\\"%s\\\": %m\", path)));\n>> +\t\tif (res == NULL)\n>> +\t\t\tbreak;\n> \n> It seems to me that there is no point to check ferror() if fgets()\n> does not return NULL, no?\n\nYeah, so I updated the patch so that ferror() is called only\nwhen fgets() returns NULL. 
Attached is the updated version of\nthe patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 7 May 2020 16:51:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" }, { "msg_contents": "On Thu, May 07, 2020 at 04:51:16PM +0900, Fujii Masao wrote:\n> Yeah, so I updated the patch so that ferror() is called only\n> when fgets() returns NULL. Attached is the updated version of\n> the patch.\n\nThanks for the new patch. LGTM.\n--\nMichael", "msg_date": "Thu, 7 May 2020 21:24:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" }, { "msg_contents": "\n\nOn 2020/05/07 21:24, Michael Paquier wrote:\n> On Thu, May 07, 2020 at 04:51:16PM +0900, Fujii Masao wrote:\n>> Yeah, so I updated the patch so that ferror() is called only\n>> when fgets() returns NULL. Attached is the updated version of\n>> the patch.\n> \n> Thanks for the new patch. LGTM.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 8 May 2020 10:42:18 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Why are wait events not reported even though it reads/writes a\n timeline history file?" } ]
[ { "msg_contents": "Hi\n\nplpgsql generate lot of auto variables - FOUND, SQLERRM, cycle's control\nvariable, TG_WHEN, TG_OP, ..\n\nCurrently these variables are not protected, what can be source of\nproblems, mainly for not experienced users. I propose mark these variables\nas constant.\n\n-- today\npostgres=# do $$ begin for i in 1..10 loop raise notice 'i=%', i; i := 20;\nend loop; end; $$;\nNOTICE: i=1\nNOTICE: i=2\nNOTICE: i=3\nNOTICE: i=4\nNOTICE: i=5\nNOTICE: i=6\nNOTICE: i=7\nNOTICE: i=8\nNOTICE: i=9\nNOTICE: i=10\nDO\n\n-- after patch\npostgres=# do $$ begin for i in 1..10 loop raise notice 'i=%', i; i := 20;\nend loop; end; $$;\nERROR: variable \"i\" is declared CONSTANT\nLINE 1: ... begin for i in 1..10 loop raise notice 'i=%', i; i := 20; e...\n\nThese variables are protected in PL/SQL too.\n\nComments, notes?\n\nRegards\n\nPavel\n\np.s. this is simple implementation - just for function demo. Maybe can be\nbetter to introduce new plpgsql_variable's flag like is_protected or\nsimilar than using isconst.", "msg_date": "Fri, 24 Apr 2020 08:53:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - plpgsql - all plpgsql auto variables should be constant" }, { "msg_contents": "On Fri, Apr 24, 2020 at 12:24 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n> plpgsql generate lot of auto variables - FOUND, SQLERRM, cycle's control variable, TG_WHEN, TG_OP, ..\n>\n> Currently these variables are not protected, what can be source of problems, mainly for not experienced users. 
I propose mark these variables as constant.\n>\n\n+1 for general idea.\n\n> -- today\n> postgres=# do $$ begin for i in 1..10 loop raise notice 'i=%', i; i := 20; end loop; end; $$;\n> NOTICE: i=1\n> NOTICE: i=2\n> NOTICE: i=3\n> NOTICE: i=4\n> NOTICE: i=5\n> NOTICE: i=6\n> NOTICE: i=7\n> NOTICE: i=8\n> NOTICE: i=9\n> NOTICE: i=10\n> DO\n>\n> -- after patch\n> postgres=# do $$ begin for i in 1..10 loop raise notice 'i=%', i; i := 20; end loop; end; $$;\n> ERROR: variable \"i\" is declared CONSTANT\n\nCONSTANT looks odd in this context since i's value changes. But you\nalready have a proposal to change that.\n\n>\n> p.s. this is simple implementation - just for function demo. Maybe can be better to introduce new plpgsql_variable's flag like is_protected or similar than using isconst.\n\nYes, I think that will help. In this case PL/SQL says that \"i\" can not\nbe used as an assignment target. That's not very clear but something\non those lines will help.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 24 Apr 2020 17:06:33 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - plpgsql - all plpgsql auto variables should be\n constant" }, { "msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> On Fri, Apr 24, 2020 at 12:24 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> plpgsql generate lot of auto variables - FOUND, SQLERRM, cycle's control variable, TG_WHEN, TG_OP, ..\n>> Currently these variables are not protected, what can be source of problems, mainly for not experienced users. I propose mark these variables as constant.\n\n> +1 for general idea.\n\nI'm skeptical. If we'd marked them that way from day one, it would have\nbeen fine, but to change it now is a whole different discussion. I think\nthe odds that anybody will thank us are much smaller than the odds that\nthere will be complaints. 
In particular, I'd be just about certain that\nthere are people out there who are changing FOUND and loop control\nvariables manually, and they will not appreciate us breaking their code.\n\nAs for the trigger variables specifically, what is the rationale\nfor marking TG_OP read-only but not OLD and NEW? But it is dead\ncertain that we won't get away with making the latter two read-only.\n\nIn short, -1. This ship sailed about twenty years ago.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Apr 2020 10:07:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal - plpgsql - all plpgsql auto variables should be\n constant" }, { "msg_contents": "pá 24. 4. 2020 v 16:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> > On Fri, Apr 24, 2020 at 12:24 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> plpgsql generate lot of auto variables - FOUND, SQLERRM, cycle's\n> control variable, TG_WHEN, TG_OP, ..\n> >> Currently these variables are not protected, what can be source of\n> problems, mainly for not experienced users. I propose mark these variables\n> as constant.\n>\n> > +1 for general idea.\n>\n> I'm skeptical. If we'd marked them that way from day one, it would have\n> been fine, but to change it now is a whole different discussion. I think\n> the odds that anybody will thank us are much smaller than the odds that\n> there will be complaints. In particular, I'd be just about certain that\n> there are people out there who are changing FOUND and loop control\n> variables manually, and they will not appreciate us breaking their code.\n>\n\nThis is not black/white issue. Maybe can sense to modify the FOUND\nvariable, but modification of control variable has not any sense. The\nupdated value is rewriten by runtime any iteration. 
You cannot to use\nmodification of control variable to skip some iterations like in C.\n\n\n> As for the trigger variables specifically, what is the rationale\n> for marking TG_OP read-only but not OLD and NEW? But it is dead\n> certain that we won't get away with making the latter two read-only.\n>\n\nFor before triggers the NEW have to be updated. Any other maybe should be\nprotected, but there is little bit different kind of informations.\n\n\n> In short, -1. This ship sailed about twenty years ago.\n>\n> regards, tom lane\n>", "msg_date": "Fri, 24 Apr 2020 16:47:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - plpgsql - all plpgsql auto variables should be\n constant" }, { "msg_contents": "At Fri, 24 Apr 2020 16:47:28 +0200, Pavel Stehule <pavel.stehule@gmail.com> wrote in \n> pá 24. 4. 2020 v 16:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> \n> > Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> > > On Fri, Apr 24, 2020 at 12:24 PM Pavel Stehule <pavel.stehule@gmail.com>\n> > wrote:\n> > >> plpgsql generate lot of auto variables - FOUND, SQLERRM, cycle's\n> > control variable, TG_WHEN, TG_OP, ..\n> > >> Currently these variables are not protected, what can be source of\n> > problems, mainly for not experienced users. I propose mark these variables\n> > as constant.\n> >\n> > > +1 for general idea.\n> >\n> > I'm skeptical. If we'd marked them that way from day one, it would have\n> > been fine, but to change it now is a whole different discussion. I think\n> > the odds that anybody will thank us are much smaller than the odds that\n> > there will be complaints. In particular, I'd be just about certain that\n> > there are people out there who are changing FOUND and loop control\n> > variables manually, and they will not appreciate us breaking their code.\n> >\n> \n> This is not black/white issue. Maybe can sense to modify the FOUND\n> variable, but modification of control variable has not any sense. The\n> updated value is rewriten by runtime any iteration.
You cannot to use\n> modification of control variable to skip some iterations like in C.\n\nIt seems to me, the loop structure is not a parallel of for() in C. It\nis rather a parallel of foreach of Perl or \"for in range()\" in\nPython. So it is natural to me that the i is assignable and reset with\nthe next value at every iteration. I believe that there are many\nexisting cases where the control variable is modified in a loop.\n\nOn the other hand, I'm not sure about FOUND and the similars and I\ndon't have a firm opinion them. I don't see a use case where they need\nto be assignable. However, I don't see a clear reason they mustn't be\nassignable, too. (And the behavior is documented at least for FOUND.)\n\n> > As for the trigger variables specifically, what is the rationale\n> > for marking TG_OP read-only but not OLD and NEW? But it is dead\n> > certain that we won't get away with making the latter two read-only.\n> >\n> \n> For before triggers the NEW have to be updated. Any other maybe should be\n> protected, but there is little bit different kind of informations.\n>\n> > In short, -1. This ship sailed about twenty years ago.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 27 Apr 2020 12:02:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - plpgsql - all plpgsql auto variables should be\n constant" }, { "msg_contents": "po 27. 4. 2020 v 5:02 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nnapsal:\n\n> At Fri, 24 Apr 2020 16:47:28 +0200, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote in\n> > pá 24. 4. 
2020 v 16:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >\n> > > Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> > > > On Fri, Apr 24, 2020 at 12:24 PM Pavel Stehule <\n> pavel.stehule@gmail.com>\n> > > wrote:\n> > > >> plpgsql generate lot of auto variables - FOUND, SQLERRM, cycle's\n> > > control variable, TG_WHEN, TG_OP, ..\n> > > >> Currently these variables are not protected, what can be source of\n> > > problems, mainly for not experienced users. I propose mark these\n> variables\n> > > as constant.\n> > >\n> > > > +1 for general idea.\n> > >\n> > > I'm skeptical. If we'd marked them that way from day one, it would\n> have\n> > > been fine, but to change it now is a whole different discussion. I\n> think\n> > > the odds that anybody will thank us are much smaller than the odds that\n> > > there will be complaints. In particular, I'd be just about certain\n> that\n> > > there are people out there who are changing FOUND and loop control\n> > > variables manually, and they will not appreciate us breaking their\n> code.\n> > >\n> >\n> > This is not black/white issue. Maybe can sense to modify the FOUND\n> > variable, but modification of control variable has not any sense. The\n> > updated value is rewriten by runtime any iteration. You cannot to use\n> > modification of control variable to skip some iterations like in C.\n>\n> It seems to me, the loop structure is not a parallel of for() in C. It\n> is rather a parallel of foreach of Perl or \"for in range()\" in\n> Python. So it is natural to me that the i is assignable and reset with\n> the next value at every iteration. 
I believe that there are many\n> existing cases where the control variable is modified in a loop.\n>\n\nit is based on PL/SQL language and this language is based on ADA.\n\nThere loop parameter is constant\n\nhttps://www.adaic.org/resources/add_content/standards/05aarm/html/AA-5-5.html\n\nRegards\n\nPavel\n\n\n> On the other hand, I'm not sure about FOUND and the similars and I\n> don't have a firm opinion them. I don't see a use case where they need\n> to be assignable. However, I don't see a clear reason they mustn't be\n> assignable, too. (And the behavior is documented at least for FOUND.)\n>\n> > > As for the trigger variables specifically, what is the rationale\n> > > for marking TG_OP read-only but not OLD and NEW? But it is dead\n> > > certain that we won't get away with making the latter two read-only.\n> > >\n> >\n> > For before triggers the NEW have to be updated. Any other maybe should be\n> > protected, but there is little bit different kind of informations.\n> >\n> > > In short, -1. This ship sailed about twenty years ago.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>", "msg_date": "Mon, 27 Apr 2020 06:56:13 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - plpgsql - all plpgsql auto variables should be\n constant" }, { "msg_contents": "po 27. 4. 2020 v 16:26 odesílatel Greg Stark <stark@mit.edu> napsal:\n\n> On Fri, 24 Apr 2020 at 10:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > I'm skeptical. If we'd marked them that way from day one, it would have\n> > been fine, but to change it now is a whole different discussion. I think\n> > the odds that anybody will thank us are much smaller than the odds that\n> > there will be complaints. In particular, I'd be just about certain that\n> > there are people out there who are changing FOUND and loop control\n> > variables manually, and they will not appreciate us breaking their code.\n>\n> I kind of doubt it would break anybody's code. But I also doubt it's\n> actually going to help anybody. It's not exactly an easy bug to write,\n> so meh, I can't really get worked up either way about this.\n>\n> > As for the trigger variables specifically, what is the rationale\n> > for marking TG_OP read-only but not OLD and NEW? But it is dead\n> > certain that we won't get away with making the latter two read-only.\n>\n> But, uh, this actually seems like it might help people. Obviously we\n> can't make NEW constant for BEFORE triggers, but for AFTER triggers it\n> would actually be catching quite an easy-to-write bug.
I bet plenty of\n> people accidentally define triggers as AFTER triggers which are\n> intending to modify the columns being stored and then don't understand\n> why they aren't working.\n>\n> They might not even find out right away if the trigger only modifies\n> the columns sometimes so it could be the kind of latent bug that\n> catches people in production (which wouldn't be improved by the patch\n> but at least it would produce an error rather than silent data\n> corruption).\n>\n> The only valid use cases that maybe would cause some pain would be\n> people using the same function for BEFORE *and* AFTER triggers and\n> where that code is written to just assign to NEW in both cases. That\n> seems like it would be odd though since we're not talking about an\n> audit function or something like that if the function is assigning to\n> NEW...\n>\n\nthis behave can be dynamic and it can be active only for AFTER trigger\n\n\n\n> --\n> greg\n>", "msg_date": "Mon, 27 Apr 2020 19:14:11 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - plpgsql - all plpgsql auto variables should be\n constant" }, { "msg_contents": "On Mon, Apr 27, 2020 at 7:56 PM Greg Stark <stark@mit.edu> wrote:\n>\n> On Fri, 24 Apr 2020 at 10:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > I'm skeptical. If we'd marked them that way from day one, it would have\n> > been fine, but to change it now is a whole different discussion. I think\n> > the odds that anybody will thank us are much smaller than the odds that\n> > there will be complaints. In particular, I'd be just about certain that\n> > there are people out there who are changing FOUND and loop control\n> > variables manually, and they will not appreciate us breaking their code.\n>\n> I kind of doubt it would break anybody's code. But I also doubt it's\n> actually going to help anybody. It's not exactly an easy bug to write,\n> so meh, I can't really get worked up either way about this.\n\nWe could retain the old behaviour by using a GUC which defaults to old\nbehaviour.
In particular, I'd be just about certain that\n> > there are people out there who are changing FOUND and loop control\n> > variables manually, and they will not appreciate us breaking their code.\n>\n> I kind of doubt it would break anybody's code. But I also doubt it's\n> actually going to help anybody. It's not exactly an easy bug to write,\n> so meh, I can't really get worked up either way about this.\n\nWe could retain the old behaviour by using a GUC which defaults to old\nbehaviour. More GUCs means more confusion, this once guc under plpgsql\nextension might actually help.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 28 Apr 2020 17:04:59 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - plpgsql - all plpgsql auto variables should be\n constant" }, { "msg_contents": "út 28. 4. 2020 v 13:35 odesílatel Ashutosh Bapat <\nashutosh.bapat.oss@gmail.com> napsal:\n\n> On Mon, Apr 27, 2020 at 7:56 PM Greg Stark <stark@mit.edu> wrote:\n> >\n> > On Fri, 24 Apr 2020 at 10:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > I'm skeptical. If we'd marked them that way from day one, it would\n> have\n> > > been fine, but to change it now is a whole different discussion. I\n> think\n> > > the odds that anybody will thank us are much smaller than the odds that\n> > > there will be complaints. In particular, I'd be just about certain\n> that\n> > > there are people out there who are changing FOUND and loop control\n> > > variables manually, and they will not appreciate us breaking their\n> code.\n> >\n> > I kind of doubt it would break anybody's code. But I also doubt it's\n> > actually going to help anybody. It's not exactly an easy bug to write,\n> > so meh, I can't really get worked up either way about this.\n>\n> We could retain the old behaviour by using a GUC which defaults to old\n> behaviour. 
More GUCs means more confusion, this once guc under plpgsql\n> extension might actually help.\n>\n\nI am not sure if another GUC can help (in this case). Probably it cannot be\nthe default, and beginners have zero knowledge to enable this or a similar GUC.\n\nThis week I enhanced plpgsql_check with a new check\n(https://github.com/okbob/plpgsql_check) related to this feature.\n\nI am afraid that the people who need these checks and some help probably don't\nknow about this extension.\n\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n", "msg_date": "Tue, 28 Apr 2020 13:41:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - plpgsql - all plpgsql auto variables should be\n constant" } ]
[ { "msg_contents": "Hey,\n\nOur application sends millions of rows to the database every hour\nusing the COPY\nIN protocol.\nWe've switched recently from TEXT based COPY to the BINARY one.\nWe've noticed a slight performance increase, mostly because we don't need\nto escape the content anymore.\n\nUnfortunately the binary protocol's output ended up being slightly bigger\nthan the text one (e.g. for one payload it's *373MB* now, was *356MB* before\n)\nWe would like to share our thoughts on how we may be able to improve that,\nif you're open to suggestions.\n\nIt's possible our request is related to what the doc already refers to as:\n\n> It is anticipated that a future extension might add a header field that\n> allows per-column format codes to be specified.\n\n\n----\n\nCurrently every row in BINARY defines the number of columns (2 bytes) and\nevery column defines its size (4 bytes per column) - see\nhttps://www.postgresql.org/docs/12/sql-copy.html#id-1.9.3.55.9.4.6.\nNULL values are currently sent as a two byte -1 value.\n\nGiven that BINARY can't do any type conversion anyway, we should be able to\ndeduce the expected size of most columns - while keeping the size prefixes\nfor the dynamic ones (e.g. 
BYTEA or TEXT).\n\nThe extension part of the header (\nhttps://www.postgresql.org/docs/12/sql-copy.html#id-1.9.3.55.9.4.5:~:text=Header%20extension%20area%20length)\nwould allow us to keep this backwards compatible by switching between the\ntwo versions.\nIf we don't want to use this part of the header for the BINARY format,\nmaybe we could add a FIXED modifier to the COPY IN sql definition?\n\nOr alternatively if we don't want to deduce their counts and sizes for some\nreason, could we get away with just sending it once and having every row\nfollow the single header?\n\n----\n\nBy skipping the column count and sizes for every row, in our example this\nchange would reduce the payload to *332MB* (most of our payload is binary,\nlightweight structures consisting of numbers only could see a >*2x*\ndecrease in size).\n\nFor dynamic content, where we have to provide the size in advance we could\nsend that in variable length encoding\n<https://en.wikipedia.org/wiki/Variable-length_code> instead (e.g. the sign\nbit could signal whether the next byte is still part of the size). Variable\nlength sizes would allow us to define a special NULL character as well.\nIn our case this change would reduce our payload further to *317MB.*\n\nIn summary, these proposed changes would allow us to reduce the payload\nsize by roughly *15% -* but would expect even greater gains in general.\n\n\nThanks,\n* Lőrinc Pap*\n\n-- \nLőrinc Pap\nSenior Software Engineer\n<https://gradle.com/>\n", "msg_date": "Fri, 24 Apr 2020 12:53:00 +0200", "msg_from": "=?UTF-8?Q?L=C5=91rinc_Pap?= <lorinc@gradle.com>", "msg_from_op": true, "msg_subject": "Binary COPY IN size reduction" }, { "msg_contents": "=?UTF-8?Q?L=C5=91rinc_Pap?= <lorinc@gradle.com> writes:\n> We've switched recently from TEXT based COPY to the BINARY one.\n> We've noticed a slight performance increase, mostly because we don't need\n> to escape the content anymore.\n> Unfortunately the binary protocol's output ended up being slightly bigger\n> than the text one (e.g. for one payload it's *373MB* now, was *356MB* before)\n> ...\n> By skipping the column count and sizes for every row, in our example this\n> change would reduce the payload to *332MB* (most of our payload is binary,\n> lightweight structures consisting of numbers only could see a >*2x*\n> decrease in size).\n\nTBH, that amount of gain does not seem to be worth the enormous\ncompatibility costs of introducing a new COPY data format. What you\npropose also makes the format a great deal less robust (readers are\nless able to detect errors), which has other costs. I'd vote no.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Apr 2020 10:19:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Binary COPY IN size reduction" }, { "msg_contents": "Thanks for the quick response, Tom!\nWhat about implementing only the first part of my proposal, i.e. 
BINARY\nCOPY without the redundant column count & size info?\nThat would already be a big win - I agree the rest of the proposed changes\nwould only complicate the usage, but I'd argue that leaving out duplicated\ninfo would even simplify it!\n\nI'll give a better example this time - writing *1.8* million rows with\ncolumn types bigint, integer, smallint results in the following COPY IN\npayloads:\n\n*20.8MB* - Text protocol\n*51.3MB* - Binary protocol\n*25.6MB* - Binary, without column size info (proposal)\n\n\nI.e. this would make the binary protocol almost as small as the text one\n(which isn't an unreasonable expectation, I think), while making it easier\nto use at the same time.\n\nThanks for your time,\nLőrinc\n\nOn Fri, Apr 24, 2020 at 4:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?Q?L=C5=91rinc_Pap?= <lorinc@gradle.com> writes:\n> > We've switched recently from TEXT based COPY to the BINARY one.\n> > We've noticed a slight performance increase, mostly because we don't need\n> > to escape the content anymore.\n> > Unfortunately the binary protocol's output ended up being slightly bigger\n> > than the text one (e.g. for one payload it's *373MB* now, was *356MB*\n> before)\n> > ...\n> > By skipping the column count and sizes for every row, in our example this\n> > change would reduce the payload to *332MB* (most of our payload is\n> binary,\n> > lightweight structures consisting of numbers only could see a >*2x*\n> > decrease in size).\n>\n> TBH, that amount of gain does not seem to be worth the enormous\n> compatibility costs of introducing a new COPY data format. What you\n> propose also makes the format a great deal less robust (readers are\n> less able to detect errors), which has other costs. I'd vote no.\n>\n> regards, tom lane\n>\n\n\n-- \nLőrinc Pap\nSenior Software Engineer\n<https://gradle.com/>\n", "msg_date": "Tue, 28 Apr 2020 14:13:47 +0200", "msg_from": "=?UTF-8?Q?L=C5=91rinc_Pap?= <lorinc@gradle.com>", "msg_from_op": true, "msg_subject": "Re: Binary COPY IN size reduction" }, { "msg_contents": "Greetings,\n\n* Lőrinc Pap (lorinc@gradle.com) wrote:\n> Thanks for the quick response, Tom!\n\nWe prefer to not top-post on these lists, just fyi.\n\n> What about implementing only the first part of my proposal, i.e. BINARY\n> COPY without the redundant column count & size info?\n\nFor my part, at least, I like the idea- but I'd encourage thinking about\nwhat we might do in a mixed-response situation too, as that's something\nthat's been discussed as at least desirable. As long as we aren't\nending up painting ourselves into a corner somehow (which it doesn't\nseem like we are, but I've not looked deeply at it) and we don't break\nany existing clients, I'd generally be supportive of such an\nimprovement. Constantly sending \"this 4-byte int is 4 bytes long\"\ncertainly does seem like a waste of bandwidth.\n\nThanks,\n\nStephen", "msg_date": "Tue, 28 Apr 2020 10:41:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Binary COPY IN size reduction" } ]
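The variable-length size encoding floated in this thread - a continuation bit in each byte, so that small lengths cost one byte instead of a fixed four - can be sketched as follows. This is an illustration only; the function name and byte layout are hypothetical and not part of any PostgreSQL COPY format:

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative length with 7 payload bits per byte; the
    high bit of each byte signals that another byte follows."""
    if n < 0:
        raise ValueError("lengths are non-negative")
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set, more bytes follow
        else:
            out.append(byte)
            return bytes(out)

# A fixed 4-byte length prefix always costs 4 bytes per column; the
# variable-length form costs 1 byte for lengths below 128.
assert encode_varint(5) == b"\x05"
assert encode_varint(300) == b"\xac\x02"
assert len(encode_varint(2**31 - 1)) == 5
```

Under such a scheme a reserved byte value could stand in for NULL instead of today's 4-byte -1 marker, which is where the roughly 15% saving quoted in the thread would come from.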
[ { "msg_contents": "src/timezone/README mentions\n\n When there has been a new release of Windows (probably including Service\n Packs), the list of matching timezones need to be updated. Run the\n script in src/tools/win32tzlist.pl on a Windows machine running this new\n release and apply any new timezones that it detects. Never remove any\n mappings in case they are removed in Windows, since we still need to\n match properly on the old version.\n\nIt's been some years since this was last done (a79a68562, looks like).\nAnybody want to check if updates are needed?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Apr 2020 11:01:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Anybody want to check for Windows timezone updates?" }, { "msg_contents": "On Fri, Apr 24, 2020 at 5:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> It's been some years since this was last done (a79a68562, looks like).\n> Anybody want to check if updates are needed?\n>\n\nPlease find attached the output from Windows Server 2019.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 24 Apr 2020 19:42:47 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Anybody want to check for Windows timezone updates?" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> Please find attached the output from Windows Server 2019.\n\nThanks! 
That was a bit tedious --- I suppose it's not quite worth\nautomating further, but I did make some effort to remove the cross-version\nformatting hazards in that table.\n\nCould you verify that there are no complaints now with HEAD?\nI might've fat-fingered something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Apr 2020 17:55:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Anybody want to check for Windows timezone updates?" }, { "msg_contents": "On Fri, Apr 24, 2020 at 11:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Thanks! That was a bit tedious --- I suppose it's not quite worth\n> automating further, but I did make some effort to remove the cross-version\n> formatting hazards in that table.\n>\n\nThat might explain why it gets updated not so often. Forcing errors with a\nSpanish installation, so please forget about the description, the new\nformat is more consistent with findtimezone.c:\n\n        {\n                /* (UTC+04:30) Kabul */\n                \"Hora estßndar de Afganistßn\", \"Hora de verano de\nAfganistßn\",\n                \"FIXME\"\n        },\n\nCould you verify that there are no complaints now with HEAD?\n> I might've fat-fingered something.\n>\n\nNothing gets reported with current HEAD.\n\nRegards,\n\nJuan José Santamaría Flecha\n", "msg_date": "Sat, 25 Apr 2020 11:05:35 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Anybody want to check for Windows timezone updates?" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n>> Could you verify that there are no complaints now with HEAD?\n>> I might've fat-fingered something.\n\n> Nothing gets reported with current HEAD.\n\nOK, thanks for checking!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 Apr 2020 10:36:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Anybody want to check for Windows timezone updates?" } ]
[ { "msg_contents": "Hi,\n\nIt has come to my attention that PostgreSQL has a bunch of code to\nread and write 'tar' archives and it's kind of a mess. Attached are\ntwo patches. The second one was written first, and does some modest\ncleanups, most notably replacing the use of the constants 512 and the\nformula (x + 511) & ~511 - x in lotsa places with a new constant\nTAR_BLOCK_SIZE and a new static line function\ntarPaddingBytesRequired(). In developing that patch, I found a bug, so\nthe first patch (which was written second) fixes it. The problem is\nhere:\n\n if (state.is_recovery_guc_supported)\n {\n tarCreateHeader(header, \"standby.signal\", NULL,\n 0, /* zero-length file */\n pg_file_create_mode, 04000, 02000,\n time(NULL));\n\n writeTarData(&state, header, sizeof(header));\n writeTarData(&state, zerobuf, 511);\n }\n\nWe have similar code in many places -- because evidently nobody\nthought it would be a good idea to have all the logic for reading and\nwriting tarfiles in a centralized location rather than having many\ncopies of it -- and typically it's written to pad the block out to a\nmultiple of 512 bytes. But here, the file is 0 bytes long, and then we\nadd 511 zero bytes. This results in a tarfile whose length is not a\nmultiple of the TAR block size:\n\n[rhaas ~]$ pg_basebackup -D pgbackup -Ft && ls -l pgbackup\ntotal 80288\n-rw------- 1 rhaas staff 135255 Apr 24 11:04 backup_manifest\n-rw------- 1 rhaas staff 24183296 Apr 24 11:04 base.tar\n-rw------- 1 rhaas staff 16778752 Apr 24 11:04 pg_wal.tar\n[rhaas ~]$ rm -rf pgbackup\n[rhaas ~]$ pg_basebackup -D pgbackup -Ft -R && ls -l pgbackup\ntotal 80288\n-rw------- 1 rhaas staff 135255 Apr 24 11:04 backup_manifest\n-rw------- 1 rhaas staff 24184319 Apr 24 11:04 base.tar\n-rw------- 1 rhaas staff 16778752 Apr 24 11:04 pg_wal.tar\n[rhaas ~]$ perl -e 'print $_ % 512, \"\\n\" for qw(24183296 24184319 16778752);'\n0\n511\n0\n\nThat seems bad. 
At first, I thought maybe the problem was the fact\nthat we were adding 511 zero bytes here instead of 512, but then I\nrealized the real problem is that we shouldn't be adding any zero\nbytes at all. A zero-byte file is already padded out to a multiple of\n512, because 512 * 0 = 0. The problem happens not to have any adverse\nconsequences in this case because this is the last thing that gets put\ninto the tar file, and the end-of-tar-file marker is 1024 zero bytes,\nso the extra 511 zero bytes we're adding here get interpreted as the\nbeginning of the end-of-file marker, and the first 513 bytes of what\nwe intended as the actual end of file marker get interpreted as the\nrest of it. Then there are 511 bytes of garbage zeros at the end but\nmy version of tar, at least, does not care.\n\nHowever, it's possible to make it blow up pretty good with a slightly\ndifferent test case, because there's one case in master where we\ninsert one extra file -- backup_manifest -- into the tar file AFTER we\ninsert standby.signal. 
That case is when we're writing the output to\nstdout:\n\n[rhaas ~]$ pg_basebackup -D - -Ft -Xnone -R > everything.tar\nNOTICE: WAL archiving is not enabled; you must ensure that all\nrequired WAL segments are copied through other means to complete the\nbackup\n[rhaas ~]$ tar tf everything.tar | grep manifest\ntar: Damaged tar archive\ntar: Retrying...\ntar: Damaged tar archive\ntar: Retrying...\n(it repeats this ~100 times and then exits)\n\nIf I change the offending writeTarData(&state, zerobuf, 511) to\nwriteTarData(&state, zerobuf, 512), then the complaint about a damaged\narchive goes away, but the backup_manifest file doesn't appear to be\nincluded in the archive, because apparently one spurious 512-byte\nblock of zeroes is enough to make my version of tar think it's hit the\nend of the archive:\n\n[rhaas ~]$ tar tf everything.tar | grep manifest\n[rhaas ~]$\n\nIf I remove the offending writeTarData(&state, zerobuf, 511) line\nentirely - which I believe to the correct fix - then it works as\nexpected:\n\n[rhaas ~]$ tar tf everything.tar | grep manifest\nbackup_manifest\n\nThis problem appears to have been introduced by commit\n2dedf4d9a899b36d1a8ed29be5efbd1b31a8fe85, \"Integrate recovery.conf\ninto postgresql.conf\". The code has been substantially modified twice\nsince then, but those modifications seem to have just preserved the\nbug first introduced in that commit.\n\nI tentatively propose to do the following:\n\n1. Commit 0001, which removes the 511 bytes of bogus padding and thus\nfixes the bug, to master in the near future.\n\n2. Possibly back-patch 0001 to v12, where the bug first appeared. I'm\nnot sure this is strictly necessary, because in v12, standby.signal is\nalways the very last thing in the tarfile, so there shouldn't be an\nissue unless somebody has a version of tar that cares about the 511\nbytes of trailing garbage. I will do this if people think it's a good\nidea; otherwise I'll skip it.\n\n3. 
Commit 0002, the aforementioned cleanup patch, to master, either\nimmediately if people are OK with that, or else after we branch. I am\ninclined to regard the widespread use of the arbitrary constants 512\nand 511 as something of a hazard that ought to be corrected sooner\nrather than later, but someone could not-unreasonably take the view\nthat it's unnecessary tinkering post-feature freeze.\n\nLong term, it might be wiser to either switch to using a real\narchiving library rather than a bunch of hand-rolled code, or at the\nvery least centralize things better so that it's not so easy to make\nmistakes of this type. However, I don't see that as a reasonable\nargument against either of these patches.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 24 Apr 2020 12:06:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "tar-related code in PostgreSQL" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> We have similar code in many places -- because evidently nobody\n> thought it would be a good idea to have all the logic for reading and\n> writing tarfiles in a centralized location rather than having many\n> copies of it -- and typically it's written to pad the block out to a\n> multiple of 512 bytes. But here, the file is 0 bytes long, and then we\n> add 511 zero bytes. This results in a tarfile whose length is not a\n> multiple of the TAR block size:\n\nBleah. Whether or not the nearest copy of tar happens to spit up on\nthat, it's a clear violation of the POSIX standard for tar files.\nI'd vote for back-patching your 0001.\n\nI'd lean mildly to holding 0002 until after we branch. 
It probably\nwon't break anything, but it probably won't fix anything either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Apr 2020 12:27:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tar-related code in PostgreSQL" }, { "msg_contents": "On Fri, Apr 24, 2020 at 12:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Bleah. Whether or not the nearest copy of tar happens to spit up on\n> that, it's a clear violation of the POSIX standard for tar files.\n> I'd vote for back-patching your 0001.\n\nDone.\n\n> I'd lean mildly to holding 0002 until after we branch. It probably\n> won't break anything, but it probably won't fix anything either.\n\nTrue.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 27 Apr 2020 14:07:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tar-related code in PostgreSQL" }, { "msg_contents": "On Mon, Apr 27, 2020 at 2:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I'd lean mildly to holding 0002 until after we branch. 
It probably\n> > won't break anything, but it probably won't fix anything either.\n>\n> True.\n\nCommitted now.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 15 Jun 2020 16:49:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tar-related code in PostgreSQL" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nThe patch works perfectly.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Sun, 28 Jun 2020 15:23:20 +0000", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tar-related code in PostgreSQL" }, { "msg_contents": "On Sun, Jun 28, 2020 at 11:24 AM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> The patch works perfectly.\n>\n> The new status of this patch is: Ready for Committer\n\nThanks, but this was committed on June 15th, as per my previous email.\nPerhaps I forgot to update the CommitFest application....\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 29 Jun 2020 07:52:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tar-related code in PostgreSQL" }, { "msg_contents": "> On 29 Jun 2020, at 13:52, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Sun, Jun 28, 2020 at 11:24 AM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n>> The new status of this patch is: Ready for Committer\n> \n> Thanks, but this was committed on June 15th, as per my 
previous email.\n> Perhaps I forgot to update the CommitFest application....\n\nDone now, marked as committed.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 1 Jul 2020 10:15:34 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: tar-related code in PostgreSQL" } ]
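The block-padding arithmetic discussed in this thread - the `(x + 511) & ~511` idiom that the new `tarPaddingBytesRequired()` helper centralizes - can be sketched as follows. This is a Python illustration of the calculation, not the committed C code:

```python
TAR_BLOCK_SIZE = 512  # fixed by the POSIX tar format

def tar_padding_bytes_required(file_size: int) -> int:
    """Zero bytes needed to round a member's data area up to the next
    512-byte block boundary: ((x + 511) & ~511) - x."""
    return ((file_size + TAR_BLOCK_SIZE - 1) & ~(TAR_BLOCK_SIZE - 1)) - file_size

# A zero-length member such as standby.signal is already block-aligned,
# so the 511 extra zero bytes the fix removed were pure garbage that
# shifted everything written after them off the 512-byte grid.
assert tar_padding_bytes_required(0) == 0
assert tar_padding_bytes_required(1) == 511
assert tar_padding_bytes_required(512) == 0
assert tar_padding_bytes_required(513) == 511
```

This is exactly why the base.tar produced with -R came out at 24184319 bytes, which is 511 modulo 512, rather than a multiple of the block size.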
[ { "msg_contents": "Hi,\n\nThe PostgreSQL 13 Release Management Team is pleased to announce that\nthe release date for PostgreSQL 13 Beta 1 is set to be 2020-05-21. The\nOpen Items page is updated to reflect this.\n\nWe’re excited to make the Beta available for testing and receive some\nearly feedback around the latest major release of PostgreSQL.\n\nPlease let us know if you have any questions.\n\nThanks,\n\nJonathan", "msg_date": "Fri, 24 Apr 2020 12:27:41 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 13 Beta 1 Release: 2020-05-21" }, { "msg_contents": "On 4/24/20 12:27 PM, Jonathan S. Katz wrote:\n> Hi,\n> \n> The PostgreSQL 13 Release Management Team is pleased to announce that\n> the release date for PostgreSQL 13 Beta 1 is set to be 2020-05-21. The\n> Open Items page is updated to reflect this.\n> \n> We’re excited to make the Beta available for testing and receive some\n> early feedback around the latest major release of PostgreSQL.\n\nJust a reminder that the Beta 1 release is this upcoming Thursday. The\nOpen Items list is pretty small at this point, but if you are planning\nto commit anything before the release, please be sure to do so over the\nweekend so we can wrap on Monday.\n\nThanks!\n\nJonathan", "msg_date": "Fri, 15 May 2020 21:24:03 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release: 2020-05-21" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Just a reminder that the Beta 1 release is this upcoming Thursday. The\n> Open Items list is pretty small at this point, but if you are planning\n> to commit anything before the release, please be sure to do so over the\n> weekend so we can wrap on Monday.\n\nPursuant to that, I'm hoping to be done with the wait-events naming issues\nby the end of the weekend. 
The hardest parts of that (the event types\nthat are connected to other things) are all done as of a few minutes ago.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 May 2020 21:54:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release: 2020-05-21" } ]
[ { "msg_contents": "Hi\n\nLast release of pspg supports stream mode - it means so you can open psql\nin one terminal, redirect output to named pipe. In second terminal you can\nstart pspg and read input from named pipe. Then you can see and edit SQL in\none terminal, and you can see a result in second terminal.\n\nIt is working very well, but it is not too robust how I would. I miss a\nsome message in communication that can ensure synchronization - some\nspecial char that can be used as separator between two results. Now, it is\nbased on detection and evaluation positions of empty rows.\n\nI had a idea using some invisible chars, that are usually ignored (and use\nthese special characters only when user would it).\n\nThere are possible characters:\n\n03 ETX .. end of text\n28 FS .. file separator\n29 GS .. group separator\n\nWhat do you think about this?\n\nRegards\n\nPavel", "msg_date": "Fri, 24 Apr 2020 19:54:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "psql - pager support - using invisible chars for signalling end of\n report" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I had a idea using some invisible chars, that are usually ignored (and use\n> these special characters only when user would it).\n\nAnd what will happen when those characters are in the data?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Apr 2020 15:33:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql - pager support - using invisible chars for signalling end\n of report" }, { "msg_contents": "pá 24. 4. 2020 v 21:33 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I had a idea using some invisible chars, that are usually ignored (and\n> use\n> > these special characters only when user would it).\n>\n> And what will happen when those characters are in the data?\n>\n\nIt will be used on pager side as signal so previous rows was really last\nrow of result, and new row will be related to new result.\n\n\n\n> regards, tom lane\n>\n", "msg_date": "Fri, 24 Apr 2020 23:51:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql - pager support - using invisible chars for signalling end\n of report" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> pá 24. 4. 2020 v 21:33 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> And what will happen when those characters are in the data?\n\n> It will be used on pager side as signal so previous rows was really last\n> row of result, and new row will be related to new result.\n\nIn other words, it will misbehave badly if those characters appear\nin the query result. Doesn't sound acceptable to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Apr 2020 20:12:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql - pager support - using invisible chars for signalling end\n of report" }, { "msg_contents": "On Fri, Apr 24, 2020 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > pá 24. 4. 2020 v 21:33 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> And what will happen when those characters are in the data?\n>\n> > It will be used on pager side as signal so previous rows was really last\n> > row of result, and new row will be related to new result.\n>\n> In other words, it will misbehave badly if those characters appear\n> in the query result. Doesn't sound acceptable to me.\n>\n\nRandom thought but NUL isn't allowed in data so could it be used as a\nprotocol flag?\n\nDavid J.", "msg_date": "Fri, 24 Apr 2020 17:19:28 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql - pager support - using invisible chars for signalling end\n of report" }, { "msg_contents": "so 25. 4. 2020 v 2:12 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > pá 24. 4. 2020 v 21:33 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> And what will happen when those characters are in the data?\n>\n> > It will be used on pager side as signal so previous rows was really last\n> > row of result, and new row will be related to new result.\n>\n> In other words, it will misbehave badly if those characters appear\n> in the query result. Doesn't sound acceptable to me.\n>\n\nIt should not be problem, I think\n\na) it can be applied as special char only when row before was empty\nb) psql formates this char inside query result, so should not be possible\nto find binary this value inside result.\n\npostgres=# select e'AHOJ' || chr(5) || 'JJJJ';\n┌──────────────┐\n│   ?column?   │\n╞══════════════╡\n│ AHOJ\\x05JJJJ │\n└──────────────┘\n(1 row)\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n", "msg_date": "Sat, 25 Apr 2020 05:52:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql - pager support - using invisible chars for signalling end\n of report" }, { "msg_contents": "On Sat, Apr 25, 2020 at 3:53 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> so 25. 4. 2020 v 2:12 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > pá 24. 4. 2020 v 21:33 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> >> And what will happen when those characters are in the data?\n>>\n>> > It will be used on pager side as signal so previous rows was really last\n>> > row of result, and new row will be related to new result.\n>>\n>> In other words, it will misbehave badly if those characters appear\n>> in the query result. Doesn't sound acceptable to me.\n>\n>\n> It should not be problem, I think\n>\n> a) it can be applied as special char only when row before was empty\n> b) psql formates this char inside query result, so should not be possible to find binary this value inside result.\n>\n> postgres=# select e'AHOJ' || chr(5) || 'JJJJ';\n> ┌──────────────┐\n.> │   ?column? 
│\n> ╞══════════════╡\n> │ AHOJ\\x05JJJJ │\n> └──────────────┘\n> (1 row)\n\nThis sounds better than the QUERY_SEPARATOR hack from commit\n664d757531e, and similar kludges elsewhere. I think Pavel and David\nare right about NUL being impossible in psql query output, no?\n\n\n", "msg_date": "Wed, 6 Sep 2023 15:07:00 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql - pager support - using invisible chars for signalling end\n of report" }, { "msg_contents": "> On 6 Sep 2023, at 05:07, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> This sounds better than the QUERY_SEPARATOR hack from commit\n> 664d757531e, and similar kludges elsewhere. I think Pavel and David\n> are right about NUL being impossible in psql query output, no?\n\n+1, I would love to be able to rip out that hack.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 7 Sep 2023 10:57:52 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: psql - pager support - using invisible chars for signalling end\n of report" }, { "msg_contents": "On 06.09.23 05:07, Thomas Munro wrote:\n> This sounds better than the QUERY_SEPARATOR hack from commit\n> 664d757531e, and similar kludges elsewhere. I think Pavel and David\n> are right about NUL being impossible in psql query output, no?\n\nNote:\n\n -z, --field-separator-zero\n set field separator for unaligned output to \nzero byte\n -0, --record-separator-zero\n set record separator for unaligned output to \nzero byte\n\n\n\n", "msg_date": "Tue, 12 Sep 2023 09:55:53 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: psql - pager support - using invisible chars for signalling end\n of report" } ]
[ { "msg_contents": "I recently expressed an interest in using Valgrind memcheck to detect\naccess to pages whose buffers do not have a pin held in the backend,\nor do not have a buffer lock held (the latter check makes sense for\npages owned by index access methods). I came up with a quick and dirty\npatch, that I confirmed found a bug in nbtree VACUUM that I spotted\nrandomly:\n\nhttps://postgr.es/m/CAH2-Wz=WRu6NMWtit2weDnuGxdsWeNyFygeBP_zZ2Sso0YAGFg@mail.gmail.com\n\n(This is a bug in commit 857f9c36cda.)\n\nAlvaro wrote a similar patch back in 2015, that I'd forgotten about\nbut was reminded of today:\n\nhttps://postgr.es/m/20150723195349.GW5596@postgresql.org\n\nI took his version (which was better than my rough original) and\nrebased it -- that's attached as the first patch. The second patch is\nsomething that takes the general idea further by having nbtree mark\npages whose buffers lack a buffer lock (that may or may not have a\nbuffer pin) as NOACCESS in a similar way.\n\nThis second patch detected two more bugs in nbtree page deletion by\nrunning the regression tests with Valgrind memcheck. These additional\nbugs are probably of lower severity than the first one, since we at\nleast have a buffer pin (we just don't have buffer locks). All three\nbugs are very similar, though: they all involve dereferencing a\npointer to the special area of a page at a point where the underlying\nbuffer is no longer safe to access.\n\nThe final two patches fix the two newly discovered bugs -- I don't\nhave a fix for the first bug yet, since that one is more complicated\n(and probably more serious). 
The regression tests run with Valgrind\nwill complain about all three bugs if you just apply the first two\npatches (though you only need the first patch to see a complaint about\nthe first, more serious bug when the tests are run).\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 24 Apr 2020 18:37:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "On Fri, Apr 24, 2020 at 6:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The final two patches fix the two newly discovered bugs -- I don't\n> have a fix for the first bug yet, since that one is more complicated\n> (and probably more serious).\n\nI pushed both of the two fixes that I posted yesterday -- fixes for\nthe benign \"no buffer lock held\" issues in nbtree page deletion. We\nstill need a fix for the more serious issue, though. And that will\nneed to be backpatched down to v11. I'll try to get to that next week.\n\nAttached is v2 of the patch series. This version has a centralized\ndescription of what we require from nbtree code above _bt_getbuf(),\nalongside existing, similar commentary. Here's the general idea: If we\nlock a buffer, that has to go through one of the wrapper functions\nthat knows about Valgrind in all cases. It's presented as a stricter\nversion of what happens in bufmgr.c for all buffers from all access\nmethods.\n\nI also added something about how the nbtree Valgrind client requests\n(those added by the second patch in the series) assume that the\nbufmgr.c client requests (from the first patch) also take place. We\nneed to be able to rely on bufmgr.c to \"clean up\" in the event of an\nerror, so the second patch has a strict dependency on the first. 
If\nthe transaction aborts, we can rely on bufmgr.c marking the buffer's\npage as defined when the backend acquires its first buffer pin on the\nbuffer in the next transaction (doesn't matter whether or not the same\nblock/page is in the same buffer as before). This is why we can't use\nclient requests for local buffers (not that we'd want to; the existing\naset.c Valgrind client requests take care of them automatically).\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 25 Apr 2020 17:17:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "> On 26 Apr 2020, at 02:17, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> Attached is v2 of the patch series.\n\nThis patch fails to apply to HEAD due to conflicts in nbtpage.c, can you please\nsubmit a rebased version?\n\ncheers ./daniel\n\n", "msg_date": "Thu, 2 Jul 2020 16:48:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "On Thu, Jul 2, 2020 at 7:48 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> This patch fails to apply to HEAD due to conflicts in nbtpage.c, can you please\n> submit a rebased version?\n\nI attach the rebased patch series.\n\nThanks\n-- \nPeter Geoghegan", "msg_date": "Thu, 2 Jul 2020 10:11:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "As a general overview, the series of patches in the mail thread do match their description. The addition of the stricter, explicit use of instrumentation does improve the design as the distinction of the use cases requiring a pin or a lock is made more clear. 
The added commentary is descriptive and appears grammatically correct, at least to a non native speaker.\r\n\r\nUnfortunately though, the two bug fixes do not seem to apply.\r\n\r\nAlso, there is a small issue regarding the process, not the content of the patches. In CF app there is a latest attachment (v3-0002-Add-nbtree-Valgrind-buffer-lock-checks.patch) which does not appear in the mail thread. Before changing the status, I will kindly ask for the complete latest series that applies in the mail thread.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 06 Jul 2020 08:34:56 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "On 02.07.2020 20:11, Peter Geoghegan wrote:\n> On Thu, Jul 2, 2020 at 7:48 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> This patch fails to apply to HEAD due to conflicts in nbtpage.c, can you please\n>> submit a rebased version?\n> I attach the rebased patch series.\n>\n> Thanks\n\nIt's impressive that this check helped to find several bugs.\n\nI only noticed small inconsistency in the new comment for \n_bt_conditionallockbuf().\n\nIt says \"Note: Caller is responsible for calling _bt_checkpage() on \nsuccess.\", while in _bt_getbuf() the call is not followed by \n_bt_checkpage().\nMoreover, _bt_page_recyclable() contradicts _bt_checkpage() checks.\n\nOther than that, patches look good to me, so move them to \"Ready For \nCommitter\".\n\nAre you planning to add same checks for other access methods?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 16 Jul 2020 20:24:51 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content 
lock held)" }, { "msg_contents": "On Thu, Jul 16, 2020 at 10:24 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> It's impressive that this check helped to find several bugs.\n\nWhile it's definitely true that it *could* have detected the bug fixed\nby commit b0229f26, it's kind of debatable whether or not the bugs I\nfixed in commit fa7ff642 and commit 7154aa16 (which actually were\nfound using this new instrumentation) were truly bugs.\n\nThe behavior in question was probably safe, since only the\nspecial/opaque page area was accessed -- and with at least a buffer\npin held. But it's not worth having a debate about whether or not it\nshould be considered safe. There is no downside to not having a simple\nstrict rule that's easy to enforce. Also, I myself spotted some bugs\nin the skip scan patch series at one point that would also be caught\nby the new instrumentation.\n\n> I only noticed small inconsistency in the new comment for\n> _bt_conditionallockbuf().\n>\n> It says \"Note: Caller is responsible for calling _bt_checkpage() on\n> success.\", while in _bt_getbuf() the call is not followed by\n> _bt_checkpage().\n> Moreover, _bt_page_recyclable() contradicts _bt_checkpage() checks.\n\nNice catch.\n\n> Other than that, patches look good to me, so move them to \"Ready For\n> Committer\".\n\nPushed the first patch just now, and intend to push the other one soon. 
Thanks!\n\n> Are you planning to add same checks for other access methods?\n\nNot at the moment, but it does seem like my approach could be\ngeneralized to other index access methods.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 17 Jul 2020 17:53:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Jul 16, 2020 at 10:24 AM Anastasia Lubennikova\n> <a.lubennikova@postgrespro.ru> wrote:\n>> Other than that, patches look good to me, so move them to \"Ready For\n>> Committer\".\n\n> Pushed the first patch just now, and intend to push the other one soon. Thanks!\n\nI wonder whether skink's failure today is due to this change:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2020-07-18%2018%3A01%3A10\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 22:36:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "On Sat, Jul 18, 2020 at 7:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wonder whether skink's failure today is due to this change:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2020-07-18%2018%3A01%3A10\n\nThat seems extremely likely. I think that I need to do something like\nwhat you see in the attached.\n\nAnyway, I'll take care of it tomorrow. 
Sorry for missing it before my commit.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 18 Jul 2020 19:45:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "On Mon, Jul 6, 2020 at 1:35 AM Georgios Kokolatos\n<gkokolatos@protonmail.com> wrote:\n> As a general overview, the series of patches in the mail thread do match their description. The addition of the stricter, explicit use of instrumentation does improve the design as the distinction of the use cases requiring a pin or a lock is made more clear. The added commentary is descriptive and appears grammatically correct, at least to a non native speaker.\n\nI didn't see this review until now because it ended up in gmail's spam\nfolder. :-(\n\nThanks for taking a look at it!\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 21 Jul 2020 14:52:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "On Fri, Jul 17, 2020 at 5:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Pushed the first patch just now, and intend to push the other one soon. Thanks!\n\nPushed the second piece of this (the nbtree patch) just now.\n\nThanks for the review!\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 21 Jul 2020 15:53:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" }, { "msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Tuesday, July 21, 2020 11:52 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Mon, Jul 6, 2020 at 1:35 AM Georgios Kokolatos\n> gkokolatos@protonmail.com wrote:\n>\n> > As a general overview, the series of patches in the mail thread do match their description. 
The addition of the stricter, explicit use of instrumentation does improve the design as the distinction of the use cases requiring a pin or a lock is made more clear. The added commentary is descriptive and appears grammatically correct, at least to a non native speaker.\n>\n> I didn't see this review until now because it ended up in gmail's spam\n> folder. :-(\n>\n> Thanks for taking a look at it!\n\nNo worries at all. It happens and it was beneficial for me to read the patch.\n\n//Georgios\n>\n> ----------------------------------------------------------------------------------------------------------------------\n>\n> Peter Geoghegan\n\n\n\n\n", "msg_date": "Wed, 22 Jul 2020 07:57:31 +0000", "msg_from": "Georgios <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Using Valgrind to detect faulty buffer accesses (no pin or buffer\n content lock held)" } ]
[ { "msg_contents": "psql slash usage show two options for listing foreign tables.\n\n \\dE[S+] [PATTERN] list foreign tables\n \\det[+] [PATTERN] list foreign tables\n\nThis seems a little odd especially when the output of both of these\ncommands is different.\n\npostgres=# \\dE+\n List of relations\n Schema | Name | Type | Owner\n--------+------+---------------+--------\n public | foo | foreign table | highgo\n(1 row)\n\npostgres=# \\det\n List of foreign tables\n Schema | Table | Server\n--------+-------+---------\n public | foo | orc_srv\n(1 row)\n\n\n\"\\dE\" displays the list with a \"List of relations\" heading whereas \"\\det\"\ndisplays \"List of foreign tables\". So, to differentiate the two, I suggest\nto change the help message for \"\\dE\" to:\n\n \\dE[S+] [PATTERN] list foreign relations\n\nOne could argue that both essentially mean the same thing, however,\nconsidering \"\\dE+\" also outputs size, it makes sense IMHO to make this\nchange (as PG documentation: relation is essentially a mathematical term\nfor table). Attached is the patch that makes this change.\n\nRegards.\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus", "msg_date": "Sun, 26 Apr 2020 00:29:11 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": true, "msg_subject": "Improving psql slash usage help message" }, { "msg_contents": "On Sat, Apr 25, 2020 at 12:29 PM Hamid Akhtar <hamid.akhtar@gmail.com>\nwrote:\n\n>\n> \"\\dE\" displays the list with a \"List of relations\" heading whereas \"\\det\"\n> displays \"List of foreign tables\". 
So, to differentiate the two, I suggest\n> to change the help message for \"\\dE\" to:\n>\n> \\dE[S+] [PATTERN]      list foreign relations\n>\n> One could argue that both essentially mean the same thing, however,\n> considering \"\\dE+\" also outputs size, it makes sense IMHO to make this\n> change (as PG documentation: relation is essentially a mathematical term\n> for table). Attached is the patch that makes this change.\n>\n\nhelp.c and the documentation need to be synchronized a bit more than this\nsingle issue.\n\nCalling it \"foreign relation\" for \\dE and \"foreign table\" for \\det does\nconvey that there is a difference - not sure it a huge improvement though.\nThe \"\\d[Eimstv]\" family of meta-commands should, in the help, probably be\nmoved together to show the fact that they are basically \"list relation\nnames [of this type only]\" while \"\\det\" is \"list foreign table info\".\n\nDavid J.", "msg_date": "Sat, 25 Apr 2020 13:03:30 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving psql slash usage help message" }, { "msg_contents": "On Sun, Apr 26, 2020 at 1:03 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Sat, Apr 25, 2020 at 12:29 PM Hamid Akhtar <hamid.akhtar@gmail.com>\n> wrote:\n>\n>>\n>> \"\\dE\" displays the list with a \"List of relations\" heading whereas \"\\det\"\n>> displays \"List of foreign tables\". So, to differentiate the two, I suggest\n>> to change the help message for \"\\dE\" to:\n>>\n>> \\dE[S+] [PATTERN]      list foreign relations\n>>\n>> One could argue that both essentially mean the same thing, however,\n>> considering \"\\dE+\" also outputs size, it makes sense IMHO to make this\n>> change (as PG documentation: relation is essentially a mathematical term\n>> for table). Attached is the patch that makes this change.\n>>\n>\n> help.c and the documentation need to be synchronized a bit more than this\n> single issue.\n>\n> Calling it \"foreign relation\" for \\dE and \"foreign table\" for \\det does\n> convey that there is a difference - not sure it a huge improvement though.\n> The \"\\d[Eimstv]\" family of meta-commands should, in the help, probably be\n> moved together to show the fact that they are basically \"list relation\n> names [of this type only]\" while \"\\det\" is \"list foreign table info\".\n>\n\nI think from a user perspective, grouping these wouldn't be helpful. For\nexample, it may cause FDW related commands to be spread through out the\nhelp. Currently, those are nicely grouped together, which makes life\nrelatively easy IMO.\n\n\n>\n> David J.\n>\n>\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus", "msg_date": "Mon, 27 Apr 2020 12:25:45 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving psql slash usage help message" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nThis small documentation patch makes the document more accurate for psql terminal help.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 08 Jun 2020 10:24:58 +0000", "msg_from": "ahsan hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving psql slash usage help message" }, { "msg_contents": "Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> On Sun, Apr 26, 2020 at 1:03 AM David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>> On Sat, Apr 25, 2020 at 12:29 PM Hamid Akhtar <hamid.akhtar@gmail.com>\n>> wrote:\n>>> \"\\dE\" displays the list with a \"List of relations\" heading whereas \"\\det\"\n>>> displays \"List of foreign tables\". So, to differentiate the two, I suggest\n>>> to change the help message for \"\\dE\" to:\n>>> \\dE[S+] [PATTERN]      list foreign relations\n\n>> help.c and the documentation need to be synchronized a bit more than this\n>> single issue.\n>> Calling it \"foreign relation\" for \\dE and \"foreign table\" for \\det does\n>> convey that there is a difference - not sure it a huge improvement though.\n>> The \"\\d[Eimstv]\" family of meta-commands should, in the help, probably be\n>> moved together to show the fact that they are basically \"list relation\n>> names [of this type only]\" while \"\\det\" is \"list foreign table info\".\n\nFWIW, I agree with David on this point. 
I find it bizarre and unhelpful\nthat slashUsage shows \\dt, \\di, etc as separate commands and fails to\nindicate that they can be combined. We could merge these entries into\n\n\tfprintf(output, _(\" \\\\d[tivmsE][S+] [PATRN] list relations of specified type(s)\\n\"));\n\nwhich would both remind people that they can give more than one type,\nand shorten the already-overly-long list.\n\n> I think from a user perspective, grouping these wouldn't be helpful. For\n> example, it may cause FDW related commands to be spread through out the\n> help.\n\nThat seems quite irrelevant to this proposal.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Jul 2020 11:15:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving psql slash usage help message" }, { "msg_contents": "So are you suggesting to not fix this or do a more detailed review and\nassess what other psql messages can be grouped together.\n\nOn Sun, Jul 12, 2020 at 8:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> > On Sun, Apr 26, 2020 at 1:03 AM David G. Johnston <\n> > david.g.johnston@gmail.com> wrote:\n> >> On Sat, Apr 25, 2020 at 12:29 PM Hamid Akhtar <hamid.akhtar@gmail.com>\n> >> wrote:\n> >>> \"\\dE\" displays the list with a \"List of relations\" heading whereas\n> \"\\det\"\n> >>> displays \"List of foreign tables\". 
So, to differentiate the two, I\n> suggest\n> >>> to change the help message for \"\\dE\" to:\n> >>> \\dE[S+] [PATTERN] list foreign relations\n>\n> >> help.c and the documentation need to be synchronized a bit more than\n> this\n> >> single issue.\n> >> Calling it \"foreign relation\" for \\dE and \"foreign table\" for \\det does\n> >> convey that there is a difference - not sure it a huge improvement\n> though.\n> >> The \"\\d[Eimstv]\" family of meta-commands should, in the help, probably\n> be\n> >> moved together to show the fact that they are basically \"list relation\n> names [of this type only]\" while \"\\det\" is \"list foreign table info\".\n>\n> FWIW, I agree with David on this point. I find it bizarre and unhelpful\n> that slashUsage shows \\dt, \\di, etc as separate commands and fails to\n> indicate that they can be combined. We could merge these entries into\n>\n> fprintf(output, _(\" \\\\d[tivmsE][S+] [PATRN] list relations of\n> specified type(s)\\n\"));\n>\n> which would both remind people that they can give more than one type,\n> and shorten the already-overly-long list.\n>\n> > I think from a user perspective, grouping these wouldn't be helpful. For\n> > example, it may cause FDW related commands to be spread through out the\n> > help.\n>\n> That seems quite irrelevant to this proposal.\n>\n> regards, tom lane\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus\n", "msg_date": "Tue, 21 Jul 2020 16:51:49 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving psql slash usage help message" }, { "msg_contents": "Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> So are you suggesting to not fix this or do a more detailed review and\n> assess what other psql messages can be grouped together.\n\nI was just imagining merging the entries for the commands that are\nimplemented by listTables(). If you see something else that would\nbe worth doing, feel free to suggest it, but that wasn't what I was\nthinking of.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Jul 2020 11:10:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving psql slash usage help message" }, { "msg_contents": "On 2020-07-21 17:10, Tom Lane wrote:\n> Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n>> So are you suggesting to not fix this or do a more detailed review and\n>> assess what other psql messages can be grouped together.\n> \n> I was just imagining merging the entries for the commands that are\n> implemented by listTables(). If you see something else that would\n> be worth doing, feel free to suggest it, but that wasn't what I was\n> thinking of.\n\nIt used to be like that, but it was changed here: \n9491c82f7103d62824d3132b8c7dc44609f2f56b\n\nI'm not sure which way is better. Right now, a question like \"how do I \nlist all indexes\" is easily answered by the help output. 
Under the \nother scheme, it's hidden behind a layer of metasyntax.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Jul 2020 22:38:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving psql slash usage help message" }, { "msg_contents": "Status update for a commitfest entry.\r\nThis thread was inactive for a while. Is anyone going to continue working on it?\r\n\r\nMy two cents on the topic: \r\nI don’t see it as a big problem in the first place. In the source code, \\dE refers to foreign tables and \\de refers to foreign servers. So, it seems more reasonable to me, to rename the latter.\r\n \\dE[S+] [PATTERN] list foreign tables\r\n \\det[+] [PATTERN] list foreign servers\r\n\r\nI also do not support merging the entries for different commands. I think that current help output is easier to read.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 02 Nov 2020 15:02:22 +0000", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving psql slash usage help message" }, { "msg_contents": "On 02.11.2020 18:02, Anastasia Lubennikova wrote:\n> Status update for a commitfest entry.\n> This thread was inactive for a while. Is anyone going to continue working on it?\n>\n> My two cents on the topic:\n> I don’t see it as a big problem in the first place. In the source code, \\dE refers to foreign tables and \\de refers to foreign servers. So, it seems more reasonable to me, to rename the latter.\n> \\dE[S+] [PATTERN] list foreign tables\n> \\det[+] [PATTERN] list foreign servers\n>\n> I also do not support merging the entries for different commands. 
I think that current help output is easier to read.\n>\n> The new status of this patch is: Waiting on Author\n\n\nStatus update for a commitfest entry.\n\nThis entry was inactive during this CF, so I've marked it as returned \nwith feedback. Feel free to resubmit an updated version to a future \ncommitfest.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Sun, 29 Nov 2020 22:28:30 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving psql slash usage help message" } ]
[ { "msg_contents": "Starting 2019-11-17, jacana and bowerbird (different compiler, same machine?)\nhave failed four times like this:\n\n# poll_query_until timed out executing this query:\n# SELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name = 'tap_sub';\n# expecting this output:\n# t\n# last actual query output:\n# \n# with stderr:\n# Looks like your test exited with 25 just after 8.\n[11:06:11] t/001_rep_changes.pl .. \nDubious, test returned 25 (wstat 6400, 0x1900)\nFailed 9/17 subtests \n\nEvery such run:\n sysname │ snapshot │ branch │ bfurl\n───────────┼─────────────────────┼────────┼───────────────────────────────────────────────────────────────────────────────────────────────\n bowerbird │ 2019-11-17 15:22:42 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2019-11-17%2015%3A22%3A42\n bowerbird │ 2020-01-10 17:30:49 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2020-01-10%2017%3A30%3A49\n jacana │ 2020-04-05 00:00:27 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2020-04-05%2000%3A00%3A27\n jacana │ 2020-04-16 00:00:27 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2020-04-16%2000%3A00%3A27\n\nThe dates and branches suggest either a v13 regression (hence my concern) or a\nregression in the underlying machine. In each affected run, other tests\ncompleted at their normal speed, but 001_rep_changes.pl failed after 397s (a\nnormal run takes <30s). In the publisher log, the failed run[1] is missing\n\"replication connection authorized\", \"IDENTIFY_SYSTEM\", etc. Subscriber logs\ndo not differ; failed and good runs both have 'logical replication apply\nworker for subscription \"tap_sub\" has started'. That suggests a subscriber\nstuck in ApplyWorkerMain(), somewhere between that log message and the end of\nwalrcv_connect. I failed to reproduce this. 
Andrew, are you interested in\nattempting to reproduce it and/or identify the blockage?\n\nThanks,\nnm\n\n\n==== [1] master log with delay (failed run)\n...\n2020-04-15 20:56:40.334 EDT [5e97ad48.247c:3] 001_rep_changes.pl LOG: statement: DELETE FROM tab_rep\n2020-04-15 20:56:40.334 EDT [5e97ad48.247c:4] 001_rep_changes.pl LOG: disconnection: session time: 0:00:00.015 user=pgrunner database=postgres host=127.0.0.1 port=65317\n2020-04-15 20:56:40.381 EDT [5e97ad39.1964:4] LOG: received fast shutdown request\n2020-04-15 20:56:40.381 EDT [5e97ad39.1964:5] LOG: aborting any active transactions\n2020-04-15 20:56:40.396 EDT [5e97ad39.1964:6] LOG: background worker \"logical replication launcher\" (PID 9920) exited with exit code 1\n2020-04-15 20:56:40.396 EDT [5e97ad39.1a7c:1] LOG: shutting down\n2020-04-15 20:56:40.538 EDT [5e97ad47.1ce8:9] tap_sub LOG: disconnection: session time: 0:00:00.541 user=pgrunner database=postgres host=127.0.0.1 port=65313\n2020-04-15 20:56:40.569 EDT [5e97ad39.1964:7] LOG: database system is shut down\n2020-04-15 20:56:40.780 EDT [5e97ad48.c0:1] LOG: starting PostgreSQL 13devel on x86_64-w64-mingw32, compiled by x86_64-w64-mingw32-gcc.exe (x86_64-win32-sjlj-rev0, Built by MinGW-W64 project) 7.3.0, 64-bit\n2020-04-15 20:56:40.780 EDT [5e97ad48.c0:2] LOG: listening on IPv4 address \"127.0.0.1\", port 58418\n2020-04-15 20:56:40.811 EDT [5e97ad48.36c:1] LOG: database system was shut down at 2020-04-15 20:56:40 EDT\n2020-04-15 20:56:40.873 EDT [5e97ad48.c0:3] LOG: database system is ready to accept connections\n2020-04-15 20:56:41.015 EDT [5e97ad49.1888:1] [unknown] LOG: connection received: host=127.0.0.1 port=65319\n2020-04-15 20:56:41.029 EDT [5e97ad49.1888:2] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=001_rep_changes.pl\n2020-04-15 20:56:41.044 EDT [5e97ad49.1888:3] 001_rep_changes.pl LOG: statement: SELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming' FROM 
pg_catalog.pg_stat_replication WHERE application_name = 'tap_sub';\n... repeated a total of 1800 times ...\n2020-04-15 21:02:58.960 EDT [5e97aec2.21f8:1] [unknown] LOG: connection received: host=127.0.0.1 port=50920\n2020-04-15 21:02:58.960 EDT [5e97aec2.21f8:2] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=001_rep_changes.pl\n2020-04-15 21:02:58.975 EDT [5e97aec2.21f8:3] 001_rep_changes.pl LOG: statement: SELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name = 'tap_sub';\n2020-04-15 21:02:58.975 EDT [5e97aec2.21f8:4] 001_rep_changes.pl LOG: disconnection: session time: 0:00:00.015 user=pgrunner database=postgres host=127.0.0.1 port=50920\n<test script gives up hope>\n2020-04-15 21:02:59.148 EDT [5e97ad48.c0:4] LOG: received immediate shutdown request\n2020-04-15 21:02:59.148 EDT [5e97ad48.2148:1] WARNING: terminating connection because of crash of another server process\n2020-04-15 21:02:59.148 EDT [5e97ad48.2148:2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n2020-04-15 21:02:59.148 EDT [5e97ad48.2148:3] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2020-04-15 21:02:59.164 EDT [5e97ad48.c0:5] LOG: database system is shut down\n\n==== master log without delay (good run)\n...\n2020-04-15 11:17:24.868 EDT [5e972584.19a0:3] 001_rep_changes.pl LOG: statement: DELETE FROM tab_rep\n2020-04-15 11:17:24.868 EDT [5e972584.19a0:4] 001_rep_changes.pl LOG: disconnection: session time: 0:00:00.015 user=pgrunner database=postgres host=127.0.0.1 port=60986\n2020-04-15 11:17:24.931 EDT [5e972575.1594:4] LOG: received fast shutdown request\n2020-04-15 11:17:24.931 EDT [5e972575.1594:5] LOG: aborting any active transactions\n2020-04-15 11:17:24.947 EDT [5e972575.1594:6] LOG: 
background worker \"logical replication launcher\" (PID 9988) exited with exit code 1\n2020-04-15 11:17:24.947 EDT [5e972575.185c:1] LOG: shutting down\n2020-04-15 11:17:25.072 EDT [5e972584.560:9] tap_sub LOG: disconnection: session time: 0:00:00.636 user=pgrunner database=postgres host=127.0.0.1 port=60983\n2020-04-15 11:17:25.104 EDT [5e972575.1594:7] LOG: database system is shut down\n2020-04-15 11:17:25.312 EDT [5e972585.164:1] LOG: starting PostgreSQL 13devel on x86_64-w64-mingw32, compiled by x86_64-w64-mingw32-gcc.exe (x86_64-win32-sjlj-rev0, Built by MinGW-W64 project) 7.3.0, 64-bit\n2020-04-15 11:17:25.312 EDT [5e972585.164:2] LOG: listening on IPv4 address \"127.0.0.1\", port 60213\n2020-04-15 11:17:25.343 EDT [5e972585.c7c:1] LOG: database system was shut down at 2020-04-15 11:17:25 EDT\n2020-04-15 11:17:25.390 EDT [5e972585.164:3] LOG: database system is ready to accept connections\n2020-04-15 11:17:25.531 EDT [5e972585.d40:1] [unknown] LOG: connection received: host=127.0.0.1 port=60988\n2020-04-15 11:17:25.552 EDT [5e972585.d40:2] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=001_rep_changes.pl\n2020-04-15 11:17:25.567 EDT [5e972585.d40:3] 001_rep_changes.pl LOG: statement: SELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name = 'tap_sub';\n2020-04-15 11:17:25.583 EDT [5e972585.d40:4] 001_rep_changes.pl LOG: disconnection: session time: 0:00:00.051 user=pgrunner database=postgres host=127.0.0.1 port=60988\n2020-04-15 11:17:25.646 EDT [5e972585.1044:1] [unknown] LOG: connection received: host=127.0.0.1 port=60987\n2020-04-15 11:17:25.646 EDT [5e972585.1044:2] [unknown] LOG: replication connection authorized: user=pgrunner application_name=tap_sub\n2020-04-15 11:17:25.661 EDT [5e972585.1044:3] tap_sub LOG: received replication command: IDENTIFY_SYSTEM\n2020-04-15 11:17:25.661 EDT [5e972585.1044:4] tap_sub LOG: received replication command: 
START_REPLICATION SLOT \"tap_sub\" LOGICAL 0/15C1DF8 (proto_version '1', publication_names '\"tap_pub_ins_only\"')\n2020-04-15 11:17:25.661 EDT [5e972585.1044:5] tap_sub LOG: starting logical decoding for slot \"tap_sub\"\n2020-04-15 11:17:25.661 EDT [5e972585.1044:6] tap_sub DETAIL: Streaming transactions committing after 0/15A7678, reading WAL from 0/15A7678.\n2020-04-15 11:17:25.661 EDT [5e972585.1044:7] tap_sub LOG: logical decoding found consistent point at 0/15A7678\n2020-04-15 11:17:25.661 EDT [5e972585.1044:8] tap_sub DETAIL: There are no running transactions.\n2020-04-15 11:17:25.787 EDT [5e972585.1878:1] [unknown] LOG: connection received: host=127.0.0.1 port=60993\n2020-04-15 11:17:25.803 EDT [5e972585.1878:2] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=001_rep_changes.pl\n2020-04-15 11:17:25.803 EDT [5e972585.1878:3] 001_rep_changes.pl LOG: statement: SELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name = 'tap_sub';\n2020-04-15 11:17:25.819 EDT [5e972585.1878:4] 001_rep_changes.pl LOG: disconnection: session time: 0:00:00.031 user=pgrunner database=postgres host=127.0.0.1 port=60993\n2020-04-15 11:17:26.102 EDT [5e972586.fcc:1] [unknown] LOG: connection received: host=127.0.0.1 port=60996\n2020-04-15 11:17:26.102 EDT [5e972586.fcc:2] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=001_rep_changes.pl\n2020-04-15 11:17:26.117 EDT [5e972586.fcc:3] 001_rep_changes.pl LOG: statement: ALTER PUBLICATION tap_pub_ins_only SET (publish = 'insert, delete')\n...\n\n\n", "msg_date": "Sat, 25 Apr 2020 18:27:48 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "001_rep_changes.pl timeout on jacana/bowerbird" } ]
[ { "msg_contents": "Hi hackers,\n\nMy local build of master started failing last night with this error:\n\nllvmjit_inline.cpp:59:10: fatal error: 'llvm/IR/CallSite.h' file not found\n#include <llvm/IR/CallSite.h>\n ^~~~~~~~~~~~~~~~~~~~\n\nI searched my inbox and the archive, strange that nobody else is seeing\nthis.\n\nTurns out that LLVM has recently removed \"llvm/IR/CallSite.h\" in\n(unreleased) version 11 [1][2]. To fix the build I tried conditionally\n(on LLVM_VERSION_MAJOR < 11) including CallSite.h, but that looks yuck.\nThen I poked at llvmjit_inline.cpp a bit and found that CallSite.h\ndoesn't seem to be really necessary. PFA a patch that simply removes\nthis #include.\n\nIn addition, I've done the due dilligence of trying to build against\nLLVM versions 8, 9, 10.\n\nCheers,\nJesse\n\n[1] LLVM Differential Revision: https://reviews.llvm.org/D78794\n[2] LLVM commit https://github.com/llvm/llvm-project/commit/2c24051bacd2\n\"[CallSite removal] Rename CallSite.h to AbstractCallSite.h. NFC\"", "msg_date": "Sat, 25 Apr 2020 21:41:20 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Fix compilation failure against LLVM 11" }, { "msg_contents": "On Sat, Apr 25, 2020 at 09:41:20PM -0700, Jesse Zhang wrote:\n> I searched my inbox and the archive, strange that nobody else is seeing\n> this.\n> \n> Turns out that LLVM has recently removed \"llvm/IR/CallSite.h\" in\n> (unreleased) version 11 [1][2]. To fix the build I tried conditionally\n> (on LLVM_VERSION_MAJOR < 11) including CallSite.h, but that looks yuck.\n> Then I poked at llvmjit_inline.cpp a bit and found that CallSite.h\n> doesn't seem to be really necessary. PFA a patch that simply removes\n> this #include.\n> \n> In addition, I've done the due dilligence of trying to build against\n> LLVM versions 8, 9, 10.\n\nLLVM 11 has not been released yet. Do you think that this part or\neven more are subject to change before the 11 release? 
My take would\nbe to wait more before fixing this issue and make sure that our code\nis fixed when their code is GA'd.\n--\nMichael", "msg_date": "Mon, 27 Apr 2020 15:21:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix compilation failure against LLVM 11" }, { "msg_contents": "Hi Michael,\n\nOn Sun, Apr 26, 2020 at 11:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Apr 25, 2020 at 09:41:20PM -0700, Jesse Zhang wrote:\n> > I searched my inbox and the archive, strange that nobody else is seeing\n> > this.\n> >\n> > Turns out that LLVM has recently removed \"llvm/IR/CallSite.h\" in\n> > (unreleased) version 11 [1][2]. To fix the build I tried conditionally\n> > (on LLVM_VERSION_MAJOR < 11) including CallSite.h, but that looks yuck.\n> > Then I poked at llvmjit_inline.cpp a bit and found that CallSite.h\n> > doesn't seem to be really necessary. PFA a patch that simply removes\n> > this #include.\n> >\n> > In addition, I've done the due dilligence of trying to build against\n> > LLVM versions 8, 9, 10.\n>\n> LLVM 11 has not been released yet. Do you think that this part or\n> even more are subject to change before the 11 release? My take would\n> be to wait more before fixing this issue and make sure that our code\n> is fixed when their code is GA'd.\n> --\n> Michael\n\nAre you expressing a concern against \"churning\" this part of the code in\nreaction to upstream LLVM changes? I'd agree with you in general. But\nthen the question we need to ask is \"will we need to revert this 3 weeks\nfrom now if upstream reverts their changes?\", or \"we change X to Y now,\nwill we need to instead change X to Z 3 weeks later?\". 
In that frame of\nmind, the answer is simply \"no\" w.r.t this patch: it's removing an\n#include that simply has been dead: the upstream change merely exposed\nit.\n\nOTOH, is your concern more around \"how many more dead #include will LLVM\n11 reveal before September?\", I'm open to suggestions. I personally have\na bias to keep things working.\n\nCheers,\nJesse\n\n\n", "msg_date": "Mon, 27 Apr 2020 07:48:54 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix compilation failure against LLVM 11" }, { "msg_contents": "On Mon, Apr 27, 2020 at 07:48:54AM -0700, Jesse Zhang wrote:\n> Are you expressing a concern against \"churning\" this part of the code in\n> reaction to upstream LLVM changes? I'd agree with you in general. But\n> then the question we need to ask is \"will we need to revert this 3 weeks\n> from now if upstream reverts their changes?\", or \"we change X to Y now,\n> will we need to instead change X to Z 3 weeks later?\".\n\nMy concerns are a mix of all that, because we may finish by doing the\nsame verification work multiple times instead of fixing all existing\nissues at once. A good thing is that we may be able to conclude\nrather soon, it looks like LLVM releases a new major version every 3\nmonths or so.\n\n> In that frame of\n> mind, the answer is simply \"no\" w.r.t this patch: it's removing an\n> #include that simply has been dead: the upstream change merely exposed\n> it.\n\nThe docs claim support for LLVM down to 3.9. Are versions older than\n8 fine with your proposed change?\n\n> OTOH, is your concern more around \"how many more dead #include will LLVM\n> 11 reveal before September?\", I'm open to suggestions. 
I personally have\n> a bias to keep things working.\n\nThis position can have advantages, though it seems to me that we\nshould still wait to see if there are more issues popping up.\n--\nMichael", "msg_date": "Tue, 28 Apr 2020 13:56:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix compilation failure against LLVM 11" }, { "msg_contents": "Hi,\n\nOn 2020-04-28 13:56:23 +0900, Michael Paquier wrote:\n> On Mon, Apr 27, 2020 at 07:48:54AM -0700, Jesse Zhang wrote:\n> > Are you expressing a concern against \"churning\" this part of the code in\n> > reaction to upstream LLVM changes? I'd agree with you in general. But\n> > then the question we need to ask is \"will we need to revert this 3 weeks\n> > from now if upstream reverts their changes?\", or \"we change X to Y now,\n> > will we need to instead change X to Z 3 weeks later?\".\n> \n> My concerns are a mix of all that, because we may finish by doing the\n> same verification work multiple times instead of fixing all existing\n> issues at once. A good thing is that we may be able to conclude\n> rather soon, it looks like LLVM releases a new major version every 3\n> months or so.\n\nGiven the low cost of a change like this, and the fact that we have a\nbuildfarm animal building recent trunk versions of llvm, I think it's\nbetter to just backpatch now.\n\n> > In that frame of\n> > mind, the answer is simply \"no\" w.r.t this patch: it's removing an\n> > #include that simply has been dead: the upstream change merely exposed\n> > it.\n> \n> The docs claim support for LLVM down to 3.9. Are versions older than\n> 8 fine with your proposed change?\n\nI'll check.\n\n\n> > OTOH, is your concern more around \"how many more dead #include will LLVM\n> > 11 reveal before September?\", I'm open to suggestions. 
I personally have\n> > a bias to keep things working.\n> \n> This position can have advantages, though it seems to me that we\n> should still wait to see if there are more issues popping up.\n\nWhat's the benefit? The cost of checking this will be not meaningfully\nlower if there's other things to check as well, given their backward\ncompat story presumably is different.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 27 Apr 2020 22:19:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix compilation failure against LLVM 11" }, { "msg_contents": "Hi Michael,\n\nOn Mon, Apr 27, 2020 at 9:56 PM Michael Paquier wrote:\n> rather soon, it looks like LLVM releases a new major version every 3\n> months or so.\n\nFYI LLVM has a six-month release cadence [1], the next release is\nexpected coming September (I can't tell whether you were joking).\n\nCheers,\nJesse\n\n[1] https://llvm.org/docs/HowToReleaseLLVM.html#release-timeline\n\n\n", "msg_date": "Tue, 28 Apr 2020 07:15:49 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix compilation failure against LLVM 11" }, { "msg_contents": "Hi Andres,\nOn the mensiversary of the last response, what can I do to help move\nthis along (aside from heeding the advice \"don't use LLVM HEAD\")?\n\nMichael Paquier expressed concerns over flippant churn upthread:\n\nOn Mon, Apr 27, 2020 at 10:19 PM Andres Freund wrote:\nAF> On 2020-04-28 13:56:23 +0900, Michael Paquier wrote:\nMP> > On Mon, Apr 27, 2020 at 07:48:54AM -0700, Jesse Zhang wrote:\nJZ> > > Are you expressing a concern against \"churning\" this part of the\nJZ> > > code in reaction to upstream LLVM changes? I'd agree with you in\nJZ> > > general. 
But then the question we need to ask is \"will we need\nJZ> > > to revert this 3 weeks from now if upstream reverts their\nJZ> > > changes?\", or \"we change X to Y now, will we need to instead\nJZ> > > change X to Z 3 weeks later?\".\n> >\nMP> > My concerns are a mix of all that, because we may finish by doing\nMP> > the same verification work multiple times instead of fixing all\nMP> > existing issues at once. A good thing is that we may be able to\nMP> > conclude rather soon, it looks like LLVM releases a new major\nMP> > version every 3 months or so.\n>\nAF> Given the low cost of a change like this, and the fact that we have\nAF> a buildfarm animal building recent trunk versions of llvm, I think\nAF> it's better to just backpatch now.\n\nFor bystanders: Andres and I seemed to agree that this is unlikely to be\nflippant (in addition to other benefits mentioned below). We haven't\ndiscussed more on this, though I'm uncertain we had a consensus.\n\n>\nJZ> > > In that frame of mind, the answer is simply \"no\" w.r.t this\nJZ> > > patch: it's removing an #include that simply has been dead: the\nJZ> > > upstream change merely exposed it.\n> >\nMP> > The docs claim support for LLVM down to 3.9. Are versions older\nMP> > than 8 fine with your proposed change?\n>\nAF> I'll check.\n>\n\nHow goes the checking? I was 99% certain it'd work but that might have\nbeen my excuse to be lazy. Do you need help on this?\n\n>\nJZ> > > OTOH, is your concern more around \"how many more dead #include\nJZ> > > will LLVM 11 reveal before September?\", I'm open to suggestions.\nJZ> > > I personally have a bias to keep things working.\n> >\nMP> > This position can have advantages, though it seems to me that we\nMP> > should still wait to see if there are more issues popping up.\n>\nAF> What's the benefit? 
The cost of checking this will be not\nAF> meaningfully lower if there's other things to check as well, given\nAF> their backward compat story presumably is different.\n\nFor bystanders: Andres and I argued for \"fixing this sooner and\nbackpatch\" and Michael suggested \"wait longer and whack all moles\". We\nhave waited, and there seems to be only one mole (finding all dead\nunbroken \"include\"s was left as an exercise for the reader). Have we\ncome to an agreement on this?\n\nCheers,\nJesse\n\n\n", "msg_date": "Wed, 27 May 2020 07:49:45 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix compilation failure against LLVM 11" }, { "msg_contents": "On Wed, May 27, 2020 at 07:49:45AM -0700, Jesse Zhang wrote:\n> For bystanders: Andres and I argued for \"fixing this sooner and\n> backpatch\" and Michael suggested \"wait longer and whack all moles\". We\n> have waited, and there seems to be only one mole (finding all dead\n> unbroken \"include\"s was left as an exercise for the reader). Have we\n> come to an agreement on this?\n\nIf Andres could take care of this issue as he feels is suited, that's\nOK for me. I could look at that and play with llvm builds. Now I am\nnot really familiar with it, so it would take me some time but we are\nall here to learn :)\n\nPlease note that I would still wait for their next GA release to plug\nin any extra holes at the same time. @Jesse: or is this change\nactually part of the upcoming 10.0.1?\n--\nMichael", "msg_date": "Thu, 28 May 2020 17:07:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix compilation failure against LLVM 11" }, { "msg_contents": "On Thu, May 28, 2020 at 1:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Please note that I would still wait for their next GA release to plug\n> in any extra holes at the same time. 
@Jesse: or is this change\n> actually part of the upcoming 10.0.1?\n\nNo a refactoring like this was not in the back branches (nor is it\nexpected).\n\nThanks,\nJesse\n\n\n", "msg_date": "Thu, 28 May 2020 11:49:42 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix compilation failure against LLVM 11" }, { "msg_contents": "On 2020-05-28 17:07:46 +0900, Michael Paquier wrote:\n> Please note that I would still wait for their next GA release to plug\n> in any extra holes at the same time. @Jesse: or is this change\n> actually part of the upcoming 10.0.1?\n\nWhy? I plan to just commit this change now.\n\n\n", "msg_date": "Thu, 28 May 2020 15:07:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix compilation failure against LLVM 11" }, { "msg_contents": "Hi,\n\nOn 2020-05-27 07:49:45 -0700, Jesse Zhang wrote:\n> On the mensiversary of the last response, what can I do to help move\n> this along (aside from heeding the advice \"don't use LLVM HEAD\")?\n\nSorry, I had looked at it at point with the intent to commit it, and hit\nsome stupid small snags (*). And then I forgot about it. Pushed.\n\nThanks for the patch!\n\nAndres Freund\n\n\n(*) first I didn't see the problem, because I accidentally had an old\nversion of the header around. Then I couldn't immediately build old\nversions of pg + LLVM due to my existing installation needing to be\nrebuilt. Then there were compiler errors, due to a too new\ncompiler. Etc.\n\n\n", "msg_date": "Thu, 28 May 2020 15:28:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix compilation failure against LLVM 11" } ]
[ { "msg_contents": "Hi,\n\nbasebackup.c's code to read from files uses fread() and friends. This\nis not great, because it's not documented to set errno. See commit\n286af0ce12117bc673b97df6228d1a666594d247 and related discussion. It\nseems like a better idea would be to use pg_pgread(), which not only\ndoes set errno, but also lets us eliminate a bit of code that uses\nfseek().\n\nThere are a couple of other things here that can also be improved. One\nis that it seems like a good idea to set a wait event while doing I/O\nhere, as we do elsewhere. Another is that it seems like a good idea to\nreport short reads in a non-confusing, non-wrong sort of way. I here\nadopted the convention previously mentioned in\nhttp://postgr.es/m/20200128020303.GA1552@paquier.xyz\n\nPatch, for v14, attached.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 27 Apr 2020 12:33:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "improving basebackup.c's file-reading code" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nThe idea and the patch looks good to me. \r\n\r\nIt makes sense to change fread to basebackup_read_file which internally calls pg_pread which is a portable function as opposed to read. As you've mentioned, this is much better for error handling.\r\n\r\nI guess it is more of a personal choice, but I would suggest making the while conditions consistent as well. 
The while loop in the patch @ line 121 conditions over return value of \"basebackup_read_file\" whereas @ line 177, it has a condition \"(len < statbuf->st_size)\".\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 29 Apr 2020 09:51:09 +0000", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving basebackup.c's file-reading code" }, { "msg_contents": "On Wed, Apr 29, 2020 at 5:52 AM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n> The idea and the patch looks good to me.\n>\n> It makes sense to change fread to basebackup_read_file which internally calls pg_pread which is a portable function as opposed to read. As you've mentioned, this is much better for error handling.\n\nThanks for the review. I have now committed the patch, after rebasing\nand adjusting one comment slightly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 17 Jun 2020 11:45:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improving basebackup.c's file-reading code" }, { "msg_contents": "> On 17 Jun 2020, at 17:45, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Apr 29, 2020 at 5:52 AM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n>> The idea and the patch looks good to me.\n>> \n>> It makes sense to change fread to basebackup_read_file which internally calls pg_pread which is a portable function as opposed to read. As you've mentioned, this is much better for error handling.\n> \n> Thanks for the review. 
I have now committed the patch, after rebasing\n> and adjusting one comment slightly.\n\nAs this went in, can we close the 2020-07 CF entry or is there anything left in\nthe patchseries?\n\ncheers ./daniel\n\n", "msg_date": "Thu, 25 Jun 2020 16:29:48 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: improving basebackup.c's file-reading code" }, { "msg_contents": "On Thu, Jun 25, 2020 at 10:29 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> As this went in, can we close the 2020-07 CF entry or is there anything left in\n> the patchseries?\n\nDone. Thanks for the reminder.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 10:54:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improving basebackup.c's file-reading code" } ]
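The basebackup thread above turns on one idea: replace stdio-style fread(), which is not documented to set errno, with a pread()-style call that reports errors via errno, needs no separate fseek(), and lets short reads be reported explicitly rather than confusingly. A minimal sketch of that pattern — in Python rather than the C of basebackup.c, with a hypothetical `read_chunk()` helper that is an illustration, not a PostgreSQL API:

```python
import os

def read_chunk(fd, offset, length):
    """Read exactly `length` bytes at `offset`.

    Mirrors the conventions argued for in the thread: OS-level failures
    surface with errno attached (here via OSError), and a short read is
    reported explicitly instead of being silently returned as less data.
    """
    buf = bytearray()
    while len(buf) < length:
        # os.pread() reads at an absolute file offset, so no seek is
        # needed between reads -- the same property pg_pread() provides.
        chunk = os.pread(fd, length - len(buf), offset + len(buf))
        if not chunk:
            # EOF before the full request was satisfied: a short read.
            raise IOError("could only read %d of %d bytes"
                          % (len(buf), length))
        buf.extend(chunk)
    return bytes(buf)

if __name__ == "__main__":
    # Demo against a hypothetical scratch file.
    path = "/tmp/pread_demo.bin"
    with open(path, "wb") as f:
        f.write(b"0123456789")
    fd = os.open(path, os.O_RDONLY)
    try:
        print(read_chunk(fd, 2, 4))
    finally:
        os.close(fd)
        os.remove(path)
```

The loop also handles the case where the kernel legitimately returns fewer bytes than requested mid-file, retrying until either the request is filled or EOF proves the read genuinely short.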
[ { "msg_contents": "Hey, everyone.\n\nIt's been quite a while since I last contributed a patch but, as this is a\nnew feature, I wanted to gather feedback before doing so. I've found this\nfunctionality, already in use at Universität Tübingen, to be both highly\nbeneficial in many of my queries as well as a small change to Postgres - a\ngood trade-off IMO.\n\nAs you know, Postgres currently supports SQL:1999 recursive common table\nexpressions, constructed using WITH RECURSIVE, which permit the computation\nof growing relations (e.g., transitive closures.) Although it is possible\nto use this recursion for general-purpose iterations, doing so is a\ndeviation from its intended use case. Accordingly, an iterative-only use of\nWITH RECURSIVE often results in sizable overhead and poor optimizer\ndecisions. In this email, I'd like to propose a similar but\niterative-optimized form of CTE - WITH ITERATIVE.\n\nIn that it can reference a relation within its definition, this iterative\nvariant has similar capabilities as recursive CTEs. In contrast to\nrecursive CTEs, however, rather than appending new tuples, this variant\nperforms a replacement of the intermediate relation. As such, the final\nresult consists of a relation containing tuples computed during the last\niteration only. Why would we want this?\n\nThis type of computation pattern is often found in data and graph mining\nalgorithms. In PageRank, for example, the initial ranks are updated in each\niteration. In clustering algorithms, assignments of data tuples to clusters\nare updated in every iteration. Something these types of algorithms have in\ncommon is that they operate on fixed-size datasets, where only specific\nvalues (ranks, assigned clusters, etc.) are updated.\n\nRather than stopping when no new tuples are generated, WITH ITERATIVE stops\nwhen a user-defined predicate evaluates to true. 
By providing a\nnon-appending iteration concept with a while-loop-style stop criterion, we\ncan massively speed up query execution due to better optimizer decisions.\nAlthough it is possible to implement the types of algorithms mentioned\nabove using recursive CTEs, this feature has two significant advantages:\n\nFirst, iterative CTEs consume significantly less memory. Consider a CTE of\nN tuples and I iterations. With recursive CTEs, such a relation would grow\nto (N * I) tuples. With iterative CTEs, however, all prior iterations are\ndiscarded early. As such, the size of the relation would remain N,\nrequiring only a maximum of (2 * N) tuples for comparisons of the current\nand the prior iteration. Furthermore, in queries where the number of\nexecuted iterations represents the stop criterion, recursive CTEs require\neven more additional per-tuple overhead - to carry along the iteration\ncounter.\n\nSecond, iterative CTEs have lower query response times. Because of its\nsmaller size, the time to scan and process the intermediate relation, to\nre-compute ranks, clusters, etc., is significantly reduced.\n\nIn short, iterative CTEs retain the flexibility of recursive CTEs, but\noffer a significant advantage for algorithms which don't require results\ncomputed throughout all iterations. As this feature deviates only slightly\nfrom WITH RECURSIVE, there is very little code required to support it (~250\nloc.)\n\nIf there's any interest, I'll clean-up their patch and submit. Thoughts?\n\n-- \nJonah H. Harris\n\n", "msg_date": "Mon, 27 Apr 2020 12:52:48 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Proposing WITH ITERATIVE" }, { "msg_contents": "Hi,\n\nYou might get better feedback in a month or so; right now we just got\ninto feature freeze.\n\nOn Mon, 2020-04-27 at 12:52 -0400, Jonah H. Harris wrote:\n> In that it can reference a relation within its definition, this\n> iterative variant has similar capabilities as recursive CTEs. In\n> contrast to recursive CTEs, however, rather than appending new\n> tuples, this variant performs a replacement of the intermediate\n> relation. 
As such, the final result consists of a relation containing\n> tuples computed during the last iteration only. Why would we want\n> this?\n\nCan you illustrate with some examples? I get that one is appending and\nthe other is modifying in-place, but how does this end up looking in\nthe query language?\n\n> This type of computation pattern is often found in data and graph\n> mining algorithms.\n\nIt certainly sounds useful.\n\n> Rather than stopping when no new tuples are generated, WITH ITERATIVE\n> stops when a user-defined predicate evaluates to true.\n\nWhy stop when it evaluates to true, and not false?\n\n> First, iterative CTEs consume significantly less memory. Consider a\n> CTE of N tuples and I iterations. With recursive CTEs, such a\n> relation would grow to (N * I) tuples. With iterative CTEs, however,\n> all prior iterations are discarded early. As such, the size of the\n> relation would remain N, requiring only a maximum of (2 * N) tuples\n> for comparisons of the current and the prior iteration. Furthermore,\n> in queries where the number of executed iterations represents the\n> stop criterion, recursive CTEs require even more additional per-tuple \n> overhead - to carry along the iteration counter.\n\nIt seems like the benefit comes from carrying information along within\ntuples (by adding to scores or counters) rather than appending tuples.\nIs it possible to achieve this in other ways? The recursive CTE\nimplementation is a very direct implementation of the standard, perhaps\nthere are smarter approaches?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 27 Apr 2020 17:50:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Mon, Apr 27, 2020 at 12:52:48PM -0400, Jonah H. Harris wrote:\n> Hey, everyone.\n\n> If there's any interest, I'll clean-up their patch and submit. 
Thoughts?\n\nWhere's the current patch?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 28 Apr 2020 04:32:34 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Mon, Apr 27, 2020 at 10:32 PM David Fetter <david@fetter.org> wrote:\n\n> On Mon, Apr 27, 2020 at 12:52:48PM -0400, Jonah H. Harris wrote:\n> > Hey, everyone.\n>\n> > If there's any interest, I'll clean-up their patch and submit. Thoughts?\n>\n> Where's the current patch?\n\n\nIt’s private. Hence, why I’m inquiring as to interest.\n\n-- \nJonah H. Harris\n\n", "msg_date": "Mon, 27 Apr 2020 22:44:04 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Mon, Apr 27, 2020 at 10:44:04PM -0400, Jonah H. Harris wrote:\n> On Mon, Apr 27, 2020 at 10:32 PM David Fetter <david@fetter.org> wrote:\n> > On Mon, Apr 27, 2020 at 12:52:48PM -0400, Jonah H. Harris wrote:\n> > > Hey, everyone.\n> >\n> > > If there's any interest, I'll clean-up their patch and submit. Thoughts?\n> >\n> > Where's the current patch?\n> \n> \n> It’s private. 
Hence, why I’m inquiring as to interest.\n\nHave the authors agreed to make it available to the project under a\ncompatible license?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 28 Apr 2020 05:33:35 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Mon, Apr 27, 2020 at 11:33 PM David Fetter <david@fetter.org> wrote:\n\n>\n> Have the authors agreed to make it available to the project under a\n> compatible license?\n\n\nIf there’s interest, obviously. Otherwise I wouldn’t be asking.\n\nI said from the start why I wasn’t attaching a patch and that I was seeing\nfeedback. Honestly, David, stop wasting my, and list time, asking pointless\noff-topic questions.\n\n-- \nJonah H. Harris\n\n", "msg_date": "Mon, 27 Apr 2020 23:49:30 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Tue, Apr 28, 2020 at 5:49 AM Jonah H. Harris <jonah.harris@gmail.com>\nwrote:\n\n> On Mon, Apr 27, 2020 at 11:33 PM David Fetter <david@fetter.org> wrote:\n>\n>>\n>> Have the authors agreed to make it available to the project under a\n>> compatible license?\n>\n>\n> If there’s interest, obviously. 
Otherwise I wouldn’t be asking.\n>\n> I said from the start why I wasn’t attaching a patch and that I was seeing\n> feedback. Honestly, David, stop wasting my, and list time, asking\n> pointless off-topic questions.\n>\n\nJonah,\n\nI see it the other way round—it could end up as a waste of everyone's time\ndiscussing the details, if the authors don't agree to publish their code in\nthe first place. Of course, it could also be written from scratch, in\nwhich case I guess someone else from the community (who haven't seen that\nprivate code) would have to take a stab at it, but I believe it helps to\nknow this in advance.\n\nI also don't see how it \"obviously\" follows from your two claims: \"I've\nfound this functionality\" and \"I'll clean-up their patch and submit\", that\nyou have even asked (or, for that matter—found) the authors of that code.\n\nFinally, I'd like to suggest you adopt a more constructive tone and become\nfamiliar with the project's Code of Conduct[1], if you haven't yet. I am\ncertain that constructive, respectful communication from your side will\nhelp the community to focus on the actual details of your proposal.\n\nKind regards,\n-- \nAlex\n\n[1] https://www.postgresql.org/about/policies/coc/\n\n", "msg_date": "Tue, 28 Apr 2020 12:15:36 +0200", "msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On 4/27/20 6:52 PM, Jonah H. Harris wrote:> If there's any interest, \nI'll clean-up their patch and submit. Thoughts?\n\nHi,\n\nDo you have any examples of queries where it would help? It is pretty \nhard to say how much value some new syntax adds without seeing how it \nimproves an intended use case.\n\nAndreas\n\n\n", "msg_date": "Tue, 28 Apr 2020 12:19:55 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Tue, Apr 28, 2020 at 6:16 AM Oleksandr Shulgin <\noleksandr.shulgin@zalando.de> wrote:\n\n> will help the community to focus on the actual details of your proposal.\n>\n\nI'd like it if the community would focus on the details of the proposal as\nwell :)\n\n-- \nJonah H. 
Harris", "msg_date": "Tue, 28 Apr 2020 11:08:51 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Tue, Apr 28, 2020 at 6:19 AM Andreas Karlsson <andreas@proxel.se> wrote:\n\n> Do you have any examples of queries where it would help? It is pretty\n> hard to say how much value some new syntax adds without seeing how it\n> improves an intended use case.\n>\n\nHey, Andreas.\n\nThanks for the response. I'm currently working on a few examples per Jeff's\nresponse along with some benchmark information including improvements in\nresponse time and memory usage of the current implementation. In the\nmeantime, as this functionality has been added to a couple of other\ndatabases and there's academic research on it, if you're interested, here's\na few papers with examples:\n\nhttp://faculty.neu.edu.cn/cc/zhangyf/papers/2018-ICDCS2018-sqloop.pdf\nhttp://db.in.tum.de/~passing/publications/dm_in_hyper.pdf\n\n-- \nJonah H. Harris\n\n", "msg_date": "Tue, 28 Apr 2020 11:15:20 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "Greetings Jonah!\n\n* Jonah H. Harris (jonah.harris@gmail.com) wrote:\n> On Tue, Apr 28, 2020 at 6:19 AM Andreas Karlsson <andreas@proxel.se> wrote:\n> \n> > Do you have any examples of queries where it would help? It is pretty\n> > hard to say how much value some new syntax adds without seeing how it\n> > improves an intended use case.\n> \n> Thanks for the response. I'm currently working on a few examples per Jeff's\n> response along with some benchmark information including improvements in\n> response time and memory usage of the current implementation. In the\n> meantime, as this functionality has been added to a couple of other\n> databases and there's academic research on it, if you're interested, here's\n> a few papers with examples:\n> \n> http://faculty.neu.edu.cn/cc/zhangyf/papers/2018-ICDCS2018-sqloop.pdf\n> http://db.in.tum.de/~passing/publications/dm_in_hyper.pdf\n\nNice!\n\nOne of the first question that we need to ask though, imv anyway, is- do\nthe other databases use the WITH ITERATIVE syntax? How many of them?\nAre there other approaches? Has this been brought up to the SQL\ncommittee?\n\nIn general, we really prefer to avoid extending SQL beyond the standard,\nespecially in ways that the standard is likely to be expanded. In\nother words, we'd really prefer to avoid the risk that the SQL committee\ndeclares WITH ITERATIVE to mean something else in the future, causing us\nto have to have a breaking change to become compliant. 
Now, if all the\nother major DB vendors have WITH ITERATIVE and they all work the same\nway, then we can have at least some confidence that the SQL committee\nwill end up defining it in that way and there won't be any issue.\n\nWe do have someone on the committee who watches these lists, hopefully\nthey'll have a chance to comment on this. Perhaps it's already in\ndiscussion in the committee.\n\nThanks!\n\nStephen", "msg_date": "Tue, 28 Apr 2020 11:51:43 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Mon, Apr 27, 2020 at 8:50 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> Hi,\n>\n\nHey, Jeff. Long time no talk. Good to see you're still on here.\n\nYou might get better feedback in a month or so; right now we just got\n> into feature freeze.\n>\n\nYep. No hurry. I've just been playing with this and wanted to start getting\nfeedback. It's a side-project for me anyway, so time is limited.\n\n\n> Can you illustrate with some examples? I get that one is appending and\n> the other is modifying in-place, but how does this end up looking in\n> the query language?\n>\n\nI'm putting together a few concrete real-world examples.\n\n> Rather than stopping when no new tuples are generated, WITH ITERATIVE\n> > stops when a user-defined predicate evaluates to true.\n>\n> Why stop when it evaluates to true, and not false?\n>\n\nIt's how they implemented it. A few other databases have implemented\nsimilar functionality but, as it's not standard, it's kinda just up to each\nimplementor. I'm not married to that idea, but it has worked well for me so\nfar.\n\nIt seems like the benefit comes from carrying information along within\n> tuples (by adding to scores or counters) rather than appending tuples.\n> Is it possible to achieve this in other ways? 
The recursive CTE\n> implementation is a very direct implementation of the standard, perhaps\n> there are smarter approaches?\n>\n\nYeah, in that specific case, one of the other implementations seems to\ncarry the counters along in the executor itself. But, as not all uses of\nthis functionality are iteration-count-based, I think that's a little\nlimiting. Using a terminator expression (of some kind) seems most\nadaptable, I think. I'll give some examples of both types of cases.\n\n-- \nJonah H. Harris\n\n", "msg_date": "Tue, 28 Apr 2020 11:57:01 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Tue, Apr 28, 2020 at 11:51 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings Jonah!\n>\n\nHey, Stephen. Hope you're doing well, man!\n\n\n> One of the first question that we need to ask though, imv anyway, is- do\n> the other databases use the WITH ITERATIVE syntax? How many of them?\n> Are there other approaches? Has this been brought up to the SQL\n> committee?\n>\n\nThere are four that I'm aware of, but I'll put together a full list. I\ndon't believe it's been proposed to the standards committee, but I'll see\nif I can find transcripts/proposals. I imagine those are all still public.\n\nIn general, we really prefer to avoid extending SQL beyond the standard,\n> especially in ways that the standard is likely to be expanded. In\n> other words, we'd really prefer to avoid the risk that the SQL committee\n> declares WITH ITERATIVE to mean something else in the future, causing us\n> to have to have a breaking change to become compliant.\n\n\nAgreed. That's my main concern as well.\n\n\n> Now, if all the other major DB vendors have WITH ITERATIVE and they all\n> work the same\n> way, then we can have at least some confidence that the SQL committee\n> will end up defining it in that way and there won't be any issue.\n>\n\nThis is where it sucks, as each vendor has done it differently. One uses\nWITH ITERATE, one uses WITH ITERATIVE, others use, what I consider to be, a\nnasty operator-esque style... 
I think ITERATIVE makes the most sense, but\nit does create a future contention as that definitely seems like the syntax\nthe committee would use as well.\n\nOracle ran into this issue as well, which is why they added an additional\nclause to WITH that permits selection of depth/breadth-first search et al.\nWhile it's kinda nasty, we could always similarly amend WITH RECURSIVE to\nadd an additional ITERATE or something to the tail of the with_clause rule.\nBut, that's why I'm looking for feedback :)\n\nWe do have someone on the committee who watches these lists, hopefully\n> they'll have a chance to comment on this. Perhaps it's already in\n> discussion in the committee.\n>\n\nThat would be awesome.\n\n-- \nJonah H. Harris\n\n", "msg_date": "Tue, 28 Apr 2020 12:05:23 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Mon, Apr 27, 2020 at 8:50 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> Can you illustrate with some examples? I get that one is appending and\n> the other is modifying in-place, but how does this end up looking in\n> the query language?\n>\n\nTo ensure credit is given to the original authors, the initial patch I'm\nworking with (against Postgres 11 and 12) came from Denis Hirn, Torsten\nGrust, and Christian Duta. Attached is a quick-and-dirty merge of that\npatch that applies cleanly against 13-devel. It is not solid, at all, but\ndemonstrates the functionality. At present, my updates can be found at\nhttps://github.com/jonahharris/postgres/tree/with-iterative\n\nAs the patch makes use of additional booleans for iteration, if there's\ninterest in incorporating this functionality, I'd like to discuss with Tom,\nRobert, et al regarding the current use of booleans for CTE recursion\ndifferentiation in parsing and planning. 
In terms of making it\nproduction-ready, I think the cleanest method would be to use a bitfield\n(rather than booleans) to differentiate recursive and iterative CTEs.\nThough, as that would touch more code, it's obviously up for discussion.\n\nI'm working on a few good standalone examples of PageRank, shortest path,\netc. But, the simplest Fibonacci example can be found here:\n\nEXPLAIN ANALYZE\nWITH RECURSIVE fib_sum (iteration, previous_number, new_number)\n AS (SELECT 1, 0::numeric, 1::numeric\n UNION ALL\n SELECT (iteration + 1), new_number, (previous_number + new_number)\n FROM fib_sum\n WHERE iteration <= 10000)\nSELECT r.iteration, r.new_number\n FROM fib_sum r\n ORDER BY 1 DESC\n LIMIT 1;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3.81..3.81 rows=1 width=36) (actual time=44.418..44.418\nrows=1 loops=1)\n CTE fib_sum\n -> Recursive Union (cost=0.00..3.03 rows=31 width=68) (actual\ntime=0.005..14.002 rows=10001 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=68) (actual\ntime=0.004..0.004 rows=1 loops=1)\n -> WorkTable Scan on fib_sum (cost=0.00..0.24 rows=3 width=68)\n(actual time=0.001..0.001 rows=1 loops=10001)\n Filter: (iteration <= 10000)\n Rows Removed by Filter: 0\n -> Sort (cost=0.78..0.85 rows=31 width=36) (actual time=44.417..44.417\nrows=1 loops=1)\n Sort Key: r.iteration DESC\n Sort Method: top-N heapsort Memory: 27kB\n -> CTE Scan on fib_sum r (cost=0.00..0.62 rows=31 width=36)\n(actual time=0.009..42.660 rows=10001 loops=1)\n Planning Time: 0.331 ms\n Execution Time: 45.949 ms\n(13 rows)\n\n-- No ORDER/LIMIT is required with ITERATIVE as only a single tuple is\npresent\nEXPLAIN ANALYZE\nWITH ITERATIVE fib_sum (iteration, previous_number, new_number)\n AS (SELECT 1, 0::numeric, 1::numeric\n UNION ALL\n SELECT (iteration + 1), new_number, (previous_number + new_number)\n FROM fib_sum\n WHERE iteration <= 10000)\nSELECT r.iteration, 
r.new_number\n FROM fib_sum r;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n CTE Scan on fib_sum r (cost=3.03..3.65 rows=31 width=36) (actual\ntime=11.626..11.627 rows=1 loops=1)\n CTE fib_sum\n -> Recursive Union (cost=0.00..3.03 rows=31 width=68) (actual\ntime=11.621..11.621 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=68) (actual\ntime=0.001..0.001 rows=1 loops=1)\n -> WorkTable Scan on fib_sum (cost=0.00..0.24 rows=3 width=68)\n(actual time=0.001..0.001 rows=1 loops=10001)\n Filter: (iteration <= 10000)\n Rows Removed by Filter: 0\n Planning Time: 0.068 ms\n Execution Time: 11.651 ms\n(9 rows)\n\n\n-- \nJonah H. Harris", "msg_date": "Tue, 28 Apr 2020 15:38:09 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "Hello Jonah,\n\nNice.\n\n> -- No ORDER/LIMIT is required with ITERATIVE as only a single tuple is\n> present\n> EXPLAIN ANALYZE\n> WITH ITERATIVE fib_sum (iteration, previous_number, new_number)\n> AS (SELECT 1, 0::numeric, 1::numeric\n> UNION ALL\n> SELECT (iteration + 1), new_number, (previous_number + new_number)\n> FROM fib_sum\n> WHERE iteration <= 10000)\n> SELECT r.iteration, r.new_number\n> FROM fib_sum r;\n\nNice.\n\nMy 0,02€ about the feature design:\n\nI'm wondering about how to use such a feature in the context of WITH query \nwith several queries having different behaviors. Currently \"WITH\" \nintroduces a named-query (like a view), \"WITH RECURSIVE\" introduces a mix \nof recursive and named queries, pg really sees whether each one is \nrecursive or not, and \"RECURSIVE\" is required but could just be guessed.\n\nNow that there could be 3 variants in the mix, and for the feature to be \northogonal I think that it should be allowed. 
However, there is no obvious \nway to distinguish a RECURSIVE from an ITERATIVE, as it is more a \nbehavioral thing than a structural one. This suggests allowing to tag each \nquery somehow, eg before, which would be closer to the current approach:\n\n WITH\n foo(i) AS (simple select),\n RECURSIVE bla(i) AS (recursive select),\n ITERATIVE blup(i) AS (iterative select),\n\nor maybe after AS, which may be clearer because closer to the actual \nquery, which looks better to me:\n\n WITH\n foo(i) AS (simple select),\n bla(i) AS RECURSIVE (recursive select),\n boo(i) AS ITERATIVE (iterative select),\n …\n\n\nAlso, with 3 cases I would prefer that the default has a name so someone \ncan talk about it otherwise than saying \"default\". Maybe SIMPLE would \nsuffice, or something else. ISTM that as nothing is expected between AS \nand the open parenthesis, there is no need to have a reserved keyword, \nwhich is a good thing for the parser.\n\n-- \nFabien.", "msg_date": "Wed, 29 Apr 2020 07:09:41 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On 2020-04-29 07:09, Fabien COELHO wrote:\n> I'm wondering about how to use such a feature in the context of WITH query\n> with several queries having different behaviors. Currently \"WITH\"\n> introduces a named-query (like a view), \"WITH RECURSIVE\" introduces a mix\n> of recursive and named queries, pg really sees whether each one is\n> recursive or not, and \"RECURSIVE\" is required but could just be guessed.\n\nYeah the RECURSIVE vs ITERATIVE is a bit of a red herring here. 
As you \nsay, the RECURSIVE keyword doesn't specify the processing but marks the \nfact that the specification of the query is recursive.\n\nI think a syntax that would fit better within the existing framework \nwould be something like\n\nWITH RECURSIVE t AS (\n SELECT base case\n REPLACE ALL -- instead of UNION ALL\n SELECT recursive case\n)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Apr 2020 13:22:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" },
{ "msg_contents": "On Wed, Apr 29, 2020 at 7:22 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> Yeah the RECURSIVE vs ITERATIVE is a bit of a red herring here. As you\n> say, the RECURSIVE keyword doesn't specify the processing but marks the\n> fact that the specification of the query is recursive.\n>\n\nAgreed. I started thinking through Fabien's response last night.\n\nI think a syntax that would fit better within the existing framework\n> would be something like\n>\n> WITH RECURSIVE t AS (\n> SELECT base case\n> REPLACE ALL -- instead of UNION ALL\n> SELECT recursive case\n> )\n>\n\nI was originally thinking more along the lines of Fabien's approach, but\nthis is similarly interesting.\n\n-- \nJonah H. Harris", "msg_date": "Wed, 29 Apr 2020 10:33:46 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" },
{ "msg_contents": "On Wed, Apr 29, 2020 at 10:34 AM Jonah H. Harris <jonah.harris@gmail.com>\nwrote:\n\n> On Wed, Apr 29, 2020 at 7:22 AM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n>\n>> Yeah the RECURSIVE vs ITERATIVE is a bit of a red herring here. As you\n>> say, the RECURSIVE keyword doesn't specify the processing but marks the\n>> fact that the specification of the query is recursive.\n>>\n>\n> Agreed. I started thinking through Fabien's response last night.\n>\n> I think a syntax that would fit better within the existing framework\n>> would be something like\n>>\n>> WITH RECURSIVE t AS (\n>> SELECT base case\n>> REPLACE ALL -- instead of UNION ALL\n>> SELECT recursive case\n>> )\n>>\n>\n> I was originally thinking more along the lines of Fabien's approach, but\n> this is similarly interesting.\n>\n\nObviously I'm very concerned about doing something that the SQL Standard\nwill clobber somewhere down the road. 
Having said that, the recursive\nsyntax always struck me as awkward even by SQL standards.\n\nPerhaps something like this would be more readable\n\nWITH t AS (\n UPDATE ( SELECT 1 AS ctr, 'x' as val )\n SET ctr = ctr + 1, val = val || 'x'\n WHILE ctr <= 100\n RETURNING ctr, val\n)\n\nThe notion of an UPDATE on an ephemeral subquery isn't that special, see\n\"subquery2\" in\nhttps://docs.oracle.com/cd/B19306_01/appdev.102/b14261/update_statement.htm ,\nso the only syntax here without precedence is dropping a WHILE into an\nUPDATE statement.", "msg_date": "Wed, 29 Apr 2020 12:05:08 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" },
{ "msg_contents": "Hello Corey, Hello Peter,\n\nMy 0.02 € about the alternative syntaxes:\n\nPeter:\n\n> I think a syntax that would fit better within the existing framework\n> would be something like\n>\n> WITH RECURSIVE t AS (\n> SELECT base case\n> REPLACE ALL -- instead of UNION ALL\n> SELECT recursive case\n> )\n\nA good point about this approach is that the replacement semantics is \nclear, whereas using ITERATIVE with UNION is very misleading, as it is \n*not* a union at all.\n\nThis said I'm wondering how the parser would react.\n\nMoreover, having a different syntax for normal queries and inside WITH \nquery looks very undesirable from a language design point of view. This\nsuggests that the user should be able to write it anywhere:\n\n SELECT 1 REPLACE SELECT 2;\n\nWell, maybe.\n\nI'm unclear whether \"REPLACE ALL\" vs \"REPLACE\" makes sense, ISTM that \nthere could be only one replacement semantics (delete the content and \ninsert a new one)?\n\nREPLACE should have an associativity defined wrt other operators:\n\n SELECT 1 UNION SELECT 2 REPLACE SELECT 3; -- how many rows?\n\nI do not see anything obvious. Probably 2 rows.\n\nCorey:\n\n> Obviously I'm very concerned about doing something that the SQL Standard\n> will clobber somewhere down the road. 
Having said that, the recursive\n> syntax always struck me as awkward even by SQL standards.\n\nIndeed!\n\n> Perhaps something like this would be more readable\n>\n> WITH t AS (\n> UPDATE ( SELECT 1 AS ctr, 'x' as val )\n> SET ctr = ctr + 1, val = val || 'x'\n> WHILE ctr <= 100\n> RETURNING ctr, val\n> )\n>\n> The notion of an UPDATE on an ephemeral subquery isn't that special, see\n> \"subquery2\" in\n> https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/update_statement.htm ,\n\nI must admit that I do not like much needing another level of subquery, \nbut maybe it could just be another named query in the WITH statement.\n\nISTM that UPDATE is quite restrictive as the number of rows cannot \nchange, which does not seem desirable at all? How could I add or remove \nrows from one iteration to the next?\n\nISTM that the WHILE would be checked before updating, so that WHILE FALSE \ndoes nothing, in which case its position after SET is odd.\n\nHaving both WHERE and WHILE might look awkward.\n\nAlso it looks much more procedural this way, which is the point, but also \ndepart from the declarative SELECT approach of WITH RECURSIVE.\n\n-- \nFabien.", "msg_date": "Wed, 29 Apr 2020 20:43:04 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": ">\n>\n> > Perhaps something like this would be more readable\n> >\n> > WITH t AS (\n> > UPDATE ( SELECT 1 AS ctr, 'x' as val )\n> > SET ctr = ctr + 1, val = val || 'x'\n> > WHILE ctr <= 100\n> > RETURNING ctr, val\n> > )\n> >\n> > The notion of an UPDATE on an ephemeral subquery isn't that special, see\n> > \"subquery2\" in\n> >\n> https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/update_statement.htm\n> ,\n>\n> I must admit that I do not like much needing another level of subquery,\n> but maybe it could just be another named query in the WITH statement.\n>\n\nSo like this:\nWITH initial_conditions as (SELECT 1 as ctr, 'x' as 
val)\nUPDATE initial_conditions\nSET ctr = ctr + 1, val = val || 'x'\nWHILE ctr <= 100\nRETURNING ctr, val\n\n\n> ISTM that UPDATE is quite restrictive as the number of rows cannot\n> change, which does not seem desirable at all? How could I add or remove\n> rows from one iteration to the next?\n>\n\nMy understanding was that maintaining a fixed number of rows was a desired\nfeature.\n\n\n> ISTM that the WHILE would be checked before updating, so that WHILE FALSE\n> does nothing, in which case its position after SET is odd.\n>\n\nTrue, but having the SELECT before the FROM is equally odd.\n\n\n> Having both WHERE and WHILE might look awkward.\n>\n\nMaybe an UNTIL instead of WHILE?\n\n\n>\n> Also it looks much more procedural this way, which is the point, but also\n> depart from the declarative SELECT approach of WITH RECURSIVE.\n>\n\nYeah, just throwing it out as a possibility. Looking again at what I\nsuggested, it looks a bit like the Oracle \"CONNECT BY level <= x\" idiom.\n\nI suspect that the SQL standards body already has some preliminary work\ndone, and we should ultimately follow that.", "msg_date": "Wed, 29 Apr 2020 16:50:35 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" },
{ "msg_contents": "On Wed, Apr 29, 2020 at 4:50 PM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n> Having both WHERE and WHILE might look awkward.\n>>\n>\n> Maybe an UNTIL instead of WHILE?\n>\n\nWhile I'm not a huge fan of it, one of the other databases implementing\nthis functionality does so using the syntax:\n\nWITH ITERATIVE R AS '(' R0 ITERATE Ri UNTIL N (ITERATIONS | UPDATES) ')' Qf\n\nWhere N in ITERATIONS represents termination at an explicit count and, in\nUPDATES, represents termination after Ri updates more than n rows on table\nR.\n\n-- \nJonah H. 
Harris", "msg_date": "Wed, 29 Apr 2020 17:54:04 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" },
{ "msg_contents": "On Wed, Apr 29, 2020 at 5:54 PM Jonah H. Harris <jonah.harris@gmail.com>\nwrote:\n\n> On Wed, Apr 29, 2020 at 4:50 PM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n>\n>> Having both WHERE and WHILE might look awkward.\n>>>\n>>\n>> Maybe an UNTIL instead of WHILE?\n>>\n>\n> While I'm not a huge fan of it, one of the other databases implementing\n> this functionality does so using the syntax:\n>\n> WITH ITERATIVE R AS '(' R0 ITERATE Ri UNTIL N (ITERATIONS | UPDATES) ')' Qf\n>\n> Where N in ITERATIONS represents termination at an explicit count and, in\n> UPDATES, represents termination after Ri updates more than n rows on table\n> R.\n>\n\nSent too soon :) One of the main reasons I dislike the above is that it\nassumes N is known. In some cases, however, you really need termination\nupon a condition.\n\n-- \nJonah H. Harris", "msg_date": "Wed, 29 Apr 2020 17:59:27 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposing WITH ITERATIVE" },
{ "msg_contents": "\nHello,\n\nmore random thoughts about syntax, semantics, and keeping it relational.\n\n> While I'm not a huge fan of it, one of the other databases implementing\n> this functionality does so using the syntax:\n>\n> WITH ITERATIVE R AS '(' R0 ITERATE Ri UNTIL N (ITERATIONS | UPDATES) ')' Qf\n>\n> Where N in ITERATIONS represents termination at an explicit count and, in\n> UPDATES, represents termination after Ri updates more than n rows on table\n> R.\n>\n> One of the main reasons I dislike the above is that it assumes N is \n> known. In some cases, however, you really need termination upon a \n> condition.\n\nYes, definitely, a (boolean?) condition is really needed, but possibly \nabove N could be an expression, maybe with some separator before the \nquery.\n\nISTM that using SELECT iterations is relational and close to the currently \nexisting RECURSIVE. Separating the initialization and iterations with \nITERATE is kind of the same approach than Peter's REPLACE, somehow, i.e. 
a \nnew marker.\n\nThe above approach bothers me because it changes the query syntax a lot. \nThe inside-WITH syntax should be the same as the normal query syntax.\n\nFirst try. If we go to new markers, maybe the following, which kind of \nreuse Corey explicit condition, but replacing UPDATE with SELECT which \nmakes it more generic:\n\n WITH R AS (\n ITERATE [STARTING] FROM R0\n WHILE/UNTIL condition REPEAT Ri\n );\n\nOk, it is quite procedural. It is really just a reordering of the syntax \nshown above, with a boolean condition thrown in and a heavy on (key)words \nSQL-like look and feel. It seems to make sense on a simple example:\n\n -- 1 by 1 count\n WITH counter(n) (\n ITERATE STARTING FROM\n SELECT 1\n WHILE n < 10 REPEAT\n SELECT n+1 FROM counter\n );\n\nHowever I'm very unclear about the WHILE stuff, it makes some sense here \nbecause there is just one row, but what if there are severals?\n\n -- 2 by 2 count\n WITH counter(n) (\n ITERATE [STARTING FROM? OVER? nothing?]\n SELECT 1 UNION SELECT 2 -- cannot be empty? why not?\n WHILE n < 10 REPEAT\n -- which n it is just above?\n -- shoult it add a ANY/ALL semantics?\n -- should it really be a sub-query returning a boolean?\n -- eg: WHILE TRUE = ANY/ALL (SELECT n < 10 FROM counter)\n -- which I find pretty ugly.\n -- what else could it be?\n SELECT n+2 FROM counter\n );\n\nAlso, the overall syntax does not make much sense outside a WITH because \none cannot reference the initial query which has no name.\n\nHmmm. Not very convincing:-) Let us try again.\n\nBasically iterating is a 3 select construct: one for initializing, one for \niterating, one for the stopping condition, with naming issues, the last \npoint being exactly what WITH should solve.\n\nby restricting the syntax to normal existing selects and moving things \nout:\n\n WITH stuff(n) AS\n ITERATE OVER/FROM/STARTING FROM '(' initial-sub-query ')' -- or a table?\n WHILE/UNTIL '(' condition-sub-query ')'\n -- what is TRUE/FALSE? non empty? 
other?\n -- WHILE/UNTIL [NOT] EXISTS '(' query ')' ??\n REPEAT/DO/LOOP/... '(' sub-query-over-stuff ')'\n );\n\nAt least the 3 sub-queries are just standard queries, only the wrapping \naround (ITERATE ... WHILE/UNTIL ... REPEAT ...) is WITH specific, which is \nsomehow better than having new separators in the query syntax itself. It \nis pretty relational inside, and procedural on the outside, the two levels \nare not mixed, which is the real win from my point of view.\n\nISTM that the key take away from the above discussion is to keep the \noverhead syntax in WITH, it should not be moved inside the query in any \nway, like adding REPLACE or WHILE or whatever there. This way there is \nminimal interference with future query syntax extensions, there is only a \nspecific WITH-level 3-query construct with pretty explicit markers.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 30 Apr 2020 08:00:07 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" }, { "msg_contents": "On Tue, 2020-04-28 at 11:57 -0400, Jonah H. Harris wrote:\n> Yeah, in that specific case, one of the other implementations seems\n> to carry the counters along in the executor itself. But, as not all\n> uses of this functionality are iteration-count-based, I think that's\n> a little limiting. Using a terminator expression (of some kind) seems\n> most adaptable, I think. I'll give some examples of both types of\n> cases.\n\nIn my experience, graph algorithms or other queries doing more\nspecialized analysis tend to get pretty complex with lots of special\ncases. Users may want to express these algorithms in a more familiar\nlanguage (R, Python, etc.), and to version the code (probably in an\nextension).\n\nHave you considered taking this to the extreme and implementing\nsomething like User-Defined Table Operators[1]? 
Or is there a\nmotivation for writing such algorithms inline in SQL?\n\nRegards,\n\tJeff Davis\n\n[1] http://www.vldb.org/conf/1999/P47.pdf\n\n\n\n\n", "msg_date": "Thu, 30 Apr 2020 08:58:09 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Proposing WITH ITERATIVE" } ]
[ { "msg_contents": "Hello --\n\nI regularly build GiST indexes on large databases. In recent times, as the\nsize of the database has ballooned (30 million rows) along with the build\ntime on a column of points in 3- and 8-dimensional space (0-volume cube).\n\nIs anyone working on (or already implemented) a parallel GiST build on\nCube? If not, I'd like to try contributing this--any pointers from folks\nfamiliar with the backend of Cube? Any input would be great.\n\nThanks,\nShyam\n\n--\nShyam Saladi <http://shyam.saladi.org>\nNSF Graduate Research Fellow - Clemons Lab\nBiochemistry and Molecular Biophysics\nCalifornia Institute of Technology", "msg_date": "Mon, 27 Apr 2020 10:56:12 -0700", "msg_from": "Shyam Saladi <saladi@caltech.edu>", "msg_from_op": true, "msg_subject": "Parallel GiST build on Cube" },
{ "msg_contents": "Hello,\n\nThese things for GIST I know that can help:\n - Fast sorting GIST build commitfest entry by Andrey Borodin, not parallel\nbut faster -\nhttps://www.postgresql.org/message-id/flat/1A36620E-CAD8-4267-9067-FB31385E7C0D%40yandex-team.ru\n\n - Fast sorting GIST build by Nikita Glukhov, reuses btree code so also\nparallel -\nhttps://github.com/postgres/postgres/compare/master...glukhovn:gist_btree_build\n\n - \"Choose Subtree\" routine is needed, as current \"penalty\" is very\ninefficient -\nhttps://www.postgresql.org/message-id/flat/CAPpHfdssv2i7CXTBfiyR6%3D_A3tp19%2BiLo-pkkfD6Guzs2-tvEw%40mail.gmail.com#eaa98342462a4713c0d3a94be636e259\n\nThese are very wanted for PostGIS which also indexes everything by 2-4\ndimensional cubes and require improvements in core infrastructure and\nopclass.\n\n\n\n\nOn Mon, Apr 27, 2020 at 8:57 PM Shyam Saladi <saladi@caltech.edu> wrote:\n\n> Hello --\n>\n> I regularly build GiST indexes on large databases. In recent times, as the\n> size of the database has ballooned (30 million rows) along with the build\n> time on a column of points in 3- and 8-dimensional space (0-volume cube).\n>\n> Is anyone working on (or already implemented) a parallel GiST build on\n> Cube? If not, I'd like to try contributing this--any pointers from folks\n> familiar with the backend of Cube? 
Any input would be great.\n>\n> Thanks,\n> Shyam\n>\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa", "msg_date": "Mon, 27 Apr 2020 21:48:47 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: Parallel GiST build on Cube" } ]
[ { "msg_contents": "Attached are two patches, both of which are fixes for bugs in nbtree\nVACUUM page deletion.\n\nThe first fix for a bug in commit 857f9c36cda. The immediate issue is\nthat the code that maintains the oldest btpo.xact in the index\naccesses the special area of pages without holding a buffer pin. More\nfundamentally, the logic for examining pages that are deleted by the\ncurrent VACUUM operation for the purposes of maintaining the oldest\nbtpo.xact needs to be pushed down into the guts of the page deletion\ncode. This has been described on other threads [1][2], but I'm\nstarting a new one to focus attention on the bugs themselves (any\ndiscussion of Valgrind instrumentation can remain on the other\nthread).\n\nThe second fix is a spin-off of the first. It fixes a much less\nserious issue that is present in all supported versions (it has\nprobably been around forever). The issue is with undercounting in the\n\"%u index pages have been deleted\" figure reported by VACUUM VERBOSE.\n\nFor example, suppose I delete most of the tuples from a table with a\nB-Tree index, leading to lots of empty pages that are candidates for\ndeletion (the specifics really aren't very important). I then run\nVACUUM VERBOSE, and can see something like this:\n\n4900 index pages have been deleted, 0 are currently reusable.\n\nIf I immediately run another VACUUM VERBOSE, I can see this:\n\n4924 index pages have been deleted, 4924 are currently reusable.\n\nIt makes sense that the pages weren't reusable in the first VACUUM,\nbut were in the second VACUUM. But where did the extra 24 pages come\nfrom? That doesn't make any sense at all. With the second patch\napplied, the same test case shows the correct number (\"4924 index\npages have been deleted\") consistently. See the patch for details of\nhow the accounting is wrong. 
(The short version is that we need to\npush the accounting down into the guts of page deletion, which is why\nthe second patch relies on the first patch.)\n\nI would like to backpatch both patches to all branches that have\ncommit 857f9c36cda -- v11, v12, and master. The second issue isn't\nserious, but it seems worth keeping v11+ in sync in this area. Note\nthat any backpatch theoretically creates an ABI break for callers of\nthe _bt_pagedel() function. Perhaps I could get around this by using\nglobal state in the back branches or something, but that's ugly as\nsin. Besides, I have a very hard time imagining an extension that\nfeels entitled to call _bt_pagedel(). There are all kinds of ways in\nwhich that's a terrible idea. And it's hard to imagine how that could\never seem useful. I'd like to hear what other people think about this\nABI issue in particular, though.\n\n[1] https://postgr.es/m/CAH2-Wz=mWPBLZ2cr95cBV=kzPav8OOBtkHTfg+-+AUiozANK0w@mail.gmail.com\n[2] https://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com\n-- \nPeter Geoghegan", "msg_date": "Mon, 27 Apr 2020 11:02:59 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Fixes for two separate bugs in nbtree VACUUM's page deletion" }, { "msg_contents": "On Mon, Apr 27, 2020 at 11:02 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I would like to backpatch both patches to all branches that have\n> commit 857f9c36cda -- v11, v12, and master. The second issue isn't\n> serious, but it seems worth keeping v11+ in sync in this area. 
Note\n> that any backpatch theoretically creates an ABI break for callers of\n> the _bt_pagedel() function.\n\nI plan to commit both fixes, while backpatching to v11, v12 and the\nmaster branch on Friday -- barring objections.\n\nI'm almost sure that the ABI thing is a non-issue, but it would be\nnice to get that seconded.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 27 Apr 2020 19:35:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Fixes for two separate bugs in nbtree VACUUM's page deletion" }, { "msg_contents": "On Tue, 28 Apr 2020 at 11:35, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Apr 27, 2020 at 11:02 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I would like to backpatch both patches to all branches that have\n> > commit 857f9c36cda -- v11, v12, and master. The second issue isn't\n> > serious, but it seems worth keeping v11+ in sync in this area. Note\n> > that any backpatch theoretically creates an ABI break for callers of\n> > the _bt_pagedel() function.\n>\n> I plan to commit both fixes, while backpatching to v11, v12 and the\n> master branch on Friday -- barring objections.\n\nThank you for the patches and for starting the new thread.\n\nI agree with both patches.\n\nFor the first fix it seems better to push down the logic to the page\ndeletion code as your 0001 patch does so. The following change changes\nthe page deletion code so that it emits LOG message indicating the\nindex corruption when a deleted page is passed. But we used to ignore\nin this case.\n\n@@ -1511,14 +1523,21 @@ _bt_pagedel(Relation rel, Buffer buf)\n\n for (;;)\n {\n- page = BufferGetPage(buf);\n+ page = BufferGetPage(leafbuf);\n opaque = (BTPageOpaque) PageGetSpecialPointer(page);\n\n /*\n * Internal pages are never deleted directly, only as part of deleting\n * the whole branch all the way down to leaf level.\n+ *\n+ * Also check for deleted pages here. Caller never passes us a fully\n+ * deleted page. 
Only VACUUM can delete pages, so there can't have\n+ * been a concurrent deletion. Assume that we reached any deleted\n+ * page encountered here by following a sibling link, and that the\n+ * index is corrupt.\n */\n- if (!P_ISLEAF(opaque))\n+ Assert(!P_ISDELETED(opaque));\n+ if (!P_ISLEAF(opaque) || P_ISDELETED(opaque))\n {\n /*\n * Pre-9.4 page deletion only marked internal pages as half-dead,\n@@ -1537,13 +1556,21 @@ _bt_pagedel(Relation rel, Buffer buf)\n errmsg(\"index \\\"%s\\\" contains a half-dead\ninternal page\",\n RelationGetRelationName(rel)),\n errhint(\"This can be caused by an interrupted\nVACUUM in version 9.3 or older, before upgrade. Please REINDEX\nit.\")));\n- _bt_relbuf(rel, buf);\n+\n+ if (P_ISDELETED(opaque))\n+ ereport(LOG,\n+ (errcode(ERRCODE_INDEX_CORRUPTED),\n+ errmsg(\"index \\\"%s\\\" contains a deleted page\nwith a live sibling link\",\n+ RelationGetRelationName(rel))));\n+\n+ _bt_relbuf(rel, leafbuf);\n return ndeleted;\n }\n\nI agree with this change but since I'm not sure this change directly\nrelates to this bug I wonder if it can be a separate patch.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 28 Apr 2020 16:20:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fixes for two separate bugs in nbtree VACUUM's page deletion" }, { "msg_contents": "On Tue, Apr 28, 2020 at 12:21 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> I agree with both patches.\n\nThanks for the review.\n\n> For the first fix it seems better to push down the logic to the page\n> deletion code as your 0001 patch does so. The following change changes\n> the page deletion code so that it emits LOG message indicating the\n> index corruption when a deleted page is passed. 
But we used to ignore\n> in this case.\n\n> I agree with this change but since I'm not sure this change directly\n> relates to this bug I wonder if it can be a separate patch.\n\nI am not surprised that you are concerned about this. That was the\nhardest decision I had to make when writing the patch.\n\nThe central idea in the 0001-* patch (as I say at the end of the\ncommit message) is that we have a strict rule about where we handle\nmaintaining the oldest btpo.xact for already-deleted pages, and where\nwe handle it for pages that become deleted. The rules is this:\nbtvacuumpage() is strictly responsible for the former category (as\nwell as totally live pages), while _bt_pagedel() is responsible for\nthe latter category (this includes pages that started out as half-dead\nbut become deleted).\n\nIn order for this to work, _bt_pagedel() must not ever see a deleted\npage that it did not delete itself. If I didn't explicitly take a\nstrict position on this, then I suppose that I'd have to have similar\nhandling for maintaining the oldest btpo.xact for existing deleted\npages encountered within _bt_pagedel(). But that doesn't make any\nsense, even with corruption, so I took a strict position. Even with\ncorruption, how could we encounter an *existing* deleted page in\n_bt_pagedel() but not encounter the same page within some call to\nbtvacuumpage() made from btvacuumscan()? In other words, how can we\nfail to maintain the oldest btpo.xact for deleted pages even if we\nassume that the index has a corrupt page with sibling links that point\nto a fully deleted page? It seems impossible, even in this extreme,\ncontrived scenario.\n\nThen there is the separate question of the new log message about\ncorruption. 
It's not too hard to see why _bt_pagedel() cannot ever\naccess an existing deleted page unless there is corruption:\n\n* _bt_pagedel()'s only caller (the btvacuumpage() caller) will simply\nnever pass a deleted page -- it handles those directly.\n\n* _bt_pagedel() can only access additional leaf pages to consider\ndeleting them by following the right sibling link of the\nbtvacuumpage() caller's leaf page (or possibly some page even further\nto the right if it ends up deleting many leaf pages in one go).\nFollowing right links and finding a deleted page can only happen when\nthere is a concurrent page deletion by VACUUM, since the sibling links\nto the deleted page are changed by the second stage of deletion (i.e.\nby the atomic action where the page is actually marked deleted,\nwithin _bt_unlink_halfdead_page()).\n\n* There cannot be a concurrent VACUUM, though, since _bt_pagedel()\nruns in VACUUM. (Note that we already depend on this for correctness\nin _bt_unlink_halfdead_page(), which has a comment that says \"a\nhalf-dead page cannot resurrect because there can be only one vacuum\nprocess running at a time\".)\n\nThe strict position seems justified. It really isn't that strict. I'm\nnot concerned about the new LOG message concerning corruption being\nannoying or wrong.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Apr 2020 09:17:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Fixes for two separate bugs in nbtree VACUUM's page deletion" }, { "msg_contents": "On Tue, Apr 28, 2020 at 12:21 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> For the first fix it seems better to push down the logic to the page\n> deletion code as your 0001 patch does so. The following change changes\n> the page deletion code so that it emits LOG message indicating the\n> index corruption when a deleted page is passed. 

But we used to ignore\n> in this case.\n\nAttached is v2 of the patch set, which includes a new patch that I\nwant to target the master branch with. This patch (v2-0003-*)\nrefactors btvacuumscan().\n\nThis revision also changed the first bugfix patch. I have simplified\nsome of the details in nbtree.c that were added by commit 857f9c36cda.\nCan't we call _bt_update_meta_cleanup_info() at a lower level, like in\nthe patch? AFAICT it makes more sense to just call it in\nbtvacuumscan(). Please let me know what you think of those changes.\n\nThe big change in the new master-only refactoring patch (v2-0003-*) is\nthat I now treat a call to btvacuumpage() that has to\n\"backtrack/recurse\" and doesn't find a page that looks like the\nexpected right half of a page split (a page split that occurred since\nthe VACUUM operation began) as indicating corruption. This corruption\nis logged. I believe that we should never see this happen, for reasons\nthat are similar to the reasons why _bt_pagedel() never finds a\ndeleted page when moving right, as I went into in my recent e-mail to\nthis thread. I think that this is worth tightening up for Postgres 13.\n\nI will hold off on committing v2-0003-*, since I need to nail down the\nreasoning for treating this condition as corruption. Plus it's not\nurgent. I think that the general direction taken in v2-0003-* is the\nright one, in any case. The \"recursion\" in btvacuumpage() doesn't make\nmuch sense -- it obscures what's really going on IMV. Also, using the\nvariable name \"scanblkno\" at every level -- btvacuumscan(),\nbtvacuumpage(), and _bt_pagedel() -- makes it clear that it's the same\nblock at all levels of the code. And that the \"backtrack/recursion\"\ncase doesn't kill tuples from the \"scanblkno\" block (and doesn't\nattempt to call _bt_pagedel(), which is designed only to handle blocks\npassed by btvacuumpage() when they are the current \"scanblkno\"). 
It\nseems unlikely that the VACUUM VERBOSE pages deleted accounting bug\nwould have happened if the code was already structured this way.\n\n--\nPeter Geoghegan", "msg_date": "Wed, 29 Apr 2020 15:29:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Fixes for two separate bugs in nbtree VACUUM's page deletion" }, { "msg_contents": "On Wed, 29 Apr 2020 at 01:17, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Apr 28, 2020 at 12:21 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > I agree with both patches.\n>\n> Thanks for the review.\n>\n> > For the first fix it seems better to push down the logic to the page\n> > deletion code as your 0001 patch does so. The following change changes\n> > the page deletion code so that it emits LOG message indicating the\n> > index corruption when a deleted page is passed. But we used to ignore\n> > in this case.\n>\n> > I agree with this change but since I'm not sure this change directly\n> > relates to this bug I wonder if it can be a separate patch.\n>\n> I am not surprised that you are concerned about this. That was the\n> hardest decision I had to make when writing the patch.\n>\n> The central idea in the 0001-* patch (as I say at the end of the\n> commit message) is that we have a strict rule about where we handle\n> maintaining the oldest btpo.xact for already-deleted pages, and where\n> we handle it for pages that become deleted. The rules is this:\n> btvacuumpage() is strictly responsible for the former category (as\n> well as totally live pages), while _bt_pagedel() is responsible for\n> the latter category (this includes pages that started out as half-dead\n> but become deleted).\n>\n> In order for this to work, _bt_pagedel() must not ever see a deleted\n> page that it did not delete itself. 
If I didn't explicitly take a\n> strict position on this, then I suppose that I'd have to have similar\n> handling for maintaining the oldest btpo.xact for existing deleted\n> pages encountered within _bt_pagedel(). But that doesn't make any\n> sense, even with corruption, so I took a strict position. Even with\n> corruption, how could we encounter an *existing* deleted page in\n> _bt_pagedel() but not encounter the same page within some call to\n> btvacuumpage() made from btvacuumscan()? In other words, how can we\n> fail to maintain the oldest btpo.xact for deleted pages even if we\n> assume that the index has a corrupt page with sibling links that point\n> to a fully deleted page? It seems impossible, even in this extreme,\n> contrived scenario.\n>\n> Then there is the separate question of the new log message about\n> corruption. It's not too hard to see why _bt_pagedel() cannot ever\n> access an existing deleted page unless there is corruption:\n>\n> * _bt_pagedel()'s only caller (the btvacuumpage() caller) will simply\n> never pass a deleted page -- it handles those directly.\n>\n> * _bt_pagedel() can only access additional leaf pages to consider\n> deleting them by following the right sibling link of the\n> btvacuumpage() caller's leaf page (or possibly some page even further\n> to the right if it ends up deleting many leaf pages in one go).\n> Following right links and finding a deleted page can only happen when\n> there is a concurrent page deletion by VACUUM, since the sibling links\n> to the deleted page are changed by the second stage of deletion (i.e.\n> by the atomic actionat where the page is actually marked deleted,\n> within _bt_unlink_halfdead_page()).\n>\n> * There cannot be a concurrent VACUUM, though, since _bt_pagedel()\n> runs in VACUUM. 
(Note that we already depend on this for correctness\n> in _bt_unlink_halfdead_page(), which has a comment that says \"a\n> half-dead page cannot resurrect because there can be only one vacuum\n> process running at a time\".)\n>\n> The strict position seems justified. It really isn't that strict. I'm\n> not concerned about the new LOG message concerning corruption being\n> annoying or wrong\n\nThank you for the explanation. This makes sense to me. We've ever been\nable to regard it as an index corruption that an existing deleted page\nis detected within _bt_pagedel(). But with changes required for fixing\nthis bug, the responsibilities will change so that _bt_pagedel() must\nnot see a deleted page. Therefore we can take a more strict position.\nIt's not that since this bug could lead an index corruption we will\nstart warning it.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 30 Apr 2020 15:38:04 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fixes for two separate bugs in nbtree VACUUM's page deletion" }, { "msg_contents": "On Thu, 30 Apr 2020 at 07:29, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Apr 28, 2020 at 12:21 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > For the first fix it seems better to push down the logic to the page\n> > deletion code as your 0001 patch does so. The following change changes\n> > the page deletion code so that it emits LOG message indicating the\n> > index corruption when a deleted page is passed. But we used to ignore\n> > in this case.\n>\n> Attached is v2 of the patch set, which includes a new patch that I\n> want to target the master branch with. This patch (v2-0003-*)\n> refactors btvacuumscan().\n>\n> This revision also changed the first bugfix patch. 
I have simplified\n> some of the details in nbtree.c that were added by commit 857f9c36cda.\n> Can't we call _bt_update_meta_cleanup_info() at a lower level, like in\n> the patch? AFAICT it makes more sense to just call it in\n> btvacuumscan(). Please let me know what you think of those changes.\n\nYes, I agree.\n\n>\n> The big change in the new master-only refactoring patch (v2-0003-*) is\n> that I now treat a call to btvacuumpage() that has to\n> \"backtrack/recurse\" and doesn't find a page that looks like the\n> expected right half of a page split (a page split that occurred since\n> the VACUUM operation began) as indicating corruption. This corruption\n> is logged. I believe that we should never see this happen, for reasons\n> that are similar to the reasons why _bt_pagedel() never finds a\n> deleted page when moving right, as I went into in my recent e-mail to\n> this thread. I think that this is worth tightening up for Postgres 13.\n>\n> I will hold off on committing v2-0003-*, since I need to nail down the\n> reasoning for treating this condition as corruption. Plus it's not\n> urgent. I think that the general direction taken in v2-0003-* is the\n> right one, in any case. The \"recursion\" in btvacuumpage() doesn't make\n> much sense -- it obscures what's really going on IMV. Also, using the\n> variable name \"scanblkno\" at every level -- btvacuumscan(),\n> btvacuumpage(), and _bt_pagedel() -- makes it clear that it's the same\n> block at all levels of the code. And that the \"backtrack/recursion\"\n> case doesn't kill tuples from the \"scanblkno\" block (and doesn't\n> attempt to call _bt_pagedel(), which is designed only to handle blocks\n> passed by btvacuumpage() when they are the current \"scanblkno\"). It\n> seems unlikely that the VACUUM VERBOSE pages deleted accounting bug\n> would have happened if the code was already structured this way.\n\n+1 for refactoring. 
I often get confused that btvacuumpage() takes two\nblock numbers (blkno and orig_blkno) in spite of the same value. Your\nchange also makes it clear.\n\nFor the part of treating that case as an index corruption I will need\nsome time to review because of lacking knowledge of btree indexes. So\nI'll review it later.\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 30 Apr 2020 16:20:18 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fixes for two separate bugs in nbtree VACUUM's page deletion" }, { "msg_contents": "On Wed, Apr 29, 2020 at 11:38 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> Thank you for the explanation. This makes sense to me.\n\nPushed both of the fixes.\n\nThanks!\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 1 May 2020 09:53:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Fixes for two separate bugs in nbtree VACUUM's page deletion" }, { "msg_contents": "On Thu, Apr 30, 2020 at 12:20 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> For the part of treating that case as an index corruption I will need\n> some time to review because of lacking knowledge of btree indexes. So\n> I'll review it later.\n\nI pushed the refactoring patch today. Thanks for the review.\n\nThe final test for corruption that I added to btvacuumscan() is less\naggressive than what you saw in the patch I posted. We only report\ncorruption when backtracking/recursing if the page is \"new\", not a\nleaf page, or is half-dead. We don't treat a fully deleted page as\ncorruption, because there is a case where the same call to\nbtvacuumscan() may have called _bt_pagedel() already, which may have\ndeleted the block that we backtrack/recurse to. 
The \"sibling links\ncannot point to a deleted page without concurrent deletion, and we\nknow that can't happen because we are VACUUM\" stuff doesn't really\napply -- we remember which block we will recurse to *before* we\nactually call _bt_pagedel().\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 2 May 2020 14:12:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Fixes for two separate bugs in nbtree VACUUM's page deletion" } ]
[ { "msg_contents": "Hi hackers,\n\nI happened to notice $subject and not sure if it's an issue or not. When\nwe're trying to remove a LEFT JOIN, one of the requirements is the inner\nside needs to be a single baserel. If there is a join qual that is a\nsublink and can be converted to a semi join with the inner side rel, the\ninner side would no longer be a single baserel and as a result the LEFT\nJOIN can no longer be removed.\n\nHere is an example to illustrate this behavior:\n\ncreate table a(i int, j int);\ncreate table b(i int UNIQUE, j int);\ncreate table c(i int, j int);\n\n# explain (costs off) select a.i from a left join b on a.i = b.i and\n b.j in (select j from c where b.i = c.i);\n QUERY PLAN\n---------------\n Seq Scan on a\n(1 row)\n\nFor the query above, we do not pull up the sublink and the LEFT JOIN is\nremoved.\n\n\n# explain (costs off) select a.i from a left join b on a.i = b.i and\n b.j in (select j from c);\n QUERY PLAN\n---------------------------------------\n Hash Left Join\n Hash Cond: (a.i = b.i)\n -> Seq Scan on a\n -> Hash\n -> Hash Semi Join\n Hash Cond: (b.j = c.j)\n -> Seq Scan on b\n -> Hash\n -> Seq Scan on c\n(9 rows)\n\nNow for this above query, the sublink is pulled up to be a semi-join\nwith inner side rel 'b', which makes the inner side no longer a single\nbaserel. That causes the LEFT JOIN failing to be removed.\n\nThat is to say, pulling up sublink sometimes breaks join-removal logic.\nIs this an issue that bothers you too?\n\nThanks\nRichard\n\n", "msg_date": "Tue, 28 Apr 2020 15:04:30 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Pulling up sublink may break join-removal logic" }, { "msg_contents": "On Tue, 28 Apr 2020 at 19:04, Richard Guo <guofenglinux@gmail.com> wrote:\n> I happened to notice $subject and not sure if it's an issue or not. When\n> we're trying to remove a LEFT JOIN, one of the requirements is the inner\n> side needs to be a single baserel. 

If there is a join qual that is a\n> sublink and can be converted to a semi join with the inner side rel, the\n> inner side would no longer be a single baserel and as a result the LEFT\n> JOIN can no longer be removed.\n\nI think, in theory at least, that can be fixed by [1], where we no\nlonger rely on looking to see if the RelOptInfo has a unique index to\ndetermine if the relation can duplicate outer side rows during the\njoin. Of course, they'll only exist on base relations, so hence the\ncheck you're talking about. Instead, the patch's idea is to propagate\nuniqueness down the join tree in the form of UniqueKeys.\n\nA quick glance shows there are a few implementation details of join\nremovals of why the removal still won't work with [1]. For example,\nthe singleton rel check causes it to abort both on the pre-check and\nthe final join removal check. There's also the removal itself that\nassumes we're just removing a single relation. I'd guess that would\nneed to loop over the min_righthand relids with a bms_next_member loop\nand remove each base rel one by one. I'd need to look in more detail\nto know if there are any other limiting factors there.\n\nI'll link back here over on that thread to see if Andy would like to\ntake a quick look at it.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKU4AWoymsjbm5KDYbsko13GUfG57pX1QyC3Y8sDHyrfoQeyQQ@mail.gmail.com\n\n\n", "msg_date": "Wed, 29 Apr 2020 12:22:54 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pulling up sublink may break join-removal logic" }, { "msg_contents": "On Wed, Apr 29, 2020 at 8:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 28 Apr 2020 at 19:04, Richard Guo <guofenglinux@gmail.com> wrote:\n> > I happened to notice $subject and not sure if it's an issue or not. When\n> > we're trying to remove a LEFT JOIN, one of the requirements is the inner\n> > side needs to be a single baserel. 
If there is a join qual that is a\n> > sublink and can be converted to a semi join with the inner side rel, the\n> > inner side would no longer be a single baserel and as a result the LEFT\n> > JOIN can no longer be removed.\n>\n> I think, in theory at least, that can be fixed by [1], where we no\n> longer rely on looking to see if the RelOptInfo has a unique index to\n> determine if the relation can duplicate outer side rows during the\n> join. Of course, they'll only exist on base relations, so hence the\n> check you're talking about. Instead, the patch's idea is to propagate\n> uniqueness down the join tree in the form of UniqueKeys.\n>\n\nDo you mean we're tracking the uniqueness of each RelOptInfo, baserel or\njoinrel, with UniqueKeys? I like the idea!\n\n\n>\n> A quick glance shows there are a few implementation details of join\n> removals of why the removal still won't work with [1]. For example,\n> the singleton rel check causes it to abort both on the pre-check and\n> the final join removal check. There's also the removal itself that\n> assumes we're just removing a single relation. I'd guess that would\n> need to loop over the min_righthand relids with a bms_next_member loop\n> and remove each base rel one by one. I'd need to look in more detail\n> to know if there are any other limiting factors there.\n>\n\nYeah, we'll have to teach remove_useless_joins to work with multiple\nrelids.\n\nThanks\nRichard\n\nOn Wed, Apr 29, 2020 at 8:23 AM David Rowley <dgrowleyml@gmail.com> wrote:On Tue, 28 Apr 2020 at 19:04, Richard Guo <guofenglinux@gmail.com> wrote:\n> I happened to notice $subject and not sure if it's an issue or not. When\n> we're trying to remove a LEFT JOIN, one of the requirements is the inner\n> side needs to be a single baserel. 
If there is a join qual that is a\n> sublink and can be converted to a semi join with the inner side rel, the\n> inner side would no longer be a single baserel and as a result the LEFT\n> JOIN can no longer be removed.\n\nI think, in theory at least, that can be fixed by [1], where we no\nlonger rely on looking to see if the RelOptInfo has a unique index to\ndetermine if the relation can duplicate outer side rows during the\njoin. Of course, they'll only exist on base relations, so hence the\ncheck you're talking about. Instead, the patch's idea is to propagate\nuniqueness down the join tree in the form of UniqueKeys.Do you mean we're tracking the uniqueness of each RelOptInfo, baserel orjoinrel, with UniqueKeys? I like the idea! \n\nA quick glance shows there are a few implementation details of join\nremovals of why the removal still won't work with [1].  For example,\nthe singleton rel check causes it to abort both on the pre-check and\nthe final join removal check.  There's also the removal itself that\nassumes we're just removing a single relation. I'd guess that would\nneed to loop over the min_righthand relids with a bms_next_member loop\nand remove each base rel one by one.  I'd need to look in more detail\nto know if there are any other limiting factors there.Yeah, we'll have to teach remove_useless_joins to work with multiplerelids.ThanksRichard", "msg_date": "Wed, 29 Apr 2020 10:37:00 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Pulling up sublink may break join-removal logic" }, { "msg_contents": "On Wed, Apr 29, 2020 at 10:37 AM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Wed, Apr 29, 2020 at 8:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n>> On Tue, 28 Apr 2020 at 19:04, Richard Guo <guofenglinux@gmail.com> wrote:\n>> > I happened to notice $subject and not sure if it's an issue or not. 
When\n>> > we're trying to remove a LEFT JOIN, one of the requirements is the inner\n>> > side needs to be a single baserel. If there is a join qual that is a\n>> > sublink and can be converted to a semi join with the inner side rel, the\n>> > inner side would no longer be a single baserel and as a result the LEFT\n>> > JOIN can no longer be removed.\n>>\n>> I think, in theory at least, that can be fixed by [1], where we no\n>> longer rely on looking to see if the RelOptInfo has a unique index to\n>> determine if the relation can duplicate outer side rows during the\n>> join. Of course, they'll only exist on base relations, so hence the\n>> check you're talking about. Instead, the patch's idea is to propagate\n>> uniqueness down the join tree in the form of UniqueKeys.\n>>\n>\n> Do you mean we're tracking the uniqueness of each RelOptInfo, baserel or\n> joinrel, with UniqueKeys? I like the idea!\n>\n\nYes, it is and welcome the for review for that patch:)\n\n\n>\n>> A quick glance shows there are a few implementation details of join\n>> removals of why the removal still won't work with [1]. For example,\n>> the singleton rel check causes it to abort both on the pre-check and\n>> the final join removal check. There's also the removal itself that\n>> assumes we're just removing a single relation. I'd guess that would\n>> need to loop over the min_righthand relids with a bms_next_member loop\n>> and remove each base rel one by one. I'd need to look in more detail\n>> to know if there are any other limiting factors there.\n>>\n>\n> Yeah, we'll have to teach remove_useless_joins to work with multiple\n> relids.\n>\n\nYou can see [1] for the discuss for this issue with UniqueKey respect.\nsearch\n\"In the past we have some limited ability to detect the unqiueness after\njoin,\n so that's would be ok. 
Since we have such ability now, this may be\nanother\nopportunity to improve the join_is_removable function\"\n\nI'm checking it today and will have a feedback soon.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWrGrs0Vk5OrZmS1gbTA2ijDH18NHKnXZTPZNuupn%2B%2Bing%40mail.gmail.com\n\n\nBest Regards\nAndy Fan", "msg_date": "Wed, 29 Apr 2020 10:50:14 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pulling up sublink may break join-removal logic" } ]
[ { "msg_contents": "One of the problems to allow logical decoding of in-progress (or\nprepared) transactions is that the transaction which we are decoding\ncan abort concurrently and we might not be able to detect it which\nleads to getting the wrong version of a row from system catalogs.\nThis can further lead to unpredictable behavior when decoding WAL with\nwrong catalog information.\n\nThe problem occurs when we have a sequence of operations like\nAlter-Abort-Alter as discussed previously in the context of two-phase\ntransactions [1]. I am reiterating the same problem so as to ease the\ndiscussion for in-progress transactions.\n\nSuppose we created a table (mytable), then in some transaction (say\nwith txid-200) we are altering it and after that aborting that txn. So\npg_class will have something like this:\n\nxmin | xmax | relname\n100 | 200 | mytable\n200 | 0 | mytable\n\nAfter the previous abort, tuple (100,200,mytable) becomes visible to\nother transactions and if we will alter the table again then xmax of\nfirst tuple will be set current xid, resulting in following table:\n\nxmin | xmax | relname\n100 | 300 | mytable\n200 | 0 | mytable\n300 | 0 | mytable\n\nAt that moment we’ve lost information that first tuple was deleted by\nour txn (txid-200). And from POV of the historic snapshot, the first\ntuple (100, 300, mytable) will become visible and used to decode the\nWAL, but actually the second tuple should have been used. Moreover,\nsuch a snapshot could see both tuples (first and second) violating oid\nuniqueness.\n\nNow, we are planning to handle such a situation by returning\nERRCODE_TRANSACTION_ROLLBACK sqlerrcode from system table scan APIs to\nthe backend decoding a specific uncommitted transaction. The decoding\nlogic on the receipt of such an sqlerrcode aborts the ongoing\ndecoding, discard the changes of current (sub)xact and continue\nprocessing the changes of other txns. 
The basic idea is that while\nsetting up historic snapshots (SetupHistoricSnapshot), we remember the\nxid corresponding to ReorderBufferTXN (provided it's not committed\nyet) whose changes we are going to decode. Now, after getting the\ntuple from a system table scan, we check if the remembered XID is\naborted and if so we return a specific error\nERRCODE_TRANSACTION_ROLLBACK which helps the caller to proceed\ngracefully by discarding the changes of such a transaction. This is\nbased on the idea/patch [2][3] discussed among Andres Freund and\nNikhil Sontakke in the context of logical decoding of two-phase\ntransactions.\n\nWe need to deal with this problem to allow logical decoding of\nin-progress transactions which is being discussed in a separate thread\n[4]. I have previously tried to discuss this in the main thread [5]\nbut didn't get much response so trying again.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/EEBD82AA-61EE-46F4-845E-05B94168E8F2%40postgrespro.ru\n[2] - https://www.postgresql.org/message-id/20180720145836.gxwhbftuoyx5h4gc%40alap3.anarazel.de\n[3] - https://www.postgresql.org/message-id/CAMGcDxcBmN6jNeQkgWddfhX8HbSjQpW%3DUo70iBY3P_EPdp%2BLTQ%40mail.gmail.com\n[4] - https://www.postgresql.org/message-id/688b0b7f-2f6c-d827-c27b-216a8e3ea700%402ndquadrant.com\n[5] - https://www.postgresql.org/message-id/CAA4eK1KtxLpSp2rP6Rt8izQnPmhiA%3D2QUpLk%2BvoagTjKowc0HA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Apr 2020 12:40:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Handling of concurrent aborts in logical decoding of in-progress\n xacts" } ]
[ { "msg_contents": "Hey,\n\nfor an university project I'm currently doing some research on \nPostgreSQL. I was wondering if hypothetically it would be possible to \nimplement a raw device system to PostgreSQL. I know that the \ndisadvantages would probably be higher than the advantages compared to \nworking with the file system. Just hypothetically: Would it be possible \nto change the source code of PostgreSQL so a raw device system could be \nimplemented, or would that cause a chain reaction so that basically one \nwould have to rewrite almost the entire code, because too many elements \nof PostgreSQL rely on the file system?\n\nBest regards\n\n\n\n", "msg_date": "Tue, 28 Apr 2020 10:43:27 +0200", "msg_from": "Benjamin Schaller <benjamin.schaller@s2018.tu-chemnitz.de>", "msg_from_op": true, "msg_subject": "Raw device on PostgreSQL" }, { "msg_contents": "Greetings,\n\n* Benjamin Schaller (benjamin.schaller@s2018.tu-chemnitz.de) wrote:\n> for an university project I'm currently doing some research on PostgreSQL. I\n> was wondering if hypothetically it would be possible to implement a raw\n> device system to PostgreSQL. I know that the disadvantages would probably be\n> higher than the advantages compared to working with the file system. 
Just\n> hypothetically: Would it be possible to change the source code of PostgreSQL\n> so a raw device system could be implemented, or would that cause a chain\n> reaction so that basically one would have to rewrite almost the entire code,\n> because too many elements of PostgreSQL rely on the file system?\n\nyes, it'd be possible, no, you wouldn't have to rewrite all of PG.\nInstead, if you want it to be performant at all, you'd have to write\nlots of new code to do all the things the filesystem and kernel do for\nus today.\n\nThanks,\n\nStephen", "msg_date": "Tue, 28 Apr 2020 08:01:30 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" }, { "msg_contents": "On 4/28/20 10:43 AM, Benjamin Schaller wrote:\n> for an university project I'm currently doing some research on \n> PostgreSQL. I was wondering if hypothetically it would be possible to \n> implement a raw device system to PostgreSQL. I know that the \n> disadvantages would probably be higher than the advantages compared to \n> working with the file system. Just hypothetically: Would it be possible \n> to change the source code of PostgreSQL so a raw device system could be \n> implemented, or would that cause a chain reaction so that basically one \n> would have to rewrite almost the entire code, because too many elements \n> of PostgreSQL rely on the file system?\n\nIt would require quite a bit of work since 1) PostgreSQL stores its data \nin multiple files and 2) PostgreSQL currently supports only synchronous \nbuffered IO.\n\nTo get the performance benefits from using raw devices I think you would \nwant to add support for asynchronous IO to PostgreSQL rather than \nimplementing your own layer to emulate the kernel's buffered IO.\n\nAndres Freund did a talk on aync IO in PostgreSQL earlier this year. 
It \nwas not recorded but the slides are available.\n\nhttps://www.postgresql.eu/events/fosdem2020/schedule/session/2959-asynchronous-io-for-postgresql/\n\nAndreas\n\n\n", "msg_date": "Tue, 28 Apr 2020 14:10:51 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" }, { "msg_contents": "On Tue, Apr 28, 2020 at 02:10:51PM +0200, Andreas Karlsson wrote:\n>On 4/28/20 10:43 AM, Benjamin Schaller wrote:\n>>for an university project I'm currently doing some research on \n>>PostgreSQL. I was wondering if hypothetically it would be possible \n>>to implement a raw device system to PostgreSQL. I know that the \n>>disadvantages would probably be higher than the advantages compared \n>>to working with the file system. Just hypothetically: Would it be \n>>possible to change the source code of PostgreSQL so a raw device \n>>system could be implemented, or would that cause a chain reaction so \n>>that basically one would have to rewrite almost the entire code, \n>>because too many elements of PostgreSQL rely on the file system?\n>\n>It would require quite a bit of work since 1) PostgreSQL stores its \n>data in multiple files and 2) PostgreSQL currently supports only \n>synchronous buffered IO.\n>\n\nNot sure how that's related to raw devices, which is what Benjamin was\nasking about. AFAICS most of the changes would be in smgr.c and md.c,\nbut I might be wrong.\n\nI'd imagine supporting raw devices would require implementing some sort\nof custom file system on the device, and I'd expect it to work with\nrelation segments just fine. 
So why would that be a problem?\n\nThe synchronous buffered I/O is a bigger challenge, I guess, but then\nagain - you could continue using synchronous I/O even with raw devices.\n\n>To get the performance benefits from using raw devices I think you \n>would want to add support for asynchronous IO to PostgreSQL rather \n>than implementing your own layer to emulate the kernel's buffered IO.\n>\n>Andres Freund did a talk on aync IO in PostgreSQL earlier this year. \n>It was not recorded but the slides are available.\n>\n>https://www.postgresql.eu/events/fosdem2020/schedule/session/2959-asynchronous-io-for-postgresql/\n>\n\nYeah, I think the question is what are the expected benefits of using\nraw devices. It might be an interesting exercise / experiment, but my\nunderstanding is that most of the benefits can be achieved by using file\nsystems but with direct I/O and async I/O, which would allow us to\ncontinue reusing the existing filesystem code with much less disruption\nto our code base.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 30 Apr 2020 02:26:27 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" }, { "msg_contents": "On Tue, Apr 28, 2020 at 8:10 AM Andreas Karlsson <andreas@proxel.se> wrote:\n\n> It would require quite a bit of work since 1) PostgreSQL stores its data\n> in multiple files and 2) PostgreSQL currently supports only synchronous\n> buffered IO.\n>\n> To get the performance benefits from using raw devices I think you would\n> want to add support for asynchronous IO to PostgreSQL rather than\n> implementing your own layer to emulate the kernel's buffered IO.\n>\n> Andres Freund did a talk on aync IO in PostgreSQL earlier this year. 
It\n> was not recorded but the slides are available.\n>\n>\n> https://www.postgresql.eu/events/fosdem2020/schedule/session/2959-asynchronous-io-for-postgresql/\n\n\nFWIW, in 2007/2008, when I was at EnterpriseDB, Inaam Rana and I\nimplemented a benchmarkable proof-of-concept patch for direct I/O and\nasynchronous I/O (for libaio and POSIX). We made that patch public, so it\nshould be on the list somewhere. But, we began to run into performance\nissues related to buffer manager scaling in terms of locking and,\nspecifically, replacement. We began prototyping alternate buffer managers\n(going back to the old MRU/LRU model with midpoint insertion and testing a\n2Q variant) but that wasn't public. I had also prototyped raw device\nsupport, which is a good amount of work and required implementing a custom\nfilesystem (similar to Oracle's ASM) within the storage manager. It's\nprobably a bit harder now than it was then, given the number of different\ntypes of file access.\n\n-- \nJonah H. Harris\n\nOn Tue, Apr 28, 2020 at 8:10 AM Andreas Karlsson <andreas@proxel.se> wrote:It would require quite a bit of work since 1) PostgreSQL stores its data \nin multiple files and 2) PostgreSQL currently supports only synchronous \nbuffered IO.\n\nTo get the performance benefits from using raw devices I think you would \nwant to add support for asynchronous IO to PostgreSQL rather than \nimplementing your own layer to emulate the kernel's buffered IO.\n\nAndres Freund did a talk on aync IO in PostgreSQL earlier this year. It \nwas not recorded but the slides are available.\n\nhttps://www.postgresql.eu/events/fosdem2020/schedule/session/2959-asynchronous-io-for-postgresql/FWIW, in 2007/2008, when I was at EnterpriseDB, Inaam Rana and I implemented a benchmarkable proof-of-concept patch for direct I/O and asynchronous I/O (for libaio and POSIX). We made that patch public, so it should be on the list somewhere. 
But, we began to run into performance issues related to buffer manager scaling in terms of locking and, specifically, replacement. We began prototyping alternate buffer managers (going back to the old MRU/LRU model with midpoint insertion and testing a 2Q variant) but that wasn't public. I had also prototyped raw device support, which is a good amount of work and required implementing a custom filesystem (similar to Oracle's ASM) within the storage manager. It's probably a bit harder now than it was then, given the number of different types of file access.-- Jonah H. Harris", "msg_date": "Wed, 29 Apr 2020 20:34:24 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Yeah, I think the question is what are the expected benefits of using\n> raw devices. It might be an interesting exercise / experiment, but my\n> understanding is that most of the benefits can be achieved by using file\n> systems but with direct I/O and async I/O, which would allow us to\n> continue reusing the existing filesystem code with much less disruption\n> to our code base.\n\nThere's another very large problem with using raw devices: on pretty\nmuch every platform, you don't get to do that without running as root.\nIt is not easy to express how hard a sell it would be to even consider\nallowing Postgres to run as root. Between the security issues, and\nthe generally poor return-on-investment we'd get from reinventing\nour own filesystem and I/O scheduler, I just don't see this sort of\nthing ever going forward. 
Direct and/or async I/O seems a lot more\nplausible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Apr 2020 20:35:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" }, { "msg_contents": "On Thu, Apr 30, 2020 at 12:26 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Yeah, I think the question is what are the expected benefits of using\n> raw devices. It might be an interesting exercise / experiment, but my\n> understanding is that most of the benefits can be achieved by using file\n> systems but with direct I/O and async I/O, which would allow us to\n> continue reusing the existing filesystem code with much less disruption\n> to our code base.\n\nAgreed.\n\nI've often wondered if the RDBMSs that supported raw devices did so\n*because* there was no other way to get unbuffered I/O on some systems\nat the time (for example it looks like Solaris didn't have direct I/O\nuntil 2.6 in 1997?). Last I heard, raw devices weren't recommended\nanymore on the system I'm thinking of because they're more painful to\nmanage than regular filesystems and there's little to no gain. Back\nin ancient times, before BSD4.2 introduced it in 1983 there was\napparently no fsync() system call on any strain of Unix, so I guess\ndatabase reliability must have been an uphill battle on early Unix\nbuffered I/O (I wonder if the Ingres/Postgres people asked them to add\nthat?!). It must have been very appealing to sidestep the whole thing\nfor multiple reasons. One key thing to note is that the well known\nRDBMSs that can use raw devices also deal with regular filesystems by\ncreating one or more large data files, and then manage the space\ninside those to hold all their tables and indexes. 
That is, they\nalready have their own system to manage separate database objects and\nallocate space etc, and don't have to do any regular filesystem\nmeta-data manipulation during transactions (which has all kinds of\nproblems). That means they already have the complicated code that you\nneed to do that, but we don't: we have one (or more) file per table or\nindex, so our database relies on the filesystem as kind of lower level\ndatabase of relfilenode->blocks. That's probably the main work\nrequired to make this work, and might be a valuable thing to have\nindependently of whether you stick it on a raw device, a big data\nfile, NV RAM or some other kind of storage system -- but it's a really\ndifficult project.\n\n\n", "msg_date": "Thu, 30 Apr 2020 16:22:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" }, { "msg_contents": "On Wed, Apr 29, 2020 at 8:34 PM Jonah H. Harris <jonah.harris@gmail.com>\nwrote:\n\n> On Tue, Apr 28, 2020 at 8:10 AM Andreas Karlsson <andreas@proxel.se>\n> wrote:\n>\n>> To get the performance benefits from using raw devices I think you would\n>> want to add support for asynchronous IO to PostgreSQL rather than\n>> implementing your own layer to emulate the kernel's buffered IO.\n>>\n>> Andres Freund did a talk on aync IO in PostgreSQL earlier this year. It\n>> was not recorded but the slides are available.\n>>\n>>\n>> https://www.postgresql.eu/events/fosdem2020/schedule/session/2959-asynchronous-io-for-postgresql/\n>\n>\n> FWIW, in 2007/2008, when I was at EnterpriseDB, Inaam Rana and I\n> implemented a benchmarkable proof-of-concept patch for direct I/O and\n> asynchronous I/O (for libaio and POSIX). We made that patch public, so it\n> should be on the list somewhere. But, we began to run into performance\n> issues related to buffer manager scaling in terms of locking and,\n> specifically, replacement. 
We began prototyping alternate buffer managers\n> (going back to the old MRU/LRU model with midpoint insertion and testing a\n> 2Q variant) but that wasn't public. I had also prototyped raw device\n> support, which is a good amount of work and required implementing a custom\n> filesystem (similar to Oracle's ASM) within the storage manager. It's\n> probably a bit harder now than it was then, given the number of different\n> types of file access.\n>\n\nHere's a hack job merge of that preliminary PoC AIO/DIO patch against\n13devel. This was designed to keep the buffer manager clean using AIO and\nis write-only. I'll have to dig through some of my other old Postgres 8.x\npatches to find the AIO-based prefetching version with aio_req_t modified\nto handle read vs. write in FileAIO. Also, this will likely have an issue\nwith O_DIRECT as additional buffer manager alignment is needed and I\nhaven't tracked it down in 13 yet. As my default development is on a Mac, I\nhave POSIX AIO only. As such, I can't natively play with the O_DIRECT or\nlibaio paths to see if they work without going into Docker or VirtualBox -\nand I don't care that much right now :)\n\nThe code is nasty, but maybe it will give someone ideas. If I get some time\nto work on it, I'll rewrite it properly.\n\n-- \nJonah H. Harris", "msg_date": "Thu, 30 Apr 2020 20:27:49 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" }, { "msg_contents": "On 30/4/20 6:22, Thomas Munro wrote:\n> On Thu, Apr 30, 2020 at 12:26 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> Yeah, I think the question is what are the expected benefits of using\n>> raw devices. 
It might be an interesting exercise / experiment, but my\n>> understanding is that most of the benefits can be achieved by using file\n>> systems but with direct I/O and async I/O, which would allow us to\n>> continue reusing the existing filesystem code with much less disruption\n>> to our code base.\n> Agreed.\n>\n> [snip] That's probably the main work\n> required to make this work, and might be a valuable thing to have\n> independently of whether you stick it on a raw device, a big data\n> file, NV RAM\n    ^^^^^^  THIS, with NV DIMMs / PMEM (persistent memory) possibly \nbecoming a hot topic in the not-too-distant future\n> or some other kind of storage system -- but it's a really\n> difficult project.\n\nIndeed.... But you might have already pointed out the *only* required \nfeature for this to work: a \"database\" of relfilenode ---which is \nactually an int, or rather, a tuple (relfilenode,segment) where both \ncomponents are 32-bit currently: that is, a 64bit \"objectID\" of sorts--- \nto \"set of extents\" ---yes, extents, not blocks: sequential I/O is still \nfaster in all known storage/persistent (vs RAM) systems---- where the \ncurrent I/O primitives would be able to write.\n\nSome conversion from \"absolute\" (within the \"file\") to \"relative\" \n(within the \"tablespace\") offsets would need to happen before delegating \nto the kernel... or even dereferencing a pointer to an mmap'd region !, \nbut not much more, ISTM (but I'm far from an expert in this area).\n\nOut of the top of my head:\n\nCREATE TABLESPACE tblspcname [other_options] LOCATION '/dev/nvme1n2' \nWITH (kind=raw, extent_min=4MB);\n\n   or something similar to that approac might do it.\n\n     Please note that I have purposefully specified \"namespace 2\" in an \n\"enterprise\" NVME device, to show the possibility.\n\nOR\n\n   use some filesystem (e.g. XFS) with DAX[1] (mount -o dax ) where \navailable along something equivalent to  WITH(kind=mmaped)\n\n\n... 
though the locking we currently get \"for free\" from the kernel would \nneed to be replaced by something else.\n\n\nIndeed it seems like an enormous amount of work.... but it may well pay \noff. I can't fully assess the effort, though\n\n\nJust my .02€\n\n[1] https://www.kernel.org/doc/Documentation/filesystems/dax.txt\n\n\nThanks,\n\n     / J.L.\n\n\n\n\n", "msg_date": "Fri, 1 May 2020 12:22:39 +0200", "msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" }, { "msg_contents": "On Fri, May 1, 2020 at 12:28 PM Jonah H. Harris <jonah.harris@gmail.com> wrote:\n> Also, this will likely have an issue with O_DIRECT as additional buffer manager alignment is needed and I haven't tracked it down in 13 yet. As my default development is on a Mac, I have POSIX AIO only. As such, I can't natively play with the O_DIRECT or libaio paths to see if they work without going into Docker or VirtualBox - and I don't care that much right now :)\n\nAndres is prototyping with io_uring, which supersedes libaio and can\ndo much more stuff, notably buffered and unbuffered I/O; there's no\npoint in looking at libaio. I agree that we should definitely support\nPOSIX AIO, because that gets you macOS, FreeBSD, NetBSD, AIX, HPUX\nwith one effort (those are the systems that use either kernel threads\nor true async I/O down to the driver; Solaris and Linux also provide\nPOSIX AIO, but it's emulated with user space threads, which probably\nwouldn't work well for our multi process design). 
The third API that\nwe'd want to support is Windows overlapped I/O with completion ports.\nWith those three APIs you can hit all systems in our build farm except\nSolaris and OpenBSD, so they'd still use synchronous I/O (though we\ncould do our own emulation with worker processes pretty easily).\n\n\n", "msg_date": "Sat, 2 May 2020 08:58:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" }, { "msg_contents": "On Fri, May 1, 2020 at 4:59 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Fri, May 1, 2020 at 12:28 PM Jonah H. Harris <jonah.harris@gmail.com>\n> wrote:\n> > Also, this will likely have an issue with O_DIRECT as additional buffer\n> manager alignment is needed and I haven't tracked it down in 13 yet. As my\n> default development is on a Mac, I have POSIX AIO only. As such, I can't\n> natively play with the O_DIRECT or libaio paths to see if they work without\n> going into Docker or VirtualBox - and I don't care that much right now :)\n>\n> Andres is prototyping with io_uring, which supersedes libaio and can\n> do much more stuff, notably buffered and unbuffered I/O; there's no\n> point in looking at libaio. I agree that we should definitely support\n> POSIX AIO, because that gets you macOS, FreeBSD, NetBSD, AIX, HPUX\n> with one effort (those are the systems that use either kernel threads\n> or true async I/O down to the driver; Solaris and Linux also provide\n> POSIX AIO, but it's emulated with user space threads, which probably\n> wouldn't work well for our multi process design). The third API that\n> we'd want to support is Windows overlapped I/O with completion ports.\n> With those three APIs you can hit all systems in our build farm except\n> Solaris and OpenBSD, so they'd still use synchronous I/O (though we\n> could do our own emulation with worker processes pretty easily).\n>\n\nIs it public? 
I saw the presentations, but couldn't find that patch\nanywhere.\n\n-- \nJonah H. Harris", "msg_date": "Fri, 1 May 2020 17:00:55 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Raw device on PostgreSQL" } ]
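One concrete takeaway from the thread above: besides async/direct I/O, raw-device support would require PostgreSQL to keep its own map from relfilenode to the extents it owns on the device, the bookkeeping the filesystem currently provides for free. A minimal sketch of that bookkeeping (illustrative toy in plain Python; the class and method names are invented, and real code would also need free-space persistence, crash safety and locking):

```python
class RawDeviceAllocator:
    """Toy relfilenode -> extents map for a hypothetical raw-device smgr."""

    def __init__(self, device_blocks, extent_blocks=512):
        # e.g. 4 MB extents of 8 kB blocks; all extents start out free
        self.extent_blocks = extent_blocks
        self.free = list(range(0, device_blocks, extent_blocks))
        self.relmap = {}  # relfilenode -> list of extent start blocks

    def extend(self, relfilenode):
        """Grow a relation by one extent, taking the next free one."""
        start = self.free.pop(0)
        self.relmap.setdefault(relfilenode, []).append(start)
        return start

    def block_location(self, relfilenode, blockno):
        """Translate a relation-relative block number to a device block."""
        extent = self.relmap[relfilenode][blockno // self.extent_blocks]
        return extent + blockno % self.extent_blocks

alloc = RawDeviceAllocator(device_blocks=4096, extent_blocks=512)
alloc.extend(16384)                       # first extent for relfilenode 16384
alloc.extend(16384)                       # relation grows past one extent
assert alloc.block_location(16384, 0) == 0
assert alloc.block_location(16384, 600) == 512 + 88
```

Extent-based allocation (rather than per-block) preserves the sequential-I/O friendliness mentioned in the thread; it is essentially the "lower level database of relfilenode->blocks" that the filesystem is today.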
[ { "msg_contents": "Is there any impact of using the character varying without providing the\nlength while creating tables?\nI have created two tables and inserted 1M records. But I don't see any\ndifference in pg_class. (size, relpage)\n\ncreate table test_1(name varchar);\ncreate table test_2(name varchar(50));\n\ninsert into test_1 ... 10M records\ninsert into test_2 ... 10M records\n\nvacuum (full,analyze) db_size_test_1;\nvacuum (full,analyze) db_size_test_2;\n\nWhich option is recommended?\n\n*Regards,*\n*Rajin *\n\nIs there any impact of using the character varying without providing the length while creating tables? I have created two tables and inserted 1M records. But I don't see any difference in pg_class. (size, relpage)create table test_1(name varchar);create table test_2(name varchar(50));insert into test_1 ... 10M records insert into test_2 ... 10M records vacuum (full,analyze) db_size_test_1;vacuum (full,analyze) db_size_test_2;Which option is recommended? Regards,Rajin", "msg_date": "Tue, 28 Apr 2020 14:52:07 +0530", "msg_from": "Rajin Raj <rajin.raj@opsveda.com>", "msg_from_op": true, "msg_subject": "PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "No, there is no impact.\n\nAm 28.04.20 um 11:22 schrieb Rajin Raj:\n> Is there any impact of using the character varying without providing \n> the length while creating tables?\n> I have created two tables and inserted 1M records. But I don't see any \n> difference in pg_class. (size, relpage)\n>\n> create table test_1(name varchar);\n> create table test_2(name varchar(50));\n>\n> insert into test_1 ... 10M records\n> insert into test_2 ... 10M records\n>\n> vacuum (full,analyze) db_size_test_1;\n> vacuum (full,analyze) db_size_test_2;\n>\n> Which option is recommended?\n>\n> *Regards,*\n> *Rajin *\n\n-- \nHolger Jakobs, Bergisch Gladbach, Tel. 
+49-178-9759012", "msg_date": "Tue, 28 Apr 2020 11:43:39 +0200", "msg_from": "Holger Jakobs <holger@jakobs.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "On Tue, Apr 28, 2020 at 2:53 PM Rajin Raj <rajin.raj@opsveda.com> wrote:\n>\n> Is there any impact of using the character varying without providing the length while creating tables?\n> I have created two tables and inserted 1M records. But I don't see any difference in pg_class. (size, relpage)\n>\n> create table test_1(name varchar);\n> create table test_2(name varchar(50));\n\nI don't think there's a difference in the way these two are stored\non-disk. But if you know that your strings will be at most 50\ncharacters long, better set that limit so that server takes\nappropriate action (i.e. truncates the strings to 50).\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 28 Apr 2020 17:16:03 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "Truncation will NEVER happen. PostgreSQL throws an ERROR on any attempt \nof saving more characters (not bytes!) 
into a VARCHAR(50) column.\n\nThere is some other well-known system which silently truncates, but we \nall know why we would never use that.\n\nAm 28.04.20 um 13:46 schrieb Ashutosh Bapat:\n> On Tue, Apr 28, 2020 at 2:53 PM Rajin Raj <rajin.raj@opsveda.com> wrote:\n>> Is there any impact of using the character varying without providing the length while creating tables?\n>> I have created two tables and inserted 1M records. But I don't see any difference in pg_class. (size, relpage)\n>>\n>> create table test_1(name varchar);\n>> create table test_2(name varchar(50));\n> I don't think there's a difference in the way these two are stored\n> on-disk. But if you know that your strings will be at most 50\n> characters long, better set that limit so that server takes\n> appropriate action (i.e. truncates the strings to 50).\n>\n-- \nHolger Jakobs, Bergisch Gladbach, Tel. +49-178-9759012\n\n\n\n", "msg_date": "Tue, 28 Apr 2020 17:10:31 +0200", "msg_from": "Holger Jakobs <holger@jakobs.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "PG the text, character varying, character varying(length), character column\ntypes are all the same thing with each column type inheriting the\nproperties from the parent type. With each successive type further\nproperties are added but they're all basically just \"text\" with some\nadditional metadata. If you're coming from other database engines or just\ngeneral programming languages where text and fixed length string fields are\nhandled differently then the above can seem a bit different form what\nyou're used to. Heck, I can think of one engine where if you have a text\ncolumn you have to query the table for the blob identifier and then issue a\nseparate call to retrieve it. Here in PG it's literally all the same,\nhandled the same, performs the same. 
Use what limiters make sense for your\napplication.\n\nOn Tue, Apr 28, 2020 at 5:22 AM Rajin Raj <rajin.raj@opsveda.com> wrote:\n\n> Is there any impact of using the character varying without providing the\n> length while creating tables?\n> I have created two tables and inserted 1M records. But I don't see any\n> difference in pg_class. (size, relpage)\n>\n> create table test_1(name varchar);\n> create table test_2(name varchar(50));\n>\n> insert into test_1 ... 10M records\n> insert into test_2 ... 10M records\n>\n> vacuum (full,analyze) db_size_test_1;\n> vacuum (full,analyze) db_size_test_2;\n>\n> Which option is recommended?\n>\n> *Regards,*\n> *Rajin *\n>\n", "msg_date": "Tue, 28 Apr 2020 12:24:23 -0400", "msg_from": "Paul Carlucci <paul.carlucci@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "An example:\n\ntest=# create table bargle (f1 varchar(10));\nCREATE TABLE\ntest=# insert into bargle values ('01234567890123');\nERROR:  value too long for type character varying(10)\n\n\nOn 4/28/20 10:10 AM, Holger Jakobs wrote:\n> Truncation will NEVER happen. PostgreSQL throws an ERROR on any attempt of \n> saving more characters (not bytes!) into a VARCHAR(50) column.\n>\n> There is some other well-known system which silently truncates, but we all \n> know why we would never use that.\n>\n> On 28.04.20 at 13:46, Ashutosh Bapat wrote:\n>> On Tue, Apr 28, 2020 at 2:53 PM Rajin Raj <rajin.raj@opsveda.com> wrote:\n>>> Is there any impact of using the character varying without providing the \n>>> length while creating tables?\n>>> I have created two tables and inserted 1M records. But I don't see any \n>>> difference in pg_class. (size, relpage)\n>>>\n>>> create table test_1(name varchar);\n>>> create table test_2(name varchar(50));\n>> I don't think there's a difference in the way these two are stored\n>> on-disk. But if you know that your strings will be at most 50\n>> characters long, better set that limit so that server takes\n>> appropriate action (i.e. 
truncates the strings to 50).\n>>\n\n-- \nAngular momentum makes the world go 'round.\n\n\n", "msg_date": "Tue, 28 Apr 2020 11:39:52 -0500", "msg_from": "Ron <ronljohnsonjr@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "Paul Carlucci wrote:\n\n> On Tue, Apr 28, 2020 at 5:22 AM Rajin Raj <rajin.raj@opsveda.com> wrote:\n> \n> > Is there any impact of using the character varying without providing the\n> > length while creating tables?\n> > I have created two tables and inserted 1M records. But I don't see any\n> > difference in pg_class. (size, relpage)\n> >\n> > create table test_1(name varchar);\n> > create table test_2(name varchar(50));\n> >\n> > insert into test_1 ... 10M records\n> > insert into test_2 ... 10M records\n> >\n> > vacuum (full,analyze) db_size_test_1;\n> > vacuum (full,analyze) db_size_test_2;\n> >\n> > Which option is recommended?\n> >\n> > *Regards,*\n> > *Rajin *\n> >\n> PG the text, character varying, character varying(length), character column\n> types are all the same thing with each column type inheriting the\n> properties from the parent type. With each successive type further\n> properties are added but they're all basically just \"text\" with some\n> additional metadata. If you're coming from other database engines or just\n> general programming languages where text and fixed length string fields are\n> handled differently then the above can seem a bit different form what\n> you're used to. Heck, I can think of one engine where if you have a text\n> column you have to query the table for the blob identifier and then issue a\n> separate call to retrieve it. Here in PG it's literally all the same,\n> handled the same, performs the same. Use what limiters make sense for your\n> application.\n\nMy advice is to never impose arbitrary limits on text.\nYou will probably regret the choice of limit at some\npoint. 
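When that regret arrives, the repair itself is at least cheap (a sketch with made-up names; since PostgreSQL 9.2, increasing a varchar limit is a catalog-only change, though dependent views still have to be dealt with):

```sql
create table outbound_msg (ref_code varchar(10));

-- The day the 10-character assumption breaks, widening is
-- metadata-only: no table rewrite is needed.
alter table outbound_msg alter column ref_code type varchar(40);
```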
I recently encountered people complaining that\nthey (thought they) needed to store 21 characters in\na field that they had limited to 10 characters (even\nthough they were originally told that the recipient\nof the data would accept up to 40 characters).\n\nI just use \"text\" for everything. It's less typing. :-)\n\nThe only good reason I can think of for limiting the\nlength would be to mitigate the risk of some kind of\ndenial of service, so a limit of 1KiB or 1MiB maybe.\nBut even that sounds silly. I've never done it (except\nto limit CPU usage for slow password hashing but even\nthen, the 1KiB limit was imposed by input validation,\nnot by the database schema).\n\ncheers,\nraf\n\nP.S. My aversion to arbitrary length limits applies to\npostgres identifier names as well. I wish they weren't\nlimited to 63 characters.\n\n\n\n", "msg_date": "Wed, 29 Apr 2020 09:43:32 +1000", "msg_from": "raf <raf@raf.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "raf wrote:\n\n> Paul Carlucci wrote:\n> \n> > On Tue, Apr 28, 2020 at 5:22 AM Rajin Raj <rajin.raj@opsveda.com> wrote:\n> > \n> > > Is there any impact of using the character varying without providing the\n> > > length while creating tables?\n> > > I have created two tables and inserted 1M records. But I don't see any\n> > > difference in pg_class. (size, relpage)\n> > >\n> > > create table test_1(name varchar);\n> > > create table test_2(name varchar(50));\n> > >\n> > > insert into test_1 ... 10M records\n> > > insert into test_2 ... 10M records\n> > >\n> > > vacuum (full,analyze) db_size_test_1;\n> > > vacuum (full,analyze) db_size_test_2;\n> > >\n> > > Which option is recommended?\n> > >\n> > > *Regards,*\n> > > *Rajin *\n> > >\n> > PG the text, character varying, character varying(length), character column\n> > types are all the same thing with each column type inheriting the\n> > properties from the parent type. 
With each successive type further\n> > properties are added but they're all basically just \"text\" with some\n> > additional metadata. If you're coming from other database engines or just\n> > general programming languages where text and fixed length string fields are\n> > handled differently then the above can seem a bit different form what\n> > you're used to. Heck, I can think of one engine where if you have a text\n> > column you have to query the table for the blob identifier and then issue a\n> > separate call to retrieve it. Here in PG it's literally all the same,\n> > handled the same, performs the same. Use what limiters make sense for your\n> > application.\n> \n> My advice is to never impose arbitrary limits on text.\n> You will probably regret the choice of limit at some\n> point. I recently encountered people complaining that\n> they (thought they) needed to store 21 characters in\n> a field that they had limited to 10 characters (even\n> though they were originally told that the recipient\n> of the data would accept up to 40 characters).\n> \n> I just use \"text\" for everything. It's less typing. :-)\n> \n> The only good reason I can think of for limiting the\n> length would be to mitigate the risk of some kind of\n> denial of service, so a limit of 1KiB or 1MiB maybe.\n> But even that sounds silly. I've never done it (except\n> to limit CPU usage for slow password hashing but even\n> then, the 1KiB limit was imposed by input validation,\n> not by the database schema).\n\nSorry, the above is misleading/a bad example. The hash\nstored in the database is a fixed reasonable length. It\nonly varies according to the hashing scheme used. It's\nonly the unhashed password (that isn't stored anywhere)\nthat was limited by input validation to limit CPU\nusage.\n\n> cheers,\n> raf\n> \n> P.S. My aversion to arbitrary length limits applies to\n> postgres identifier names as well. 
I wish they weren't\n> limited to 63 characters.\n> \n\n\n", "msg_date": "Wed, 29 Apr 2020 09:49:56 +1000", "msg_from": "raf <raf@raf.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "> On Apr 28, 2020, at 7:43 PM, raf <raf@raf.org> wrote:\n> \n> I just use \"text\" for everything. It's less typing. :-)\n> \n\nUgh, I see it as a sign that the designers of the schema didn’t fully think about the actual requirements or care about them and it usually shows.", "msg_date": "Tue, 28 Apr 2020 20:20:53 -0400", "msg_from": "Rui DeSousa <rui@crazybean.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "On Tue, Apr 28, 2020 at 5:21 PM Rui DeSousa <rui@crazybean.net> wrote:\n\n> I just use \"text\" for everything. It's less typing. :-)\n>\n> Ugh, I see it as sign that the designers of the schema didn’t fully think\n> about the actual requirements or care about them and it usually shows.\n>\n\nThere are very few situations where a non-arbitrary free-form text field is\ngoing to have a non-arbitrary length constraint - that is also immutable.\nGenerally, spending time to figure out those rare exceptions is wasted\neffort better spent elsewhere. They are also mostly insufficient when used\nfor their typical \"protection\" purpose. If you really want protection add\nwell thought out constraints.\n\nIt's less problematic now that increasing the generally arbitrary length\ndoesn't require a table rewrite but you still need to rebuild dependent\nobjects.\n\nDavid J.", "msg_date": "Tue, 28 Apr 2020 17:34:00 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "> On Apr 28, 2020, at 8:34 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> Its less problematic now that increasing the generally arbitrary length doesn't require a table rewrite but you still need to rebuild dependent objects.\n> \n\nTo increase a column length does not require a table rewrite or table scan; however, reducing its size will require a full table scan. So cleaning up a schema like the one proposed sucks.", "msg_date": "Tue, 28 Apr 2020 20:40:57 -0400", "msg_from": "Rui DeSousa <rui@crazybean.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "On Tue, Apr 28, 2020 at 5:40 PM Rui DeSousa <rui@crazybean.net> wrote:\n\n> To increase a column length does not require a table rewrite or table\n> scan; however, reducing its size will require a full table scan. So\n> cleaning up a schema like the one proposed sucks.\n>\n\nI estimate the probability of ever desiring to reduce the length of a\nvarchar field to be indistinguishable from zero.\n\nDavid J.", "msg_date": "Tue, 28 Apr 2020 18:22:42 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "Rui DeSousa wrote:\n\n> > On Apr 28, 2020, at 7:43 PM, raf <raf@raf.org> wrote:\n> > \n> > I just use \"text\" for everything. It's less typing. :-)\n> \n> Ugh, I see it as sign that the designers of the schema didn’t fully\n> think about the actual requirements or care about them and it usually\n> shows.\n\nYou are mistaken. I care a lot. That's why I\nfuture-proof designs whenever possible by\nnot imposing arbitrarily chosen limits that\nappear to suit current requirements.\n\nIn other words, I know I'm not smart enough\nto predict the future so I don't let that\nfact ruin my software. 
:-)\n\ncheers,\nraf\n\n\n\n", "msg_date": "Wed, 29 Apr 2020 12:29:28 +1000", "msg_from": "raf <raf@raf.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "> On Apr 28, 2020, at 10:29 PM, raf <raf@raf.org> wrote:\n> \n> Rui DeSousa wrote:\n> \n>>> On Apr 28, 2020, at 7:43 PM, raf <raf@raf.org> wrote:\n>>> \n>>> I just use \"text\" for everything. It's less typing. :-)\n>> \n>> Ugh, I see it as sign that the designers of the schema didn’t fully\n>> think about the actual requirements or care about them and it usually\n>> shows.\n> \n> You are mistaken. I care a lot. That's why I\n> future-proof designs whenever possible by\n> not imposing arbitrarily chosen limits that\n> appear to suit current requirements.\n> \n> In other words, I know I'm not smart enough\n> to predict the future so I don't let that\n> fact ruin my software. :-)\n> \n> cheers,\n> raf\n> \n\nArbitrarily? What’s a cusip, vin, ssn? Why would you put a btree index on a text field? Because it’s not.\n\nWhat you’re advocating is a NoSQL design — defer your schema design. Letting the application code littered in multiple places elsewhere define what a cusip, etc. is.", "msg_date": "Wed, 29 Apr 2020 00:19:05 -0400", "msg_from": "Rui DeSousa <rui@crazybean.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "On Tuesday, April 28, 2020, Rui DeSousa <rui@crazybean.net> wrote:\n\n>\n> Arbitrarily? What’s a cusip, vin, ssn? Why would you put a btree index on\n> a text field? Because it’s not.\n>\n> What you’re advocating is a NoSQL design — defer your schema design.\n> Letting the application code littered in multiple places elsewhere define\n> what a cusip, etc. is.\n>\n>\nAll of those would be defined as PKs somewhere with a constraint that\nlimits not only their length but also allowable characters so you don’t get\nsomething like !@#$%^&*( as a valid ssn of length 9. A domain is probably\neven better though has implementation trade-offs.\n\nA length constraint by itself is insufficient in those examples, which are\nstill arbitrary though the decision is outside the control of the modeler.\nIf the supplied values are external, which they likely are, the system\nunder design should probably just define the values loosely and accept\nwhatever the source system provides as-is.\n\nDavid J.", "msg_date": "Tue, 28 Apr 2020 21:34:10 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "> On Apr 29, 2020, at 12:34 AM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> On Tuesday, April 28, 2020, Rui DeSousa <rui@crazybean.net <mailto:rui@crazybean.net>> wrote:\n> \n> Arbitrarily? What’s a cusip, vin, ssn? Why would you put a btree index on a text field? Because it’s not.\n> \n> What you’re advocating is a NoSQL design — defer your schema design. Letting the application code littered in multiple places elsewhere define what a cusip, etc. is. \n> \n> \n> All of those would be defined as PKs somewhere with a constraint that limits not only their length but also allowable characters so you don’t get something like !@#$%^&*( as a valid ssn of length 9. A domain is probably even better though has implementation trade-offs.\n> \n> A length constraint by itself is insufficient in those examples, which are still arbitrary though the decision is outside the control of the modeler. If the supplied values are external, which they likely are, the system under design should probably just define the values loosely and accept whatever the source system provides as-is.\n> \n> David J.\n\nThat is the worst; seeing a text field being used in a primary key; seriously?  Trying to understand how wide a table is when it’s 40 columns wide and 35 of them are text fields, ugh.  
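For reference, the "key with a length and character rule" pattern being debated looks something like this (a sketch only; the table name and the nine-digit ssn rule are illustrative):

```sql
-- Enforces both length and allowable characters at the PK,
-- per the constraint-based argument above:
create table person (
    ssn text primary key
        check (ssn ~ '^[0-9]{9}$')
);
```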
When someone asks for btree index on a column and it is a text field; why?\n\nDon’t fool yourself, you are not future proofing your application; what really is happening is a slow creeping data quality issue which later needs a special project just clean up.\n\nI think we can both agree that you need to model your data correctly or at least to your best knowledge and ability.", "msg_date": "Wed, 29 Apr 2020 00:57:29 -0400", "msg_from": "Rui DeSousa <rui@crazybean.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "On Tuesday, April 28, 2020, Rui DeSousa <rui@crazybean.net> wrote:\n>\n> Don’t fool yourself, you are not future proofing your application; what\n> really is happening is a slow creeping data quality issue which later needs\n> a special project just clean up.\n>\n\nI don’t use text instead of varchar(n) for future proofing and use it quite well within well defined relational schemas. Using varchar(n) in a table always has a better solution, use text and a constraint.\n\nDavid J.", "msg_date": "Tue, 28 Apr 2020 22:09:28 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "> On Apr 29, 2020, at 1:09 AM, David G. 
Johnston <david.g.johnston@gmail.com> wrote:\n> \n> On Tuesday, April 28, 2020, Rui DeSousa <rui@crazybean.net <mailto:rui@crazybean.net>> wrote:\n> Don’t fool yourself, you are not future proofing your application; what really is happening is a slow creeping data quality issue which later needs a special project just clean up.\n> \n> I don’t use text instead of varchar(n) for future proofing and use it quite well within well defined relational schemas. Using varchar(n) in a table always has a better solution, use text and a constraint.\n> \n> David J.\n\nI would agree with you that \"text and a constraint\" is a lot better than just text; and would be functionally equivalent to varchar(n).\n\nIt does require the reader to look into each constraint to know what’s going on.\n\nAlso, when porting the schema to a different database engine and the create table statement fails because it’s too wide and doesn’t fit on a page; the end result is having to go back and redefine the text fields as varchar(n)/char(n) anyway.", "msg_date": "Wed, 29 Apr 2020 01:26:03 -0400", "msg_from": "Rui DeSousa <rui@crazybean.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "\nRui DeSousa <rui@crazybean.net> writes:\n\n>> On Apr 28, 2020, at 10:29 PM, raf <raf@raf.org> wrote:\n>> \n>> Rui DeSousa wrote:\n>> \n>>>> On Apr 28, 2020, at 7:43 PM, raf <raf@raf.org> wrote:\n>>>> \n>>>> I just use \"text\" for everything. It's less typing. :-)\n>>> \n>>> Ugh, I see it as sign that the designers of the schema didn’t fully\n>>> think about the actual requirements or care about them and it usually\n>>> shows.\n>> \n>> You are mistaken. I care a lot. That's why I\n>> future-proof designs whenever possible by\n>> not imposing arbitrarily chosen limits that\n>> appear to suit current requirements.\n>> \n>> In other words, I know I'm not smart enough\n>> to predict the future so I don't let that\n>> fact ruin my software. :-)\n>> \n>> cheers,\n>> raf\n>> \n>\n> Arbitrarily? What’s a cusip, vin, ssn? Why would you put a btree index on a text field? Because it’s not.\n>\n> What you’re advocating is a NoSQL design — defer your schema design. Letting the application code littered in multiple places elsewhere define what a cusip, etc. is. \n\nI think the key term in this thread is 'arbitrary'. 
When implementing a\nschema design, it should reflect the known constraints inherent in the\nmodel, but it should avoid imposing arbitrary constraints if none exist\nor cannot be determined.\n\nSo, if you know that a customer ID field has a current limitation of 50\ncharacters, then use a definition which reflects that. It may be that at\nsome point in the future, this will be increased, but then again, it may\nnot and that bit of information provides useful information for\napplication developers and helps with consistency across APIs. Without\nsome guideline, different developers will impose different values,\nleading to maintenance issues and bugs down the track.\n\nOn the other hand, imposing an arbitrary limitation, based on little\nmore than a guess by the designer, can cause enormous problems. As an\nexample, I was working on an identity management system where there was\na constraint of 8 characters on the username and password. This was an\narbitrary limit based on what was common practice, but was not a\nlimitation imposed by any of the systems the IAM system interacted with.\nIt was recognised that both fields were too small and needed to be\nincreased. The easy solution would have been to make these fields text.\nHowever, that would cause a problem with some of the systems we needed\nto integrate with because either they had a limit on username size or\nthey had a limit on password size. There were also multiple different\nAPIs which needed to work with this system and when we performed\nanalysis, they had varying limits on both fields.\n\nWhat we did was look at all the systems we had to integrate with and\nfound the maximum supported username and password lengths for each\nsystem and set the fields to have the maximum length supported by the\nsystems with the shortest lengths. Having that information in the\ndatabase schema also informed those developing other interfaces what the\nmaximums were. 
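One way to keep such an agreed limit in a single, visible place is a domain (a sketch; the names and the 30-character bound are invented here, standing in for whatever the shortest integrated system supports):

```sql
create domain username_t as text
    check (length(value) <= 30);

create table account (username username_t primary key);

-- When the agreed limit is raised later, it changes in one place.
-- (username_t_check is the default name PostgreSQL gives the
-- unnamed constraint above; confirm with \dD.)
alter domain username_t drop constraint username_t_check;
alter domain username_t add check (length(value) <= 64);
```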
It is quite likely these limits would be increased in the\nfuture and the database definition would need to be increased - in fact,\nsome years after going into production, exactly this occurred with the\npassword field when a different encryption algorithm was adopted which\ndid not have the previous character limitation and the client wanted to\nencourage users to use pass phrases rather than a word. \n\nThe point is, just using text for all character fields loses information\nand results in your model and schema being less expressive. Providing\nthis information is sometimes critical in ensuring limits are maintained\nand provides useful documentation about the model that developers can\nuse. However, imposing limits based on little more than a guess is\nusually a bad idea and if you cannot find any reason to impose a limit,\nthen don't. I disagree with approaches which claim using text everywhere\nis easier and future proofing. In reality, it is just pushing the\nproblem out for someone else to deal with. The best way to future proof\nyour application is to have a clear well defined data model that fits\nthe domain and is well documented and reflected in your database schema. \n\n\n-- \nTim Cross\n\n\n", "msg_date": "Wed, 29 Apr 2020 15:30:03 +1000", "msg_from": "Tim Cross <theophilusx@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "On Tuesday, April 28, 2020, Rui DeSousa <rui@crazybean.net> wrote:\n>\n> I would agree with you that \"text and a constraint\" is a lot better than\n> just text; and would be functionally equivalent to varchar(n).\n>\n\nClose enough...\n\nIt does requires the reader to look into each constraint to know what’s\n> going on.\n>\n\n And “n” is so informative...please. 
The name of the field tells me most\nof what I care about, the “n” and/or constraint are fluff.\n\n\n> Also, when porting the schema to a different database engine and the\n> create table statement fails because it’s too wide and doesn’t fit on a\n> page; the end result is having to go back and redefine the text fields as\n> varchar(n)/char(n) anyway.\n>\n\nNot something I’m concerned about and if that other db doesn’t have\nsomething like TOAST it seems like an undesirable target.\n\nDavid J.\n\nOn Tuesday, April 28, 2020, Rui DeSousa <rui@crazybean.net> wrote:I would agree with you that \"text and a constraint\" is a lot better than just text; and would be functionally equivalent to varchar(n). Close enough...It does requires the reader to look into each constraint to know what’s going on. And “n” is so informative...please.  The name of the field tells me most of what I care about, the “n” and/or constraint are fluff.Also, when porting the schema to a different database engine and the create table statement fails because it’s too wide and doesn’t fit on a page; the end result is having to go back and redefine the text fields as varchar(n)/char(n) anyway.Not something I’m concerned about and if that other db doesn’t have something like TOAST it seems like an undesirable target.David J.", "msg_date": "Tue, 28 Apr 2020 22:32:43 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "> On Apr 29, 2020, at 1:32 AM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> \n> And “n” is so informative...please. 
The name of the field tells me most of what I care about, the “n” and/or constraint are fluff.\n> \n\nThat was your recommendation; so I’m confused as to why it’s no longer valid.\n\n> \n> Also, when porting the schema to a different database engine and the create table statement fails because it’s too wide and doesn’t fit on a page; the end result is having to go back and redefine the text fields as varchar(n)/char(n) anyway.\n> \n> Not something I’m concerned about and if that other db doesn’t have something like TOAST it seems like an undesirable target.\n> \n\nFine, I assume you will be employed by your employer in perpetuity and the system will remain on PostgreSQL.\n", "msg_date": "Wed, 29 Apr 2020 01:51:05 -0400", "msg_from": "Rui DeSousa <rui@crazybean.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" }, { "msg_contents": "\n\n> On Apr 29, 2020, at 1:30 AM, Tim Cross <theophilusx@gmail.com> wrote:\n> \n> I think the key term in this thread is 'arbitrary'.
When implementing a\n> schema design, it should reflect the known constraints inherent in the\n> model, but it should avoid imposing arbitrary constraints if none exist\n> or cannot be determined.\n> \n> So, if you know that a customer ID field has a current limitation of 50\n> characters, then use a definition which reflects that. It may be that at\n> some point in the future, this will be increased, but then again, it may\n> not and that bit of information provides useful information for\n> application developers and helps with consistency across APIs. Without\n> some guideline, different developers will impose different values,\n> leading to maintenance issues and bugs down the track.\n> \n> On the other hand, imposing an arbitrary limitation, based on little\n> more than a guess by the designer, can cause enormous problems. As an\n> example, I was working on an identity management system where there was\n> a constraint of 8 characters on the username and password. This was an\n> arbitrary limit based on what was common practice, but was not a\n> limitation imposed by any of the systems the IAM system interacted with.\n> It was recognised that both fields were too small and needed to be\n> increased. The easy solution would have been to make these fields text.\n> However, that would cause a problem with some of the systems we needed\n> to integrate with because either they had a limit on username size or\n> they had a limit on password size. There were also multiple different\n> APIs which needed to work with this system and when we performed\n> analysis, they had varying limits on both fields.\n> \n> What we did was look at all the systems we had to integrate with and\n> found the maximum supported username and password lengths for each\n> system and set the fields to have the maximum length supported by the\n> systems with the shortest lengths. 
Having that information in the\n> database schema also informed those developing other interfaces what the\n> maximums were. It is quite likely these limits would be increased in the\n> future and the database definition would need to be increased - in fact,\n> some years after going into production, exactly this occurred with the\n> password field when a different encryption algorithm was adopted which\n> did not have the previous character limitation and the client wanted to\n> encourage users to use pass phrases rather than a word. \n> \n> The point is, just using text for all character fields loses information\n> and results in your model and schema being less expressive. Providing\n> this information is sometimes critical in ensuring limits are maintained\n> and provides useful documentation about the model that developers can\n> use. However, imposing limits based on little more than a guess is\n> usually a bad idea and if you cannot find any reason to impose a limit,\n> then don't. I disagree with approaches which claim using text everywhere\n> is easier and future proofing. In reality, it is just pushing the\n> problem out for someone else to deal with. The best way to future proof\n> your application is to have a clear well defined data model that fits\n> the domain and is well documented and reflected in your database schema. \n> \n> \n> -- \n> Tim Cross\n> \n\nI can’t agree more… Thanks Tim.\n\n\n\n\n", "msg_date": "Wed, 29 Apr 2020 01:52:07 -0400", "msg_from": "Rui DeSousa <rui@crazybean.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL CHARACTER VARYING vs CHARACTER VARYING (Length)" } ]
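[Editorial note, not part of the archived thread: the trade-off argued above can be sketched in SQL. This is an illustrative sketch only; the table, column, and constraint names are assumed, not taken from the thread.]

```sql
-- Length limit baked into the column type:
CREATE TABLE account_v (
    username varchar(8) NOT NULL
);

-- The same limit as text plus a named CHECK constraint; the limit is
-- still enforced and self-documenting, but can be relaxed later
-- without changing the column's type:
CREATE TABLE account_t (
    username text NOT NULL,
    CONSTRAINT username_len CHECK (char_length(username) <= 8)
);

-- Raising the limit later touches only the constraint:
ALTER TABLE account_t
    DROP CONSTRAINT username_len,
    ADD CONSTRAINT username_len CHECK (char_length(username) <= 64);
```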
[ { "msg_contents": "Hi ,\n\nWhile testing something else ,i found 1 scenario  where pg_dump  is failing\n\nBelow is the standalone scenario -\n\n--connect to psql terminal and create 2 database\n\npostgres=# create database db1;\nCREATE DATABASE\npostgres=# create database db2;\nCREATE DATABASE\n\n--Connect to database db1 and run these below bunch of sql ( got from \nvacuum.sql file)\n\n\\c db1\n\ncreate  temp table vaccluster (i INT PRIMARY KEY);\nALTER TABLE vaccluster CLUSTER ON vaccluster_pkey;\nCLUSTER vaccluster;\n\nCREATE FUNCTION do_analyze() RETURNS VOID VOLATILE LANGUAGE SQL\n         AS 'ANALYZE pg_am';\nCREATE FUNCTION wrap_do_analyze(c INT) RETURNS INT IMMUTABLE LANGUAGE SQL\n         AS 'SELECT $1 FROM do_analyze()';\nCREATE INDEX ON vaccluster(wrap_do_analyze(i));\nINSERT INTO vaccluster VALUES (1), (2);\n\n--Take the  dump of db1 database  ( ./pg_dump -Fp db1 > /tmp/dump.sql)\n\n--Restore the dump file into db2 database\n\nYou are now connected to database \"db2\" as user \"tushar\".\ndb2=# \\i /tmp/dump.sql\nSET\nSET\nSET\nSET\nSET\n  set_config\n------------\n\n(1 row)\n\nSET\nSET\nSET\nSET\nCREATE FUNCTION\nALTER FUNCTION\nCREATE FUNCTION\nALTER FUNCTION\nSET\nSET\nCREATE TABLE\nALTER TABLE\nALTER TABLE\nALTER TABLE\npsql:/tmp/dump.sql:71: ERROR:  function do_analyze() does not exist\nLINE 1: SELECT $1 FROM do_analyze()\n                        ^\nHINT:  No function matches the given name and argument types. 
You might \nneed to add explicit type casts.\nQUERY:  SELECT $1 FROM do_analyze()\nCONTEXT:  SQL function \"wrap_do_analyze\" during inlining\ndb2=#\n\nWorkaround -\n\nreset search_path ; before 'create index' statement in the dump.sql file .\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Tue, 28 Apr 2020 17:09:13 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[pg_dump] 'create index' statement is failing due to search_path is\n empty" }, { "msg_contents": "tushar <tushar.ahuja@enterprisedb.com> writes:\n> While testing something else ,i found 1 scenario  where pg_dump  is failing\n\n> CREATE FUNCTION do_analyze() RETURNS VOID VOLATILE LANGUAGE SQL\n>         AS 'ANALYZE pg_am';\n> CREATE FUNCTION wrap_do_analyze(c INT) RETURNS INT IMMUTABLE LANGUAGE SQL\n>         AS 'SELECT $1 FROM do_analyze()';\n> CREATE INDEX ON vaccluster(wrap_do_analyze(i));\n> INSERT INTO vaccluster VALUES (1), (2);\n\nYou failed to schema-qualify the function reference. That's not\na pg_dump bug.\n\nWhile we're on the subject: this is an intentionally unsafe index.\nThe system doesn't try very hard to prevent you from lying about the\nvolatility status of a function ... but when, not if, it breaks\nwe're not going to regard the consequences as a Postgres bug.\nBasically, there isn't anything about this example that I'm not\ngoing to disclaim as \"that's not supported\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Apr 2020 10:31:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [pg_dump] 'create index' statement is failing due to search_path\n is empty" } ]
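[Editorial note, not part of the archived thread: a minimal sketch of the workaround Tom points at is to schema-qualify the function reference inside the SQL function body, so the index expression no longer depends on the restore-time search_path. The schema name `public` is assumed here; the thread does not state it explicitly.]

```sql
-- With the inner reference schema-qualified, pg_dump's empty
-- search_path at restore time can still resolve do_analyze():
CREATE FUNCTION wrap_do_analyze(c INT) RETURNS INT IMMUTABLE LANGUAGE SQL
    AS 'SELECT $1 FROM public.do_analyze()';
```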
[ { "msg_contents": "Hi hackers,\n\nPer discussion in [1], we don't need to strip relabel for the expr\nexplicitly before calling pull_varnos() to retrieve all mentioned\nrelids. pull_varnos() would recurse into T_RelabelType nodes.\n\nAdd a patch to remove that and simplify the code a bit.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAMbWs48HF9f%3Dg%2BjSmmYBnWub9%2BWyg5Xh-FoqAnvqAspue5ypAw%40mail.gmail.com#b6e77e4c1ae67e2c5d97dce830b58037\n\nThanks\nRichard", "msg_date": "Wed, 29 Apr 2020 15:51:40 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Remove unnecessary relabel stripping" }, { "msg_contents": "On Wed, Apr 29, 2020 at 03:51:40PM +0800, Richard Guo wrote:\n>Hi hackers,\n>\n>Per discussion in [1], we don't need to strip relabel for the expr\n>explicitly before calling pull_varnos() to retrieve all mentioned\n>relids. pull_varnos() would recurse into T_RelabelType nodes.\n>\n>Add a patch to remove that and simplify the code a bit.\n>\n>[1]\n>https://www.postgresql.org/message-id/flat/CAMbWs48HF9f%3Dg%2BjSmmYBnWub9%2BWyg5Xh-FoqAnvqAspue5ypAw%40mail.gmail.com#b6e77e4c1ae67e2c5d97dce830b58037\n>\n>Thanks\n>Richard\n\nThanks, I'll get this pushed (or something similar to this patch) soon.\n\nFWIW it'd be better to send the patch to the original thread instead of\nstarting a new one.\n\nregards\n\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 30 Apr 2020 02:11:02 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary relabel stripping" }, { "msg_contents": "On Thu, Apr 30, 2020 at 8:11 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Wed, Apr 29, 2020 at 03:51:40PM +0800, Richard Guo wrote:\n> >Hi hackers,\n> >\n> >Per discussion in [1], we don't need to strip relabel for the expr\n> >explicitly before calling pull_varnos() 
to retrieve all mentioned\n> >relids. pull_varnos() would recurse into T_RelabelType nodes.\n> >\n> >Add a patch to remove that and simplify the code a bit.\n> >\n> >[1]\n> >\n> https://www.postgresql.org/message-id/flat/CAMbWs48HF9f%3Dg%2BjSmmYBnWub9%2BWyg5Xh-FoqAnvqAspue5ypAw%40mail.gmail.com#b6e77e4c1ae67e2c5d97dce830b58037\n> >\n> >Thanks\n> >Richard\n>\n> Thanks, I'll get this pushed (or something similar to this patch) soon.\n>\n\nThanks.\n\n\n>\n> FWIW it'd be better to send the patch to the original thread instead of\n> starting a new one.\n>\n\nAh yes, you're right. Sorry for not doing so.\n\nThanks\nRichard\n", "msg_date": "Thu, 30 Apr 2020 09:37:41 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove unnecessary relabel stripping" }, { "msg_contents": "On Thu, Apr 30, 2020 at 09:37:41AM +0800, Richard Guo wrote:\n> On Thu, Apr 30, 2020 at 8:11 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> wrote:>\n>> FWIW it'd be better to send the patch to the original thread instead of\n>> starting a new one.\n> \n> Ah yes, you're right.
Sorry for not doing so.\n\nFWIW, I don't find the move from Richard completely incorrect either\nas the original thread discusses about a crash in incremental sorts\nwith sqlsmith, and here we have a patch to remove a useless operation.\nDifferent threads pointing to different issues help to attract\nsometimes a more correct audience.\n--\nMichael", "msg_date": "Thu, 30 Apr 2020 13:40:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary relabel stripping" }, { "msg_contents": "On Thu, Apr 30, 2020 at 01:40:11PM +0900, Michael Paquier wrote:\n>On Thu, Apr 30, 2020 at 09:37:41AM +0800, Richard Guo wrote:\n>> On Thu, Apr 30, 2020 at 8:11 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> wrote:>\n>>> FWIW it'd be better to send the patch to the original thread instead of\n>>> starting a new one.\n>>\n>> Ah yes, you're right. Sorry for not doing so.\n>\n>FWIW, I don't find the move from Richard completely incorrect either\n>as the original thread discusses about a crash in incremental sorts\n>with sqlsmith, and here we have a patch to remove a useless operation.\n>Different threads pointing to different issues help to attract\n>sometimes a more correct audience.\n\nPossibly. I agree it's not an entirely clear case.\n\nAnyway, I've pushed the fix.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 2 May 2020 01:51:34 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary relabel stripping" } ]
[ { "msg_contents": "I think that the PVC_RECURSE_WINDOWFUNCS flag shouldn't be used in\nmake_partial_grouping_target().\n\nFirst, this function uses the grouping_target (see grouping_planner()) as the\ninput, and that should only contain the input expressions of window functions\nas opposed to the window functions themselves. (make_window_input_target() is\nresponsible for pulling the input expressions from the window functions.)\n\nSecond, if a window function appeared in the result of\nmake_partial_grouping_target() for any reason, the Agg node would fail to\nevaluate it. Am I wrong?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Wed, 29 Apr 2020 16:45:20 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Accidental use of the PVC_RECURSE_WINDOWFUNCS flag?" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Second, if a window function appeared in the result of\n> make_partial_grouping_target() for any reason, the Agg node would fail to\n> evaluate it. Am I wrong?\n\nWell, this is PVC_RECURSE..., not PVC_INCLUDE..., so the window function\ncannot appear in the result. But I still think that the function shouldn't\nfind any window functions.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 29 Apr 2020 17:28:33 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Accidental use of the PVC_RECURSE_WINDOWFUNCS flag?" } ]
[ { "msg_contents": "Hello PG Hackers\n\nHope you are well and safe!\n\nI'm opening this thread to clarify something that I can observe: duplicated\nvalues for sequences.\n\n[My understanding is that duplication is not something we desire. In fact\nit does NOT happen in the majority of cases, for example, when you\nimmediately insert the value and commit it. But it can eventually happen in\nsome specific scenarios...describe below]\n\nThe duplication can be observed when you only makes use of \"nextval\" (which\ncalls sequence.c / nextval_internal function) without - inserting it for\nany reason - and the database crashes.\nThis is reproducible using the steps described on this link:\n\nhttps://gist.github.com/vinnix/2fe148e3c42e11269bac5fcc5c78a8d1\n\nThere are two variants where this is reproducible (steps on the link above):\n\n - Autocommit + Suspending I/O (simulating a storage issue)\n - Explicitly opening transaction + Not Suspending I/O\n\n** To simulate the \"crash\" I'm running: `killall -9 postgres`.\n\nI've being debugging sequence.c code, I can see we only want to flush the\nWAL segments once every 32 requests (per connection) - this is defined by\nSEQ_LOG_VALS. Since, these values are \"persisted\" in the WAL file\ncontaining the \"advanced\"/future values; not the current value being\nretrieved by nextval_internal().\n\nWhat I'm trying to understand now if this is a \"bug\" or a \"new feature\"...\n\nKind regards,\nVini\n\nHello PG Hackers\nHope you are well and safe!\nI'm opening this thread to clarify something that I can observe: duplicated values for sequences. [My understanding is that duplication is not something we desire. In fact it does NOT happen in the majority of cases, for example, when you immediately insert the value and commit it. 
But it can eventually happen in\nsome specific scenarios...describe below]\n\nThe duplication can be observed when you only makes use of \"nextval\" (which\ncalls sequence.c / nextval_internal function) without - inserting it for\nany reason - and the database crashes.\nThis is reproducible using the steps described on this link:\n\nhttps://gist.github.com/vinnix/2fe148e3c42e11269bac5fcc5c78a8d1\n\nThere are two variants where this is reproducible (steps on the link above):\n\n - Autocommit + Suspending I/O (simulating a storage issue)\n - Explicitly opening transaction + Not Suspending I/O\n\n** To simulate the \"crash\" I'm running: `killall -9 postgres`.\n\nI've being debugging sequence.c code, I can see we only want to flush the\nWAL segments once every 32 requests (per connection) - this is defined by\nSEQ_LOG_VALS. Since, these values are \"persisted\" in the WAL file\ncontaining the \"advanced\"/future values; not the current value being\nretrieved by nextval_internal().\n\nWhat I'm trying to understand now if this is a \"bug\" or a \"new feature\"...\n\nKind regards,\nVini\n", "msg_date": "Wed, 29 Apr 2020 16:56:11 +0100", "msg_from": "Vinicius Abrahao <vinnix.bsd@gmail.com>", "msg_from_op": true, "msg_subject": "SEQUENCE values (duplicated) in some corner cases when crash happens" }, { "msg_contents": "On 2020-Apr-29, Vinicius Abrahao wrote:\n\n> Hello PG Hackers\n> \n> Hope you are well and safe!\n> \n> I'm opening this thread to clarify something that I can observe: duplicated\n> values for sequences.\n> \n> [My understanding is that duplication is not something we desire. In fact\n> it does NOT happen in the majority of cases, for example, when you\n> immediately insert the value and commit it. But it can eventually happen in\n> some specific scenarios...describe below]\n\nHi Vinicius\n\nI'm not sure that a sequence that produces the same value twice, without\nwriting it to the database the first time, and with an intervening crash\nin between, is necessarily a bug that we care to fix.
Especially so if\nthe fix will cause a large performance regression for the normal case\nwhere the sequence value is written to the DB by a committed transaction.\n\n(I think you could also cause the same problem with an async-commit\ntransaction that does write the value to the database, but whose writes\nare lost in the crash ... since the WAL record for the sequence would\nalso be lost.)\n\nIs there some valid reason to be interested in that scenario?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Apr 2020 12:29:07 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SEQUENCE values (duplicated) in some corner cases when crash\n happens" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Apr-29, Vinicius Abrahao wrote:\n>> I'm opening this thread to clarify something that I can observe: duplicated\n>> values for sequences.\n\n> I'm not sure that a sequence that produces the same value twice, without\n> writing it to the database the first time, and with an intervening crash\n> in between, is necessarily a bug that we care to fix. Especially so if\n> the fix will cause a large performance regression for the normal case\n> where the sequence value is written to the DB by a committed transaction.\n\nI believe this behavior is 100% intentional: the advance of the sequence\nvalue is logged to WAL, but we don't guarantee to make the WAL entry\npersistent until the calling transaction commits. And while I'm too\nlazy to check right now, I think the calling transaction might've had\nto cause some additional non-sequence-object updates to happen on disk,\ntoo, else we won't think it has done anything that needs committing.\n\nAs you say, doing something different would entail a large performance\npenalty for a rather dubious semantic requirement.
The normal expectation\nis that we have to protect sequence values that get written into tables\nsomeplace.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Apr 2020 14:02:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SEQUENCE values (duplicated) in some corner cases when crash\n happens" }, { "msg_contents": "On 4/29/20 11:02, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> On 2020-Apr-29, Vinicius Abrahao wrote:\n>>> I'm opening this thread to clarify something that I can observe: duplicated\n>>> values for sequences.\n> \n>> I'm not sure that a sequence that produces the same value twice, without\n>> writing it to the database the first time, and with an intervening crash\n>> in between, is necessarily a bug that we care to fix. Especially so if\n>> the fix will cause a large performance regression for the normal case\n>> where the sequence value is written to the DB by a committed transaction.\n> \n> I believe this behavior is 100% intentional: the advance of the sequence\n> value is logged to WAL, but we don't guarantee to make the WAL entry\n> persistent until the calling transaction commits. And while I'm too\n> lazy to check right now, I think the calling transaction might've had\n> to cause some additional non-sequence-object updates to happen on disk,\n> too, else we won't think it has done anything that needs committing.\nThe behavior we're observing is that a nextval() call in a committed\ntransaction is not crash-safe. This was discovered because some\napplications were using nextval() to get a guaranteed unique sequence\nnumber [or so they thought], then the application did some processing\nwith the value and later stored it in a relation of the same database.\n\nThe nextval() number was not used until the transaction was committed -\nbut then the fact of a value being generated, returned and committed was\nlost on crash. 
The nextval() call used in isolation did not seem to\nprovide durability.\n\n\n> As you say, doing something different would entail a large performance\n> penalty for a rather dubious semantic requirement. The normal expectation\n> is that we have to protect sequence values that get written into tables\n> someplace.\n\nWhether or not it's dubious is in the eye of the beholder I guess; in\nOracle I believe the equivalent use of sequences provides the usual\ndurability guarantees. Probably that's why this particular user was\nsurprised at the behavior. If PostgreSQL isn't going to provide\ndurability for isolated use of sequences, then IMO that's fine but the\nfact should at least be in the documentation.\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n", "msg_date": "Wed, 6 May 2020 10:51:59 -0700", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": false, "msg_subject": "Re: SEQUENCE values (duplicated) in some corner cases when crash\n happens" }, { "msg_contents": "On Wed, May 6, 2020 at 1:52 PM Jeremy Schneider <schnjere@amazon.com> wrote:\n\n\n> The behavior we're observing is that a nextval() call in a committed\n>\ntransaction is not crash-safe. This was discovered because some\n> applications were using nextval() to get a guaranteed unique sequence\n> number [or so they thought], then the application did some processing\n> with the value and later stored it in a relation of the same database.\n>\n> The nextval() number was not used until the transaction was committed -\n>\n\nI don't know what this line means. You said it was stored in a relation,\nsurely that needs to have happened through some command which preceded the\ncommit chronologically, though formally they may have happened atomically.\n\n\n> but then the fact of a value being generated, returned and committed was\n> lost on crash. 
The nextval() call used in isolation did not seem to\n> provide durability.\n>\n\nAre you clarifying the original complaint, or this a new, different\ncomplaint? Vini's test cases don't include any insertions. Do you have\ntest cases that can reproduce your complaint?\n\nCheers,\n\nJeff\n", "msg_date": "Thu, 14 May 2020 17:58:35 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SEQUENCE values (duplicated) in some corner cases when crash\n happens" }, { "msg_contents": "On 5/14/20 14:58, jeff.janes@gmail.com wrote:\n>\n> *CAUTION*: This email originated from outside of the organization.
Do\n> not click links or open attachments unless you can confirm the sender\n> and know the content is safe.\n>\n>\n> On Wed, May 6, 2020 at 1:52 PM Jeremy Schneider <schnjere@amazon.com\n> <mailto:schnjere@amazon.com>> wrote:\n>  \n>\n> The behavior we're observing is that a nextval() call in a committed\n>\n> transaction is not crash-safe. This was discovered because some\n> applications were using nextval() to get a guaranteed unique sequence\n> number [or so they thought], then the application did some processing\n> with the value and later stored it in a relation of the same database.\n>\n> The nextval() number was not used until the transaction was\n> committed -\n>\n>\n> I don't know what this line means.  You said it was stored in a\n> relation, surely that needs to have happened through some command\n> which preceded the commit chronologically, though formally they may\n> have happened atomically.\n\n\"Later stored it in the table\" - I'd have to double check with the other\nteam, but IIUC it was application pseudo-code like this:\n\n * execute SQL \"select nextval()\" and store result in\n my_local_variable_unique_id\n * commit\n * do some processing, tracing, logging, etc identified with\n my_local_variable_unique_id\n * execute SQL \"insert into mytable values(my_local_variable_unique_id,\n data1, data2)\"\n * commit\n\nThey weren't expecting that they could get duplicates from a sequence,\nwhich leads to unique violations and other problems later.  Maybe a\nworkaround is doing some kind of dummy insert or update or something in\nthe transaction that gets a sequence value.\n\n\n>  \n>\n> but then the fact of a value being generated, returned and\n> committed was\n> lost on crash. The nextval() call used in isolation did not seem to\n> provide durability.\n>\n>\n> Are you clarifying the original complaint, or this a new, different\n> complaint? Vini's test cases don't include any insertions.  
Do you\n> have test cases that can reproduce your complaint?\n\nClarification of same issue, not a new issue.\n\nTom also has said as much in his email - he said it's quite plausible\nthat sequences used in isolation aren't crash safe. I just think we\nshould document it; I'll work on a proposal/doc-update-patch for\neveryone to bikeshed on when I have a few minutes :)\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services
", "msg_date": "Thu, 14 May 2020 15:09:28 -0700", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": false, "msg_subject": "Re: SEQUENCE values (duplicated) in some corner cases when crash\n happens" }, { "msg_contents": "On 2020-May-14, Jeremy Schneider wrote:\n\n> \"Later stored it in the table\" - I'd have to double check with the other\n> team, but IIUC it was application pseudo-code like this:\n> \n> * execute SQL \"select nextval()\" and store result in\n> my_local_variable_unique_id\n> * commit\n\nYes, simply inserting the sequence value in a (logged!) dummy table\nbefore this commit, as you suggest, should fix this problem. The insert\nensures that the transaction commit is flushed to WAL. The table need\nnot have indexes, making the insert faster. Just make sure to truncate\nthe table every now and then.\n\n+1 to documenting this.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 14 May 2020 18:47:29 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SEQUENCE values (duplicated) in some corner cases when crash\n happens" } ]
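[Editorial note, not part of the archived thread: Alvaro's suggested workaround, writing the sequence value to a logged dummy table in the same transaction so the commit is flushed to WAL, can be sketched as below. The table and sequence names are illustrative only.]

```sql
-- A plain (logged) table with no indexes, used only as a WAL sink:
CREATE TABLE seq_sink (id bigint);

BEGIN;
-- Inserting the value into a logged table in the same transaction
-- forces the commit record (and the sequence's WAL) to disk, so the
-- returned value survives a crash:
INSERT INTO seq_sink VALUES (nextval('my_seq')) RETURNING id;
COMMIT;

-- Per the thread, truncate the sink table every now and then:
TRUNCATE seq_sink;
```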
[ { "msg_contents": "Hi,\n\nI think it's not good that do_pg_start_backup() takes a flag which\ntells it to call back into basebackup.c's sendTablespace(). This means\nthat details which ought to be private to basebackup.c leak out and\nbecome visible to other parts of the code. This seems to have\noriginated in commit 72d422a5227ef6f76f412486a395aba9f53bf3f0, and it\nlooks like there was some discussion of the issue at the time. I think\nthat patch was right to want only a single iteration over the\ntablespace list; if not, the list of tablespaces returned by the\nbackup could be different from the list that is included in the\ntablespace map, which does seem like a good thing to avoid.\n\nHowever, it doesn't follow that sendTablespace() needs to be called\nfrom do_pg_start_backup(). It's not actually sending the tablespace at\nthat point, just calculating the size, because the sizeonly argument\nis passed as true. And, there's no reason that I can see why that\nneeds to be done from within do_pg_start_backup(). It can equally well\nbe done after that function returns, as in the attached 0001. I\nbelieve that this is functionally equivalent but more elegant,\nalthough there is a notable behavior difference: today,\nsendTablespaces() is called in sizeonly mode with \"fullpath\" as the\nargument, which I think is pg_tblspc/$OID, and in non-sizeonly mode\nwith ti->path as an argument, which seems to be the path to which the\nsymlink points. With the patch, it would be called with the latter in\nboth cases. It looks to me like that should be OK, and it definitely\nseems more consistent.\n\nWhile I was poking around in this area, I found some other code which\nI thought could stand a bit of improvement also. 
The attached 0002\nslightly modifies some tablespace_map related code and comments in\nperform_base_backup(), so that instead of having two very similar\ncalls to sendDir() right next to each other that differ only in the\nvalue passed for the fifth argument, we have just one call with the\nfifth argument being a variable. Although this is a minor change I\nthink it's a good cleanup that reduces the chances of future mistakes\nin this area.\n\nComments?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 29 Apr 2020 12:27:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "tablespace_map code cleanup" }, { "msg_contents": "On Wed, Apr 29, 2020 at 9:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Hi,\n>\n> I think it's not good that do_pg_start_backup() takes a flag which\n> tells it to call back into basebackup.c's sendTablespace(). This means\n> that details which ought to be private to basebackup.c leak out and\n> become visible to other parts of the code. This seems to have\n> originated in commit 72d422a5227ef6f76f412486a395aba9f53bf3f0, and it\n> looks like there was some discussion of the issue at the time. I think\n> that patch was right to want only a single iteration over the\n> tablespace list; if not, the list of tablespaces returned by the\n> backup could be different from the list that is included in the\n> tablespace map, which does seem like a good thing to avoid.\n>\n> However, it doesn't follow that sendTablespace() needs to be called\n> from do_pg_start_backup(). It's not actually sending the tablespace at\n> that point, just calculating the size, because the sizeonly argument\n> is passed as true. And, there's no reason that I can see why that\n> needs to be done from within do_pg_start_backup(). It can equally well\n> be done after that function returns, as in the attached 0001. 
I\n> believe that this is functionally equivalent but more elegant,\n> although there is a notable behavior difference: today,\n> sendTablespaces() is called in sizeonly mode with \"fullpath\" as the\n> argument, which I think is pg_tblspc/$OID, and in non-sizeonly mode\n> with ti->path as an argument, which seems to be the path to which the\n> symlink points. With the patch, it would be called with the latter in\n> both cases. It looks to me like that should be OK, and it definitely\n> seems more consistent.\n>\n\nIf we want to move the calculation of size for tablespaces in the\ncaller then I think we also need to do something about the progress\nreporting related to phase\nPROGRESS_BASEBACKUP_PHASE_ESTIMATE_BACKUP_SIZE.\n\n> While I was poking around in this area, I found some other code which\n> I thought could stand a bit of improvement also. The attached 0002\n> slightly modifies some tablespace_map related code and comments in\n> perform_base_backup(), so that instead of having two very similar\n> calls to sendDir() right next to each other that differ only in the\n> value passed for the fifth argument, we have just one call with the\n> fifth argument being a variable. Although this is a minor change I\n> think it's a good cleanup that reduces the chances of future mistakes\n> in this area.\n>\n\nThe 0002 patch looks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 4 May 2020 14:54:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "On Mon, May 4, 2020 at 5:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> If we want to move the calculation of size for tablespaces in the\n> caller then I think we also need to do something about the progress\n> reporting related to phase\n> PROGRESS_BASEBACKUP_PHASE_ESTIMATE_BACKUP_SIZE.\n\nOh, good point. 
v2 attached.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 6 May 2020 11:15:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "On Wed, May 6, 2020 at 11:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Oh, good point. v2 attached.\n\nHere's v3, with one more small cleanup. I noticed tblspc_map_file is\ninitialized to NULL and then unconditionally reset to the return value\nof makeStringInfo(), and then later tested to see whether it is NULL.\nIt can't be, because makeStringInfo() doesn't return NULL. So the\nattached version deletes the superfluous initialization and the\nsuperfluous test.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 7 May 2020 12:14:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "On Thu, May 7, 2020 at 9:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, May 6, 2020 at 11:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Oh, good point. v2 attached.\n>\n\nWhile looking at this, I noticed that caller (perform_base_backup) of\ndo_pg_start_backup, sets the backup phase as\nPROGRESS_BASEBACKUP_PHASE_WAIT_CHECKPOINT whereas, in\ndo_pg_start_backup, we do collect the information about all\ntablespaces after the checkpoint. I am not sure if it is long enough\nthat we consider having a separate phase for it. Without your patch,\nit was covered under PROGRESS_BASEBACKUP_PHASE_ESTIMATE_BACKUP_SIZE\nphase which doesn't appear to be a bad idea.\n\n> Here's v3, with one more small cleanup. 
I noticed tblspc_map_file is\n> initialized to NULL and then unconditionally reset to the return value\n> of makeStringInfo(), and then later tested to see whether it is NULL.\n> It can't be, because makeStringInfo() doesn't return NULL. So the\n> attached version deletes the superfluous initialization and the\n> superfluous test.\n>\n\nThis change looks fine to me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 May 2020 11:53:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "On Tue, May 12, 2020 at 2:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> While looking at this, I noticed that caller (perform_base_backup) of\n> do_pg_start_backup, sets the backup phase as\n> PROGRESS_BASEBACKUP_PHASE_WAIT_CHECKPOINT whereas, in\n> do_pg_start_backup, we do collect the information about all\n> tablespaces after the checkpoint. I am not sure if it is long enough\n> that we consider having a separate phase for it. Without your patch,\n> it was covered under PROGRESS_BASEBACKUP_PHASE_ESTIMATE_BACKUP_SIZE\n> phase which doesn't appear to be a bad idea.\n\nMaybe I'm confused here, but I think the size estimation still *is*\ncovered under PROGRESS_BASEBACKUP_PHASE_ESTIMATE_BACKUP_SIZE. It's\njust that now that happens a bit later. 
I'm assuming that listing the\ntablespaces is pretty cheap, but sizing them is expensive, as you'd\nhave to iterate over all the files and stat() each one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 12 May 2020 16:24:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "On Wed, May 13, 2020 at 1:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 12, 2020 at 2:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > While looking at this, I noticed that caller (perform_base_backup) of\n> > do_pg_start_backup, sets the backup phase as\n> > PROGRESS_BASEBACKUP_PHASE_WAIT_CHECKPOINT whereas, in\n> > do_pg_start_backup, we do collect the information about all\n> > tablespaces after the checkpoint. I am not sure if it is long enough\n> > that we consider having a separate phase for it. Without your patch,\n> > it was covered under PROGRESS_BASEBACKUP_PHASE_ESTIMATE_BACKUP_SIZE\n> > phase which doesn't appear to be a bad idea.\n>\n> Maybe I'm confused here, but I think the size estimation still *is*\n> covered under PROGRESS_BASEBACKUP_PHASE_ESTIMATE_BACKUP_SIZE. 
It's\n> just that now that happens a bit later.\n>\n\nThere is no problem with this part.\n\n> I'm assuming that listing the\n> tablespaces is pretty cheap, but sizing them is expensive, as you'd\n> have to iterate over all the files and stat() each one.\n>\n\nI was trying to say that tablespace listing will happen under\nPROGRESS_BASEBACKUP_PHASE_WAIT_CHECKPOINT phase which could be a\nproblem if it is a costly operation but as you said it is pretty cheap\nso I think we don't need to bother about that.\n\nApart from the above point which I think we don't need to bother, both\nyour patches look good to me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 May 2020 07:56:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "On Tue, May 12, 2020 at 10:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I was trying to say that tablespace listing will happen under\n> PROGRESS_BASEBACKUP_PHASE_WAIT_CHECKPOINT phase which could be a\n> problem if it is a costly operation but as you said it is pretty cheap\n> so I think we don't need to bother about that.\n>\n> Apart from the above point which I think we don't need to bother, both\n> your patches look good to me.\n\nOK, good. Let's see if anyone else feels differently about this issue\nor wants to raise anything else. If not, I'll plan to commit these\npatches after we branch. 
Thanks for the review.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 13 May 2020 15:10:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "At Wed, 13 May 2020 15:10:30 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, May 12, 2020 at 10:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I was trying to say that tablespace listing will happen under\n> > PROGRESS_BASEBACKUP_PHASE_WAIT_CHECKPOINT phase which could be a\n> > problem if it is a costly operation but as you said it is pretty cheap\n> > so I think we don't need to bother about that.\n> >\n> > Apart from the above point which I think we don't need to bother, both\n> > your patches look good to me.\n> \n> OK, good. Let's see if anyone else feels differently about this issue\n> or wants to raise anything else. If not, I'll plan to commit these\n> patches after we branch. Thanks for the review.\n\nTable space listing needs only one or few 512k pages, which should be\non OS file cache, which cannot take long time unless the system is\nfacing a severe trouble. (I believe that is the same on Windows.)\n\nI'm fine that WAIT_CHECKPOINT contains the time to enumerate\ntablespace directories.\n\n\n0001 looks good to me. 
The progress information gets\n\nAbout 0002,\n\n+\t\t\t\tbool\tsendtblspclinks = true;\n\nThe boolean seems to me useless since it is always the inverse of\nopt->sendtblspcmapfile when it is used.\n\nEverything looks fine to me except the above.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 May 2020 11:43:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "On Wed, May 13, 2020 at 10:43 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> About 0002,\n>\n> + bool sendtblspclinks = true;\n>\n> The boolean seems to me useless since it is always the inverse of\n> opt->sendtblspcmapfile when it is used.\n\nWell, I think it might have some documentation value, to clarify that\nwhether or not we send tablespace links is the opposite of whether we\nsend the tablespace map file.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 27 May 2020 07:57:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "At Wed, 27 May 2020 07:57:38 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Wed, May 13, 2020 at 10:43 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > About 0002,\n> >\n> > + bool sendtblspclinks = true;\n> >\n> > The boolean seems to me useless since it is always the inverse of\n> > opt->sendtblspcmapfile when it is used.\n> \n> Well, I think it might have some documentation value, to clarify that\n> whether or not we send tablespace links is the opposite of whether we\n> send the tablespace map file.\n\nThat makes sense to me. 
Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 28 May 2020 09:11:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tablespace_map code cleanup" }, { "msg_contents": "On Wed, May 27, 2020 at 8:11 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> That makes sense to me. Thanks!\n\nGreat. Thanks to you and Amit for reviewing. I have committed the patches.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 17 Jun 2020 11:08:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tablespace_map code cleanup" } ]
[ { "msg_contents": "Hi hackers,\n\nI found two email threads below,\n\nhttps://www.postgresql.org/message-id/b0d099ca-f9c3-00ed-0c95-4d7a9f7c97fc%402ndquadrant.com\n\nhttps://www.postgresql.org/message-id/CA%2B4BxBwBHmDkSpgvnfG_Ps1SEeYhDRuLpr1AvdbUwFh-otTg8A%40mail.gmail.com\n\nand I understood \"OUT parameters in procedures are not implemented yet, \nbut would like to have\nin the future\" at that moment. However, I ran a quick test by simply \ncommented out a few lines below in src/backend/commands/functioncmds.c\n\n+//             if (objtype == OBJECT_PROCEDURE)\n+//             {\n+//                     if (fp->mode == FUNC_PARAM_OUT)\n+//                             ereport(ERROR,\n+// (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+// errmsg(\"procedures cannot have OUT arguments\"),\n+//                                              errhint(\"INOUT \narguments are permitted.\")));\n+//             }\n\nthen I can use OUT as the parameter to create a PROCEDURE, and I can do \nsomething like,\n\npostgres=# create procedure p2(IN a int, OUT b int) as 'select 9*$1' \nlanguage sql;\nCREATE PROCEDURE\npostgres=# CALL p2(1);\n  b\n---\n  9\n(1 row)\n\nBy enabling the OUT parameter, I can see some difference, for example,\n1. user doesn't have to provide one (or many) dummy \"INOUT\" parameter in \norder to get the output\n2. it has similar behavior compare with FUNCTION when using IN, INOUT, \nand OUT parameters\n\nSo, the questions are,\n\n1. what are the limitation or concern that block the support of the OUT \nparameter at this moment?\n\n2. if the OUT parameter is enabled like above, what will be the impact?\n\n3. Is there any other components that should be considered following the \nabove change?\n\n\nThanks,\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. 
(Canada)\nwww.highgo.ca\n\n\n\n", "msg_date": "Wed, 29 Apr 2020 16:42:50 -0700", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": true, "msg_subject": "Can the OUT parameter be enabled in stored procedure?" }, { "msg_contents": "On Wed, Apr 29, 2020 at 4:43 PM David Zhang <david.zhang@highgo.ca> wrote:\n\n> Hi hackers,\n>\n> I found two email threads below,\n>\n>\n> https://www.postgresql.org/message-id/b0d099ca-f9c3-00ed-0c95-4d7a9f7c97fc%402ndquadrant.com\n>\n>\n> https://www.postgresql.org/message-id/CA%2B4BxBwBHmDkSpgvnfG_Ps1SEeYhDRuLpr1AvdbUwFh-otTg8A%40mail.gmail.com\n>\n> and I understood \"OUT parameters in procedures are not implemented yet,\n> but would like to have\n> in the future\" at that moment.\n>\n\nHere is a more original thread that might prove insightful:\n\nhttps://www.postgresql.org/message-id/flat/CAHyXU0wnJz1yATUQxc7rrb-7rJAbkYQyLq%2Bb%3DKSGYh8F19nFVg%40mail.gmail.com#e2a37b42fa1587187a71340e79e3f2ee\n\nDavid J.", "msg_date": "Wed, 29 Apr 2020 17:22:13 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can the OUT parameter be enabled in stored procedure?" } ]
[ { "msg_contents": "I played with a silly example and got a result that surprises me:\n\n WITH RECURSIVE fib AS (\n SELECT n, \"fibₙ\"\n FROM (VALUES (1, 1::bigint), (2, 1)) AS f(n,\"fibₙ\")\n UNION ALL\n SELECT max(n) + 1,\n sum(\"fibₙ\")::bigint\n FROM (SELECT n, \"fibₙ\"\n FROM fib\n ORDER BY n DESC\n LIMIT 2) AS tail\n HAVING max(n) < 10\n )\n SELECT * FROM fib;\n\n n | fibₙ \n ----+------\n 1 | 1\n 2 | 1\n 3 | 2\n 4 | 2\n 5 | 2\n 6 | 2\n 7 | 2\n 8 | 2\n 9 | 2\n 10 | 2\n (10 rows)\n\nI would have expected either the Fibonacci sequence or\n\n ERROR: aggregate functions are not allowed in a recursive query's recursive term\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 30 Apr 2020 05:18:49 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Bug with subqueries in recursive CTEs?" }, { "msg_contents": ">>>>> \"Laurenz\" == Laurenz Albe <laurenz.albe@cybertec.at> writes:\n\n Laurenz> I played with a silly example and got a result that surprises\n Laurenz> me:\n\n Laurenz> WITH RECURSIVE fib AS (\n Laurenz> SELECT n, \"fibₙ\"\n Laurenz> FROM (VALUES (1, 1::bigint), (2, 1)) AS f(n,\"fibₙ\")\n Laurenz> UNION ALL\n Laurenz> SELECT max(n) + 1,\n Laurenz> sum(\"fibₙ\")::bigint\n Laurenz> FROM (SELECT n, \"fibₙ\"\n Laurenz> FROM fib\n Laurenz> ORDER BY n DESC\n Laurenz> LIMIT 2) AS tail\n Laurenz> HAVING max(n) < 10\n Laurenz> )\n Laurenz> SELECT * FROM fib;\n\n Laurenz> I would have expected either the Fibonacci sequence or\n\n Laurenz> ERROR: aggregate functions are not allowed in a recursive\n Laurenz> query's recursive term\n\nYou don't get a Fibonacci sequence because the recursive term only sees\nthe rows (in this case only one row) added by the previous iteration,\nnot the entire result set so far.\n\nSo the result seems correct as far as that goes. 
The reason the\n\"aggregate functions are not allowed\" error isn't hit is that the\naggregate and the recursive reference aren't ending up in the same query\n- the check for aggregates is looking at the rangetable of the query\nlevel containing the agg to see if it has an RTE_CTE entry which is a\nrecursive reference.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Thu, 30 Apr 2020 04:37:24 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Bug with subqueries in recursive CTEs?" }, { "msg_contents": "On Thu, 2020-04-30 at 04:37 +0100, Andrew Gierth wrote:\n> \"Laurenz\" == Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> \n> Laurenz> I played with a silly example and got a result that surprises\n> Laurenz> me:\n> \n> Laurenz> WITH RECURSIVE fib AS (\n> Laurenz> SELECT n, \"fibₙ\"\n> Laurenz> FROM (VALUES (1, 1::bigint), (2, 1)) AS f(n,\"fibₙ\")\n> Laurenz> UNION ALL\n> Laurenz> SELECT max(n) + 1,\n> Laurenz> sum(\"fibₙ\")::bigint\n> Laurenz> FROM (SELECT n, \"fibₙ\"\n> Laurenz> FROM fib\n> Laurenz> ORDER BY n DESC\n> Laurenz> LIMIT 2) AS tail\n> Laurenz> HAVING max(n) < 10\n> Laurenz> )\n> Laurenz> SELECT * FROM fib;\n> \n> Laurenz> I would have expected either the Fibonacci sequence or\n> \n> Laurenz> ERROR: aggregate functions are not allowed in a recursive\n> Laurenz> query's recursive term\n\nThanks for looking at this!\n\n> You don't get a Fibonacci sequence because the recursive term only sees\n> the rows (in this case only one row) added by the previous iteration,\n> not the entire result set so far.\n\nAh, of course. You are right.\n\n> So the result seems correct as far as that goes. 
The reason the\n> \"aggregate functions are not allowed\" error isn't hit is that the\n> aggregate and the recursive reference aren't ending up in the same query\n> - the check for aggregates is looking at the rangetable of the query\n> level containing the agg to see if it has an RTE_CTE entry which is a\n> recursive reference.\n\nBut I wonder about that.\n\nThe source says that\n \"Per spec, aggregates can't appear in a recursive term.\"\nIs that the only reason for that error message, or is there a deeper reason\nto forbid it?\n\nIt feels wrong that a subquery would make using an aggregate legal when\nit is illegal without the subquery.\nBut then, it doesn't bother me enough to research, and as long as the result\nas such is correct, I feel much better.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 30 Apr 2020 21:08:54 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Bug with subqueries in recursive CTEs?" } ]
[ { "msg_contents": "Hi:\n For a given level for join_search_one_level, it is always try to join\nevery relation\nin joinrel[level-1] to *initial_rels*. but the current code doesn't show\nthis directly.\n\njoin_search_one_level\n\n if (level == 2) /* consider remaining\ninitial rels */\n {\n other_rels_list = joinrels[level - 1];\n other_rels = lnext(other_rels_list, r);\n }\n else /* consider all\ninitial rels */\n {\n other_rels_list = joinrels[1];\n other_rels = list_head(other_rels_list);\n }\n\n make_rels_by_clause_joins(root,\n\nold_rel,\n\nother_rels_list,\n\nother_rels);\n\nI'd like to remove the parameter and use root->inital_rels directly. I did\nthe same\nfor make_rels_by_clauseless_joins. The attached is my proposal which\nshould be\nsemantic correctly and more explicitly.\n\nBest Regards\nAndy Fan", "msg_date": "Thu, 30 Apr 2020 12:01:38 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Can we remove the other_rels_list parameter for\n make_rels_by_clause_joins" } ]
[ { "msg_contents": "I noticed that in hot standby, XLOG_STANDBY_LOCK redo is sometimes block by another query, and all the rest redo is blocked by this lock getting operation, which is not good and often happed in my database, so the hot standby will be left behind and master will store a lot of WAL which can’t be purged.\n\nSo here is the idea:\nWe can do XLOG_STANDBY_LOCK redo asynchronously, and the rest redo will continue.\nAnd I wonder will LogStandbySnapshot influence the consistency in hot standby, for the redo is not by order. And how to avoid this.\n\n// ------ startup ------\nStartupXLOG()\n{\n    while (readRecord())\n    {\n        check_lock_get_state();\n        if (record.tx is in pending tbl):\n            append this record to the pending lock for further redo.\n        redo_record();\n    }\n}\n\ncheck_lock_get_state()\n{\n    for (tx in pending_tx):\n        if (tx.all_lock are got):\n            redo the rest record for this tx\n            free this tx\n}\n\nstandby_redo\n{\n    if (XLOG_STANDBY_LOCK redo falied)\n    {\n        add_lock_to_pending_tx_tbl();\n    }\n}\n\n// ------ worker process ------\nmain()\n{\n    while (true)\n    {\n        for (lock in pending locks order by lsn)\n            try_to_get_lock_from_pending_tbl();\n    }\n}\n\n\nregards.\nYuhang", "msg_date": "Thu, 30 Apr 2020 18:37:27 +0800", "msg_from": "=?utf-8?B?6YKx5a6H6Iiq?= <iamqyh@gmail.com>", "msg_from_op": true, "msg_subject": "Optimization for hot standby XLOG_STANDBY_LOCK redo" }, { "msg_contents": "On Thu, Apr 30, 2020 at 4:07 PM 邱宇航 <iamqyh@gmail.com> wrote:\n>\n> I noticed that in hot standby, XLOG_STANDBY_LOCK redo is sometimes block by another query, and all the rest redo is blocked by this lock getting operation, which is not good and often happed in my database, so the hot standby will be left behind and master will store a lot of WAL which can’t be purged.\n>\n> So here is the idea:\n> We can do XLOG_STANDBY_LOCK redo asynchronously, and the rest redo will continue.\n>\n\nHmm, I don't think we can do this. The XLOG_STANDBY_LOCK WAL is used\nfor AccessExclusiveLock on a Relation which means it is a lock for a\nDDL operation. 
If you skip processing the WAL for this lock, the\nbehavior of queries running on standby will be unpredictable.\nConsider a case where on the master, the user has dropped the table\n<t1> and when it will replay such an operation on standby the\nconcurrent queries on t1 will be blocked due to replay of\nXLOG_STANDBY_LOCK WAL and if you skip that WAL, the drop of table and\nquery on the same table can happen simultaneously leading to\nunpredictable behavior.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Apr 2020 16:42:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimization for hot standby XLOG_STANDBY_LOCK redo" }, { "msg_contents": "I mean that all resources protected by XLOG_STANDBY_LOCK should redo later.\nThe semantics of XLOG_STANDBY_LOCK is still kept.\n\n> 2020年4月30日 下午7:12,Amit Kapila <amit.kapila16@gmail.com> 写道:\n> \n> On Thu, Apr 30, 2020 at 4:07 PM 邱宇航 <iamqyh@gmail.com> wrote:\n>> \n>> I noticed that in hot standby, XLOG_STANDBY_LOCK redo is sometimes block by another query, and all the rest redo is blocked by this lock getting operation, which is not good and often happed in my database, so the hot standby will be left behind and master will store a lot of WAL which can’t be purged.\n>> \n>> So here is the idea:\n>> We can do XLOG_STANDBY_LOCK redo asynchronously, and the rest redo will continue.\n>> \n> \n> Hmm, I don't think we can do this. The XLOG_STANDBY_LOCK WAL is used\n> for AccessExclusiveLock on a Relation which means it is a lock for a\n> DDL operation. 
If you skip processing the WAL for this lock, the\n> behavior of queries running on standby will be unpredictable.\n> Consider a case where on the master, the user has dropped the table\n> <t1> and when it will replay such an operation on standby the\n> concurrent queries on t1 will be blocked due to replay of\n> XLOG_STANDBY_LOCK WAL and if you skip that WAL, the drop of table and\n> query on the same table can happen simultaneously leading to\n> unpredictable behavior.\n> \n> -- \n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 May 2020 10:36:28 +0800", "msg_from": "=?utf-8?B?6YKx5a6H6Iiq?= <iamqyh@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimization for hot standby XLOG_STANDBY_LOCK redo" }, { "msg_contents": "And one more question, what LogAccessExclusiveLocks in LogStandbySnapshot is used for? Can We remove this.\n\n> On May 6, 2020, at 10:36 AM, 邱宇航 <iamqyh@gmail.com> wrote:\n> \n> I mean that all resources protected by XLOG_STANDBY_LOCK should redo later.\n> The semantics of XLOG_STANDBY_LOCK is still kept.\n> \n", "msg_date": "Wed, 6 May 2020 11:05:06 +0800", "msg_from": "=?utf-8?B?6YKx5a6H6Iiq?= <iamqyh@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimization for hot standby XLOG_STANDBY_LOCK redo" }, { "msg_contents": "On Wed, May 6, 2020 at 8:35 AM 邱宇航 <iamqyh@gmail.com> wrote:\n>\n> And one more question, what LogAccessExclusiveLocks in LogStandbySnapshot is used for?\n>\n\nAs far as I understand, this is required to ensure that we have\nacquired all the AccessExclusiveLocks on relations before we can say\nstandby has reached STANDBY_SNAPSHOT_READY and allow read-only queries\nin standby. 
Read comments above LogStandbySnapshot.\n\n> Can We remove this.\n>\n\nI don't think so. In general, if you want to change and or remove\nsome code, it is your responsibility to come up with a reason/theory\nwhy it is OK to do so.\n\n> 2020年5月6日 上午10:36,邱宇航 <iamqyh@gmail.com> 写道:\n>\n> I mean that all resources protected by XLOG_STANDBY_LOCK should redo later.\n> The semantics of XLOG_STANDBY_LOCK is still kept.\n>\n\nI don't think we can postpone it. If we delay applying\nXLOG_STANDBY_LOCK and apply others then the result could be\nunpredictable as explained in my previous email.\n\nNote - Please don't top-post. Use the style that I and or others are\nusing in this list as that will make it easier to understand and\nrespond to your emails.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 May 2020 16:35:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimization for hot standby XLOG_STANDBY_LOCK redo" } ]
[ { "msg_contents": "Collegues,\n\nAccidently I've come over minor bug in the Mkvcbuild.pm.\nIt happens, that it doesn't tolerate spaces in the $config->{python}\npath, because it want to call python in order to find out version,\nprefix and so on, and doesn't properly quote command.\n\nFix is very simple, see attach.\n\nPatch is made against REL_12_STABLE, but probably applicable to other\nversions as well.\n--", "msg_date": "Thu, 30 Apr 2020 15:06:08 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Postgres Windows build system doesn't work with python installed in\n Program Files" }, { "msg_contents": "On Thu, Apr 30, 2020 at 03:06:08PM +0300, Victor Wagner wrote:\n> Fix is very simple, see attach.\n> \n> Patch is made against REL_12_STABLE, but probably applicable to other\n> versions as well.\n\nIndeed, thanks.\n\n> \t\tmy $pythonprog = \"import sys;print(sys.prefix);\"\n> \t\t . \"print(str(sys.version_info[0])+str(sys.version_info[1]))\";\n> \t\tmy $prefixcmd =\n> -\t\t $solution->{options}->{python} . \"\\\\python -c \\\"$pythonprog\\\"\";\n> +\t\t'\"' . $solution->{options}->{python} . \"\\\\python\\\" -c \\\"$pythonprog\\\"\";\n> \t\tmy $pyout = `$prefixcmd`;\n> \t\tdie \"Could not query for python version!\\n\" if $?;\n> \t\tmy ($pyprefix, $pyver) = split(/\\r?\\n/, $pyout);\n\nThis reminds me of ad7595b. Wouldn't it be better to use qq() here?\n--\nMichael", "msg_date": "Fri, 1 May 2020 17:52:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "В Fri, 1 May 2020 17:52:15 +0900\nMichael Paquier <michael@paquier.xyz> пишет:\n\n> On Thu, Apr 30, 2020 at 03:06:08PM +0300, Victor Wagner wrote:\n> > Fix is very simple, see attach.\n> > \n> > Patch is made against REL_12_STABLE, but probably applicable to\n> > other versions as well. 
\n> \n> Indeed, thanks.\n> \n> > \t\tmy $pythonprog = \"import sys;print(sys.prefix);\"\n> > \t\t .\n> > \"print(str(sys.version_info[0])+str(sys.version_info[1]))\"; my\n> > $prefixcmd =\n> > -\t\t $solution->{options}->{python} . \"\\\\python -c\n> > \\\"$pythonprog\\\"\";\n> > +\t\t'\"' . $solution->{options}->{python} . \"\\\\python\\\"\n> > -c \\\"$pythonprog\\\"\"; my $pyout = `$prefixcmd`;\n> > \t\tdie \"Could not query for python version!\\n\" if $?;\n> > \t\tmy ($pyprefix, $pyver) = split(/\\r?\\n/, $pyout); \n> \n> This reminds me of ad7595b. Wouldn't it be better to use qq() here?\n\nMaybe. But probably original author of this code was afraid of using\ntoo long chain of ->{} in the string substitution. \n\nSo, I left this style n place.\n\nNonetheless, using qq wouldn't save us from doubling backslashes.\n\n--\n\n\n", "msg_date": "Fri, 1 May 2020 12:48:17 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres Windows build system doesn't work with python\n installed in Program Files" }, { "msg_contents": "Em qui., 30 de abr. 
de 2020 às 09:06, Victor Wagner <vitus@wagner.pp.ru>\nescreveu:\n\n>\n> Collegues,\n>\n> Accidently I've come over minor bug in the Mkvcbuild.pm.\n> It happens, that it doesn't tolerate spaces in the $config->{python}\n> path, because it want to call python in order to find out version,\n> prefix and so on, and doesn't properly quote command.\n>\n> Fix is very simple, see attach.\n>\n> Patch is made against REL_12_STABLE, but probably applicable to other\n> versions as well.\n>\n\nI don't know if it applies to the same case, but from the moment I\ninstalled python on the development machine, the Postgres build stopped\nworking correctly.\nAlthough perl, flex and bison are available in the path, the build does not\ngenerate files that depend on flex and bison.\n\nRunning bison on src/backend/bootstrap/bootparse.y\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running flex on src/backend/bootstrap/bootscanner.l\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running bison on src/backend/parser/gram.y\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\netc\n\nWarning from build.pl\nUse of uninitialized value $ARGV[0] in uc at build.pl line 44.\nUse of uninitialized value $ARGV[0] in uc at build.pl line 48.\n\nregards,\nRanier Vilela", "msg_date": "Sun, 3 May 2020 16:23:24 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Sun, May 03, 2020 at 04:23:24PM -0300, Ranier Vilela wrote:\n> I don't know if it applies to the same case, but from the moment I\n> installed python on the development machine, the Postgres build stopped\n> working correctly.\n> Although perl, flex and bison are available in the path, the build does not\n> generate files that depend on flex and bison.\n\nAre you following the instructions of the documentation? Here is a\nlink to them:\nhttps://www.postgresql.org/docs/devel/install-windows-full.html\n\nMy guess is that you would be just missing a PATH configuration or\nsuch because python enforced a new setting?\n\n> Warning from build.pl\n> Use of uninitialized value $ARGV[0] in uc at build.pl line 44.\n> Use of uninitialized value $ARGV[0] in uc at build.pl line 48.\n\nHmm. We have buildfarm machines using the MSVC scripts and Python,\nsee for example woodloose. 
And note that @ARGV would be normally\ndefined, so your warning looks fishy to me.\n--\nMichael", "msg_date": "Mon, 4 May 2020 14:58:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Mon, May 4, 2020 at 7:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> > Warning from build.pl\n> > Use of uninitialized value $ARGV[0] in uc at build.pl line 44.\n> > Use of uninitialized value $ARGV[0] in uc at build.pl line 48.\n>\n> Hmm. We have buildfarm machines using the MSVC scripts and Python,\n> see for example woodloose. And note that @ARGV would be normally\n> defined, so your warning looks fishy to me.\n\n\nI think these are two different issues, python PATH and build.pl warnings.\nFor the later, you can check woodloose logs and see the warning after\ncommit 8f00d84afc.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n>\n>\n", "msg_date": "Mon, 4 May 2020 09:45:54 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Em seg., 4 de mai. 
de 2020 às 02:58, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Sun, May 03, 2020 at 04:23:24PM -0300, Ranier Vilela wrote:\n> > I don't know if it applies to the same case, but from the moment I\n> > installed python on the development machine, the Postgres build stopped\n> > working correctly.\n> > Although perl, flex and bison are available in the path, the build does\n> not\n> > generate files that depend on flex and bison.\n>\n> Are you following the instructions of the documentation? Here is a\n> link to them:\n> https://www.postgresql.org/docs/devel/install-windows-full.html\n\nMy environment was ok and worked 100%, compiling with msvc 2019 (64 bits).\n\n\n>\n>\n> My guess is that you would be just missing a PATH configuration or\n> such because python enforced a new setting?\n>\nPerl and flex and bison, are in the path, no doubt.\n\n\n>\n> > Warning from build.pl\n> > Use of uninitialized value $ARGV[0] in uc at build.pl line 44.\n> > Use of uninitialized value $ARGV[0] in uc at build.pl line 48.\n>\n> Hmm. We have buildfarm machines using the MSVC scripts and Python,\n> see for example woodloose. And note that @ARGV would be normally\n> defined, so your warning looks fishy to me.\n>\nI'll redo from the beginning.\n1. Make empty directory postgres\n2. Clone postgres\n3. Call msvc 2019 (64 bits) env batch\n4. Setup path to perl, bison and flex\n set path=%path%;c:\\perl;\\bin;c:\\bin\n\n5. C:\\dll>perl -V\nSummary of my perl5 (revision 5 version 30 subversion 1) configuration:\n\n Platform:\n osname=MSWin32\n osvers=10.0.18363.476\n archname=MSWin32-x64-multi-thread\n uname='Win32 strawberry-perl 5.30.1.1 #1 Fri Nov 22 02:24:29 2019 x64'\n\n6. C:\\dll>bison -V\nbison (GNU Bison) 2.7\nWritten by Robert Corbett and Richard Stallman.\n\n7. C:\\dll>flex -V\nflex 2.6.4\n\n8. cd\\dll\\postgres\\src\\tools\\msvc\n9. 
build\n\nresults:\n...\nPrepareForBuild:\n Creating directory \".\\Release\\postgres\\\".\n Creating directory \".\\Release\\postgres\\postgres.tlog\\\".\nInitializeBuildStatus:\n Creating \".\\Release\\postgres\\postgres.tlog\\unsuccessfulbuild\" because\n\"AlwaysCreate\" was specified.\nCustomBuild:\n Running bison on src/backend/bootstrap/bootparse.y\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running flex on src/backend/bootstrap/bootscanner.l\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running bison on src/backend/parser/gram.y\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running flex on src/backend/parser/scan.l\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running bison on src/backend/replication/repl_gram.y\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running flex on src/backend/replication/repl_scanner.l\n Running bison on src/backend/replication/syncrep_gram.y\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running flex on src/backend/replication/syncrep_scanner.l\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running bison on src/backend/utils/adt/jsonpath_gram.y\n Running flex on src/backend/utils/adt/jsonpath_scan.l\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n 'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\n Running flex on src/backend/utils/misc/guc-file.l\n 
'perl' nao é reconhecido como um comando interno\n ou externo, um programa operável ou um arquivo em lotes.\nC:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\MSBuild\\Microsoft\\VC\\v160\\Microsoft.CppCommon.targets(231,5):\nerror MSB6006: \"cmd.exe\" exited with code 9009.\n [C:\\dll\\postgres\\postgres.vcxproj]\nDone Building Project \"C:\\dll\\postgres\\postgres.vcxproj\" (default targets)\n-- FAILED.\n\nDone Building Project \"C:\\dll\\postgres\\pgsql.sln\" (default targets) --\nFAILED.\n\n...\n\nBuild FAILED.\n\n\"C:\\dll\\postgres\\pgsql.sln\" (default target) (1) ->\n\"C:\\dll\\postgres\\cyrillic_and_mic.vcxproj\" (default target) (2) ->\n\"C:\\dll\\postgres\\postgres.vcxproj\" (default target) (3) ->\n(CustomBuild target) ->\n C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\MSBuild\\Microsoft\\VC\\v160\\Microsoft.CppCommon.targets(231,5):\nerror MSB6006: \"cmd.exe\" exited with code 900\n9. [C:\\dll\\postgres\\postgres.vcxproj]\n\n\n\"C:\\dll\\postgres\\pgsql.sln\" (default target) (1) ->\n\"C:\\dll\\postgres\\initdb.vcxproj\" (default target) (32) ->\n\"C:\\dll\\postgres\\libpgfeutils.vcxproj\" (default target) (33) ->\n C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\MSBuild\\Microsoft\\VC\\v160\\Microsoft.CppCommon.targets(231,5):\nerror MSB6006: \"cmd.exe\" exited with code 900\n9. [C:\\dll\\postgres\\libpgfeutils.vcxproj]\n\n\n\"C:\\dll\\postgres\\pgsql.sln\" (default target) (1) ->\n\"C:\\dll\\postgres\\ecpg.vcxproj\" (default target) (124) ->\n C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\MSBuild\\Microsoft\\VC\\v160\\Microsoft.CppCommon.targets(231,5):\nerror MSB6006: \"cmd.exe\" exited with code 900\n9. 
[C:\\dll\\postgres\\ecpg.vcxproj]\n\n\n"C:\\dll\\postgres\\pgsql.sln" (default target) (1) ->\n"C:\\dll\\postgres\\isolationtester.vcxproj" (default target) (129) ->\n C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\MSBuild\\Microsoft\\VC\\v160\\Microsoft.CppCommon.targets(231,5):\nerror MSB6006: \"cmd.exe\" exited with code 900\n9. [C:\\dll\\postgres\\isolationtester.vcxproj]\n\n    0 Warning(s)\n    4 Error(s)\n\nNo warnings, this time.\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 4 May 2020 08:20:55 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Mon, May 04, 2020 at 09:45:54AM +0200, Juan José Santamaría Flecha wrote:\n> I think these are two different issues, python PATH and build.pl warnings.\n> For the later, you can check woodloose logs and see the warning after\n> commit 8f00d84afc.\n\nOh, indeed. I somewhat managed to miss these in the logs of the\nbuildfarm. What if we refactored the code of build.pl so as we'd\ncheck first if $ARGV[0] is defined or not? If not defined, then we\nneed to have a release-quality build for all the components. How does\nthat sound? 
Something not documented is that using \"release\" as first\nargument enforces also a release-quality build for all the components,\nso we had better not break that part.\n--\nMichael", "msg_date": "Mon, 4 May 2020 21:18:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Mon, May 4, 2020 at 2:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, May 04, 2020 at 09:45:54AM +0200, Juan José Santamaría Flecha\n> wrote:\n> > I think these are two different issues, python PATH and build.pl\n> warnings.\n> > For the later, you can check woodloose logs and see the warning after\n> > commit 8f00d84afc.\n>\n> Oh, indeed. I somewhat managed to miss these in the logs of the\n> buildfarm. What if we refactored the code of build.pl so as we'd\n> check first if $ARGV[0] is defined or not? If not defined, then we\n> need to have a release-quality build for all the components. How does\n> that sound? Something not documented is that using \"release\" as first\n> argument enforces also a release-quality build for all the components,\n> so we had better not break that part.\n>\n\n+1, seems like the way to go to me.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Mon, 4 May 2020 15:42:20 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Fri, May 01, 2020 at 12:48:17PM +0300, Victor Wagner wrote:\n> Maybe. But probably original author of this code was afraid of using\n> too long chain of ->{} in the string substitution. \n> \n> So, I left this style n place.\n> \n> Nonetheless, using qq wouldn't save us from doubling backslashes.\n\nLooking at this part in more details, I find the attached much more\nreadable. I have been able to test it on my own Windows environment\nand the problem gets fixed (I have reproduced the original problem as\nwell).\n--\nMichael", "msg_date": "Tue, 5 May 2020 15:45:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "В Tue, 5 May 2020 15:45:48 +0900\nMichael Paquier <michael@paquier.xyz> пишет:\n\n> On Fri, May 01, 2020 at 12:48:17PM +0300, Victor Wagner wrote:\n> > Maybe. But probably original author of this code was afraid of using\n> > too long chain of ->{} in the string substitution. \n> > \n> > So, I left this style n place.\n> > \n> > Nonetheless, using qq wouldn't save us from doubling backslashes. \n> \n> Looking at this part in more details, I find the attached much more\n> readable. 
I have been able to test it on my own Windows environment\n\nI agree, it is better.\n\n\n> and the problem gets fixed (I have reproduced the original problem as\n> well).\n--\n\n\n", "msg_date": "Tue, 5 May 2020 10:16:23 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres Windows build system doesn't work with python\n installed in Program Files" }, { "msg_contents": "On Mon, May 04, 2020 at 03:42:20PM +0200, Juan José Santamaría Flecha wrote:\n> +1, seems like the way to go to me.\n\nAttached is a patch for that and I have gone with a simple solution,\nwith some bonus comments about the way things happen. Here are the\npatterns I tested for build.pl and the commands it generates, making\nsure that we have the same commands with HEAD and the patch:\n1) perl build.pl\nmsbuild pgsql.sln /verbosity:normal /p:Configuration=Release\n2) perl build.pl debug\nmsbuild pgsql.sln /verbosity:normal /p:Configuration=Debug\n3) perl build.pl release\nmsbuild pgsql.sln /verbosity:normal /p:Configuration=Release\n4) perl build.pl foo\nmsbuild foo.vcxproj /verbosity:normal /p:Configuration=Release\n5) perl build.pl debug foo\nmsbuild foo.vcxproj /verbosity:normal /p:Configuration=Debug\n6) perl build.pl release foo\nmsbuild foo.vcxproj /verbosity:normal /p:Configuration=Release\n\nThe two warnings show up only in the first case, of course.\n--\nMichael", "msg_date": "Tue, 5 May 2020 23:06:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Tue, May 5, 2020 at 4:06 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, May 04, 2020 at 03:42:20PM +0200, Juan José Santamaría Flecha\n> wrote:\n> > +1, seems like the way to go to me.\n>\n> Attached is a patch for that and I have gone with a simple solution,\n> with some bonus comments about the way things 
happen.\n>\n\nThis solves the issue.\n\nPlease forgive me if I am being too nitpicky, but I find the comments a\nlittle too verbose, a usage format might be more visual and easier to\nexplain:\n\nUsage: build [[CONFIGURATION] COMPONENT]\n\nThe options are case-insensitive.\nCONFIGURATION sets the configuration to build, \"debug\" or \"release\" (by\ndefault).\nCOMPONENT defines a component to build. An empty option means all\ncomponents.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 6 May 2020 00:17:03 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:\n> I agree, it is better.\n\nThanks, applied and back-patched down to 9.5. 
Now for the second\nproblem of this thread..\n--\nMichael", "msg_date": "Wed, 6 May 2020 21:53:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Em qua., 6 de mai. de 2020 às 09:53, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:\n> > I agree, it is better.\n>\n> Thanks, applied and back-patched down to 9.5. Now for the second\n> problem of this thread..\n>\nSorry, no clue yet.\nI hacked the perl sources, to hardcoded perl, bison and flex with path.It\nworks like this.\nFor some reason, which I haven't yet discovered, msbuild is ignoring the\npath, where perl and bison and flex are.\nAlthough it is being set, within the 64-bit compilation environment of msvc\n2019.\nI'm still investigating.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 6 May 2020 10:21:41 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Em qua., 6 de mai. de 2020 às 10:21, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qua., 6 de mai. 
de 2020 às 09:53, Michael Paquier <michael@paquier.xyz>\n> escreveu:\n>\n>> On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:\n>> > I agree, it is better.\n>>\n>> Thanks, applied and back-patched down to 9.5. Now for the second\n>> problem of this thread..\n>>\n> Sorry, no clue yet.\n> I hacked the perl sources, to hardcoded perl, bison and flex with path.It\n> works like this.\n> For some reason, which I haven't yet discovered, msbuild is ignoring the\n> path, where perl and bison and flex are.\n> Although it is being set, within the 64-bit compilation environment of\n> msvc 2019.\n> I'm still investigating.\n>\nIn fact, perl is found, otherwise build.pl would not be working.\nBut within the perl environment, when the system call is made, in this\ncase, neither perl, bison, nor flex is found.\n\nregards,\nRanier Vilela\n", "msg_date": "Wed, 6 May 2020 10:25:41 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Em qua., 6 de mai. de 2020 às 10:25, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qua., 6 de mai. de 2020 às 10:21, Ranier Vilela <ranier.vf@gmail.com>\n> escreveu:\n>\n>> Em qua., 6 de mai. de 2020 às 09:53, Michael Paquier <michael@paquier.xyz>\n>> escreveu:\n>>\n>>> On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:\n>>> > I agree, it is better.\n>>>\n>>> Thanks, applied and back-patched down to 9.5. Now for the second\n>>> problem of this thread..\n>>>\n>> Sorry, no clue yet.\n>> I hacked the perl sources, to hardcoded perl, bison and flex with path.It\n>> works like this.\n>> For some reason, which I haven't yet discovered, msbuild is ignoring the\n>> path, where perl and bison and flex are.\n>> Although it is being set, within the 64-bit compilation environment of\n>> msvc 2019.\n>> I'm still investigating.\n>>\n> In fact perl, it is found, otherwise, neither build.pl would be working.\n> But within the perl environment, when the system call is made, in this\n> case, neither perl, bison, nor flex is found.\n>\n\nI'm using it like this, for now.\n\nFile pgbison.pl:\nsystem(\"c:\\\\bin\\\\bison $headerflag $input -o $output\");\nFile pgflex.pl:\nsystem(\"c:\\\\bin\\\\flex $flexflags -o$output $input\");\n system(\"c:\\\\perl\\\\bin\\\\perl src\\\\tools\\\\fix-old-flex-code.pl\n$output\");\n\nFile Solution.pm:\n system(\n system('perl generate-lwlocknames.pl lwlocknames.txt');\n system(\n system(\n system(\n system(\n system(\n system(\n system(\"perl create_help.pl ../../../doc/src/sgml/ref\nsql_help\");\n system(\n system(\n system(\n system(\n system(\n system('perl parse.pl < ../../../backend/parser/gram.y >\npreproc.y');\n 
system(\n\nC:\\dll\\postgres\\src\\tools\\msvc>\\bin\\grep bison *pm\nFile MSBuildProject.pm:\n <Message\nCondition=\"'\\$(Configuration)|\\$(Platform)'=='Debug|$self->{platform}'\">Running\nbison on $grammarFile</Message>\n <Command\nCondition=\"'\\$(Configuration)|\\$(Platform)'=='Debug|$self->{platform}'\">c:\\\\perl\\\\bin\\\\perl\n\"src\\\\tools\\\\msvc\\\\pgbison.pl\" \"$grammarFile\"</Command>\n <Message\nCondition=\"'\\$(Configuration)|\\$(Platform)'=='Release|$self->{platform}'\">Running\nbison on $grammarFile</Message>\n <Command\nCondition=\"'\\$(Configuration)|\\$(Platform)'=='Release|$self->{platform}'\">c:\\\\perl\\\\bin\\\\perl\n\"src\\\\tools\\\\msvc\\\\pgbison.pl\" \"$grammarFile\"</Command>\n\nC:\\dll\\postgres\\src\\tools\\msvc>\\bin\\grep flex *pm\nFile MSBuildProject.pm:\n <Message\nCondition=\"'\\$(Configuration)|\\$(Platform)'=='Debug|$self->{platform}'\">Running\nflex on $grammarFile</Message>\n <Command\nCondition=\"'\\$(Configuration)|\\$(Platform)'=='Debug|$self->{platform}'\">c:\\\\perl\\\\bin\\\\perl\n\"src\\\\tools\\\\msvc\\\\pgflex.pl\" \"$grammarFile\"</Command>\n <Message\nCondition=\"'\\$(Configuration)|\\$(Platform)'=='Release|$self->{platform}'\">Running\nflex on $grammarFile</Message>\n <Command\nCondition=\"'\\$(Configuration)|\\$(Platform)'=='Release|$self->{platform}'\">c:\\\\perl\\\\bin\\\\perl\n\"src\\\\tools\\\\msvc\\\\pgflex.pl\" \"$grammarFile\"</Command>\n\nregards,\nRanier Vilela\n", "msg_date": "Wed, 6 May 2020 10:33:39 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "В Wed, 6 May 2020 10:21:41 -0300\nRanier Vilela <ranier.vf@gmail.com> пишет:\n\n> Em qua., 6 de mai. de 2020 às 09:53, Michael Paquier\n> <michael@paquier.xyz> escreveu:\n> \n> > On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote: \n> > > I agree, it is better. \n> >\n> > Thanks, applied and back-patched down to 9.5. 
Now for the second\n> > problem of this thread..\n> > \n> Sorry, no clue yet.\n> I hacked the perl sources, to hardcoded perl, bison and flex with\n> path.It works like this.\n\nPerl has \"magic\" variable $^X which expands to full path to perl\nexecutable, I wonder why build.pl doesn't use it to invoke secondary \nperl scripts.\n\n--\n\n\n", "msg_date": "Wed, 6 May 2020 20:14:22 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres Windows build system doesn't work with python\n installed in Program Files" }, { "msg_contents": "\nOn 5/6/20 1:14 PM, Victor Wagner wrote:\n> В Wed, 6 May 2020 10:21:41 -0300\n> Ranier Vilela <ranier.vf@gmail.com> пишет:\n>\n>> Em qua., 6 de mai. de 2020 às 09:53, Michael Paquier\n>> <michael@paquier.xyz> escreveu:\n>>\n>>> On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote: \n>>>> I agree, it is better. \n>>> Thanks, applied and back-patched down to 9.5. Now for the second\n>>> problem of this thread..\n>>> \n>> Sorry, no clue yet.\n>> I hacked the perl sources, to hardcoded perl, bison and flex with\n>> path.It works like this.\n> Perl has \"magic\" variable $^X which expands to full path to perl\n> executable, I wonder why build.pl doesn't use it to invoke secondary \n> perl scripts.\n>\n\nWe assume perl, flex and bison are in the PATH. That doesn't seem\nunreasonable, it's worked well for quite a long time.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 6 May 2020 14:11:34 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Em qua., 6 de mai. 
de 2020 às 14:14, Victor Wagner <vitus@wagner.pp.ru>\nescreveu:\n\n> В Wed, 6 May 2020 10:21:41 -0300\n> Ranier Vilela <ranier.vf@gmail.com> пишет:\n>\n> > Em qua., 6 de mai. de 2020 às 09:53, Michael Paquier\n> > <michael@paquier.xyz> escreveu:\n> >\n> > > On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:\n> > > > I agree, it is better.\n> > >\n> > > Thanks, applied and back-patched down to 9.5. Now for the second\n> > > problem of this thread..\n> > >\n> > Sorry, no clue yet.\n> > I hacked the perl sources, to hardcoded perl, bison and flex with\n> > path.It works like this.\n>\n> Perl has \"magic\" variable $^X which expands to full path to perl\n> executable, I wonder why build.pl doesn't use it to invoke secondary\n> perl scripts.\n>\nI still don't think it's necessary, it was working well.\nMy main suspicions are:\n1. Path with spaces;\n2. Incompatibility with < symbol, some suggest use &quot;\n\n<Exec Command=\"&quot;\n\n3. Msbuild.exe has been updated (version 16.5.0)\n4. Perl scripts increased the level of security.\n5. My user does not have administrator rights.\n\nregards,\nRanier Vilela\n", "msg_date": "Wed, 6 May 2020 15:19:00 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Em qua., 6 de mai. de 2020 às 15:19, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qua., 6 de mai. de 2020 às 14:14, Victor Wagner <vitus@wagner.pp.ru>\n> escreveu:\n>\n>> В Wed, 6 May 2020 10:21:41 -0300\n>> Ranier Vilela <ranier.vf@gmail.com> пишет:\n>>\n>> > Em qua., 6 de mai. de 2020 às 09:53, Michael Paquier\n>> > <michael@paquier.xyz> escreveu:\n>> >\n>> > > On Tue, May 05, 2020 at 10:16:23AM +0300, Victor Wagner wrote:\n>> > > > I agree, it is better.\n>> > >\n>> > > Thanks, applied and back-patched down to 9.5. Now for the second\n>> > > problem of this thread..\n>> > >\n>> > Sorry, no clue yet.\n>> > I hacked the perl sources, to hardcoded perl, bison and flex with\n>> > path.It works like this.\n>>\n>> Perl has \"magic\" variable $^X which expands to full path to perl\n>> executable, I wonder why build.pl doesn't use it to invoke secondary\n>> perl scripts.\n>>\n> I still don't think it's necessary, it was working well.\n> My main suspicions are:\n> 1. Path with spaces;\n> 2. Incompatibility with < symbol, some suggest use &quot;\n>\n> <Exec Command=\"&quot;\n>\n> 3. Msbuild.exe has been updated (version 16.5.0)\n> 4. Perl scripts increased the level of security.\n> 5. My user does not have administrator rights.\n>\nCause found.\n\nHow it worked before\n1. Call link from menu Visual Studio 2019: Auxiliary\\Build\\vcvars64.bat\n That creates a console with settings to compile on 64 bits.\n2. Adjusting the path manually\n set path=%path%;c:\\perl\\bin;c:\\bin\n3. 
Call build.bat\n\nHacking pgbison.pl, to print PATH, shows that the path inside pgbison.pl,\nreturned to being the original, without the addition of c:\\perl\\bin;c:\\bin.\nmy $out = $ENV{PATH};\nprint \"Path after system call=$out\\n\";\nPath after system\ncall=...C:\\Users\\ranier\\AppData\\Local\\Microsoft\\WindowsApps;;\nThe final part lacks: c:\\perl\\bin;c:\\bin\n\nNow I need to find out why the path is being reset, within the perl scripts.\n\nCause: PATH being reset.\n\nregards,\nRanier Vilela\n", "msg_date": "Wed, 6 May 2020 15:58:15 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Wed, May 06, 2020 at 02:11:34PM -0400, Andrew Dunstan wrote:\n> We assume perl, flex and bison are in the PATH. That doesn't seem\n> unreasonable, it's worked well for quite a long time.\n\nI recall that it is an assumption we rely on since MSVC scripts are\naround, and that's rather easy to configure, so it seems to me that\nchanging things now would just introduce annoying changes for anybody\n(developers, maintainers) using this stuff.\n--\nMichael", "msg_date": "Thu, 7 May 2020 09:08:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Em qua., 6 de mai. de 2020 às 21:08, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Wed, May 06, 2020 at 02:11:34PM -0400, Andrew Dunstan wrote:\n> > We assume perl, flex and bison are in the PATH. 
That doesn't seem\n> > unreasonable, it's worked well for quite a long time.\n>\n> I recall that it is an assumption we rely on since MSVC scripts are\n> around, and that's rather easy to configure, so it seems to me that\n> changing things now would just introduce annoying changes for anybody\n> (developers, maintainers) using this stuff.\n>\nAh yes, better to leave it as is. No problem for me, I already got around\nthe difficulty.\n\nregards,\nRanier Vilela\n", "msg_date": "Wed, 6 May 2020 21:14:16 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Wed, May 06, 2020 at 03:58:15PM -0300, Ranier Vilela wrote:\n> Hacking pgbison.pl, to print PATH, shows that the path inside pgbison.pl,\n> returned to being the original, without the addition of c:\\perl\\bin;c:\\bin.\n> my $out = $ENV{PATH};\n> print \"Path after system call=$out\\n\";\n> Path after system\n> call=...C:\\Users\\ranier\\AppData\\Local\\Microsoft\\WindowsApps;;\n> The final part lacks: c:\\perl\\bin;c:\\bin\n> \n> Now I need to find out why the path is being reset, within the perl scripts.\n\nFWIW, we have a buildfarm animal called drongo that runs with VS 2019,\nthat uses Python, and that is now happy. 
One of my own machines uses\nVS 2019 as well and I have yet to see what you are describing here.\nPerhaps that's related to a difference in the version of perl you are\nusing and the version of that any others?\n--\nMichael", "msg_date": "Thu, 7 May 2020 09:14:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Em qua., 6 de mai. de 2020 às 21:14, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Wed, May 06, 2020 at 03:58:15PM -0300, Ranier Vilela wrote:\n> > Hacking pgbison.pl, to print PATH, shows that the path inside pgbison.pl\n> ,\n> > returned to being the original, without the addition of\n> c:\\perl\\bin;c:\\bin.\n> > my $out = $ENV{PATH};\n> > print \"Path after system call=$out\\n\";\n> > Path after system\n> > call=...C:\\Users\\ranier\\AppData\\Local\\Microsoft\\WindowsApps;;\n> > The final part lacks: c:\\perl\\bin;c:\\bin\n> >\n> > Now I need to find out why the path is being reset, within the perl\n> scripts.\n>\n> FWIW, we have a buildfarm animal called drongo that runs with VS 2019,\n> that uses Python, and that is now happy. One of my own machines uses\n> VS 2019 as well and I have yet to see what you are describing here.\n> Perhaps that's related to a difference in the version of perl you are\n> using and the version of that any others?\n>\nI really don't know what to say, I know very little about perl.\n\nThe perl is:\nWin32 strawberry-perl 5.30.1.1\n\nregards,\nRanier VIlela\n", "msg_date": "Wed, 6 May 2020 21:23:57 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "В Thu, 7 May 2020 09:14:33 +0900\nMichael Paquier <michael@paquier.xyz> пишет:\n\n> On Wed, May 06, 2020 at 03:58:15PM -0300, Ranier Vilela wrote:\n> > Hacking pgbison.pl, to print PATH, shows that the path inside\n> > pgbison.pl, returned to being the original, without the addition of\n> > c:\\perl\\bin;c:\\bin. my $out = $ENV{PATH};\n> > print \"Path after system call=$out\\n\";\n> > Path after system\n> > call=...C:\\Users\\ranier\\AppData\\Local\\Microsoft\\WindowsApps;;\n> > The final part lacks: c:\\perl\\bin;c:\\bin\n> > \n> > Now I need to find out why the path is being reset, within the perl\n> > scripts. 
\n> \n> FWIW, we have a buildfarm animal called drongo that runs with VS 2019,\n> that uses Python, and that is now happy. One of my own machines uses\n> VS 2019 as well and I have yet to see what you are describing here.\n> Perhaps that's related to a difference in the version of perl you are\n> using and the version of that any others?\n\n\nI doubt it. I have different machines with perl from 5.22 to 5.30 but\nnone of them exhibits such weird behavior.\n\nPerhaps the problem is that Ranier calls vcvars64.bat from the menu, and\nthen calls msbuild in such a way that it becomes an unrelated process.\n\nObviously the buildfarm animal doesn't use the menu and starts the build\nprocess from the same CMD.EXE process that it called vcvarsall.bat into.\n\nIt is the same in all OSes - Windows, *nix and even MS-DOS - there is no\nway to change the environment of the parent process. You can change the\nenvironment of the current process (and if this process is a command\ninterpreter you can do so by sourcing a script into it. In Windows this is\nmisleadingly called 'CALL', but it executes commands from a command file in\nthe current shell, not in a subshell) and you can pass the environment to\nchild processes. But you can never affect the environment of the parent or\na sibling process.\n\nThe only exception is - if you know that some process will at startup\nread environment vars from some file or registry, you can modify this\nsource from an unrelated process.\n\nSo, if you want perl in the path of msbuild started from Visual Studio,\nyou should either first set the path in CMD.EXE and then start\nStudio from that very command window, or set the path via the control panel\ndialog (which modifies the registry). The latter is what I usually do on\nmachines where I compile postgres.\n\n\n\n\n--\n\n\n\n> --\n> Michael\n\n\n\n", "msg_date": "Thu, 7 May 2020 08:04:53 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres Windows build system doesn't work with python\n installed in Program Files" }, { "msg_contents": "В Wed, 6 May 2020 21:23:57 -0300\nRanier Vilela <ranier.vf@gmail.com> пишет:\n\n> \n> The perl is:\n> Win32 strawberry-perl 5.30.1.1\n> \n\nThis perl would have problems when compiling PL/Perl (see my letter\nfrom about a week ago), but it has no problems running various build\nscripts for Postgres. I'm using it with MSVisualStudio 2019 and the only\nunexpected thing I've encountered is that it comes with its own patch.exe,\nwhich doesn't like unix-style end-of-lines in patches (but is OK with them\nin patched files).\n\n\n> regards,\n> Ranier VIlela\n-- \n\n\n\n", "msg_date": "Thu, 7 May 2020 08:10:18 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres Windows build system doesn't work with python\n installed in Program Files" }, { "msg_contents": "On Wed, May 06, 2020 at 12:17:03AM +0200, Juan José Santamaría Flecha wrote:\n> Please forgive me if I am being too nitpicky, but I find the comments a\n> little too verbose, a usage format might be more visual and easier to\n> explain:\n> \n> Usage: build [[CONFIGURATION] COMPONENT]\n> \n> The options are case-insensitive.\n> CONFIGURATION sets the configuration to build, \"debug\" or \"release\" (by\n> default).\n> COMPONENT defines a component to build. An empty option means all\n> components.\n\nYour comment makes sense to me. What about the attached then? On top\nof documenting the script usage in the code, let's trigger it if it\ngets called with more than 3 arguments. What do you think?\n\nFWIW, I forgot to mention that I don't think those warnings are worth\na backpatch. 
No objections with improving things on HEAD of course.\n--\nMichael", "msg_date": "Thu, 7 May 2020 14:45:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Em qui., 7 de mai. de 2020 às 02:04, Victor Wagner <vitus@wagner.pp.ru>\nescreveu:\n\n> В Thu, 7 May 2020 09:14:33 +0900\n> Michael Paquier <michael@paquier.xyz> пишет:\n>\n> > On Wed, May 06, 2020 at 03:58:15PM -0300, Ranier Vilela wrote:\n> > > Hacking pgbison.pl, to print PATH, shows that the path inside\n> > > pgbison.pl, returned to being the original, without the addition of\n> > > c:\\perl\\bin;c:\\bin. my $out = $ENV{PATH};\n> > > print \"Path after system call=$out\\n\";\n> > > Path after system\n> > > call=...C:\\Users\\ranier\\AppData\\Local\\Microsoft\\WindowsApps;;\n> > > The final part lacks: c:\\perl\\bin;c:\\bin\n> > >\n> > > Now I need to find out why the path is being reset, within the perl\n> > > scripts.\n> >\n> > FWIW, we have a buildfarm animal called drongo that runs with VS 2019,\n> > that uses Python, and that is now happy. One of my own machines uses\n> > VS 2019 as well and I have yet to see what you are describing here.\n> > Perhaps that's related to a difference in the version of perl you are\n> > using and the version of that any others?\n>\n>\n> I doubt so. I have different machines with perl from 5.22 to 5.30 but\n> none of tham exibits such weird behavoir.\n>\nThe perl is the same,when it was working ok.\n\n\n>\n> Perhaps problem is that Ranier calls vcvars64.bat from the menu, and\n> then calls msbuild such way that is becames unrelated process.\n>\nIt also worked previously, using this same process, link menu and manual\npath configuration.\nWhat has changed:\n1 In the environment, the python installation, which added entries to the\npath.\n2. 
Perl scripts: Use perl's $/ more idiomatically\ncommit beb2516e961490723fb1a2f193406afb3d71ea9c\n3. Msbuild and others, have been updated.They are not the same ones that\nwere working before.\n\n\n>\n> Obvoisly buildfarm animal doesn't use menu and then starts build\n> process from same CMD.EXE process, that it called vcvarsall.but into.\n>\n> It is same in all OSes - Windows, *nix and even MS-DOS - there is no\n> way to change environment of parent process. You can change environment\n> of current process (and if this process is command interpreter you can\n> do so by sourcing script into it. In windows this misleadingly called\n> 'CALL', but it executes commands from command file in the current\n> shell, not in subshell) you can pass enivronment to the child\n> processes. But you can never affect environment of the parent or\n> sibling process.\n>\nMaybe that's what is happening, calling system, perl or msbuild, would be\ncreating a new environment, transferring the path that is configured in\nWindows, and not the path that is in the environment that was manually\nconfigured.\n\n\n>\n> The only exception is - if you know that some process would at startup\n> read environment vars from some file or registry, you can modify this\n> source in unrelated process.\n>\n> So, if you want perl in path of msbuild, started from Visual Studio,\n> you should either first set path in CMD.EXE, then type command to start\n> Studio from this very command window, or set path via control panel\n> dialog (which modified registry). Later is what I usially do on machines\n> wher I compile postgres.\n>\nbuidfarm aninal, uses a more secure and reliable process, the path is\nalready configured and does not change.\nPerhaps this is the way for me and for others.\n\nIt would then remain to document, to warn that to work correctly, the path\nmust be configured before entering the compilation environment.\n\nregards,\nRanier Vilela\n\nEm qui., 7 de mai. 
de 2020 às 02:04, Victor Wagner <vitus@wagner.pp.ru> escreveu:В Thu, 7 May 2020 09:14:33 +0900\nMichael Paquier <michael@paquier.xyz> пишет:\n\n> On Wed, May 06, 2020 at 03:58:15PM -0300, Ranier Vilela wrote:\n> > Hacking pgbison.pl, to print PATH, shows that the path inside\n> > pgbison.pl, returned to being the original, without the addition of\n> > c:\\perl\\bin;c:\\bin. my $out = $ENV{PATH};\n> > print \"Path after system call=$out\\n\";\n> > Path after system\n> > call=...C:\\Users\\ranier\\AppData\\Local\\Microsoft\\WindowsApps;;\n> > The final part lacks: c:\\perl\\bin;c:\\bin\n> > \n> > Now I need to find out why the path is being reset, within the perl\n> > scripts.  \n> \n> FWIW, we have a buildfarm animal called drongo that runs with VS 2019,\n> that uses Python, and that is now happy.  One of my own machines uses\n> VS 2019 as well and I have yet to see what you are describing here.\n> Perhaps that's related to a difference in the version of perl you are\n> using and the version of that any others?\n\n\nI doubt so. I have different machines with perl from 5.22 to 5.30 but\nnone of tham exibits such weird behavoir. The perl is the same,when it was working ok. \n\nPerhaps problem is that Ranier calls vcvars64.bat from the menu, and\nthen calls msbuild such way that is becames unrelated process.It also worked previously, using this same process, link menu and manual path configuration.What has changed:1 In the environment, the python installation, which added entries to the path.2. Perl scripts: \nUse perl's $/ more idiomatically \ncommit beb2516e961490723fb1a2f193406afb3d71ea9c\n3. Msbuild and others, have been updated.They are not the same ones that were working before. \n\nObvoisly buildfarm animal doesn't use menu and then starts build\nprocess from same CMD.EXE process, that it called vcvarsall.but into.\n\nIt is same in all OSes - Windows, *nix and even MS-DOS - there is no\nway to change environment of parent process. 
You can change environment\nof current process (and if this process is command interpreter you can\ndo so by sourcing script into it. In windows this misleadingly called\n'CALL', but it executes commands from command file in the current\nshell, not in subshell) you can pass enivronment to the child\nprocesses. But you can never affect environment of the parent or\nsibling process.Maybe that's what is happening, calling system, perl or msbuild, would be creating a new environment, transferring the path that is configured in Windows, and not the path that is in the environment that was manually configured.  \n\nThe only exception is - if you know that some process would at startup\nread environment vars from some file or registry, you can modify this\nsource in unrelated process.\n\nSo, if you want perl in path of msbuild, started from Visual Studio,\nyou should either first set path in CMD.EXE, then type command to start\nStudio from this very  command window, or set path via control panel\ndialog (which modified registry). Later is what I usially do on machines\nwher I compile postgres.\nbuidfarm aninal, uses a more secure and reliable process, the path is already configured and does not change. Perhaps this is the way for me and for others.It would then remain to document, to warn that to work correctly, the path must be configured before entering the compilation environment.regards,Ranier Vilela", "msg_date": "Thu, 7 May 2020 08:32:13 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "Hi Michael,\n\nI performed a quick test for the path \"msvc-build-init-v2.patch\" using \nbelow cases:\n\n1. perl build.pl\n2. perl build.pl debug psql\n3. perl build.pl RELEASE psql\n4. perl build.pl deBUG psql\n5. 
perl build.pl psql\nThe above cases (case-insensitive) are all working great without any \nwarning for latest master branch.\n\nWhen build with more than 3 parameters, yes, I got below expected \nmessage as well.\n\nc:\\Users\\david\\Downloads\\postgres\\src\\tools\\msvc>perl build.pl DEbug \npsql pg_baseback\nUsage: build.pl [ [ <configuration> ] <component> ]\nOptions are case-insensitive.\nconfiguration: Release | Debug. This sets the configuration\nto build. Default is Release.\ncomponent: name of component to build. An empty value means\nto build all components.\n\n\nHowever, if I ask for help in a typical Windows' way, i.e, \"perl \nbuild.pl -help\" or \"perl build.pl /?\", I got some messages like below,\n\nc:\\Users\\david\\Downloads\\postgres\\src\\tools\\msvc>perl build.pl -help\nDetected hardware platform: Win32\nFiles src/bin/pgbench/exprscan.l\nFiles src/bin/pgbench/exprparse.y\nFiles src/bin/psql/psqlscanslash.l\nFiles contrib/cube/cubescan.l\nFiles contrib/cube/cubeparse.y\nFiles contrib/seg/segscan.l\nFiles contrib/seg/segparse.y\nGenerating configuration headers...\nMicrosoft (R) Build Engine version 16.5.1+4616136f8 for .NET Framework\nCopyright (C) Microsoft Corporation. All rights reserved.\n\nMSBUILD : error MSB1001: Unknown switch.\nSwitch: -help.vcxproj\n\nFor switch syntax, type \"MSBuild -help\"\n\nc:\\Users\\david\\Downloads\\postgres\\src\\tools\\msvc>perl build.pl /?\nDetected hardware platform: Win32\nFiles src/bin/pgbench/exprscan.l\nFiles src/bin/pgbench/exprparse.y\nFiles src/bin/psql/psqlscanslash.l\nFiles contrib/cube/cubescan.l\nFiles contrib/cube/cubeparse.y\nFiles contrib/seg/segscan.l\nFiles contrib/seg/segparse.y\nGenerating configuration headers...\nMicrosoft (R) Build Engine version 16.5.1+4616136f8 for .NET Framework\nCopyright (C) Microsoft Corporation. 
All rights reserved.\n\nMSBUILD : error MSB1001: Unknown switch.\nSwitch: /?.vcxproj\n\nFor switch syntax, type \"MSBuild -help\"\n\nIt would be a bonus if the build.pl can support the \"help\" in Windows' way.\n\n\nThanks,\n\nDavid\n\nOn 2020-05-06 10:45 p.m., Michael Paquier wrote:\n> On Wed, May 06, 2020 at 12:17:03AM +0200, Juan José Santamaría Flecha wrote:\n>> Please forgive me if I am being too nitpicky, but I find the comments a\n>> little too verbose, a usage format might be more visual and easier to\n>> explain:\n>>\n>> Usage: build [[CONFIGURATION] COMPONENT]\n>>\n>> The options are case-insensitive.\n>> CONFIGURATION sets the configuration to build, \"debug\" or \"release\" (by\n>> default).\n>> COMPONENT defines a component to build. An empty option means all\n>> components.\n> Your comment makes sense to me. What about the attached then? On top\n> of documenting the script usage in the code, let's trigger it if it\n> gets called with more than 3 arguments. What do you think?\n>\n> FWIW, I forgot to mention that I don't think those warnings are worth\n> a backpatch. No objections with improving things on HEAD of course.\n> --\n> Michael\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. 
(Canada)\nwww.highgo.ca\n\n\n\n", "msg_date": "Mon, 1 Jun 2020 17:05:50 -0700", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Tue, Jun 2, 2020 at 2:06 AM David Zhang <david.zhang@highgo.ca> wrote:\n\n>\n> On 2020-05-06 10:45 p.m., Michael Paquier wrote:\n> > On Wed, May 06, 2020 at 12:17:03AM +0200, Juan José Santamaría Flecha\n> wrote:\n> >> Please forgive me if I am being too nitpicky, but I find the comments a\n> >> little too verbose, a usage format might be more visual and easier to\n> >> explain:\n> >>\n> >> Usage: build [[CONFIGURATION] COMPONENT]\n> >>\n> >> The options are case-insensitive.\n> >> CONFIGURATION sets the configuration to build, \"debug\" or \"release\" (by\n> >> default).\n> >> COMPONENT defines a component to build. An empty option means all\n> >> components.\n> > Your comment makes sense to me. What about the attached then? On top\n> > of documenting the script usage in the code, let's trigger it if it\n> > gets called with more than 3 arguments. What do you think?\n> >\n> > FWIW, I forgot to mention that I don't think those warnings are worth\n> > a backpatch. No objections with improving things on HEAD of course.\n>\n> It would be a bonus if the build.pl can support the \"help\" in Windows'\n> way.\n>\n\nGoing through the open items in the commitfest, I see that this patch has\nnot been pushed. 
It still applies and solves the warning so, I am marking\nit as RFC.\n\nAdding a help option is a new feature, that can have its own patch without\ndelaying this one any further.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Sat, 4 Jul 2020 21:17:52 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" }, { "msg_contents": "On Sat, Jul 04, 2020 at 09:17:52PM +0200, Juan José Santamaría Flecha wrote:\n> Going through the open items in the commitfest, I see that this patch has\n> not been pushed. It still applies and solves the warning so, I am marking\n> it as RFC.\n\nThanks, applied. I was actually waiting to see if you had more\ncomments.\n\n> Adding a help option is a new feature, that can have its own patch without\n> delaying this one any further.\n\nYep. And I am not sure if that's worth worrying either.\n--\nMichael", "msg_date": "Mon, 6 Jul 2020 09:20:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Windows build system doesn't work with python installed\n in Program Files" } ]
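[Editor's note] The thread above turns on one fact: environment changes never propagate upward from a child process to its parent (or sideways to siblings), which is why setting PATH in one CMD.EXE session cannot affect an already-running Visual Studio or msbuild. Below is a minimal POSIX sketch of that rule; the thread itself concerns Windows, where CreateProcess hands each child its own environment block the same way. The function and variable names here are invented for the demo and are not from any code discussed in the thread.

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/*
 * A child process receives a *copy* of the parent's environment; anything
 * the child changes is visible only to itself (and its own children).
 * Returns the parent's view of "name" after a child has tried to change it.
 */
static const char *
parent_env_after_child_change(const char *name)
{
	pid_t		pid = fork();

	if (pid < 0)
		return NULL;			/* fork failed */
	if (pid == 0)
	{
		/* child: this alters only the child's private copy */
		setenv(name, "changed-in-child", 1);
		_exit(0);
	}
	waitpid(pid, NULL, 0);		/* parent: wait, then look again */
	return getenv(name);
}
```

The parent's variable survives untouched no matter what the child does; the only durable channels mentioned in the thread are stores the parent re-reads itself, such as the Windows registry behind the control-panel PATH dialog.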
[ { "msg_contents": "Hi,\n\nAs I understand it, the point of having \"do {} while (0)\" in a\nmulti-statement macro is to turn it into a simple statement. As such,\nending with a semicolon in both the macro definition and the\ninvocation will turn it back into multiple statements, creating\nconfusion if someone were to invoke the macro in an \"if\" statement.\nEven if that never happens, it seems good to keep them all consistent,\nas in the attached patch.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 1 May 2020 09:08:02 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "do {} while (0) nitpick" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> As I understand it, the point of having \"do {} while (0)\" in a\n> multi-statement macro is to turn it into a simple statement.\n\nRight.\n\n> As such,\n> ending with a semicolon in both the macro definition and the\n> invocation will turn it back into multiple statements, creating\n> confusion if someone were to invoke the macro in an \"if\" statement.\n\nYeah. I'd call these actual bugs, and perhaps even back-patch worthy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Apr 2020 21:51:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "On Thu, Apr 30, 2020 at 09:51:10PM -0400, Tom Lane wrote:\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n> > As I understand it, the point of having \"do {} while (0)\" in a\n> > multi-statement macro is to turn it into a simple statement.\n> \n> Right.\n> \n> > As such,\n> > ending with a semicolon in both the macro definition and the\n> > invocation will turn it back into multiple statements, creating\n> > confusion if someone were to invoke the macro in an \"if\" statement.\n> \n> Yeah. 
I'd call these actual bugs, and perhaps even back-patch worthy.\n\nAgreed. Those semicolons could easily create bugs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 30 Apr 2020 21:52:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "On 4/30/20 9:52 PM, Bruce Momjian wrote:\n> On Thu, Apr 30, 2020 at 09:51:10PM -0400, Tom Lane wrote:\n>> John Naylor <john.naylor@2ndquadrant.com> writes:\n>>> As I understand it, the point of having \"do {} while (0)\" in a\n>>> multi-statement macro is to turn it into a simple statement.\n>>\n>> Right.\n>>\n>>> As such,\n>>> ending with a semicolon in both the macro definition and the\n>>> invocation will turn it back into multiple statements, creating\n>>> confusion if someone were to invoke the macro in an \"if\" statement.\n>>\n>> Yeah. I'd call these actual bugs, and perhaps even back-patch worthy.\n> \n> Agreed. Those semicolons could easily create bugs.\n\n+1. The patch looks good to me.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 1 May 2020 09:26:40 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 4/30/20 9:52 PM, Bruce Momjian wrote:\n>> On Thu, Apr 30, 2020 at 09:51:10PM -0400, Tom Lane wrote:\n>>> Yeah. I'd call these actual bugs, and perhaps even back-patch worthy.\n\n>> Agreed. Those semicolons could easily create bugs.\n\n> +1. The patch looks good to me.\n\nGrepping showed me that there were some not-do-while macros that\nalso had trailing semicolons. 
These seem just as broken, so I\nfixed 'em all.\n\nThere are remaining instances of this antipattern in the flex-generated\nscanners, which we can't do anything about; and in pl/plperl/ppport.h,\nwhich we shouldn't do anything about because that's upstream-generated\ncode. (I wonder though if there's a newer version available.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 May 2020 17:32:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "On Fri, May 1, 2020 at 3:52 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Apr 30, 2020 at 09:51:10PM -0400, Tom Lane wrote:\n> > John Naylor <john.naylor@2ndquadrant.com> writes:\n> > > As I understand it, the point of having \"do {} while (0)\" in a\n> > > multi-statement macro is to turn it into a simple statement.\n> >\n> > Right.\n> >\n> > > As such,\n> > > ending with a semicolon in both the macro definition and the\n> > > invocation will turn it back into multiple statements, creating\n> > > confusion if someone were to invoke the macro in an \"if\" statement.\n> >\n> > Yeah. I'd call these actual bugs, and perhaps even back-patch worthy.\n>\n> Agreed. 
Those semicolons could easily create bugs.\n\nIt was a while ago that I last checked our Developer guide over at\nPostgreSQL wiki website, but I wonder if this is a sort of issue that\nmodern linters would be able to recognize?\n\nThe only hit for \"linting\" search on the wiki is this page referring to the\ndeveloper meeting in Ottawa about a year ago:\nhttps://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting\n\n> Other major projects include:\n> ...\n> Code linting\n\nAnybody aware what's the current status of that effort?\n\nCheers,\n--\nAlex\n\n", "msg_date": "Mon, 4 May 2020 10:38:03 +0200", "msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "Hi Tom,\n\nOn Fri, May 1, 2020 at 2:32 PM Tom Lane wrote:\n>\n> Grepping showed me that there were some not-do-while macros that\n> also had trailing semicolons. These seem just as broken, so I\n> fixed 'em all.\n>\n\nI'm curious: *How* are you able to discover those occurrences with grep?\nI understand how John might have done it with his original patch: it's\nquite clear the pattern he would look for looks like \"while (0);\" but\nhow did you find all these other macro definitions with a trailing\nsemicolon? My tiny brain can only imagine:\n\n1. Either grep for trailing semicolon (OMG almost every line will come\nup) and squint through the context the see the previous line has a\ntrailing backslash;\n2. Or use some LLVM magic to spelunk through every macro definition and\nlook for a trailing semicolon\n\nCheers,\nJesse\n\n\n", "msg_date": "Mon, 4 May 2020 08:01:56 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> On Fri, May 1, 2020 at 2:32 PM Tom Lane wrote:\n>> Grepping showed me that there were some not-do-while macros that\n>> also had trailing semicolons. 
These seem just as broken, so I\n>> fixed 'em all.\n\n> I'm curious: *How* are you able to discover those occurrences with grep?\n\nUm, well, actually, it was a little perl script with a state variable\nto remember whether it was in a macro definition or not (set on seeing\na #define, unset when current line doesn't end with '\\', complain if\nset and line ends with ';').\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 May 2020 11:28:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "\nOn 5/1/20 5:32 PM, Tom Lane wrote:\n>\n> There are remaining instances of this antipattern in the flex-generated\n> scanners, which we can't do anything about; and in pl/plperl/ppport.h,\n> which we shouldn't do anything about because that's upstream-generated\n> code. (I wonder though if there's a newer version available.)\n\n\nI'll take a look. It's more than 10 years since we updated it.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 4 May 2020 18:44:14 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "\nOn 5/4/20 6:44 PM, Andrew Dunstan wrote:\n> On 5/1/20 5:32 PM, Tom Lane wrote:\n>> There are remaining instances of this antipattern in the flex-generated\n>> scanners, which we can't do anything about; and in pl/plperl/ppport.h,\n>> which we shouldn't do anything about because that's upstream-generated\n>> code. (I wonder though if there's a newer version available.)\n>\n> I'll take a look. It's more than 10 years since we updated it.\n>\n>\n\n\nI tried this out with ppport.h from perl 5.30.2 which is what's on my\nFedora 31 workstation. It compiled fine, no warnings and the tests all\nran fine.\n\n\nSo we could update it. 
I'm just not sure there would be any great\nbenefit from doing so until we want to use some piece of perl API that\npostdates 5.11.2, which is where our current file comes from.\n\n\nI couldn't actually find an instance of the offending pattern in either\nversion of pport.h. What am I overlooking?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 6 May 2020 14:06:41 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> I tried this out with ppport.h from perl 5.30.2 which is what's on my\n> Fedora 31 workstation. It compiled fine, no warnings and the tests all\n> ran fine.\n> So we could update it. I'm just not sure there would be any great\n> benefit from doing so until we want to use some piece of perl API that\n> postdates 5.11.2, which is where our current file comes from.\n\nYeah, perhaps not. Given our general desire not to break old toolchains,\nit might be a long time before we want to require any new Perl APIs.\n\n> I couldn't actually find an instance of the offending pattern in either\n> version of pport.h. What am I overlooking?\n\nMy script was looking for any macro ending with ';', so it found these:\n\n#define START_MY_CXT\tstatic my_cxt_t my_cxt;\n\n# define XCPT_TRY_END JMPENV_POP;\n\n# define XCPT_TRY_END Copy(oldTOP, top_env, 1, Sigjmp_buf);\n\nThose don't seem like things we'd use directly, so it's mostly moot.\n\nBTW, I looked around and could not find a package-provided ppport.h\nat all on my Red Hat systems. 
What package is it in?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 May 2020 15:24:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "\nOn 5/6/20 3:24 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> I tried this out with ppport.h from perl 5.30.2 which is what's on my\n>> Fedora 31 workstation. It compiled fine, no warnings and the tests all\n>> ran fine.\n>> So we could update it. I'm just not sure there would be any great\n>> benefit from doing so until we want to use some piece of perl API that\n>> postdates 5.11.2, which is where our current file comes from.\n> Yeah, perhaps not. Given our general desire not to break old toolchains,\n> it might be a long time before we want to require any new Perl APIs.\n>\n>> I couldn't actually find an instance of the offending pattern in either\n>> version of pport.h. What am I overlooking?\n> My script was looking for any macro ending with ';', so it found these:\n>\n> #define START_MY_CXT\tstatic my_cxt_t my_cxt;\n>\n> # define XCPT_TRY_END JMPENV_POP;\n>\n> # define XCPT_TRY_END Copy(oldTOP, top_env, 1, Sigjmp_buf);\n>\n> Those don't seem like things we'd use directly, so it's mostly moot.\n\n\n\nYeah. My search was too specific.\n\n\nThe modern one has these too :-(\n\n\n\n> BTW, I looked around and could not find a package-provided ppport.h\n> at all on my Red Hat systems. 
What package is it in?\n\n\nperl-Devel-PPPort contains a perl module that will write the file for\nyou like this:\n\n\n perl -MDevel::PPPort -e 'Devel::PPPort::WriteFile();'\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 6 May 2020 18:28:46 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "On 5/6/20 6:28 PM, Andrew Dunstan wrote:\n> On 5/6/20 3:24 PM, Tom Lane wrote:\n> \n>> BTW, I looked around and could not find a package-provided ppport.h\n>> at all on my Red Hat systems. What package is it in?\n> \n> perl-Devel-PPPort contains a perl module that will write the file for\n> you like this:\n> \n> perl -MDevel::PPPort -e 'Devel::PPPort::WriteFile();'\n\nFWIW, pgBackRest always shipped with the newest version of ppport.h we \nwere able to generate. This never caused any issues, but neither did we \nhave problems that forced us to upgrade.\n\nThe documentation seems to encourage this behavior:\n\nDon't direct the users of your module to download Devel::PPPort . They \nare most probably no XS writers. Also, don't make ppport.h optional. \nRather, just take the most recent copy of ppport.h that you can find \n(e.g. 
by generating it with the latest Devel::PPPort release from CPAN), \ncopy it into your project, adjust your project to use it, and distribute \nthe header along with your module.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 6 May 2020 18:39:01 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" }, { "msg_contents": "\nOn 5/6/20 6:39 PM, David Steele wrote:\n> On 5/6/20 6:28 PM, Andrew Dunstan wrote:\n>> On 5/6/20 3:24 PM, Tom Lane wrote:\n>>\n>>> BTW, I looked around and could not find a package-provided ppport.h\n>>> at all on my Red Hat systems.  What package is it in?\n>>\n>> perl-Devel-PPPort contains a perl module that will write the file for\n>> you like this:\n>>\n>>      perl -MDevel::PPPort -e 'Devel::PPPort::WriteFile();'\n>\n> FWIW, pgBackRest always shipped with the newest version of ppport.h we\n> were able to generate. This never caused any issues, but neither did\n> we have problems that forced us to upgrade.\n>\n> The documentation seems to encourage this behavior:\n>\n> Don't direct the users of your module to download Devel::PPPort . They\n> are most probably no XS writers. Also, don't make ppport.h optional.\n> Rather, just take the most recent copy of ppport.h that you can find\n> (e.g. by generating it with the latest Devel::PPPort release from\n> CPAN), copy it into your project, adjust your project to use it, and\n> distribute the header along with your module.\n>\n>\n\n\nI don't think we need to keep updating it, though. plperl is essentially\npretty stable.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 7 May 2020 08:24:38 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: do {} while (0) nitpick" } ]
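[Editor's note] To make the hazard discussed in the thread above concrete, here is a minimal, self-contained sketch of why a trailing semicolon inside a do {} while (0) macro is a bug. The macro and struct names are invented for illustration and are not code from the PostgreSQL tree:

```c
/*
 * Broken variant (shown only as a comment): the trailing semicolon makes
 * every invocation expand to TWO statements, so
 *
 *     if (cond) RESET_BROKEN(&p); else ...
 *
 * leaves the "else" with nothing to attach to and fails to compile:
 *
 *     #define RESET_BROKEN(p)  do { (p)->x = 0; (p)->y = 0; } while (0);
 */

/* Correct form: no trailing semicolon; the call site supplies it. */
#define RESET(p) \
	do { (p)->x = 0; (p)->y = 0; } while (0)

struct point
{
	int			x;
	int			y;
};

/* Reset only positive points; otherwise tag y — exercising the else arm. */
static inline struct point
reset_if_positive(struct point p)
{
	if (p.x > 0)
		RESET(&p);				/* expands to a single statement... */
	else
		p.y = 42;				/* ...so this else still binds correctly */
	return p;
}
```

Note that a bare `if (cond) RESET_BROKEN(&p);` with no else compiles silently (the extra semicolon is just an empty statement), which is exactly why such definitions can sit unnoticed until someone greps for them.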
[ { "msg_contents": "In another thread [1] I'd mused that \"there might be some value in a\nREADME or comments\naddition that would be a guide to what the various hash\nimplementations are useful for...so that we have something to make the\ncode base a bit more discoverable.\"\n\nI'd solicited feedback from Andres (as the author of the simplehash\nimplementation) and gotten further explanation from Tomas (both cc'd\nhere) and have tried to condense that into the comment changes in this\npatch series.\n\nv1-0001-Summarize-trade-offs-between-simplehash-and-dynah.patch\nContains the summaries mentioned above.\n\nv1-0002-Improve-simplehash-usage-notes.patch\nI'd noticed while adding a simplehash implementation in that other\nthread that the facts that:\n- The element type requires a status member.\n- Why storing a hash in the element type is useful (i.e., when to\ndefine SH_STORE_HASH/SH_GET_HASH).\n- The availability of private_data member for metadata access from callbacks.\nare either undocumented or hard to discover, and so I've added the\ninformation directly to the usage notes section.\n\nv1-0003-Show-sample-simplehash-method-signatures.patch\nI find it hard to read the macro code \"templating\" particularly for\nseeing what the available API is and so added sample method signatures\nin comments to the macro generated method signature defines.\n\nJames\n\n[1]: https://www.postgresql.org/message-id/CAAaqYe956a-zbm1qR8pqz%3DiLbF8LW5vBrQGrzXVHXdLk3at5_Q%40mail.gmail.com", "msg_date": "Thu, 30 Apr 2020 21:53:10 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On Fri, May 1, 2020 at 1:53 PM James Coleman <jtc331@gmail.com> wrote:\n> In another thread [1] I'd mused that \"there might be some value in a\n> README or comments\n> addition that would be a guide to what the various hash\n> implementations are useful for...so that we have something to make the\n> code 
base a bit more discoverable.\"\n\n+1\n\n> I'd solicited feedback from Andres (as the author of the simplehash\n> implementation) and gotten further explanation from Tomas (both cc'd\n> here) and have tried to condense that into the comment changes in this\n> patch series.\n>\n> v1-0001-Summarize-trade-offs-between-simplehash-and-dynah.patch\n> Contains the summaries mentioned above.\n\n+ * - It supports partitioning, which is useful for shared memory access using\n\nI wonder if we should say a bit more about the shared memory mode.\nShared memory dynahash tables are allocated in a fixed size area at\nstartup, and are discoverable by name in other other processes that\nneed to get access to them, while simplehash assumes that it can get\nmemory from a MemoryContext or an allocator with a malloc/free-style\ninterface, which isn't very well suited for use in shared memory.\n(I'm sure you can convince it to work in shared memory with some\nwork.)\n\n> v1-0002-Improve-simplehash-usage-notes.patch\n\n+ * For convenience the hash table create functions accept a void pointer\n+ * will be stored in the hash table type's member private_data.\n\n*that* will be stored?\n\n> v1-0003-Show-sample-simplehash-method-signatures.patch\n> I find it hard to read the macro code \"templating\" particularly for\n> seeing what the available API is and so added sample method signatures\n> in comments to the macro generated method signature defines.\n\nI didn't double-check all the expansions of the macros but +1 for this\nidea, it's very useful.\n\n\n", "msg_date": "Mon, 20 Jul 2020 17:28:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On Mon, Jul 20, 2020 at 1:29 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, May 1, 2020 at 1:53 PM James Coleman <jtc331@gmail.com> wrote:\n> > In another thread [1] I'd mused that \"there might be some value in a\n> > README 
or comments\n> > addition that would be a guide to what the various hash\n> > implementations are useful for...so that we have something to make the\n> > code base a bit more discoverable.\"\n>\n> +1\n>\n> > I'd solicited feedback from Andres (as the author of the simplehash\n> > implementation) and gotten further explanation from Tomas (both cc'd\n> > here) and have tried to condense that into the comment changes in this\n> > patch series.\n> >\n> > v1-0001-Summarize-trade-offs-between-simplehash-and-dynah.patch\n> > Contains the summaries mentioned above.\n>\n> + * - It supports partitioning, which is useful for shared memory access using\n>\n> I wonder if we should say a bit more about the shared memory mode.\n> Shared memory dynahash tables are allocated in a fixed size area at\n> startup, and are discoverable by name in other other processes that\n> need to get access to them, while simplehash assumes that it can get\n> memory from a MemoryContext or an allocator with a malloc/free-style\n> interface, which isn't very well suited for use in shared memory.\n> (I'm sure you can convince it to work in shared memory with some\n> work.)\n\nAdded.\n\n> > v1-0002-Improve-simplehash-usage-notes.patch\n>\n> + * For convenience the hash table create functions accept a void pointer\n> + * will be stored in the hash table type's member private_data.\n>\n> *that* will be stored?\n\nFixed.\n\n> > v1-0003-Show-sample-simplehash-method-signatures.patch\n> > I find it hard to read the macro code \"templating\" particularly for\n> > seeing what the available API is and so added sample method signatures\n> > in comments to the macro generated method signature defines.\n>\n> I didn't double-check all the expansions of the macros but +1 for this\n> idea, it's very useful.\n\nJames", "msg_date": "Fri, 31 Jul 2020 15:22:24 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On 
Sat, Aug 1, 2020 at 7:22 AM James Coleman <jtc331@gmail.com> wrote:\n> [v2 patch set]\n\nI ran it through pgindent which insisted on adding some newlines, I\nmanually replaced some spaces with tabs to match nearby lines, I added\nsome asterisks in your example function prototypes where <element> is\nreturned because they seemed to be missing, and I pushed this.\nThanks!\n\n\n", "msg_date": "Sat, 1 Aug 2020 12:16:51 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On Fri, Jul 31, 2020 at 8:17 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sat, Aug 1, 2020 at 7:22 AM James Coleman <jtc331@gmail.com> wrote:\n> > [v2 patch set]\n>\n> I ran it through pgindent which insisted on adding some newlines, I\n> manually replaced some spaces with tabs to match nearby lines, I added\n> some asterisks in your example function prototypes where <element> is\n> returned because they seemed to be missing, and I pushed this.\n> Thanks!\n\n\nThanks for reviewing and committing!\n\nJames\n\n\n", "msg_date": "Sat, 1 Aug 2020 14:15:36 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On Sat, 1 Aug 2020 at 12:17, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sat, Aug 1, 2020 at 7:22 AM James Coleman <jtc331@gmail.com> wrote:\n> > [v2 patch set]\n>\n> I ran it through pgindent which insisted on adding some newlines, I\n> manually replaced some spaces with tabs to match nearby lines, I added\n> some asterisks in your example function prototypes where <element> is\n> returned because they seemed to be missing, and I pushed this.\n> Thanks!\n\nI was just reading over this and wondered about the following:\n\n+ * The element type is required to contain a \"uint32 status\" member.\n\nI see that PagetableEntry does not follow this and I also didn't\nfollow 
it when writing the Result Cache patch in [1]. I managed to\nshrink the struct I was using for the hash table by 4 bytes by using a\nchar instead of an int. That sounds like a small amount of memory, but\nit did result in much better cache hit ratios in the patch.\n\nMaybe it would be better just to get rid of the enum and just #define\nthe values. It seems unlikely that we're ever going to need many more\nstates than what are there already, let alone more than, say 127 of\nthem. It does look like manifest_file could be shrunk down a bit too\nby making the status field a char.\n\n[1] https://www.postgresql.org/message-id/flat/CAApHDvrPcQyQdWERGYWx8J+2DLUNgXu+fOSbQ1UscxrunyXyrQ@mail.gmail.com\n\n\n", "msg_date": "Sun, 2 Aug 2020 11:16:28 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On Sun, 2 Aug 2020 at 11:16, David Rowley <dgrowleyml@gmail.com> wrote:\n> Maybe it would be better just to get rid of the enum and just #define\n> the values. It seems unlikely that we're ever going to need many more\n> states than what are there already, let alone more than, say 127 of\n> them. It does look like manifest_file could be shrunk down a bit too\n> by making the status field a char.\n\nThis didn't turn out quite as pretty as I had imagined. I needed to\nleave the two statuses defined in simplehash.h so that callers could\nmake use of them. (Result Cache will do this).\n\nThe change here would be callers would need to use SH_STATUS_IN_USE\nrather than <prefix>_SH_IN_USE.\n\nI'm not really that sold on doing things this way. I really just don't\nwant to have to make my status field a uint32 in Result Cache per what\nthe new comment states we must do. 
If there's a nicer way, then\nperhaps that's worth considering.\n\nDavid", "msg_date": "Sun, 2 Aug 2020 13:53:43 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On Sun, Aug 2, 2020 at 1:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Sun, 2 Aug 2020 at 11:16, David Rowley <dgrowleyml@gmail.com> wrote:\n> > Maybe it would be better just to get rid of the enum and just #define\n> > the values. It seems unlikely that we're ever going to need many more\n> > states than what are there already, let alone more than, say 127 of\n> > them. It does look like manifest_file could be shrunk down a bit too\n> > by making the status field a char.\n>\n> This didn't turn out quite as pretty as I had imagined. I needed to\n> leave the two statuses defined in simplehash.h so that callers could\n> make use of them. (Result Cache will do this).\n>\n> The change here would be callers would need to use SH_STATUS_IN_USE\n> rather than <prefix>_SH_IN_USE.\n>\n> I'm not really that sold on doing things this way. I really just don't\n> want to have to make my status field a uint32 in Result Cache per what\n> the new comment states we must do. If there's a nicer way, then\n> perhaps that's worth considering.\n\nAgreed that the new comment is wrong and should be changed.\n\nI think you can probably go further, though, and make it require no\nstorage at all by making it optionally \"intrusive\", by using a special\nvalue in an existing member, and supplying an expression to set and\ntest for that value. 
For example, maybe some users have a pointer but\nnever want to use NULL, and maybe some users already have a field\nholding various flags that are one bit wide and can spare a bit.\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:09:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On Mon, 3 Aug 2020 at 11:10, Thomas Munro <thomas.munro@gmail.com> wrote:\n> I think you can probably go further, though, and make it require no\n> storage at all by making it optionally \"intrusive\", by using a special\n> value in an existing member, and supplying an expression to set and\n> test for that value. For example, maybe some users have a pointer but\n> never want to use NULL, and maybe some users already have a field\n> holding various flags that are one bit wide and can spare a bit.\n\nI agree that it would be good to allow users of simplehash.h that\nadditional flexibility. It may allow additional memory savings.\nHowever, it would mean we'd need to do some additional work when we\ncreate and grow the hash table to ensure that we mark new buckets as\nempty. For now, we get that for free with the zeroing of the newly\nallocated memory, but we couldn't rely on that if we allowed users to\ndefine their own macro.\n\nIt looks like none of the current callers could gain from this\n\n1. TupleHashEntryData does not have any reusable fields. The status\nshould fit in the padding on a 64-bit machine anyway.\n2. PagetableEntry already has a status that fits into the padding.\n3. 
manifest_file could have its status moved to the end of the struct\nand made into a char and the struct would be the same size as if the\nfield did not exist.\n\nSo, with the current users, we'd stand to lose more than we'd gain\nfrom doing it that way.\n\nDavid\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:36:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On Mon, 3 Aug 2020 at 11:36, David Rowley <dgrowleyml@gmail.com> wrote:\n> So, with the current users, we'd stand to lose more than we'd gain\n> from doing it that way.\n\nFWIW, I'd be ok with just:\n\n- * The element type is required to contain a \"uint32 status\" member.\n+ * The element type is required to contain an integer-based\n\"status\" member\n+ * which can store the range of values defined in the SH_STATUS enum.\n\nDavid\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:42:00 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" }, { "msg_contents": "On Mon, Aug 3, 2020 at 11:42 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Mon, 3 Aug 2020 at 11:36, David Rowley <dgrowleyml@gmail.com> wrote:\n> > So, with the current users, we'd stand to lose more than we'd gain\n> > from doing it that way.\n>\n> FWIW, I'd be ok with just:\n>\n> - * The element type is required to contain a \"uint32 status\" member.\n> + * The element type is required to contain an integer-based\n> \"status\" member\n> + * which can store the range of values defined in the SH_STATUS enum.\n\nThanks for the correction. Pushed.\n\n\n", "msg_date": "Mon, 3 Aug 2020 12:24:55 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Comment simplehash/dynahash trade-offs" } ]
[ { "msg_contents": "Hi,\n\nWhen I tried to reset a counter in pg_stat_slru using\npg_stat_reset_slru(name),\nnot only the specified counter but all the counters were reset.\n\n postgres=# SELECT * FROM pg_stat_slru ;\n name | blks_zeroed | blks_hit | blks_read | blks_written |\nblks_exists | flushes | truncates | stats_reset\n\n------------------+-------------+----------+-----------+--------------+-------------+---------+-----------+-------------------------------\n async | 3 | 0 | 0 | 3 |\n 0 | 0 | 0 | 2020-05-01 17:36:26.073433+09\n clog | 0 | 56 | 0 | 0 |\n 0 | 0 | 0 | 2020-05-01 17:36:26.073433+09\n commit_timestamp | 0 | 0 | 0 | 0 |\n 0 | 0 | 0 | 2020-05-01 17:36:26.073433+09\n (snip)\n\n\n postgres=# SELECT pg_stat_reset_slru('clog');\n\n\n postgres=# SELECT * FROM pg_stat_slru ;\n name | blks_zeroed | blks_hit | blks_read | blks_written |\nblks_exists | flushes | truncates | stats_reset\n\n------------------+-------------+----------+-----------+--------------+-------------+---------+-----------+-------------------------------\n async | 0 | 0 | 0 | 0 |\n 0 | 0 | 0 | 2000-01-01 09:00:00+09\n clog | 0 | 0 | 0 | 0 |\n 0 | 0 | 0 | 2020-05-01 17:37:02.525006+09\n commit_timestamp | 0 | 0 | 0 | 0 |\n 0 | 0 | 0 | 2000-01-01 09:00:00+09\n (snip)\n\n\nAttached a patch.\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Fri, 1 May 2020 19:10:23 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "pg_stat_reset_slru(name) doesn't seem to work as documented" }, { "msg_contents": "On Fri, May 01, 2020 at 07:10:23PM +0900, Atsushi Torikoshi wrote:\n>Hi,\n>\n>When I tried to reset a counter in pg_stat_slru using\n>pg_stat_reset_slru(name),\n>not only the specified counter but all the counters were reset.\n>\n> postgres=# SELECT * FROM pg_stat_slru ;\n> name | blks_zeroed | blks_hit | blks_read | blks_written |\n>blks_exists | flushes | truncates | 
stats_reset\n>\n>------------------+-------------+----------+-----------+--------------+-------------+---------+-----------+-------------------------------\n> async | 3 | 0 | 0 | 3 |\n> 0 | 0 | 0 | 2020-05-01 17:36:26.073433+09\n> clog | 0 | 56 | 0 | 0 |\n> 0 | 0 | 0 | 2020-05-01 17:36:26.073433+09\n> commit_timestamp | 0 | 0 | 0 | 0 |\n> 0 | 0 | 0 | 2020-05-01 17:36:26.073433+09\n> (snip)\n>\n>\n> postgres=# SELECT pg_stat_reset_slru('clog');\n>\n>\n> postgres=# SELECT * FROM pg_stat_slru ;\n> name | blks_zeroed | blks_hit | blks_read | blks_written |\n>blks_exists | flushes | truncates | stats_reset\n>\n>------------------+-------------+----------+-----------+--------------+-------------+---------+-----------+-------------------------------\n> async | 0 | 0 | 0 | 0 |\n> 0 | 0 | 0 | 2000-01-01 09:00:00+09\n> clog | 0 | 0 | 0 | 0 |\n> 0 | 0 | 0 | 2020-05-01 17:37:02.525006+09\n> commit_timestamp | 0 | 0 | 0 | 0 |\n> 0 | 0 | 0 | 2000-01-01 09:00:00+09\n> (snip)\n>\n>\n>Attached a patch.\n>\n\nYeah, that memset is a left-over from some earlier version of the patch.\nThanks for the report and patch, I'll push this shortly.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 2 May 2020 14:27:33 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_reset_slru(name) doesn't seem to work as documented" }, { "msg_contents": "On Sat, May 02, 2020 at 02:27:33PM +0200, Tomas Vondra wrote:\n>On Fri, May 01, 2020 at 07:10:23PM +0900, Atsushi Torikoshi wrote:\n>>Hi,\n>>\n>>When I tried to reset a counter in pg_stat_slru using\n>>pg_stat_reset_slru(name),\n>>not only the specified counter but all the counters were reset.\n>>\n>> postgres=# SELECT * FROM pg_stat_slru ;\n>> name | blks_zeroed | blks_hit | blks_read | blks_written |\n>>blks_exists | flushes | truncates | 
stats_reset\n>>\n>>------------------+-------------+----------+-----------+--------------+-------------+---------+-----------+-------------------------------\n>> async | 3 | 0 | 0 | 3 |\n>> 0 | 0 | 0 | 2020-05-01 17:36:26.073433+09\n>> clog | 0 | 56 | 0 | 0 |\n>> 0 | 0 | 0 | 2020-05-01 17:36:26.073433+09\n>> commit_timestamp | 0 | 0 | 0 | 0 |\n>> 0 | 0 | 0 | 2020-05-01 17:36:26.073433+09\n>> (snip)\n>>\n>>\n>> postgres=# SELECT pg_stat_reset_slru('clog');\n>>\n>>\n>> postgres=# SELECT * FROM pg_stat_slru ;\n>> name | blks_zeroed | blks_hit | blks_read | blks_written |\n>>blks_exists | flushes | truncates | stats_reset\n>>\n>>------------------+-------------+----------+-----------+--------------+-------------+---------+-----------+-------------------------------\n>> async | 0 | 0 | 0 | 0 |\n>> 0 | 0 | 0 | 2000-01-01 09:00:00+09\n>> clog | 0 | 0 | 0 | 0 |\n>> 0 | 0 | 0 | 2020-05-01 17:37:02.525006+09\n>> commit_timestamp | 0 | 0 | 0 | 0 |\n>> 0 | 0 | 0 | 2000-01-01 09:00:00+09\n>> (snip)\n>>\n>>\n>>Attached a patch.\n>>\n>\n>Yeah, that memset is a left-over from some earlier version of the patch.\n>Thanks for the report and patch, I'll push this shortly.\n>\n\nPushed. Thanks for the report.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 2 May 2020 16:05:29 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_reset_slru(name) doesn't seem to work as documented" }, { "msg_contents": "On Sat, May 2, 2020 at 11:05 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote\n\n> Pushed. Thanks for the report.\n>\n\nThanks!\n\n--\nAtsushi Torikoshi\n\nOn Sat, May 2, 2020 at 11:05 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote\nPushed. Thanks for the report.Thanks! 
--Atsushi Torikoshi", "msg_date": "Sun, 3 May 2020 09:57:16 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_stat_reset_slru(name) doesn't seem to work as documented" } ]
[ { "msg_contents": "Colleagues,\n\nPostgresql embedded perl, plperl, contains code copied long ago\nfrom the POSIX.xs file in the perl distribution.\nIt is the function setlocale_perl, which does some allocation of\nperl-specific locale data using functions (or macros) new_ctype,\nnew_collate and new_numeric.\n\nThis is used only for WIN32 because, as the comment in the code says:\n\n /*\n * The perl library on startup does horrible things like call\n * setlocale(LC_ALL,\"\"). We have protected against that on most platforms\n * by setting the environment appropriately. However, on Windows,\n * setlocale() does not consult the environment, so we need to save the\n * existing locale settings before perl has a chance to mangle them and\n * restore them after its dirty deeds are done.\n *\n * MSDN ref:\n * http://msdn.microsoft.com/library/en-us/vclib/html/_crt_locale.asp\n *\n * It appears that we only need to do this on interpreter startup, and\n * subsequent calls to the interpreter don't mess with the locale\n * settings.\n *\n * We restore them using setlocale_perl(), defined below, so that Perl\n * doesn't have a different idea of the locale from Postgres.\n *\n */\n\n\nThis worked up to perl 5.26, but in perl 5.28 these macros and\ncorresponding functions became strictly private. However, the public\nfunction Perl_setlocale appeared in libperl, which from a quick\nglance at the code does the same thing as setlocale_perl in the plperl code.\n\nAttached patch makes use of this function if PERL_VERSION >= 28. \nIt makes plperl compile with ActiveStatePerl 5.28 and StrawberryPerl\n5.30.2.1.\n\nHowever, I'm not sure that I've chosen the correct approach. 
Maybe perl\nno longer does horrible things with locale at startup?\n\n--", "msg_date": "Fri, 1 May 2020 13:47:11 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Postgresql Windows build and modern perl (>=5.28)" }, { "msg_contents": "Victor Wagner <vitus@wagner.pp.ru> writes:\n\n> Attached patch makes use of this function if PERL_VERSION >= 28. \n> It makes plperl compile with ActiveStatePerl 5.28 and StrawberryPerl\n> 5.30.2.1.\n\nI have no opinion on the substantive content of this patch, but please\ndon't check against just PERL_VERSION. Now that Perl 6 has been\nrenamed to Raku, Perl may bump its major version (PERL_REVISION) to 7 at\nsome point in the future.\n\nThe correct thing to use is the PERL_VERSION_(GT|GE|LE|LT|EQ|NE) macros,\nwhich are provided by newer versions of perl. We would need to update\nthe included copy of ppport.h to get them on older perls, but we should\ndo that anyway, it's not been updated since 2009. I'll start a separate\nthread for that.\n\n- ilmari\n\n\n", "msg_date": "Mon, 04 Oct 2021 23:08:25 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Postgresql Windows build and modern perl (>=5.28)" }, { "msg_contents": "Hi,\n\nOn 2021-10-04 23:08:25 +0100, Dagfinn Ilmari Mannsåker wrote:\n> Victor Wagner <vitus@wagner.pp.ru> writes:\n> \n> > Attached patch makes use of this function if PERL_VERSION >= 28. \n> > It makes plperl compile with ActiveStatePerl 5.28 and StrawberryPerl\n> > 5.30.2.1.\n> \n> I have no opinion on the substantive content of this patch, but please\n> don't check against just PERL_VERSION. Now that Perl 6 has been\n> renamed to Raku, Perl may bump its major version (PERL_REVISION) to 7 at\n> some point in the future.\n\nHow about the attached? 
I've just spent more time looking at plperl than I\nreally ever want to, so I'd like to get plperl working and tested on 5.32 on\nwindows asap... And this is one of the two fixes necessary (see [1] for the\nsecond).\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20220130221659.tlyr2lbw3wk22owg%40alap3.anarazel.de", "msg_date": "Sun, 30 Jan 2022 14:39:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Postgresql Windows build and modern perl (>=5.28)" } ]
[ { "msg_contents": "[proposal for PG 14]\n\nThere are a number of Remove${Something}ById() functions that are \nessentially identical in structure and only different in which catalog \nthey are working on. This patch refactors this to be one generic \nfunction. The information about which oid column, index, etc. to use \nwas already available in ObjectProperty for most catalogs, in a few \ncases it was easily added.\n\nConceivably, this could be taken further by categorizing more special \ncases as ObjectProperty fields or something like that, but this seemed \nlike a good balance.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 1 May 2020 16:39:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Unify drop-by-OID functions" }, { "msg_contents": "pá 1. 5. 2020 v 16:39 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> [proposal for PG 14]\n>\n> There are a number of Remove${Something}ById() functions that are\n> essentially identical in structure and only different in which catalog\n> they are working on. This patch refactors this to be one generic\n> function. The information about which oid column, index, etc. to use\n> was already available in ObjectProperty for most catalogs, in a few\n> cases it was easily added.\n>\n> Conceivably, this could be taken further by categorizing more special\n> cases as ObjectProperty fields or something like that, but this seemed\n> like a good balance.\n>\n\n+1\n\nnice\n\nPavel\n\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n", "msg_date": "Fri, 1 May 2020 16:50:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "On Fri, May 1, 2020 at 10:51 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> +1\n\n+1 from me, too, but I have a few suggestions:\n\n+DropGenericById(const ObjectAddress *object)\n\nHow about \"Generic\" -> \"Object\" or \"Generic\" -> \"ObjectAddress\"?\n\n+ elog(ERROR, \"cache lookup failed for %s entry %u\",\n+ elog(ERROR, \"could not find tuple for class %u entry %u\",\n\nHow about \"entry\" -> \"with OID\"?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 May 2020 11:44:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "Em sex., 1 de mai. 
de 2020 às 11:39, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> escreveu:\n\n> [proposal for PG 14]\n>\n> There are a number of Remove${Something}ById() functions that are\n> essentially identical in structure and only different in which catalog\n> they are working on. This patch refactors this to be one generic\n> function. The information about which oid column, index, etc. to use\n> was already available in ObjectProperty for most catalogs, in a few\n> cases it was easily added.\n>\n> Conceivably, this could be taken further by categorizing more special\n> cases as ObjectProperty fields or something like that, but this seemed\n> like a good balance.\n>\nVery good.\n\nI can suggest improvements?\n\n1. In case Object is cached, delay open_table until the last moment, for\nthe row to be blocked as little as possible and close the table as quickly\nas possible.\n2. In case Object is cached and the tuple is invalid, do not open table.\n3. Otherwise, is it possible to call systable_endscan, after table_close?\n\nI think that lock resources, for as little time as possible, it is an\nadvantage..\n\n+static void\n+DropGenericById(const ObjectAddress *object)\n+{\n+ int cacheId;\n+ Relation rel;\n+ HeapTuple tup;\n+\n+ cacheId = get_object_catcache_oid(object->classId);\n+\n+ /*\n+ * Use the system cache for the oid column, if one exists.\n+ */\n+ if (cacheId >= 0)\n+ {\n+ tup = SearchSysCache1(cacheId, ObjectIdGetDatum(object->objectId));\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"cache lookup failed for %s entry %u\",\n+ get_object_class_descr(object->classId), object->objectId);\n+\n+ rel = table_open(object->classId, RowExclusiveLock);\n+ CatalogTupleDelete(rel, &tup->t_self);\n+ table_close(rel, RowExclusiveLock);\n+\n+ ReleaseSysCache(tup);\n+ }\n+ else\n+ {\n+ ScanKeyData skey[1];\n+ SysScanDesc scan;\n+\n+ ScanKeyInit(&skey[0],\n+ get_object_attnum_oid(object->classId),\n+ BTEqualStrategyNumber, F_OIDEQ,\n+ ObjectIdGetDatum(object->objectId));\n+\n+ 
rel = table_open(object->classId, RowExclusiveLock);\n+ scan = systable_beginscan(rel, get_object_oid_index(object->classId),\ntrue,\n+ NULL, 1, skey);\n+\n+ /* we expect exactly one match */\n+ tup = systable_getnext(scan);\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"could not find tuple for class %u entry %u\",\n+ object->classId, object->objectId);\n+\n+ CatalogTupleDelete(rel, &tup->t_self);\n+ systable_endscan(scan);\n+ table_close(rel, RowExclusiveLock);\n+ }\n+}\n+\n\nregards,\nRanier Vilela\n\n", "msg_date": "Fri, 1 May 2020 18:31:37 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "On 2020-05-01 23:31, Ranier Vilela wrote:\n> I can suggest improvements?\n> \n> 1. 
In case Object is cached, delay open_table until the last moment, for \n> the row to be blocked as little as possible and close the table as \n> quickly as possible.\n> 2. In case Object is cached and the tuple is invalid, do not open table.\n> 3. Otherwise, is it possible to call systable_endscan, after table_close?\n\nWhat do you mean by \"object is cached\"?\n\nIn any case, this is a refactoring patch, so significant changes to the \ninternal logic would not really be in scope.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 2 May 2020 10:01:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "Em sáb., 2 de mai. de 2020 às 05:01, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> escreveu:\n\n> On 2020-05-01 23:31, Ranier Vilela wrote:\n> > I can suggest improvements?\n> >\n> > 1. In case Object is cached, delay open_table until the last moment, for\n> > the row to be blocked as little as possible and close the table as\n> > quickly as possible.\n> > 2. In case Object is cached and the tuple is invalid, do not open table.\n> > 3. Otherwise, is it possible to call systable_endscan, after table_close?\n>\n> What do you mean by \"object is cached\"?\n>\nWell, that's what I deduced from the cacheId variable name.\n\nregards,\nRanier Vilela\n\n", "msg_date": "Sat, 2 May 2020 10:04:55 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "On 2020-05-01 17:44, Robert Haas wrote:\n> On Fri, May 1, 2020 at 10:51 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> +1\n> \n> +1 from me, too, but I have a few suggestions:\n> \n> +DropGenericById(const ObjectAddress *object)\n> \n> How about \"Generic\" -> \"Object\" or \"Generic\" -> \"ObjectAddress\"?\n\nChanged to \"Object\", that also matches existing functions that operate \non an ObjectAddress.\n\n> + elog(ERROR, \"cache lookup failed for %s entry %u\",\n> + elog(ERROR, \"could not find tuple for class %u entry %u\",\n> \n> How about \"entry\" -> \"with OID\"?\n\nI changed these to just\n\n\"cache lookup failed for %s %u\"\n\"could not find tuple for %s %u\"\n\nwhich matches the existing wording for the not-refactored cases. I \ndon't recall why I went and reworded them.\n\nNew patch attached. I'll park it until PG14 opens.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 4 May 2020 20:57:14 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "On Fri, May 1, 2020 at 5:32 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> I can suggest improvements?\n>\n> 1. In case Object is cached, delay open_table until the last moment, for the row to be blocked as little as possible and close the table as quickly as possible.\n> 2. In case Object is cached and the tuple is invalid, do not open table.\n> 3. 
Otherwise, is it possible to call systable_endscan, after table_close?\n>\n> I think that lock resources, for as little time as possible, it is an advantage..\n\nOnly if it's correct, which (3) definitely wouldn't be, and I'm\ndoubtful about (1) as well.\n\nThis reminds me: I think that the issues in\nhttp://postgr.es/m/CA+TgmoYaFYRRdRZ94p_Qdt+1oONg6sMOvbkGHKVsFtONCrFkhw@mail.gmail.com\nshould be considered here - we should guarantee that there's a\nsnapshot registered continuously from before the call to\nSearchSysCache1 until after the call to CatalogTupleDelete. In the\nsystable_beginscan case, we should be fine as long as the\nsystable_endscan follows the CatalogTupleDelete call.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 5 May 2020 12:05:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "Em ter., 5 de mai. de 2020 às 13:06, Robert Haas <robertmhaas@gmail.com>\nescreveu:\n\n> On Fri, May 1, 2020 at 5:32 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > I can suggest improvements?\n> >\n> > 1. In case Object is cached, delay open_table until the last moment, for\n> the row to be blocked as little as possible and close the table as quickly\n> as possible.\n> > 2. In case Object is cached and the tuple is invalid, do not open table.\n> > 3. Otherwise, is it possible to call systable_endscan, after table_close?\n> >\n> > I think that lock resources, for as little time as possible, it is an\n> advantage..\n>\n> Only if it's correct, which (3) definitely wouldn't be, and I'm\n> doubtful about (1) as well.\n>\nOk, so the question. 
If (3) is not safe, obvious we shouldn't use, and must\ncall table_close, after systable_endscan.\nNow (1) and (2), I would have no hesitation in using it.\nI work with ERP, and throughout the time, the later, lock resources and\nrelease them soon, the better, for the performance of the system as a whole.\nEven if it doesn't make much difference locally, using this process,\nthroughout the system, efficiency is noticeable.\nApparently, it is more code, but it is less resources used and for less\ntime.\nAnd (2), if it is a case, frequently, no table would be blocked in this\nfunction.\n\nSimple examples.\n\nExemple 1:\nFILE * f;\nf = fopen(\"data.txt\", \"r\");\nif (f != NULL) {\n char buf[512];\n size_t result;\n result = fread(&buf, sizeof(char), 512, f);\n fclose(f); // we no longer need the resource, release.\n if (result != 0) {\n process(buf);\n printf(\"buf=%s\\n\", buf);\n }\n}\n\nExemple 2:\nFILE * f;\nf = fopen(\"data.txt\", \"r\");\nif (f != NULL) {\n char buf[512];\n size_t result;\n result = fread(&buf, sizeof(char), 512, f);\n if (result != 0) {\n process(buf);\n printf(\"buf=%s\\n\", buf);\n }\n fclose(f); // resource blocked until the end.\n}\n\nregards,\nRanier Vilela\n", "msg_date": "Tue, 5 May 2020 14:21:36 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "On Tue, May 5, 2020 at 1:22 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Ok, so the question. 
If (3) is not safe, obvious we shouldn't use, and must call table_close, after systable_endscan.\n> Now (1) and (2), I would have no hesitation in using it.\n> I work with ERP, and throughout the time, the later, lock resources and release them soon, the better, for the performance of the system as a whole.\n> Even if it doesn't make much difference locally, using this process, throughout the system, efficiency is noticeable.\n> Apparently, it is more code, but it is less resources used and for less time.\n> And (2), if it is a case, frequently, no table would be blocked in this function.\n\nNobody here is going to question the concept that it's better to use\nresources for less time rather than more, but the wisdom of sticking\nto well-established coding patterns instead of inventing altogether\nnew ones is also well-understood. There are often good reasons why the\ncode is written in the way that it is, and it's important to\nunderstand those before proposing to change things.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 5 May 2020 13:28:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "Em ter., 5 de mai. de 2020 às 14:29, Robert Haas <robertmhaas@gmail.com>\nescreveu:\n\n> On Tue, May 5, 2020 at 1:22 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > Ok, so the question. 
If (3) is not safe, obvious we shouldn't use, and\n> must call table_close, after systable_endscan.\n> > Now (1) and (2), I would have no hesitation in using it.\n> > I work with ERP, and throughout the time, the later, lock resources and\n> release them soon, the better, for the performance of the system as a whole.\n> > Even if it doesn't make much difference locally, using this process,\n> throughout the system, efficiency is noticeable.\n> > Apparently, it is more code, but it is less resources used and for less\n> time.\n> > And (2), if it is a case, frequently, no table would be blocked in this\n> function.\n>\n> Nobody here is going to question the concept that it's better to use\n> resources for less time rather than more, but the wisdom of sticking\n> to well-established coding patterns instead of inventing altogether\n> new ones is also well-understood. There are often good reasons why the\n> code is written in the way that it is, and it's important to\n> understand those before proposing to change things.\n>\nI see, the famous \"cliché\".\n\nregards,\nRanier Vilela\n\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Tue, 5 May 2020 14:42:17 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "On Tue, May 5, 2020 at 1:43 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> I see, the famous \"cliché\".\n\nBy using the word cliché, and by putting it in quotes, you seem to\nsuggest that you consider my argument dubious. However, I stand by it.\nCode shouldn't be changed without understanding the reasons behind the\ncurrent coding. 
Doing so very often breaks things.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 5 May 2020 13:56:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "Em ter., 5 de mai. de 2020 às 14:57, Robert Haas <robertmhaas@gmail.com>\nescreveu:\n\n> On Tue, May 5, 2020 at 1:43 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > I see, the famous \"cliché\".\n>\n> By using the word cliché, and by putting it in quotes, you seem to\n> suggest that you consider my argument dubious. However, I stand by it.\n> Code shouldn't be changed without understanding the reasons behind the\n> current coding. Doing so very often breaks things.\n>\nSorry Robert, It was not my intention. I didn't know that using quotes\nwould change your understanding.\nOf course, I find your arguments very valid and valuable.\nAnd I understand that there are many interrelated things, which can break\nif done in the wrong order.\nMaybe I used the wrong word, in this case, the cliché.\nWhat I mean is, the cliché, does some strange things, like leaving\nvariables to be declared, assigned in and not used.\nAnd in that specific case, leaving resources blocked, which perhaps, in my\nhumble opinion, could be released quickly.\nI think the expected behavior is being the same, with the changes I\nproposed, IMHO.\n\n+static void\n+DropObjectById(const ObjectAddress *object)\n+{\n+ int cacheId;\n+ Relation rel;\n+ HeapTuple tup;\n+\n+ cacheId = get_object_catcache_oid(object->classId);\n+\n+ /*\n+ * Use the system cache for the oid column, if one exists.\n+ */\n+ if (cacheId >= 0)\n+ {\n+ tup = SearchSysCache1(cacheId, ObjectIdGetDatum(object->objectId));\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"cache lookup failed for %s %u\",\n+ get_object_class_descr(object->classId), object->objectId);\n+\n+ rel = table_open(object->classId, 
RowExclusiveLock);\n+ CatalogTupleDelete(rel, &tup->t_self);\n+ table_close(rel, RowExclusiveLock);\n+\n+ ReleaseSysCache(tup);\n+ }\n+ else\n+ {\n+ ScanKeyData skey[1];\n+ SysScanDesc scan;\n+\n+ ScanKeyInit(&skey[0],\n+ get_object_attnum_oid(object->classId),\n+ BTEqualStrategyNumber, F_OIDEQ,\n+ ObjectIdGetDatum(object->objectId));\n+\n+ rel = table_open(object->classId, RowExclusiveLock);\n+ scan = systable_beginscan(rel, get_object_oid_index(object->classId),\ntrue,\n+ NULL, 1, skey);\n+\n+ /* we expect exactly one match */\n+ tup = systable_getnext(scan);\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"could not find tuple for %s %u\",\n+ get_object_class_descr(object->classId), object->objectId);\n+\n+ CatalogTupleDelete(rel, &tup->t_self);\n+ systable_endscan(scan);\n+\n+ table_close(rel, RowExclusiveLock);\n+ }\n+}\n\nAnd again, your opinion is very important to me.\n\nbest regards,\nRanier Vilela\n", "msg_date": "Tue, 5 May 2020 15:15:43 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": 
"On 2020-May-05, Ranier Vilela wrote:\n\n> And in that specific case, leaving resources blocked, which perhaps, in my\n> humble opinion, could be released quickly.\n\nI very much doubt that you can measure any difference at all between\nthese two codings of the function.\n\nI agree with the principle you mention, of not holding resources for\nlong if they can be released earlier; but in this case the table_close\ncall occurs across a ReleaseSysCache() call, which is hardly of\nsignificance. It's not like you have to wait for some other\ntransaction, or wait for I/O, or anything like that that would make it\nmeasurable. \n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 May 2020 17:10:57 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "Em ter., 5 de mai. de 2020 às 18:11, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-May-05, Ranier Vilela wrote:\n>\n> > And in that specific case, leaving resources blocked, which perhaps, in\n> my\n> > humble opinion, could be released quickly.\n>\n> I very much doubt that you can measure any difference at all between\n> these two codings of the function.\n>\n> I agree with the principle you mention, of not holding resources for\n> long if they can be released earlier; but in this case the table_close\n> call occurs across a ReleaseSysCache() call, which is hardly of\n> significance. 
It's not like you have to wait for some other\n> transaction, or wait for I/O, or anything like that that would make it\n> measurable.\n>\nLocally it may not be as efficient, but it is a question, to follow the\ngood programming practices, among them, not to break anything, not to\nblock, and if it is not possible, release soon.\nIn at least one case, there will not even be a block, which is an\nimprovement.\n\nAnother version, that could, be, without using ERROR.\n\n+static void\n+DropObjectById(const ObjectAddress *object)\n+{\n+ int cacheId;\n+ Relation rel;\n+ HeapTuple tup;\n+\n+ cacheId = get_object_catcache_oid(object->classId);\n+\n+ /*\n+ * Use the system cache for the oid column, if one exists.\n+ */\n+ if (cacheId >= 0)\n+ {\n+ tup = SearchSysCache1(cacheId, ObjectIdGetDatum(object->objectId));\n+ if (HeapTupleIsValid(tup)) {\n+ rel = table_open(object->classId, RowExclusiveLock);\n+ CatalogTupleDelete(rel, &tup->t_self);\n+ table_close(rel, RowExclusiveLock);\n+ ReleaseSysCache(tup);\n+ }\n+ else\n+ elog(LOG, \"cache lookup failed for %s %u\",\n+ get_object_class_descr(object->classId), object->objectId);\n+ }\n+ else\n+ {\n+ ScanKeyData skey[1];\n+ SysScanDesc scan;\n+\n+ ScanKeyInit(&skey[0],\n+ get_object_attnum_oid(object->classId),\n+ BTEqualStrategyNumber, F_OIDEQ,\n+ ObjectIdGetDatum(object->objectId));\n+\n+ rel = table_open(object->classId, RowExclusiveLock);\n+ scan = systable_beginscan(rel, get_object_oid_index(object->classId),\ntrue,\n+ NULL, 1, skey);\n+\n+ /* we expect exactly one match */\n+ tup = systable_getnext(scan);\n+ if (HeapTupleIsValid(tup))\n+ CatalogTupleDelete(rel, &tup->t_self);\n+ else\n+ elog(LOG, \"could not find tuple for %s %u\",\n+ get_object_class_descr(object->classId), object->objectId);\n+\n+ systable_endscan(scan);\n+ table_close(rel, RowExclusiveLock);\n+ }\n+}\n+\n\nregards,\nRanier Vilela\n", "msg_date": "Tue, 5 May 2020 18:51:18 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "Only as a homework, is this function completely useless?\n\nItemPointerData\nsystable_scan_next(Relation heapRelation,\n Oid indexId,\n bool indexOK,\n Snapshot snapshot,\n int nkeys, ScanKey key)\n{\n SysScanDesc scan;\nHeapTuple htup;\nItemPointerData tid;\n\n scan = systable_beginscan(heapRelation, indexId, indexOK, snapshot,\nnkeys, key);\nhtup = systable_getnext(scan);\nif (HeapTupleIsValid(htup))\n tid = &htup->t_self;\nelse\n tid = NULL;\n systable_endscan(scan);\n\nreturn tid;\n}\n\nregards,\nRanier Vilela\n", "msg_date": "Tue, 5 May 2020 20:21:55 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "On 2020-05-04 20:57, Peter 
Eisentraut wrote:\n> New patch attached. I'll park it until PG14 opens.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Jun 2020 09:54:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Unify drop-by-OID functions" }, { "msg_contents": "On 2020-05-05 18:05, Robert Haas wrote:\n> This reminds me: I think that the issues in\n> http://postgr.es/m/CA+TgmoYaFYRRdRZ94p_Qdt+1oONg6sMOvbkGHKVsFtONCrFkhw@mail.gmail.com\n> should be considered here - we should guarantee that there's a\n> snapshot registered continuously from before the call to\n> SearchSysCache1 until after the call to CatalogTupleDelete. In the\n> systable_beginscan case, we should be fine as long as the\n> systable_endscan follows the CatalogTupleDelete call.\n\nI considered this, but it seems independent of my patch. If there are \nchanges to be made, there are now fewer places to fix up.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Jun 2020 09:55:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Unify drop-by-OID functions" } ]
[ { "msg_contents": "Hi all,\n\nThe first part of src/backend/replication/README lists all the APIs\nusable for a WAL receiver, but these have aged and lost track of most\nchanges that happened over the years. Four functions are listed in\nthe README, with incorrect signatures and many others are missing:\n- walrcv_connect()\n- walrcv_receive()\n- walrcv_send()\n- walrcv_disconnect()\n\nI think that we should clean up that. And as it seems to me that\nnobody really remembers to update this README, I would suggest to\nupdate the first section of the README to refer to walreceiver.h for\ndetails about each function, then move the existing API descriptions\nfrom the README to walreceiver.h (while fixing the incomplete\ndescriptions of course). This way, if a new function is added or if\nan existing function is changed, it is going to be hard to miss an\nupdate of the function descriptions.\n\nAny thoughts?\n--\nMichael", "msg_date": "Sat, 2 May 2020 11:46:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Rotten parts of src/backend/replication/README" }, { "msg_contents": "On Sat, May 2, 2020 at 8:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> The first part of src/backend/replication/README lists all the APIs\n> usable for a WAL receiver, but these have aged and lost track of most\n> changes that happened over the years. Four functions are listed in\n> the README, with incorrect signatures and many others are missing:\n> - walrcv_connect()\n> - walrcv_receive()\n> - walrcv_send()\n> - walrcv_disconnect()\n>\n> I think that we should clean up that.\n\n+1.\n\n> And as it seems to me that\n> nobody really remembers to update this README, I would suggest to\n> update the first section of the README to refer to walreceiver.h for\n> details about each function, then move the existing API descriptions\n> from the README to walreceiver.h (while fixing the incomplete\n> descriptions of course). 
This way, if a new function is added or if\n> an existing function is changed, it is going to be hard to miss an\n> update of the function descriptions.\n>\n\nI think in README we can have a general description of the module and\nmaybe at the broad level how different APIs can help to achieve the\nrequired functionality and then for API description we can refer to\n.h/.c. The detailed description of APIs should be where those APIs\nare defined. The header file can contain some generic description.\nThe detailed description I am referring to is below in the README:\n\"Retrieve any message available without blocking through the\nconnection. If a message was successfully read, returns its length.\nIf the connection is closed, returns -1. Otherwise returns 0 to\nindicate that no data is available, and sets *wait_fd to a socket\ndescriptor which can be waited on before trying again. On success, a\npointer to the message payload is stored in *buffer. The returned\nbuffer is valid until the next call to walrcv_* functions, and the\ncaller should not attempt to free it.\"\n\nI think having such a description near the actual definition helps in\nkeeping it updated whenever we change the function.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 2 May 2020 16:54:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rotten parts of src/backend/replication/README" }, { "msg_contents": "On Sat, May 02, 2020 at 04:54:32PM +0530, Amit Kapila wrote:\n> I think in README we can have a general description of the module and\n> maybe at the broad level how different APIs can help to achieve the\n> required functionality and then for API description we can refer to\n> .h/.c. The detailed description of APIs should be where those APIs\n> are defined. 
The header file can contain some generic description.\n> The detailed description I am referring to is below in the README:\n> \"Retrieve any message available without blocking through the\n> connection. If a message was successfully read, returns its length.\n> If the connection is closed, returns -1. Otherwise returns 0 to\n> indicate that no data is available, and sets *wait_fd to a socket\n> descriptor which can be waited on before trying again. On success, a\n> pointer to the message payload is stored in *buffer. The returned\n> buffer is valid until the next call to walrcv_* functions, and the\n> caller should not attempt to free it.\"\n> \n> I think having such a description near the actual definition helps in\n> keeping it updated whenever we change the function.\n\nYeah. The years have visibly proved that the README had updates when\nit came to the general descriptions of the libpqwalreceiver interface,\nbut that we had better consolidate the header to give some\ndocumentation to whoever plays with this interface. Attached is a\npatch to address all that, where I simplified the README and added\nsome description to all the callbacks. Thoughts are welcome. I'll\nadd that to the next CF. Now I don't see any actual problems in\ngetting that on HEAD before v13 forks. But let's gather more opinions\nfirst.\n--\nMichael", "msg_date": "Mon, 4 May 2020 14:50:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rotten parts of src/backend/replication/README" }, { "msg_contents": "On 2020-05-04 07:50, Michael Paquier wrote:\n> Yeah. The years have visibly proved that the README had updates when\n> it came to the general descriptions of the libpqwalreceiver interface,\n> but that we had better consolidate the header to give some\n> documentation to whoever plays with this interface. Attached is a\n> patch to address all that, where I simplified the README and added\n> some description to all the callbacks. 
Thoughts are welcome. I'll\n> add that to the next CF. Now I don't see any actual problems in\n> getting that on HEAD before v13 forks. But let's gather more opinions\n> first.\n\nThis patch looks good to me.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 10:11:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Rotten parts of src/backend/replication/README" }, { "msg_contents": "On Tue, Jun 30, 2020 at 10:11:04AM +0200, Peter Eisentraut wrote:\n> On 2020-05-04 07:50, Michael Paquier wrote:\n>> Yeah. The years have visibly proved that the README had updates when\n>> it came to the general descriptions of the libpqwalreceiver interface,\n>> but that we had better consolidate the header to give some\n>> documentation to whoever plays with this interface. Attached is a\n>> patch to address all that, where I simplified the README and added\n>> some description to all the callbacks. Thoughts are welcome. I'll\n>> add that to the next CF. Now I don't see any actual problems in\n>> getting that on HEAD before v13 forks. But let's gather more opinions\n>> first.\n> \n> This patch looks good to me.\n\nThanks for the review, Peter. After an extra read, the description\nof walrcv_create_slot_fn was not complete (missed the end of the\nsentence to say that NULL is returned for a physical slot) and had a\ngrammar mistake. So I have fixed this part, and applied the patch on\nHEAD. Perhaps things could be improved further more, so if anybody\nhas any suggestion please feel free.\n--\nMichael", "msg_date": "Thu, 2 Jul 2020 14:03:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rotten parts of src/backend/replication/README" } ]
[ { "msg_contents": "Hi,\n\nover the past couple of weeks I've been running various benchmarks with\nthe intent to demonstrate how the performance evolved since ~8.3. In\nmost tests we're doing pretty good, but I've noticed that full-text\nsearch using GiST indexes is an annoying exception - it's getting slower\nand slower since ~9.1.\n\nI've been investigating it a bit, but I still don't have any idea what\nmight be causing this. So perhaps someone else might have an idea ...\n\n\nThe benchmark is not anything special - it simply loads all our mailing\nlist archives into a table, computes tsvectors from message bodies, and\nthen an index on the tsvectors. And then it runs ~33k tsqueries that\npeople were actually searching at our archives (I got this years ago\nfrom someone, I don't recall the details). IMO it's a fairly realistic\nbenchmark, not something entirely made up.\n\nAttached is a couple of charts illustrating the regression.\n\nThe machine is not particularly beefy (4 cores, 8GB of RAM, SSDs) but\nit's the same for all the tests. The postgresql.conf was modified a bit,\nnothing beyond basic typical tuning.\n\n\n1) gist-91-vs-13-queries-per-second.png\n\nThis shows the throughput since 8.3, where we've been doing ~314 tps,\nwhile now we're doing only about ~200 tps. In 9.1 we did about 270 tps,\nand that's the number I'll use for comparisons because queries against\n9.0 and older versions are returning fewer results, so and the index on\nmessage body is much lower. So clearly something changed in 9.1, either\nin how we compute the tsvector or something. Since 9.1 it seems pretty\nstable, though. The main drop seems to happened between 9.2 and 9.3.\n\nNote: All the durations (in all charts) are in milliseconds.\n\n\n2) gist-large-91-vs-13.png\n\nThis plots durations of all 33k queries, comparing duration on 9.1\n(x-axis) to 13 (y-axis). The diagonal means same duration on both\nversions, anything above it means the query got slower. It's pretty\nclear the queries are consistently slower, but the chart is log-scale so\nit's not obvious what the slowdown is.\n\n\n3) gist-large-91-vs-13-slowdown.png\n\nThis is a different view on the query durations, plotting 9.1 duration\non x-axis vs. (duration on 13 / duration on 9.1) on y-axis. So for\nexample 1.5 means the query on 13 takes about 1.5x longer than on 9.1.\n\nThe chart seems to say the slowdown is pretty consistent, about 50% for\nthe shortest queries and then gradually improving for longer ones. This\nseems far too consistent to be noise, and the timings are actually\naverages of 5 runs for each query (and it seems quite consistent).\n\n\n\nI doubt this is merely due to changes in binary layout, or something to\ndo with compiler. The difference is a bit too high for that (the drop\nfrom 270 to 200 is about 25%), and it seems to affect all versions since\n9.3 about the same. Everything was built using the same gcc version\n(9.2.0). Only --enable-debug was used, everything else is the same.\nThere seem to be some minor variation in final CFLAGS, though.\n\nI've done some basic profiling using perf, but I don't see anything\nobvious in the profiles (attached).\n\nI kept the data directories, so I can do additional test if needed.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 3 May 2020 00:36:49 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "full-text search with GiST indexes seems to be getting slower since\n 9.1" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> over the past couple of weeks I've been running various benchmarks with\n> the intent to demonstrate how the performance evolved since ~8.3. In\n> most tests we're doing pretty good, but I've noticed that full-text\n> search using GiST indexes is an annoying exception - it's getting slower\n> and slower since ~9.1.\n\nHmmm... given that TS_execute is near the top, I wonder if this has\nanything to do with the several times we've mostly-rewritten that\nto make it less incorrect for phrase search cases. But that came in\nin 9.6 IIRC, and the fixes have only been back-patched that far.\nDon't know what might've changed in 9.3.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 May 2020 23:09:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: full-text search with GiST indexes seems to be getting slower\n since 9.1" } ]
[ { "msg_contents": "1. Warning: the right operand to | always evaluates to 0\n\nsrc/include/storage/bufpage.h\n#define PAI_OVERWRITE (1 << 0)\n#define PAI_IS_HEAP (1 << 1)\n\n#define PageAddItem(page, item, size, offsetNumber, overwrite, is_heap) \\\nPageAddItemExtended(page, item, size, offsetNumber, \\\n((overwrite) ? PAI_OVERWRITE : 0) | \\\n((is_heap) ? PAI_IS_HEAP : 0))\n\nTypo | should be ||:\n((overwrite) ? PAI_OVERWRITE : 0) || \\\n((is_heap) ? PAI_IS_HEAP : 0))\n\nregards,\nRanier Vilela", "msg_date": "Sun, 3 May 2020 17:05:54 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[REPORT] Static analys warnings" }, { "msg_contents": "On 2020-05-03 17:05:54 -0300, Ranier Vilela wrote:\n> 1. Warning: the right operand to | always evaluates to 0\n> \n> src/include/storage/bufpage.h\n> #define PAI_OVERWRITE (1 << 0)\n> #define PAI_IS_HEAP (1 << 1)\n> \n> #define PageAddItem(page, item, size, offsetNumber, overwrite, is_heap) \\\n> PageAddItemExtended(page, item, size, offsetNumber, \\\n> ((overwrite) ? PAI_OVERWRITE : 0) | \\\n> ((is_heap) ? PAI_IS_HEAP : 0))\n> \n> Typo | should be ||:\n> ((overwrite) ? PAI_OVERWRITE : 0) || \\\n> ((is_heap) ? PAI_IS_HEAP : 0))\n\nDefinitely not. PageAddItemExtended's flags argument is not a boolean,\nit's a flag bitmask. It'd entirely break the semantics to make the\nchange you suggest.\n\nNor do I know what this warning is about, because clearly in the general\ncase the right argument to the | does not generally evaluate to 0. I\nguess this about a particular use of the macro (with a constant\nargument), rather than the macro itself.\n\n\n", "msg_date": "Sun, 3 May 2020 14:20:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [REPORT] Static analys warnings" }, { "msg_contents": "Fix possible overflow when converting, possible negative number to uint16.\n\npostingoff can be -1,when converts to uint16, overflow can raise.\nOtherwise, truncation can be occurs, losing precision, from int (31 bits)\nto uint16 (15 bits)\nThere is a little confusion in the parameters of some functions in this\nfile, postigoff is declared as int, other declared as uint16.\n\nsrc/backend/access/nbtree/nbtinsert.c\nstatic void _bt_insertonpg(Relation rel, BTScanInsert itup_key,\n Buffer buf,\n Buffer cbuf,\n BTStack stack,\n IndexTuple itup,\n Size itemsz,\n OffsetNumber newitemoff,\n int postingoff, // INT\n bool split_only_page);\nstatic Buffer _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf,\nBuffer cbuf, OffsetNumber newitemoff, Size newitemsz,\nIndexTuple newitem, IndexTuple orignewitem,\nIndexTuple nposting, uint16 postingoff); // UINT16\n\nregards,\nRanier Vilela", "msg_date": "Mon, 4 May 2020 19:49:57 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [REPORT] Static analys warnings" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\nI believe I've discovered a race condition between the startup and\r\ncheckpointer processes that can cause a CRC mismatch in the pg_control\r\nfile. If a cluster crashes at the right time, the following error\r\nappears when you attempt to restart it:\r\n\r\n FATAL: incorrect checksum in control file\r\n\r\nThis appears to be caused by some code paths in xlog_redo() that\r\nupdate ControlFile without taking the ControlFileLock. The attached\r\npatch seems to be sufficient to prevent the CRC mismatch in the\r\ncontrol file, but perhaps this is a symptom of a bigger problem with\r\nconcurrent modifications of ControlFile->checkPointCopy.nextFullXid.\r\n\r\nNathan", "msg_date": "Mon, 4 May 2020 17:44:21 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "race condition when writing pg_control" }, { "msg_contents": "On Tue, May 5, 2020 at 5:53 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> I believe I've discovered a race condition between the startup and\n> checkpointer processes that can cause a CRC mismatch in the pg_control\n> file. If a cluster crashes at the right time, the following error\n> appears when you attempt to restart it:\n>\n> FATAL: incorrect checksum in control file\n>\n> This appears to be caused by some code paths in xlog_redo() that\n> update ControlFile without taking the ControlFileLock. The attached\n> patch seems to be sufficient to prevent the CRC mismatch in the\n> control file, but perhaps this is a symptom of a bigger problem with\n> concurrent modifications of ControlFile->checkPointCopy.nextFullXid.\n\nThis does indeed look pretty dodgy. CreateRestartPoint() running in\nthe checkpointer does UpdateControlFile() to compute a checksum and\nwrite it out, but xlog_redo() processing\nXLOG_CHECKPOINT_{ONLINE,SHUTDOWN} modifies that data without\ninterlocking. 
It looks like the ancestors of that line were there\nsince 35af5422f64 (2006), but back then RecoveryRestartPoint() ran\nUpdateControLFile() directly in the startup process (immediately after\nthat update), so no interlocking problem. Then in cdd46c76548 (2009),\nRecoveryRestartPoint() was split up so that CreateRestartPoint() ran\nin another process.\n\n\n", "msg_date": "Tue, 5 May 2020 09:51:20 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Tue, May 5, 2020 at 9:51 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, May 5, 2020 at 5:53 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > I believe I've discovered a race condition between the startup and\n> > checkpointer processes that can cause a CRC mismatch in the pg_control\n> > file. If a cluster crashes at the right time, the following error\n> > appears when you attempt to restart it:\n> >\n> > FATAL: incorrect checksum in control file\n> >\n> > This appears to be caused by some code paths in xlog_redo() that\n> > update ControlFile without taking the ControlFileLock. The attached\n> > patch seems to be sufficient to prevent the CRC mismatch in the\n> > control file, but perhaps this is a symptom of a bigger problem with\n> > concurrent modifications of ControlFile->checkPointCopy.nextFullXid.\n>\n> This does indeed look pretty dodgy. CreateRestartPoint() running in\n> the checkpointer does UpdateControlFile() to compute a checksum and\n> write it out, but xlog_redo() processing\n> XLOG_CHECKPOINT_{ONLINE,SHUTDOWN} modifies that data without\n> interlocking. It looks like the ancestors of that line were there\n> since 35af5422f64 (2006), but back then RecoveryRestartPoint() ran\n> UpdateControLFile() directly in the startup process (immediately after\n> that update), so no interlocking problem. 
Then in cdd46c76548 (2009),\n> RecoveryRestartPoint() was split up so that CreateRestartPoint() ran\n> in another process.\n\nHere's a version with a commit message added. I'll push this to all\nreleases in a day or two if there are no objections.", "msg_date": "Fri, 22 May 2020 16:51:47 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "\n\nOn 2020/05/22 13:51, Thomas Munro wrote:\n> On Tue, May 5, 2020 at 9:51 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Tue, May 5, 2020 at 5:53 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>>> I believe I've discovered a race condition between the startup and\n>>> checkpointer processes that can cause a CRC mismatch in the pg_control\n>>> file. If a cluster crashes at the right time, the following error\n>>> appears when you attempt to restart it:\n>>>\n>>> FATAL: incorrect checksum in control file\n>>>\n>>> This appears to be caused by some code paths in xlog_redo() that\n>>> update ControlFile without taking the ControlFileLock. The attached\n>>> patch seems to be sufficient to prevent the CRC mismatch in the\n>>> control file, but perhaps this is a symptom of a bigger problem with\n>>> concurrent modifications of ControlFile->checkPointCopy.nextFullXid.\n>>\n>> This does indeed look pretty dodgy. CreateRestartPoint() running in\n>> the checkpointer does UpdateControlFile() to compute a checksum and\n>> write it out, but xlog_redo() processing\n>> XLOG_CHECKPOINT_{ONLINE,SHUTDOWN} modifies that data without\n>> interlocking. It looks like the ancestors of that line were there\n>> since 35af5422f64 (2006), but back then RecoveryRestartPoint() ran\n>> UpdateControLFile() directly in the startup process (immediately after\n>> that update), so no interlocking problem. 
Then in cdd46c76548 (2009),\n>> RecoveryRestartPoint() was split up so that CreateRestartPoint() ran\n>> in another process.\n> \n> Here's a version with a commit message added. I'll push this to all\n> releases in a day or two if there are no objections.\n\n+1 to push the patch.\n\nPer my quick check, XLogReportParameters() seems to have the similar issue,\ni.e., it updates the control file without taking ControlFileLock.\nMaybe we should fix this at the same time?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 23 May 2020 01:00:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Sat, May 23, 2020 at 01:00:17AM +0900, Fujii Masao wrote:\n> Per my quick check, XLogReportParameters() seems to have the similar issue,\n> i.e., it updates the control file without taking ControlFileLock.\n> Maybe we should fix this at the same time?\n\nYeah. It also checks the control file values, implying that we should\nhave LW_SHARED taken at least at the beginning, but this lock cannot\nbe upgraded we need LW_EXCLUSIVE the whole time. I am wondering if we\nshould check with an assert if ControlFileLock is taken when going\nthrough UpdateControlFile(). We have one code path at the beginning\nof redo where we don't need a lock close to the backup_label file\nchecks, but we could just pass down a boolean flag to the routine to\nhandle that case. 
Another good thing in having an assert is that any\nnew caller of UpdateControlFile() would need to think about the need\nof a lock.\n--\nMichael", "msg_date": "Sat, 23 May 2020 14:39:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On 5/21/20, 9:52 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\r\n> Here's a version with a commit message added. I'll push this to all\r\n> releases in a day or two if there are no objections.\r\n\r\nLooks good to me. Thanks!\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 26 May 2020 15:45:34 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On 5/22/20, 10:40 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Sat, May 23, 2020 at 01:00:17AM +0900, Fujii Masao wrote:\r\n>> Per my quick check, XLogReportParameters() seems to have the similar issue,\r\n>> i.e., it updates the control file without taking ControlFileLock.\r\n>> Maybe we should fix this at the same time?\r\n>\r\n> Yeah. It also checks the control file values, implying that we should\r\n> have LW_SHARED taken at least at the beginning, but this lock cannot\r\n> be upgraded we need LW_EXCLUSIVE the whole time. I am wondering if we\r\n> should check with an assert if ControlFileLock is taken when going\r\n> through UpdateControlFile(). We have one code path at the beginning\r\n> of redo where we don't need a lock close to the backup_label file\r\n> checks, but we could just pass down a boolean flag to the routine to\r\n> handle that case. Another good thing in having an assert is that any\r\n> new caller of UpdateControlFile() would need to think about the need\r\n> of a lock.\r\n\r\nWhile an assertion in UpdateControlFile() would not have helped us\r\ncatch the problem I initially reported, it does seem worthwhile to add\r\nit. 
I have attached a patch that adds this assertion and also\r\nattempts to fix XLogReportParameters(). Since there is only one place\r\nwhere we feel it is safe to call UpdateControlFile() without a lock, I\r\njust changed it to take the lock. I don't think this adds any sort of\r\nsignificant contention risk, and IMO it is a bit cleaner than the\r\nboolean flag.\r\n\r\nFor the XLogReportParameters() fix, I simply added an exclusive lock\r\nacquisition for the portion that updates the values in shared memory\r\nand calls UpdateControlFile(). IIUC the first part of this function\r\nthat accesses several ControlFile values should be safe, as none of\r\nthem can be updated after server start.\r\n\r\nNathan", "msg_date": "Tue, 26 May 2020 19:30:54 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Tue, May 26, 2020 at 07:30:54PM +0000, Bossart, Nathan wrote:\n> While an assertion in UpdateControlFile() would not have helped us\n> catch the problem I initially reported, it does seem worthwhile to add\n> it. I have attached a patch that adds this assertion and also\n> attempts to fix XLogReportParameters(). Since there is only one place\n> where we feel it is safe to call UpdateControlFile() without a lock, I\n> just changed it to take the lock. I don't think this adds any sort of\n> significant contention risk, and IMO it is a bit cleaner than the\n> boolean flag.\n\nLet's see what Fujii-san and Thomas think about that. I'd rather\navoid taking a lock here because we don't need it and because it makes\nthings IMO confusing with the beginning of StartupXLOG() where a lot\nof the fields are read, even if we go without this extra assertion.\n\n> For the XLogReportParameters() fix, I simply added an exclusive lock\n> acquisition for the portion that updates the values in shared memory\n> and calls UpdateControlFile(). 
IIUC the first part of this function\n> that accesses several ControlFile values should be safe, as none of\n> them can be updated after server start.\n\nThey can get updated when replaying a XLOG_PARAMETER_CHANGE record.\nBut you are right as all of this happens in the startup process, so\nyour patch looks right to me here.\n--\nMichael", "msg_date": "Wed, 27 May 2020 16:10:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "\n\nOn 2020/05/27 16:10, Michael Paquier wrote:\n> On Tue, May 26, 2020 at 07:30:54PM +0000, Bossart, Nathan wrote:\n>> While an assertion in UpdateControlFile() would not have helped us\n>> catch the problem I initially reported, it does seem worthwhile to add\n>> it. I have attached a patch that adds this assertion and also\n>> attempts to fix XLogReportParameters(). Since there is only one place\n>> where we feel it is safe to call UpdateControlFile() without a lock, I\n>> just changed it to take the lock. I don't think this adds any sort of\n>> significant contention risk, and IMO it is a bit cleaner than the\n>> boolean flag.\n> \n> Let's see what Fujii-san and Thomas think about that. I'd rather\n> avoid taking a lock here because we don't need it and because it makes\n> things IMO confusing with the beginning of StartupXLOG() where a lot\n> of the fields are read, even if we go without this extra assertion.\n\nI have no strong opinion about this, but I tend to agree with Michael here.\n \n>> For the XLogReportParameters() fix, I simply added an exclusive lock\n>> acquisition for the portion that updates the values in shared memory\n>> and calls UpdateControlFile(). 
IIUC the first part of this function\n>> that accesses several ControlFile values should be safe, as none of\n>> them can be updated after server start.\n> \n> They can get updated when replaying a XLOG_PARAMETER_CHANGE record.\n> But you are right as all of this happens in the startup process, so\n> your patch looks right to me here.\n\nLGTM.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 29 May 2020 16:24:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On 5/29/20, 12:24 AM, \"Fujii Masao\" <masao.fujii@oss.nttdata.com> wrote:\r\n> On 2020/05/27 16:10, Michael Paquier wrote:\r\n>> On Tue, May 26, 2020 at 07:30:54PM +0000, Bossart, Nathan wrote:\r\n>>> While an assertion in UpdateControlFile() would not have helped us\r\n>>> catch the problem I initially reported, it does seem worthwhile to add\r\n>>> it. I have attached a patch that adds this assertion and also\r\n>>> attempts to fix XLogReportParameters(). Since there is only one place\r\n>>> where we feel it is safe to call UpdateControlFile() without a lock, I\r\n>>> just changed it to take the lock. I don't think this adds any sort of\r\n>>> significant contention risk, and IMO it is a bit cleaner than the\r\n>>> boolean flag.\r\n>>\r\n>> Let's see what Fujii-san and Thomas think about that. 
I'd rather\r\n>> avoid taking a lock here because we don't need it and because it makes\r\n>> things IMO confusing with the beginning of StartupXLOG() where a lot\r\n>> of the fields are read, even if we go without this extra assertion.\r\n>\r\n> I have no strong opinion about this, but I tend to agree with Michael here.\r\n>\r\n>>> For the XLogReportParameters() fix, I simply added an exclusive lock\r\n>>> acquisition for the portion that updates the values in shared memory\r\n>>> and calls UpdateControlFile(). IIUC the first part of this function\r\n>>> that accesses several ControlFile values should be safe, as none of\r\n>>> them can be updated after server start.\r\n>>\r\n>> They can get updated when replaying a XLOG_PARAMETER_CHANGE record.\r\n>> But you are right as all of this happens in the startup process, so\r\n>> your patch looks right to me here.\r\n>\r\n> LGTM.\r\n\r\nThanks for the feedback. I've attached a new set of patches.\r\n\r\nNathan", "msg_date": "Sun, 31 May 2020 21:11:35 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Sun, May 31, 2020 at 09:11:35PM +0000, Bossart, Nathan wrote:\n> Thanks for the feedback. I've attached a new set of patches.\n\nThanks for splitting the set. 0001 and 0002 are the minimum set for\nback-patching, and it would be better to merge them together. 0003 is\ndebatable and not an actual bug fix, so I would refrain from doing a\nbackpatch. 
It does not seem that there is a strong consensus in favor\nof 0003 either.\n\nThomas, are you planning to look at this patch set?\n--\nMichael", "msg_date": "Tue, 2 Jun 2020 14:24:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Tue, Jun 2, 2020 at 5:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sun, May 31, 2020 at 09:11:35PM +0000, Bossart, Nathan wrote:\n> > Thanks for the feedback. I've attached a new set of patches.\n>\n> Thanks for splitting the set. 0001 and 0002 are the minimum set for\n> back-patching, and it would be better to merge them together. 0003 is\n> debatable and not an actual bug fix, so I would refrain from doing a\n> backpatch. It does not seem that there is a strong consensus in favor\n> of 0003 either.\n>\n> Thomas, are you planning to look at this patch set?\n\nSorry for my radio silence, I got tangled up with a couple of\nconferences. I'm planning to look at 0001 and 0002 shortly.\n\n\n", "msg_date": "Wed, 3 Jun 2020 10:56:13 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Wed, Jun 03, 2020 at 10:56:13AM +1200, Thomas Munro wrote:\n> Sorry for my radio silence, I got tangled up with a couple of\n> conferences. I'm planning to look at 0001 and 0002 shortly.\n\nThanks!\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 11:02:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Wed, Jun 3, 2020 at 2:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jun 03, 2020 at 10:56:13AM +1200, Thomas Munro wrote:\n> > Sorry for my radio silence, I got tangled up with a couple of\n> > conferences. 
I'm planning to look at 0001 and 0002 shortly.\n>\n> Thanks!\n\nI pushed 0001 and 0002, squashed into one commit. I'm not sure about\n0003. If we're going to do that, wouldn't it be better to just\nacquire the lock in that one extra place in StartupXLOG(), rather than\nintroducing the extra parameter?\n\n\n", "msg_date": "Mon, 8 Jun 2020 14:48:55 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On 6/7/20, 7:50 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\r\n> I pushed 0001 and 0002, squashed into one commit. I'm not sure about\r\n> 0003. If we're going to do that, wouldn't it be better to just\r\n> acquire the lock in that one extra place in StartupXLOG(), rather than\r\n> introducing the extra parameter?\r\n\r\nThanks! The approach for 0003 was discussed a bit upthread [0]. I do\r\nnot have a strong opinion, but I lean towards just acquiring the lock.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/20200527071053.GD103662%40paquier.xyz\r\n\r\n", "msg_date": "Mon, 8 Jun 2020 03:25:31 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Mon, Jun 08, 2020 at 03:25:31AM +0000, Bossart, Nathan wrote:\n> On 6/7/20, 7:50 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\n>> I pushed 0001 and 0002, squashed into one commit. I'm not sure about\n>> 0003. If we're going to do that, wouldn't it be better to just\n>> acquire the lock in that one extra place in StartupXLOG(), rather than\n>> introducing the extra parameter?\n> \n> Thanks! The approach for 0003 was discussed a bit upthread [0]. 
I do\n> not have a strong opinion, but I lean towards just acquiring the lock.\n\nFujii-san has provided an answer upthread, that can maybe translated\nas a +0.3~0.4:\nhttps://www.postgresql.org/message-id/fc796148-7d63-47bb-e91d-e09b62a502e9@oss.nttdata.com\n\nFWIW, I'd rather not take the lock as that's not necessary and just\nadd the parameter if I were to do it. Now I would be fine as well to\njust take the lock if you decide that's more simple, as long as we add\nthis new assertion as a safety net for future changes.\n--\nMichael", "msg_date": "Mon, 8 Jun 2020 15:26:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Fri, May 29, 2020 at 12:54 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/05/27 16:10, Michael Paquier wrote:\n> > On Tue, May 26, 2020 at 07:30:54PM +0000, Bossart, Nathan wrote:\n> >> While an assertion in UpdateControlFile() would not have helped us\n> >> catch the problem I initially reported, it does seem worthwhile to add\n> >> it. I have attached a patch that adds this assertion and also\n> >> attempts to fix XLogReportParameters(). Since there is only one place\n> >> where we feel it is safe to call UpdateControlFile() without a lock, I\n> >> just changed it to take the lock. I don't think this adds any sort of\n> >> significant contention risk, and IMO it is a bit cleaner than the\n> >> boolean flag.\n> >\n> > Let's see what Fujii-san and Thomas think about that. 
I'd rather\n> > avoid taking a lock here because we don't need it and because it makes\n> > things IMO confusing with the beginning of StartupXLOG() where a lot\n> > of the fields are read, even if we go without this extra assertion.\n>\n> I have no strong opinion about this, but I tend to agree with Michael here.\n>\n>\nI too don't have a strong opinion about this either, but I like Nathan's\napproach more, just take the lock in the startup process as well for the\nsimplicity if that is not hurting much. I think, apart from the startup\nprocess we\nhave to take the lock to update the control file, then having separate\ntreatment\nfor the startup process looks confusing to me, IMHO.\n\nRegards,\nAmul\n\nOn Fri, May 29, 2020 at 12:54 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\nOn 2020/05/27 16:10, Michael Paquier wrote:\n> On Tue, May 26, 2020 at 07:30:54PM +0000, Bossart, Nathan wrote:\n>> While an assertion in UpdateControlFile() would not have helped us\n>> catch the problem I initially reported, it does seem worthwhile to add\n>> it.  I have attached a patch that adds this assertion and also\n>> attempts to fix XLogReportParameters().  Since there is only one place\n>> where we feel it is safe to call UpdateControlFile() without a lock, I\n>> just changed it to take the lock.  I don't think this adds any sort of\n>> significant contention risk, and IMO it is a bit cleaner than the\n>> boolean flag.\n> \n> Let's see what Fujii-san and Thomas think about that.  I'd rather\n> avoid taking a lock here because we don't need it and because it makes\n> things IMO confusing with the beginning of StartupXLOG() where a lot\n> of the fields are read, even if we go without this extra assertion.\n\nI have no strong opinion about this, but I tend to agree with Michael here.\nI too don't have a strong opinion about this either, but I like Nathan'sapproach more, just take the lock in the startup process as well for thesimplicity if that is not hurting much. 
I think, apart from the startup process wehave to take the lock to update the control file, then having separate treatmentfor the startup process looks confusing to me, IMHO.Regards,Amul", "msg_date": "Mon, 8 Jun 2020 16:14:53 +0530", "msg_from": "amul sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Sun, Jun 7, 2020 at 10:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jun 3, 2020 at 2:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Wed, Jun 03, 2020 at 10:56:13AM +1200, Thomas Munro wrote:\n> > > Sorry for my radio silence, I got tangled up with a couple of\n> > > conferences. I'm planning to look at 0001 and 0002 shortly.\n> >\n> > Thanks!\n>\n> I pushed 0001 and 0002, squashed into one commit. I'm not sure about\n> 0003. If we're going to do that, wouldn't it be better to just\n> acquire the lock in that one extra place in StartupXLOG(), rather than\n> introducing the extra parameter?\n\nToday, after committing a3e6c6f, I saw recovery/018_wal_optimize.pl\nfail and see this message in the replica log [2].\n\n2024-05-16 15:12:22.821 GMT [5440][not initialized] FATAL: incorrect\nchecksum in control file\n\nI'm pretty sure it's not related to my commit. 
So, I was looking for\nexisting reports of this error message.\n\nIt's a long shot, since 0001 and 0002 were already pushed, but this is\nthe only recent report I could find of \"FATAL: incorrect checksum in\ncontrol file\" in pgsql-hackers or bugs archives.\n\nI do see this thread from 2016 [3] which might be relevant because the\nreported bug was also on Windows.\n\n- Melanie\n\n[1] https://cirrus-ci.com/task/4626725689098240\n[2] https://api.cirrus-ci.com/v1/artifact/task/4626725689098240/testrun/build/testrun/recovery/018_wal_optimize/log/018_wal_optimize_node_replica.log\n[3] https://www.postgresql.org/message-id/flat/CAEepm%3D0hh_Dvd2Q%2BfcjYpkVzSoNX2%2Bf167cYu5nwu%3Dqh5HZhJw%40mail.gmail.com#042e9ec55c782370ab49c3a4ef254f4a\n\n\n", "msg_date": "Thu, 16 May 2024 12:19:22 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Thu, May 16, 2024 at 12:19:22PM -0400, Melanie Plageman wrote:\n> Today, after committing a3e6c6f, I saw recovery/018_wal_optimize.pl\n> fail and see this message in the replica log [2].\n> \n> 2024-05-16 15:12:22.821 GMT [5440][not initialized] FATAL: incorrect\n> checksum in control file\n> \n> I'm pretty sure it's not related to my commit. 
So, I was looking for\n> existing reports of this error message.\n\nYeah, I don't see how it could be related.\n\n> It's a long shot, since 0001 and 0002 were already pushed, but this is\n> the only recent report I could find of \"FATAL: incorrect checksum in\n> control file\" in pgsql-hackers or bugs archives.\n> \n> I do see this thread from 2016 [3] which might be relevant because the\n> reported bug was also on Windows.\n\nI suspect it will be difficult to investigate this one too much further\nunless we can track down a copy of the control file with the bad checksum.\nOther than searching for any new code that isn't doing the appropriate\nlocking, maybe we could search the buildfarm for any other occurrences. I\nalso seem some threads concerning whether the way we are reading/writing\nthe control file is atomic.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 16 May 2024 13:35:58 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I suspect it will be difficult to investigate this one too much further\n> unless we can track down a copy of the control file with the bad checksum.\n> Other than searching for any new code that isn't doing the appropriate\n> locking, maybe we could search the buildfarm for any other occurrences. I\n> also seem some threads concerning whether the way we are reading/writing\n> the control file is atomic.\n\nThe intention was certainly always that it be atomic. 
If it isn't\nwe have got *big* trouble.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 14:50:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "Hi,\n\nOn 2024-05-16 14:50:50 -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > I suspect it will be difficult to investigate this one too much further\n> > unless we can track down a copy of the control file with the bad checksum.\n> > Other than searching for any new code that isn't doing the appropriate\n> > locking, maybe we could search the buildfarm for any other occurrences. I\n> > also seem some threads concerning whether the way we are reading/writing\n> > the control file is atomic.\n> \n> The intention was certainly always that it be atomic. If it isn't\n> we have got *big* trouble.\n\nWe unfortunately do *know* that on several systems e.g. basebackup can read a\npartially written control file, while the control file is being\nupdated. Thomas addressed this partially for frontend code, but not yet for\nbackend code. See\nhttps://postgr.es/m/CA%2BhUKGLhLGCV67NuTiE%3Detdcw5ChMkYgpgFsa9PtrXm-984FYA%40mail.gmail.com\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 May 2024 11:58:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2024-05-16 14:50:50 -0400, Tom Lane wrote:\n>> The intention was certainly always that it be atomic. If it isn't\n>> we have got *big* trouble.\n\n> We unfortunately do *know* that on several systems e.g. basebackup can read a\n> partially written control file, while the control file is being\n> updated.\n\nYeah, but can't we just retry that if we get a bad checksum?\n\nWhat had better be atomic is the write to disk. 
Systems that can't\nmanage POSIX semantics for concurrent reads and writes are annoying,\nbut not fatal ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 15:01:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "Hi,\n\nOn 2024-05-16 15:01:31 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2024-05-16 14:50:50 -0400, Tom Lane wrote:\n> >> The intention was certainly always that it be atomic. If it isn't\n> >> we have got *big* trouble.\n> \n> > We unfortunately do *know* that on several systems e.g. basebackup can read a\n> > partially written control file, while the control file is being\n> > updated.\n> \n> Yeah, but can't we just retry that if we get a bad checksum?\n\nRetry what/where precisely? We can avoid the issue in basebackup.c by taking\nControlFileLock at the right moment - but that doesn't address\npg_start/stop_backup based backups. Hence the patch in the referenced thread\nmoving to replacing the control file by atomic-rename if there are base\nbackups ongoing.\n\n\n> What had better be atomic is the write to disk.\n\nThat is still true to my knowledge.\n\n\n> Systems that can't manage POSIX semantics for concurrent reads and writes\n> are annoying, but not fatal ...\n\nI think part of the issue is that people don't agree on what posix says\nabout a read that's concurrent to a write... See e.g.\nhttps://utcc.utoronto.ca/~cks/space/blog/unix/WriteNotVeryAtomic\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 May 2024 12:07:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "The specific problem here is that LocalProcessControlFile() runs in\nevery launched child for EXEC_BACKEND builds.
Windows uses\nEXEC_BACKEND, and Windows' NTFS file system is one of the two file\nsystems known to this list to have the concurrent read/write data\nmashing problem (the other being ext4).\n\n\n", "msg_date": "Fri, 17 May 2024 16:46:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Fri, May 17, 2024 at 4:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The specific problem here is that LocalProcessControlFile() runs in\n> every launched child for EXEC_BACKEND builds. Windows uses\n> EXEC_BACKEND, and Windows' NTFS file system is one of the two file\n> systems known to this list to have the concurrent read/write data\n> mashing problem (the other being ext4).\n\nPhngh... this is surprisingly difficult to fix.\n\nThings that don't work: We \"just\" need to acquire ControlFileLock\nwhile reading the file or examining the object in shared memory, or\nget a copy of it, passed through the EXEC_BACKEND BackendParameters\nthat was acquired while holding the lock, but the current location of\nthis code in child startup is too early to use LWLocks, and the\npostmaster can't acquire locks either so it can't even safely take a\ncopy to pass on. You could reorder startup so that we are allowed to\nacquire LWLocks in children at that point, but then you'd need to\nconvince yourself that there is no danger of breaking some ordering\nrequirement in external preload libraries, and figure out what to do\nabout children that don't even attach to shared memory. Maybe that's\npossible, but that doesn't sound like a good idea to back-patch.\n\nFirst idea I've come up with to avoid all of that: pass a copy of\nthe \"proto-controlfile\", to coin a term for the one read early in\npostmaster startup by LocalProcessControlFile().
As far as I know,\nthe only reason we need it is to suck some settings out of it that\ndon't change while a cluster is running (mostly can't change after\ninitdb, and checksums can only be {en,dis}abled while down). Right?\nChildren can just \"import\" that sucker instead of calling\nLocalProcessControlFile() to figure out the size of WAL segments yada\nyada, I think? Later they will attach to the real one in shared\nmemory for all future purposes, once normal interlocking is allowed.\n\nI dunno. Draft patch attached. Better plans welcome. This passes CI\non Linux systems afflicted by EXEC_BACKEND, and Windows. Thoughts?", "msg_date": "Sat, 18 May 2024 17:29:12 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Sat, May 18, 2024 at 05:29:12PM +1200, Thomas Munro wrote:\n> On Fri, May 17, 2024 at 4:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > The specific problem here is that LocalProcessControlFile() runs in\n> > every launched child for EXEC_BACKEND builds. Windows uses\n> > EXEC_BACKEND, and Windows' NTFS file system is one of the two file\n> > systems known to this list to have the concurrent read/write data\n> > mashing problem (the other being ext4).\n\n> First idea I've come up with to avoid all of that: pass a copy of\n> the \"proto-controlfile\", to coin a term for the one read early in\n> postmaster startup by LocalProcessControlFile().
Later they will attach to the real one in shared\n> memory for all future purposes, once normal interlocking is allowed.\n\nI like that strategy, particularly because it recreates what !EXEC_BACKEND\nbackends inherit from the postmaster. It might prevent future bugs that would\nhave been specific to EXEC_BACKEND.\n\n> I dunno. Draft patch attached. Better plans welcome. This passes CI\n> on Linux systems afflicted by EXEC_BACKEND, and Windows. Thoughts?\n\nLooks reasonable. I didn't check over every detail, given the draft status.\n\n\n", "msg_date": "Fri, 12 Jul 2024 04:43:22 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Fri, Jul 12, 2024 at 11:43 PM Noah Misch <noah@leadboat.com> wrote:\n> On Sat, May 18, 2024 at 05:29:12PM +1200, Thomas Munro wrote:\n> > On Fri, May 17, 2024 at 4:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > The specific problem here is that LocalProcessControlFile() runs in\n> > > every launched child for EXEC_BACKEND builds. Windows uses\n> > > EXEC_BACKEND, and Windows' NTFS file system is one of the two file\n> > > systems known to this list to have the concurrent read/write data\n> > > mashing problem (the other being ext4).\n>\n> > First idea I've come up with to avoid all of that: pass a copy of\n> > the \"proto-controlfile\", to coin a term for the one read early in\n> > postmaster startup by LocalProcessControlFile().
Later they will attach to the real one in shared\n> > memory for all future purposes, once normal interlocking is allowed.\n>\n> I like that strategy, particularly because it recreates what !EXEC_BACKEND\n> backends inherit from the postmaster. It might prevent future bugs that would\n> have been specific to EXEC_BACKEND.\n\nThanks for looking! Yeah, that is a good way to put it.\n\nThe only other idea I can think of is that the Postmaster could take\nall of the things that LocalProcessControlFile() wants to extract from\nthe file, and transfer them via that struct used for EXEC_BACKEND as\nindividual variables, instead of this new proto-controlfile copy. I\nthink it would be a bigger change with no obvious-to-me additional\nbenefit, so I didn't try it.\n\n> > I dunno. Draft patch attached. Better plans welcome. This passes CI\n> > on Linux systems afflicted by EXEC_BACKEND, and Windows. Thoughts?\n>\n> Looks reasonable. I didn't check over every detail, given the draft status.\n\nI'm going to upgrade this to a proposal:\n\nhttps://commitfest.postgresql.org/49/5124/\n\nI wonder how often this happens in the wild.\n\n\n", "msg_date": "Mon, 15 Jul 2024 15:44:48 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "Hello Thomas,\n\n15.07.2024 06:44, Thomas Munro wrote:\n> I'm going to upgrade this to a proposal:\n>\n> https://commitfest.postgresql.org/49/5124/\n>\n> I wonder how often this happens in the wild.\n\nPlease look at a recent failure [1], produced by buildfarm animal\nculicidae, which tests EXEC_BACKEND. 
I guess it's caused by the issue\ndiscussed.\n\nMaybe it would make sense to construct a reliable reproducer for the\nissue (I could not find a ready-to-use recipe in this thread)...\n\nWhat do you think?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2024-07-24%2004%3A08%3A23\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 24 Jul 2024 11:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" }, { "msg_contents": "On Mon, Jul 15, 2024 at 03:44:48PM +1200, Thomas Munro wrote:\n> On Fri, Jul 12, 2024 at 11:43 PM Noah Misch <noah@leadboat.com> wrote:\n> > On Sat, May 18, 2024 at 05:29:12PM +1200, Thomas Munro wrote:\n> > > On Fri, May 17, 2024 at 4:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > > The specific problem here is that LocalProcessControlFile() runs in\n> > > > every launched child for EXEC_BACKEND builds. Windows uses\n> > > > EXEC_BACKEND, and Windows' NTFS file system is one of the two file\n> > > > systems known to this list to have the concurrent read/write data\n> > > > mashing problem (the other being ext4).\n> >\n> > > First idea I've come up with to avoid all of that: pass a copy of\n> > > the \"proto-controlfile\", to coin a term for the one read early in\n> > > postmaster startup by LocalProcessControlFile().
Later they will attach to the real one in shared\n> > > memory for all future purposes, once normal interlocking is allowed.\n> >\n> > I like that strategy, particularly because it recreates what !EXEC_BACKEND\n> > backends inherit from the postmaster. It might prevent future bugs that would\n> > have been specific to EXEC_BACKEND.\n> \n> Thanks for looking! Yeah, that is a good way to put it.\n\nOops, the way I put it turned out to be false. Postmaster has ControlFile\npointing to shared memory before forking backends, so !EXEC_BACKEND children\nare born that way. In the postmaster, ControlFile->checkPointCopy->redo does\nchange after each checkpoint.\n\n> The only other idea I can think of is that the Postmaster could take\n> all of the things that LocalProcessControlFile() wants to extract from\n> the file, and transfer them via that struct used for EXEC_BACKEND as\n> individual variables, instead of this new proto-controlfile copy. I\n> think it would be a bigger change with no obvious-to-me additional\n> benefit, so I didn't try it.\n\nYeah, that would be more future-proof but a bigger change. One could argue\nfor a yet-larger refactor so even the !EXEC_BACKEND case doesn't read those\nfields from ControlFile memory. Then we could get rid of ControlFile ever\nbeing set to something other than NULL or a shmem pointer. ControlFileData's\nmix of initdb-time fields, postmaster-start-time fields, and changes-anytime\nfields is inconvenient here.\n\nThe unknown is the value of that future proofing. Much EXEC_BACKEND early\nstartup code is shared with postmaster startup, which can assume it's the only\nprocess. I can't rule out a future bug where that shared code does a read\nthat's harmless in postmaster startup but harmful when an EXEC_BACKEND child\nruns the same read. For a changes-anytime field, the code would already be\nsubtly buggy in EXEC_BACKEND today, since it would be reading without an\nLWLock. 
For a postmaster-start-time field, things should be okay so long as\nwe capture the proto ControlFileData after the last change to such fields.\nThat invariant is not trivial to achieve, but it's not gravely hard either.\n\nA possible middle option would be to use the proto control file, but\nexplicitly set its changes-anytime fields to bogus values.\n\nWhat's your preference? I don't think any of these would be bad decisions.\nIt could be clearer after enumerating how many ControlFile fields get used\nthis early.\n\n\n", "msg_date": "Tue, 30 Jul 2024 16:54:55 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: race condition when writing pg_control" } ]
[ { "msg_contents": "As of HEAD, building the PDF docs for A4 paper draws 538 \"contents\n... exceed the available area\" warnings. While this is a nice step\nforward from where we were (v12 has more than 1500 such warnings),\nwe're far from done fixing that issue.\n\nA large chunk of the remaining warnings are about tables that describe\nthe columns of system catalogs, system views, and information_schema\nviews. The typical contents of a row in such a table are a field name,\na field data type, possibly a \"references\" link, and then a description.\nUnsurprisingly, this does not work very well for descriptions of more\nthan a few words. And not infrequently, we *need* more than a few words.\n\nISTM this is more or less the same problem we have/had with function\ndescriptions, and so I'm tempted to solve it in more or less the same\nway. Let's redefine the table layout to look like, say, this for\npg_attrdef [1]:\n\noid oid\n\tRow identifier\n\nadrelid oid (references pg_class.oid)\n\tThe table this column belongs to\n\nadnum int2 (references pg_attribute.attnum)\n\tThe number of the column\n\nadbin pg_node_tree\n\tThe column default value, in nodeToString() representation. Use\n\tpg_get_expr(adbin, adrelid) to convert it to an SQL expression.\n\nThat is, let's go over to something that's more or less like a table-ized\n<variablelist>, with the fixed items for an entry all written on the first\nline, and then as much description text as we need. 
The actual markup\nwould be closely modeled on what we did for function-table entries.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/catalog-pg-attrdef.html\n\n\n", "msg_date": "Mon, 04 May 2020 21:52:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "On 5/4/20 9:52 PM, Tom Lane wrote:\n> As of HEAD, building the PDF docs for A4 paper draws 538 \"contents\n> ... exceed the available area\" warnings. While this is a nice step\n> forward from where we were (v12 has more than 1500 such warnings),\n> we're far from done fixing that issue.\n> \n> A large chunk of the remaining warnings are about tables that describe\n> the columns of system catalogs, system views, and information_schema\n> views. The typical contents of a row in such a table are a field name,\n> a field data type, possibly a \"references\" link, and then a description.\n> Unsurprisingly, this does not work very well for descriptions of more\n> than a few words. And not infrequently, we *need* more than a few words.\n> \n> ISTM this is more or less the same problem we have/had with function\n> descriptions, and so I'm tempted to solve it in more or less the same\n> way. Let's redefine the table layout to look like, say, this for\n> pg_attrdef [1]:\n> \n> oid oid\n> \tRow identifier\n> \n> adrelid oid (references pg_class.oid)\n> \tThe table this column belongs to\n> \n> adnum int2 (references pg_attribute.attnum)\n> \tThe number of the column\n> \n> adbin pg_node_tree\n> \tThe column default value, in nodeToString() representation. Use\n> \tpg_get_expr(adbin, adrelid) to convert it to an SQL expression.\n> \n> That is, let's go over to something that's more or less like a table-ized\n> <variablelist>, with the fixed items for an entry all written on the first\n> line, and then as much description text as we need. 
The actual markup\n> would be closely modeled on what we did for function-table entries.\n> \n> Thoughts?\n\n+1. Looks easy enough to read in a plaintext email, and if there are any\nminor style nuances on the HTML front, I'm confident we'll solve them.\n\nJonathan", "msg_date": "Mon, 4 May 2020 22:12:15 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "Hello Tom,\n\n> oid oid\n> \tRow identifier\n>\n> adrelid oid (references pg_class.oid)\n> \tThe table this column belongs to\n>\n> adnum int2 (references pg_attribute.attnum)\n> \tThe number of the column\n>\n> adbin pg_node_tree\n> \tThe column default value, in nodeToString() representation. Use\n> \tpg_get_expr(adbin, adrelid) to convert it to an SQL expression.\n>\n> Thoughts?\n\n+1\n\nMy 0.02€: I'm wondering whether the description could/should match SQL \nsyntax, eg:\n\n oid OID\n adrelid OID REFERENCES pg_class(oid)\n adnum INT2 REFERENCES pg_attribute(attnum)\n …\n\nOr maybe just uppercase type names, especially when repeated?\n\n oid OID\n adrelid OID (references pg_class.oid)\n adnum INT2 (references pg_attribute.attnum)\n …\n\nI guess that reference targets would still be navigable.\n\n-- \nFabien.", "msg_date": "Tue, 5 May 2020 07:52:08 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog\n descriptions" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> My 0.02€: I'm wondering whether the description could/should match SQL \n> syntax, eg:\n\n> oid OID\n> adrelid OID REFERENCES pg_class(oid)\n> adnum INT2 REFERENCES pg_attribute(attnum)\n> …\n\n> Or maybe just uppercase type names, especially when repeated?\n\nMeh. I'm not a fan of overuse of upper case --- it's well established\nthat that's harder to read than lower or mixed case. 
And it's definitely\nproject policy that type names are generally treated as identifiers not\nkeywords, even if some of them happen to be keywords under the hood.\n\nThe markup I had in mind was <structfield> for the field name\nand <type> for the type name, but no decoration beyond that.\n\nAs for the references, it seems to me that your notation would lead\npeople to think that there are actual FK constraints in place, which\nof course there are not (especially not on the views). I'm not\nhugely against it but I prefer what I suggested.\n\n> I guess that reference targets would still be navigable.\n\nYeah, they'd still have <link> wrappers --- if I recall what those\nlook like in the documentation sources, they don't need any change\nexcept for addition of the \"(references ...)\" text.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 May 2020 10:27:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "Here's a really quick-n-dirty prototype patch that just converts the\npg_aggregate table to the proposed style, plus a screenshot for those\nwho don't feel like actually building the docs with the patch.\n\nLooking at the results, it seems like we could really use a bit more\nhorizontal space between the column names and data types, and perhaps\nalso between the types and the (references) annotations. Other than\nthat it looks decent. I don't know what's the cleanest way to add\nsome space there --- I'd rather not have the SGML text do it explicitly.\n\nThere's room for complaint that this takes up more vertical space than\nthe old way, assuming you have a reasonably wide window. I'm not\nterribly bothered by that, but maybe someone else will be? 
I'm inclined\nto think that that's well worth the benefit that we won't feel compelled\nto keep column descriptions short.\n\nBTW, this being just a test hack, I repurposed the \"func_table_entry\" and\n\"func_signature\" style marker roles. If we do this for real then of\ncourse we'd want to use different roles, even if they happen to mark up\nthe same for now.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 05 May 2020 19:42:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "On 5/5/20 7:42 PM, Tom Lane wrote:\n> Here's a really quick-n-dirty prototype patch that just converts the\n> pg_aggregate table to the proposed style, plus a screenshot for those\n> who don't feel like actually building the docs with the patch.\n\nNot opposed to building the docs, but the screenshot expedites things ;)\n\n> Looking at the results, it seems like we could really use a bit more\n> horizontal space between the column names and data types, and perhaps\n> also between the types and the (references) annotations. Other than\n> that it looks decent. I don't know what's the cleanest way to add\n> some space there --- I'd rather not have the SGML text do it explicitly.\n\nIf each element (i.e. column name, data type) is wrapped in a HTML\nelement with its own class (e.g. a span) it's fairly easy to add that\nspace with CSS. I'm not sure the ramifications for the PDFs though.\n\n> There's room for complaint that this takes up more vertical space than\n> the old way, assuming you have a reasonably wide window. I'm not\n> terribly bothered by that, but maybe someone else will be? I'm inclined\n> to think that that's well worth the benefit that we won't feel compelled\n> to keep column descriptions short.\n\nI think for reference materials, vertical space is acceptable. 
It seems\nto be the \"mobile way\" of doing things, since people are scrolling down.\n\n(And unlike the mailing lists, we don't need to keep the font small ;)\n\nAnyway, +1\n\n> BTW, this being just a test hack, I repurposed the \"func_table_entry\" and\n> \"func_signature\" style marker roles. If we do this for real then of\n> course we'd want to use different roles, even if they happen to mark up\n> the same for now.\n\nAgreed.\n\nThanks,\n\nJonathan", "msg_date": "Tue, 5 May 2020 20:27:46 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "\nHello Tom,\n\n>> oid OID\n>\n> Meh. I'm not a fan of overuse of upper case --- it's well established\n> that that's harder to read than lower or mixed case. And it's definitely\n> project policy that type names are generally treated as identifiers not\n> keywords, even if some of them happen to be keywords under the hood.\n\nI found \"oid oid\" stuttering kind of strange, hence an attempt at \nsuggesting something that could distinguish them.\n\n> The markup I had in mind was <structfield> for the field name\n> and <type> for the type name, but no decoration beyond that.\n\nOk. 
If they are displayed a little differently afterwards that may help.\n\n> As for the references, it seems to me that your notation would lead\n> people to think that there are actual FK constraints in place, which\n> of course there are not (especially not on the views).\n\nIn practice the system ensures that the target exists, so it is as-if \nthere would be a foreign key enforced?\n\nMy point is that using differing syntaxes for more-or-less the same \nconcept does not help users understand the semantics, but maybe that is \njust me.\n\n> I'm not hugely against it but I prefer what I suggested.\n\nOk!\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 6 May 2020 07:52:08 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog\n descriptions" }, { "msg_contents": "Hello\n\nI think the recent changes to CSS might have broken things in the XSLT\nbuild; apparently the SGML tooling did things differently. Compare the\nscreenshot of tables 67.2 and 67.3 ... 9.6 on the left, master on the\nright. Is the latter formatting correct/desirable?\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 6 May 2020 17:18:36 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "On 5/6/20 5:18 PM, Alvaro Herrera wrote:\n> Hello\n> \n> I think the recent changes to CSS might have broken things in the XSLT\n> build; apparently the SGML tooling did things differently. Compare the\n> screenshot of tables 67.2 and 67.3 ... 9.6 on the left, master on the\n> right. Is the latter formatting correct/desirable?\n\nI know that 9.6 uses a different subset of the styles, and I recall the\ntext being blue during the original conversion.
For example, the \"table\"\nin the 9.6 docs has a class of \"CALSTABLE\" whereas in master, it is\n\"table\" (and we operate off of it as \"table.table\" when doing lookups,\nto ensure anything else with class \"table\" is unaffected).\n\nThere's also not as much control over some of the older documentation as\nthere are fewer classes we can bind the CSS to.\n\nThe latest changes should only affect master (devel) and beyond.\n\nJonathan", "msg_date": "Wed, 6 May 2020 17:24:13 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "On 2020-May-06, Jonathan S. Katz wrote:\n\n> I know that 9.6 uses a different subset of the styles, and I recall the\n> text being blue during the original conversion. For example, the \"table\"\n> in the 9.6 docs has a class of \"CALSTABLE\" whereas in master, it is\n> \"table\" (and we operate off of it as \"table.table\" when doing lookups,\n> to ensure anything else with class \"table\" is unaffected).\n> \n> There's also not as much control over some of the older documentation as\n> there are fewer classes we can bind the CSS to.\n> \n> The latest changes should only affect master (devel) and beyond.\n\n... oh, okay. I guess I was reporting that the font on the new version\nseems to have got smaller. Looking at other pages, it appears that the\nfont is indeed a lot smaller in all tables, including those Tom has been\nediting. So maybe this is desirable for some reason. I'll have to keep\nmy magnifying glass handy, I suppose.\n\nAnyway, it seems <computeroutput> or whatever tag has been used in some\nof these new tables makes the font be larger. Another screenshot is\nattached to show this. Is this likewise desired? It also shows that\nthe main text body is sized similar to the <computeroutput> tagged text,\nnot the table contents text.
(The browser is Brave, a Chromium\nderivative.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 6 May 2020 17:38:10 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> ... oh, okay. I guess I was reporting that the font on the new version\n> seems to have got smaller. Looking at other pages, it appears that the\n> font is indeed a lot smaller in all tables, including those Tom has been\n> editing. So maybe this is desirable for some reason. I'll have to keep\n> my magnifying glass handy, I suppose.\n\nHuh, browser specific maybe? The font doesn't seem any smaller to me,\nusing Safari.\n\n> Anyway, it seems <computeroutput> or whatever tag has been used in some\n> of these new tables makes the font be larger. Another screenshot is\n> attached to show this. Is this likewise desired? It also shows that\n> the main text body is sized similar to the <computeroutput> tagged text,\n> not the table contents text. (The browser is Brave, a Chromium\n> derivative.)\n\nI'm not getting that, either; to me it looks as attached.
I agree\nwhat you're seeing is not as-intended.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 06 May 2020 18:56:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "Here's a more fully fleshed out draft for this, with stylesheet\nmarkup to get extra space around the column type names.\n\nI realized that I can probably automate this conversion, unlike the\nfunction-table conversion: I'm not feeling any need to editorialize\non the column descriptions, so I can probably just extract the data\nfrom the SGML programmatically and rebuild it as I want. Seems like\nit should be a fairly quick process. So, if this seems good from\nyour standpoint, please push up the patch on main.css and then I'll\nsee about doing the edit.\n\n(BTW, said patch also removes some no-longer-used detritus from the\nearlier markup attempt.)\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 08 May 2020 18:53:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "\nHello Tom,\n\n> Here's a more fully fleshed out draft for this, with stylesheet\n> markup to get extra space around the column type names.\n\nI find this added spacing awkward, especially as attribute names are \nalways one word anyway.
I prefer the non spaced approach.\n\nIf spacing is discussed, should the layout rather try to align type \ninformation, eg:\n\n attribute type\n description\n ---\n foo bla\n this does this and that ...\n and here is an example about it\n ---\n foo-foo-foo bla-bla\n whatever bla bla blah bla foo foo foo ...\n and stuff\n\nI'm not sure how achievable it is from a xml & processing point of view,\nand how desirable it is, I'm just throwing it for consideration.\n\n-- \nFabien.", "msg_date": "Sat, 9 May 2020 08:54:10 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog\n descriptions" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Here's a more fully fleshed out draft for this, with stylesheet\n>> markup to get extra space around the column type names.\n\n> I find this added spacing awkward, especially as attribute names are \n> always one word anyway. I prefer the non spaced approach.\n\nIt's certainly arguable that that look is too heavy-handed. In the\nattached, I knocked down the extra space from 1em to 0.25em, which\nmakes it quite a bit subtler --- are you any happier with this?\n\nBTW, I don't think it's very accurate that \"attribute names are\nalways one word\" --- see the second attachment. Here if anything\nI'm wanting a little more space.\n\n> If spacing is discussed, should the layout rather try to align type \n> information, eg:\n\nI thought about that, but it seems extremely close to some of the\nearlier function-table layouts that were so widely panned. The SGML\nsource would have to be a lot uglier too, probably with explicit use\nof spanspec's on every row.
It could be done no doubt, but I think\npeople would not see it as an improvement.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 09 May 2020 11:34:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "Hello Tom,\n\n>>> Here's a more fully fleshed out draft for this, with stylesheet\n>>> markup to get extra space around the column type names.\n>\n>> I find this added spacing awkward, especially as attribute names are\n>> always one word anyway. I prefer the non spaced approach.\n>\n> It's certainly arguable that that look is too heavy-handed. In the\n> attached, I knocked down the extra space from 1em to 0.25em, which\n> makes it quite a bit subtler --- are you any happier with this?\n\nYes, definitely.\n\n> BTW, I don't think it's very accurate that \"attribute names are\n> always one word\" --- see the second attachment.\n\nIndeed.\n\n> Here if anything I'm wanting a little more space.\n\nI'm fine with 0.25em which allows some breathing without looking awkward. \nMaybe a little more would still be okay, but not much more.\n\n>> If spacing is discussed, should the layout rather try to align type\n>> information, eg:\n>\n> I thought about that, but it seems extremely close to some of the\n> earlier function-table layouts that were so widely panned. The SGML\n> source would have to be a lot uglier too, probably with explicit use\n> of spanspec's on every row.\n\nHmmm, that's the kind of thing I was afraid of.\n\n> It could be done no doubt, but I think people would not see it as an \n> improvement.\n\nPossibly. 
I'm a little at odds with Type not being above types, but far on \nthe left, so that you cannot really \"see\" that it is about the format, \nespecially with long attribute names:\n\n Column Type\n Description\n quite_a_long_attribute and_its_type\n ...\n\nThe horizontal distance between \"Type\" and \"and_its_type\" is so wide as to \nhide the clue that the former is describing the latter. But maybe aligning \nwould be too ugly.\n\nIf I can suggest more adjustments, maybe the description margin is a bit too \nmuch, I'd propose reducing it to about 3 chars wide. Obviously any aesthetic \nopinion is by definition subjective and prone to differ from one person to \nthe next…\n\n-- \nFabien.", "msg_date": "Sat, 9 May 2020 19:30:11 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog\n descriptions" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> Possibly. I'm a little at odds with Type not being above types, but far on \n> the left, so that you cannot really \"see\" that it is about the format, \n\nYeah, agreed. We can adjust the space in the header independently of\nwhat's in the table entries, so it'd be possible to put more space\nbetween \"Column\" and \"Type\" ... but I'm not sure if that would fix it.\n\n> If I can suggest more adjustments, maybe the description margin is a bit too \n> much, I'd propose reducing it to about 3 chars wide. Obviously any aesthetic \n> opinion is by definition subjective and prone to differ from one person to \n> the next…\n\nThis is more Jonathan's department than mine, but I personally prefer more\nindentation to less --- you want the column names to stick out so you can\nfind them. 
Anyway, the present indentation is (it looks like) the same\nas we are using in <variablelist>s, which this layout is based on.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 May 2020 14:09:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "Just FTR, here's a complete patch for this. I successfully regenerated\nthe column names, types, and ordering from the system catalogs, and\nplugged the descriptions back into that by dint of parsing them out of\nthe XML. The \"references\" data was based on findoidjoins' results plus\nhand additions to cover all the cases we are calling out now (there are\na lot of \"references\" links for attnums and a few other non-OID columns,\nplus references links for views which findoidjoins doesn't consider).\nSo I have pretty high confidence that this info is OK. I'd be too\nembarrassed to show anybody the code though ;-) ... it was just a brute\nforce hack.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 10 May 2020 12:27:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "On 5/10/20 12:27 PM, Tom Lane wrote:\n> Just FTR, here's a complete patch for this. \n\nCool. I'll play around with it tonight once I clear out release work.\nPer upthread reference, I believe you've become a CSS maven yourself.\n\n> I successfully regenerated\n> the column names, types, and ordering from the system catalogs, and\n> plugged the descriptions back into that by dint of parsing them out of\n> the XML. 
The \"references\" data was based on findoidjoins' results plus\n> hand additions to cover all the cases we are calling out now (there are\n> a lot of \"references\" links for attnums and a few other non-OID columns,\n> plus references links for views which findoidjoins doesn't consider).\n> So I have pretty high confidence that this info is OK. I'd be too\n> embarrassed to show anybody the code though ;-) ... it was just a brute\n> force hack.\nIf it works it works ;)\n\nJonathan", "msg_date": "Sun, 10 May 2020 14:03:32 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "On 5/10/20 2:03 PM, Jonathan S. Katz wrote:\n> On 5/10/20 12:27 PM, Tom Lane wrote:\n>> Just FTR, here's a complete patch for this. \n> \n> Cool. I'll play around with it tonight once I clear out release work.\n> Per upthread reference, I believe you've become a CSS maven yourself.\n\nTime slipped a bit (sorry!), but while prepping for the release I\nreviewed this. Visually, it looks WAY better. The code checks out too. I\nthink any tweaks would be primarily around personal preference on the\nUI, so it may be better just to commit and get it out in the wild.\n\n...and so I did. Committed[1].\n\nJonathan\n\n[1]\nhttps://git.postgresql.org/gitweb/?p=pgweb.git;a=commitdiff;h=93716f2a817dbdae8cccf86bc951b45b68ea52d9", "msg_date": "Wed, 13 May 2020 22:27:33 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Time slipped a bit (sorry!), but while prepping for the release I\n> reviewed this. Visually, it looks WAY better. The code checks out too. 
I\n> think any tweaks would be primarily around personal preference on the\n> UI, so it may be better just to commit and get it out in the wild.\n> ...and so I did. Committed[1].\n\nThanks, I'll push the docs change in a bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 May 2020 22:52:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "On 2020-May-06, Alvaro Herrera wrote:\n\n> ... oh, okay. I guess I was reporting that the font on the new version\n> seems to have got smaller. Looking at other pages, it appears that the\n> font is indeed a lot smaller in all tables, including those Tom has been\n> editing. So maybe this is desirable for some reason. I'll have to keep\n> my magnifying glass handy, I suppose.\n\nI happened to notice that the font used in the tables gets smaller if you\nmake the browser window narrower. So what was happening is that I was\nusing a window that didn't cover the entire screen.\n\nIf I let the window use my whole screen, the font in the table is the\nsame size as the text outside the table; if I reduce to ~1239 pixels,\nthe font becomes somewhat smaller; if I further reduce to ~953 pixels,\nit gets really small. Meanwhile, the non-table text keeps the same size\nthe whole time. (The pixel sizes at which changes occur seem to vary\nwith the zoom percentage I use, but the behavior is always there.)\n\nIs this something that CSS does somehow? Is this something you expected?\n\nHappens with both Brave (a Chromium derivative) and Firefox.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 May 2020 16:11:29 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "út 2. 
6. 2020 v 0:30 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> As of HEAD, building the PDF docs for A4 paper draws 538 \"contents\n> ... exceed the available area\" warnings. While this is a nice step\n> forward from where we were (v12 has more than 1500 such warnings),\n> we're far from done fixing that issue.\n>\n> A large chunk of the remaining warnings are about tables that describe\n> the columns of system catalogs, system views, and information_schema\n> views. The typical contents of a row in such a table are a field name,\n> a field data type, possibly a \"references\" link, and then a description.\n> Unsurprisingly, this does not work very well for descriptions of more\n> than a few words. And not infrequently, we *need* more than a few words.\n>\n> ISTM this is more or less the same problem we have/had with function\n> descriptions, and so I'm tempted to solve it in more or less the same\n> way. Let's redefine the table layout to look like, say, this for\n> pg_attrdef [1]:\n>\n> oid oid\n> Row identifier\n>\n> adrelid oid (references pg_class.oid)\n> The table this column belongs to\n>\n> adnum int2 (references pg_attribute.attnum)\n> The number of the column\n>\n> adbin pg_node_tree\n> The column default value, in nodeToString() representation. Use\n> pg_get_expr(adbin, adrelid) to convert it to an SQL expression.\n>\n> That is, let's go over to something that's more or less like a table-ized\n> <variablelist>, with the fixed items for an entry all written on the first\n> line, and then as much description text as we need. The actual markup\n> would be closely modeled on what we did for function-table entries.\n>\n> Thoughts?\n>\n\nI have spotted this change recently at progress monitoring devel docs (\nhttps://www.postgresql.org/docs/devel/progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING).\nCurrent version seems a little chaotic since there are multiple tables on\nthe same page with 2 mixed layouts. 
Older layout (for example v12 one -\nhttps://www.postgresql.org/docs/12/progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING)\nis much easier to read for me.\n\nIs this final change? I do not see any problem on this (progress\nmonitoring) page in old layout. Is there any example of problematic page?\nMaybe there's a different way to solve this. For example instead of\nin-lining long text as a column description, it should be possible to link\nto detailed description in custom paragraph or table. See description\ncolumn at table 27.22. at progress monitoring page for column \"phase\" for\nsimilar approach.\n\n\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/docs/devel/catalog-pg-attrdef.html\n>\n>\n>\n>\n", "msg_date": "Tue, 2 Jun 2020 00:40:09 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n> I have spotted this change recently at progress monitoring devel docs (\n> https://www.postgresql.org/docs/devel/progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING).\n> Current version seems a little chaotic since there are multiple tables on\n> the same page with 2 mixed layouts. Older layout (for example v12 one -\n> https://www.postgresql.org/docs/12/progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING)\n> is much easier to read for me.\n\n> Is this final change? I do not see any problem on this (progress\n> monitoring) page in old layout. Is there any example of problematic page?\n> Maybe there's a different way to solve this. For example instead of\n> in-lining long text as a column description, it should be possible to link\n> to detailed description in custom paragraph or table. See description\n> column at table 27.22. at progress monitoring page for column \"phase\" for\n> similar approach.\n\nI'm not planning on revisiting that work, no. And converting every\ntable/view description table into two (or more?) tables sure doesn't\nsound like an improvement.\n\nPerhaps there's a case for reformatting the phase-description tables\nin the progress monitoring section to look more like the view tables.\n(I hadn't paid much attention to them, since they weren't causing PDF\nrendering problems.) On the other hand, you could argue that it's\ngood that they don't look like the view tables, since the info they\nare presenting is fundamentally different. 
I don't honestly see much\nwrong with the way it is now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jun 2020 18:57:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" }, { "msg_contents": "On 6/1/20 6:57 PM, Tom Lane wrote:\n> =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n>> I have spotted this change recently at progress monitoring devel docs (\n>> https://www.postgresql.org/docs/devel/progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING).\n>> Current version seems a little chaotic since there are multiple tables on\n>> the same page with 2 mixed layouts. Older layout (for example v12 one -\n>> https://www.postgresql.org/docs/12/progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING)\n>> is much easier to read for me.\n> \n>> Is this final change? I do not see any problem on this (progress\n>> monitoring) page in old layout. Is there any example of problematic page?\n>> Maybe there's a different way to solve this. For example instead of\n>> in-lining long text as a column description, it should be possible to link\n>> to detailed description in custom paragraph or table. See description\n>> column at table 27.22. at progress monitoring page for column \"phase\" for\n>> similar approach.\n> \n> I'm not planning on revisiting that work, no. And converting every\n> table/view description table into two (or more?) tables sure doesn't\n> sound like an improvement.\n> \n> Perhaps there's a case for reformatting the phase-description tables\n> in the progress monitoring section to look more like the view tables.\n> (I hadn't paid much attention to them, since they weren't causing PDF\n> rendering problems.) On the other hand, you could argue that it's\n> good that they don't look like the view tables, since the info they\n> are presenting is fundamentally different. 
I don't honestly see much\n> wrong with the way it is now.\n\nI think it looks fine. +1 for leaving it.\n\nJonathan", "msg_date": "Mon, 1 Jun 2020 19:17:17 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for docs formatting: catalog descriptions" } ]
[ { "msg_contents": "I have committed the first draft of the PG 13 release notes. You can\nsee them here:\n\n\thttps://momjian.us/pgsql_docs/release-13.html\n\nIt still needs markup, word wrap, and indenting. The community doc\nbuild should happen in a few hours.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 4 May 2020 23:16:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "PG 13 release notes, first draft" }, { "msg_contents": "On Tue, 5 May 2020 at 15:16, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n>\n> https://momjian.us/pgsql_docs/release-13.html\n>\n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n\nThanks a lot for putting that together.\n\nIn previous years, during the development of this you've had HTML\ncomments to include the commit details. Are you going to do that this\nyear? or did they just disappear in some compilation phase you've\ndone?\n\nDavid\n\n\n", "msg_date": "Tue, 5 May 2020 16:10:32 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, 5 May 2020 at 16:10, David Rowley <dgrowleyml@gmail.com> wrote:\n> In previous years, during the development of this you've had HTML\n> comments to include the commit details. Are you going to do that this\n> year? or did they just disappear in some compilation phase you've\n> done?\n\nNever mind. 
I just saw them all in the commit you've pushed.\n\nDavid\n\n\n", "msg_date": "Tue, 5 May 2020 16:12:06 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 3:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n>\n> https://momjian.us/pgsql_docs/release-13.html\n>\n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n\nHi Bruce,\n\nThanks! Some feedback:\n\n+2020-04-08 [3985b600f] Support PrefetchBuffer() in recovery.\n+-->\n+\n+<para>\n+Speedup recovery by prefetching pages (Thomas Munro)\n\nUnfortunately that commit was just an infrastructural change to allow\nthe PrefetchBuffer() function to work in recovery, but the main\n\"prefetching during recovery\" patch to actually make use of it to go\nfaster didn't make it. So this item shouldn't be in the release\nnotes.\n\n+2020-04-07 [4c04be9b0] Introduce xid8-based functions to replace txid_XXX.\n+-->\n+\n+<para>\n+Update all transaction id functions to support xid8 (Thomas Munro)\n+</para>\n+\n+<para>\n+They use the same names as the xid data type versions.\n+</para>\n\nThe names are actually different. How about: \"New xid8-based\nfunctions replace the txid family of functions, but the older names\nare still supported for backward compatibility.\"\n\n+2019-10-16 [d5ac14f9c] Use libc version as a collation version on glibc systems\n+-->\n+\n+<para>\n+Use the glibc version as the collation version (Thomas Munro)\n+</para>\n+\n+<para>\n+If the glibc version changes, a warning will be issued when a\nmismatching collation is used.\n+</para>\n\nI would add a qualifier \"in some cases\", since it doesn't work for\ndefault collations yet. 
(That'll now have to wait for 14).\n\n\n", "msg_date": "Tue, 5 May 2020 16:14:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\nHello Bruce,\n\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n\nThanks for working on this.\n\n* Add CREATE DATABASE LOCALE option (Fabien COELHO)\n* Add function gen_random_uuid to generate version 4 UUIDs (Fabien COELHO)\n\nI'm not responsible for these, I just reviewed them. ISTM that the author \nfor both is the committer, Peter Eisentraut.\n\nMaybe there is something amiss in the commit-log-to-release-notes script?\nMy name clearly appears after \"reviewed by:?\"\n\n* \"DOCUMENT THE DEFAULT GENERATION METHOD\"\n => The default is still to generate data client-side.\n\nI do not see a \"documentation\" section, whereas there has been significant \ndoc changes, such as function table layouts (Tom), glossary (Corey, \nJürgen, Roger, Alvaro), binary/text string functions (Karl) and possibly \nothers. Having a good documentation contributes to making postgres a very \ngood tool, improving it is not very glamorous, ISTM that such \ncontributions should not be overlooked.\n\n-- \nFabien.", "msg_date": "Tue, 5 May 2020 07:43:09 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hi\n\nút 5. 5. 2020 v 5:16 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n>\n> https://momjian.us/pgsql_docs/release-13.html\n>\n> It still needs markup, word wrap, and indenting. 
The community doc\n> build should happen in a few hours.\n>\n\nThere is not note about new polymorphic type \"anycompatible\"\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=\n24e2885ee304cb6a94fdfc25a1a108344ed9f4f7\n\n\n>\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EnterpriseDB https://enterprisedb.com\n>\n> + As you are, so once was I. As I am, so you will be. +\n> + Ancient Roman grave inscription +\n>\n>\n>\n", "msg_date": "Tue, 5 May 2020 07:46:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hi,\n\nOn Tue, May 5, 2020 at 7:47 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n> út 5. 5. 
The community doc\n>> build should happen in a few hours.\n>\n>\n> There is not note about new polymorphic type \"anycompatible\"\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=24e2885ee304cb6a94fdfc25a1a108344ed9f4f7\n\nThere's also no note about avoiding full GIN index scan\n(https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=4b754d6c16e16cc1a1adf12ab0f48603069a0efd).\nThat's a corner case optimization but it can be a huge improvement\nwhen you hit the problem.\n\n\n", "msg_date": "Tue, 5 May 2020 08:10:54 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 5:16 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n>\n> https://momjian.us/pgsql_docs/release-13.html\n>\n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n>\n\nThere is one entry \"Add support for collation versions on Windows\" where I\nam quoted as author. Actually, I was a reviewer, the author is Thomas Munro.\n\nAlso, I am credited as sole author of \"Allow to_date/to_timestamp to\nrecognize non-English month/day names\", when the case is that Tom Lane did\nmore than a few cosmetics changes when committing and I think he should be\nquoted as co-author (if he agrees).\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Tue, May 5, 2020 at 5:16 AM Bruce Momjian <bruce@momjian.us> wrote:I have committed the first draft of the PG 13 release notes.  You can\nsee them here:\n\n        https://momjian.us/pgsql_docs/release-13.html\n\nIt still needs markup, word wrap, and indenting.  The community doc\nbuild should happen in a few hours. There is one entry \"Add support for collation versions on Windows\" where I am quoted as author. 
Actually, I was a reviewer, the author is Thomas Munro.Also, I am credited as sole author of \"Allow to_date/to_timestamp to recognize non-English month/day names\", when the case is that Tom Lane did more than a few cosmetics changes when committing and I think he should be quoted as co-author (if he agrees).Regards,Juan José Santamaría Flecha", "msg_date": "Tue, 5 May 2020 10:07:35 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 4, 2020 at 11:16:00PM -0400, Bruce Momjian wrote:\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n\nI wanted to point out there are 180 changes listed in the release notes,\nwhich very closely matches the count of previous major releases. I\ndon't think there are as many major _features_ in this release as\nprevious ones.\n\nAlso, I see little to no progress on these features in PG 13:\n\n* online checksum changes\n* zheap\n* sharding\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 08:11:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hi Bruce, thanks for working on this again!\n\n+<para>\n+Allow UTF-8 escapes, e.g., E'\\u####', in clients that don't use UTF-8\nencoding (Tom Lane)\n+</para>\n\nI believe the term we want here is \"Unicode escapes\". This patch is\nabout the server encoding, which formerly needed to be utf-8 for\nnon-ascii characters. 
(I think the client encoding doesn't matter as\nlong as ascii bytes are represented.)\n\n+<para>\n+The UTF-8 characters must be available in the server encoding.\n+</para>\n\nSame here, s/UTF-8/Unicode/.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 May 2020 21:20:39 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 07:43:09AM +0200, Fabien COELHO wrote:\n> \n> Hello Bruce,\n> \n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-13.html\n> > \n> > It still needs markup, word wrap, and indenting. The community doc\n> > build should happen in a few hours.\n> \n> Thanks for working on this.\n> \n> * Add CREATE DATABASE LOCALE option (Fabien COELHO)\n> * Add function gen_random_uuid to generate version 4 UUIDs (Fabien COELHO)\n> \n> I'm not responsible for these, I just reviewed them. ISTM that the author\n> for both is the committer, Peter Eisentraut.\n> \n> Maybe there is something amiss in the commit-log-to-release-notes script?\n> My name clearly appears after \"reviewed by:?\"\n\nSorry, those were all my mistaken reading of the commit message.\n\n> \n> * \"DOCUMENT THE DEFAULT GENERATION METHOD\"\n> => The default is still to generate data client-side.\n\nMy point is that the docs are not clear about this. 
Can you fix it?\n\n> I do not see a \"documentation\" section, whereas there has been significant\n> doc changes, such as function table layouts (Tom), glossary (Corey, Jürgen,\n\nI did list the glossary.\n\n> Roger, Alvaro), binary/text string functions (Karl) and possibly others.\n\nI wasn't sure documentation _layout_ changes should be listed.\n\n> Having a good documentation contributes to making postgres a very good tool,\n> improving it is not very glamorous, ISTM that such contributions should\n> not be overlooked.\n\nYes, it would be good to know from others if that kind of stuff should\nbe included.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 10:00:55 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 04:14:42PM +1200, Thomas Munro wrote:\n> On Tue, May 5, 2020 at 3:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> >\n> > https://momjian.us/pgsql_docs/release-13.html\n> >\n> > It still needs markup, word wrap, and indenting. The community doc\n> > build should happen in a few hours.\n> \n> Hi Bruce,\n> \n> Thanks! Some feedback:\n> \n> +2020-04-08 [3985b600f] Support PrefetchBuffer() in recovery.\n> +-->\n> +\n> +<para>\n> +Speedup recovery by prefetching pages (Thomas Munro)\n> \n> Unfortunately that commit was just an infrastructural change to allow\n> the PrefetchBuffer() function to work in recovery, but the main\n> \"prefetching during recovery\" patch to actually make use of it to go\n> faster didn't make it. 
So this item shouldn't be in the release\n> notes.\n\nAgreed, removed.\n\n> +2020-04-07 [4c04be9b0] Introduce xid8-based functions to replace txid_XXX.\n> +-->\n> +\n> +<para>\n> +Update all transaction id functions to support xid8 (Thomas Munro)\n> +</para>\n> +\n> +<para>\n> +They use the same names as the xid data type versions.\n> +</para>\n> \n> The names are actually different. How about: \"New xid8-based\n> functions replace the txid family of functions, but the older names\n> are still supported for backward compatibility.\"\n\nAgreed.\n\n> +2019-10-16 [d5ac14f9c] Use libc version as a collation version on glibc systems\n> +-->\n> +\n> +<para>\n> +Use the glibc version as the collation version (Thomas Munro)\n> +</para>\n> +\n> +<para>\n> +If the glibc version changes, a warning will be issued when a\n> mismatching collation is used.\n> +</para>\n> \n> I would add a qualifier \"in some cases\", since it doesn't work for\n> default collations yet. (That'll now have to wait for 14).\n\nDone.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 10:01:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 07:46:34AM +0200, Pavel Stehule wrote:\n> Hi\n> \n> út 5. 5. 
2020 v 5:16 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n> \n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n> \n>         https://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n> \n> \n> There is not note about new polymorphic type \"anycompatible\"\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=\n> 24e2885ee304cb6a94fdfc25a1a108344ed9f4f7\n\nSorry I missed that. 
Must have thought it was non-visible work that was\n> part of another features. Here is the new text:\n>\n> Add polymorphic data types for use by functions requiring\n> compatible arguments (Pavel Stehule)\n>\n> The new data types are anycompatible, anycompatiblearray,\n> anycompatiblenonarray, and anycompatiblerange.\n>\n\nno problem, thank you\n\nPavel\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EnterpriseDB https://enterprisedb.com\n>\n> + As you are, so once was I. As I am, so you will be. +\n> + Ancient Roman grave inscription +\n>\n", "msg_date": "Tue, 5 May 2020 16:19:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 08:10:54AM +0200, Julien Rouhaud wrote:\n> Hi,\n> \n> On Tue, May 5, 2020 at 7:47 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >\n> > Hi\n> >\n> > út 5. 5. 2020 v 5:16 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n> >>\n> >> I have committed the first draft of the PG 13 release notes. You can\n> >> see them here:\n> >>\n> >> https://momjian.us/pgsql_docs/release-13.html\n> >>\n> >> It still needs markup, word wrap, and indenting. The community doc\n> >> build should happen in a few hours.\n> >\n> >\n> > There is not note about new polymorphic type \"anycompatible\"\n> >\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=24e2885ee304cb6a94fdfc25a1a108344ed9f4f7\n> \n> There's also no note about avoiding full GIN index scan\n> (https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=4b754d6c16e16cc1a1adf12ab0f48603069a0efd).\n> That's a corner case optimization but it can be a huge improvement\n> when you hit the problem.\n\nOK, I have added this item:\n\n\tAllow GIN indexes to more efficiently handle NOT restrictions (Nikita\n\tGlukhov, Alexander Korotkov, Tom Lane, Julien Rouhaud)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 10:26:06 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 10:07:35AM +0200, Juan José Santamaría Flecha wrote:\n> \n> On Tue, May 5, 2020 at 5:16 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n> \n>         https://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n> \n> \n> There is one entry \"Add support for collation versions on Windows\" where I am\n> quoted as author. Actually, I was a reviewer, the author is Thomas Munro.\n\nOops, my mistake, fixed.\n\n> \n> Also, I am credited as sole author of \"Allow to_date/to_timestamp to recognize\n> non-English month/day names\", when the case is that Tom Lane did more than a\n> few cosmetics changes when committing and I think he should be quoted as\n> co-author (if he agrees).\n\nOK, updated. The text was:\n\n\t Juan José Santamaría Flecha, reviewed and modified by me,\n\nand with reviewed first, and the generic term modified, I had assumed\nyou would be the only one listed. Fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 10:29:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 09:20:39PM +0800, John Naylor wrote:\n> Hi Bruce, thanks for working on this again!\n> \n> +<para>\n> +Allow UTF-8 escapes, e.g., E'\\u####', in clients that don't use UTF-8\n> encoding (Tom Lane)\n> +</para>\n> \n> I believe the term we want here is \"Unicode escapes\". This patch is\n> about the server encoding, which formerly needed to be utf-8 for\n> non-ascii characters. (I think the client encoding doesn't matter as\n> long as ascii bytes are represented.)\n> \n> +<para>\n> +The UTF-8 characters must be available in the server encoding.\n> +</para>\n> \n> Same here, s/UTF-8/Unicode/.\n\nOK, new text is:\n\n\tAllow Unicode escapes, e.g., E'\\u####', in clients that don't use UTF-8\n\tencoding (Tom Lane)\n\t\n\tThe Unicode characters must be available in the server encoding.\n\nI kept the \"UTF-8 encoding\" since that is the only Unicode encoding we\nsupport.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 10:31:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-May-05, Bruce Momjian wrote:\n\n> On Tue, May 5, 2020 at 07:43:09AM +0200, Fabien COELHO wrote:\n\n> > I do not see a \"documentation\" section, whereas there has been significant\n> > doc changes, such as function table layouts (Tom), glossary (Corey, Jürgen,\n> \n> I did list the glossary.\n\nPlease do list Jürgen, Corey and Roger as authors of the glossary.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 May 2020 11:14:11 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "In this item\n\n+<listitem>\n+<!--\n+Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n+2020-04-20 [5fc703946] Add ALTER .. NO DEPENDS ON\n+-->\n+\n+<para>\n+Add the ability to remove a function's dependency on an extension (Alvaro Herrera)\n+</para>\n+\n+<para>\n+The syntax is ALTER FUNCTION .. NO DEPENDS ON.\n+</para>\n+\n+</listitem>\n\nThis works for several object types, not just functions. I propose\n\n Add the ability to remove an object's dependency on an extension (Álvaro Herrera)\n\n The object can be a function, materialized view, index, or trigger.\n The syntax is ALTER .. 
NO DEPENDS ON.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 May 2020 11:22:25 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-May-05, Alvaro Herrera wrote:\n\n> On 2020-May-05, Bruce Momjian wrote:\n> \n> > On Tue, May 5, 2020 at 07:43:09AM +0200, Fabien COELHO wrote:\n> \n> > > I do not see a \"documentation\" section, whereas there has been significant\n> > > doc changes, such as function table layouts (Tom), glossary (Corey, Jürgen,\n> > \n> > I did list the glossary.\n> \n> Please do list Jürgen, Corey and Roger as authors of the glossary.\n\n(Actually I should be listed as well, as the time I spent on it was\nconsiderable.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 May 2020 11:23:50 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 11:14:11AM -0400, Alvaro Herrera wrote:\n> On 2020-May-05, Bruce Momjian wrote:\n> \n> > On Tue, May 5, 2020 at 07:43:09AM +0200, Fabien COELHO wrote:\n> \n> > > I do not see a \"documentation\" section, whereas there has been significant\n> > > doc changes, such as function table layouts (Tom), glossary (Corey, Jürgen,\n> > \n> > I did list the glossary.\n> \n> Please do list Jürgen, Corey and Roger as authors of the glossary.\n\nDone, from the commit message:\n\n\tAdd a glossary to the documentation (Corey Huinker, Jürgen Purtz, Roger\n\tHarkavy, Álvaro Herrera)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 11:24:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 11:23:50AM -0400, Alvaro Herrera wrote:\n> On 2020-May-05, Alvaro Herrera wrote:\n> \n> > On 2020-May-05, Bruce Momjian wrote:\n> > \n> > > On Tue, May 5, 2020 at 07:43:09AM +0200, Fabien COELHO wrote:\n> > \n> > > > I do not see a \"documentation\" section, whereas there has been significant\n> > > > doc changes, such as function table layouts (Tom), glossary (Corey, Jürgen,\n> > > \n> > > I did list the glossary.\n> > \n> > Please do list Jürgen, Corey and Roger as authors of the glossary.\n> \n> (Actually I should be listed as well, as the time I spent on it was\n> considerable.)\n\nYep, already done, based on the commit text. :-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 11:40:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\nOn 5/4/20 11:16 PM, Bruce Momjian wrote:\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n>\n> \thttps://momjian.us/pgsql_docs/release-13.html\n>\n> It still needs markup, word wrap, and indenting. 
The community doc\n> build should happen in a few hours.\n>\n\n\nPeter Eisentraut gets the credit for Unix domain sockets on Windows, not\nme, I just reviewed it.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 5 May 2020 11:48:47 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 11:48:47AM -0400, Andrew Dunstan wrote:\n> \n> On 5/4/20 11:16 PM, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> >\n> > \thttps://momjian.us/pgsql_docs/release-13.html\n> >\n> > It still needs markup, word wrap, and indenting. The community doc\n> > build should happen in a few hours.\n> >\n> \n> \n> Peter Eisentraut gets the credit for Unix domain sockets on Windows, not\n> me, I just reviewed it.\n\nSorry about that, fixed.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 11:54:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": ">\n>\n> >\n> > Please do list Jürgen, Corey and Roger as authors of the glossary.\n>\n> (Actually I should be listed as well, as the time I spent on it was\n> considerable.)\n>\n\n+1, the time spent was quite considerable\n\n> \n> Please do list Jürgen, Corey and Roger as authors of the glossary.\n\n(Actually I should be listed as well, as the time I spent on it was\nconsiderable.)+1, the time spent was quite considerable", "msg_date": "Tue, 5 May 2020 12:14:01 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "|Release date: 2020-05-03\n=> Should say 2020-XX-XX, before someone like me goes and installs it everywhere in sight.\n\n|These triggers cannot change the destination partition. \n=> Maybe say \"cannot change which partition is the destination\"\n\n|Allow incremental sorting (James Coleman, Alexander Korotkov) \ns/Allow/Implement/ ?\n\n|If a result is already sorted by several keys,\ns/keys/leading keys/\n\n| Allow hash aggregation to use disk storage for large aggregation result sets (Jeff Davis) \n| Previously, hash aggregation was not used if it was expected to use more than work_mem memory. This is controlled by enable_hashagg_disk. \n=> enable_hashagg_disk doesn't behave like other enable_* parameters.\nAs I understand, disabling it only \"opportunisitically\" avoids plans which are\n*expected* to overflow work_mem. 
I think we should specifically say that, and\nmaybe suggest recalibrating work_mem.\n\n|This new behavior sets pages as all-visible\nI think it should say \"can allow setting pages all-visible\"\nIt doesn't do anything special to force anything to be allvisible.\n\n| This is controlled by GUC wal_skip_threshold. \nI think you should say that's a size threshold which determines which strategy\nto use (WAL or fsync).\n\n| Improve the performance of replay of DROP DATABASE commands that use many tablespaces (Fujii Masao) \n\"when replaying DROP DATABASE commands if many tablespaces are in use\"\n\n|Improve performance for truncation of very larger relations (Kirk Jamison) \n*large\n\n|Server variable backtrace_functions specifies which C functions should generate backtraces on error. \nCould you say \"GUC\" so it's easy to search for ?\n\n| This is controlled by ssl_min_protocol_version. \n| This behavior can be enabled using wal_receiver_create_temp_slot. \n| This is controlled by logical_decoding_work_mem. \n| This is enabled using ignore_invalid_pages. \nSay GUC in these places, too ?\n\n|Previously, server restart was required to change primary_conninfo and primary_slot_name. \n*a* server restart\n\n| Speedup recovery by prefetching pages (Thomas Munro) \n\"Speed up\" or accelerate or \"Improve speed of\".\nSpeedup (one word) sounds like a noun.\n\n| Fix bugs in ALTER TABLE where later clauses overlap changes made by earlier clauses in the same command (Tom Lane) \ns/where/when/ ?\n\n| The new function, jsonb_set_lax(), allows null new values to either set the specified key to JSON null, delete the key, raise exception, or ignore operation. IS 'return_target' CLEAR? \nignore *the* operation\n\n| time zone-aware output. \ntimezone-aware ?\n\n| This makes \\gx equivalent to \\g (expanded=on). 
\nI would say: \"this allows syntax like \\g (expand=on) which is equivalent to \\gx\"\n\n|Allow pgbench to partition its 'accounts' table (Fabien COELHO) \nSometimes Fabien's name is/not capitalized.\n\n| This is enable using the -c/--restore-target-wal option. \n*enabled*\n\n| These long-supported options for this are called --superuser and --no-superuser. \n\"The supported\" not \"These long-supported\" ?\n\nI'm not sure, but maybe these patches of mine should be documented?\n\ncommit 24f62e93f314c107b4fa679869e5ba9adb2d545f\n Improve psql's \\d output for partitioned indexes.\n\ncommit c33869cc3bfc42bce822251f2fa1a2a346f86cc5\n psql \\d: Display table where trigger is defined, if inherited\n\n=> Alvaro said the functionality could conceivably be backpatched\n(nontrivially), which suggests this doesn't need to be documented, but I think\nbackpatch would be a bad idea, and I think it should be documented.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 May 2020 11:45:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 11:45:03AM -0500, Justin Pryzby wrote:\n> |Release date: 2020-05-03\n> => Should say 2020-XX-XX, before someone like me goes and installs it everywhere in sight.\n\nAgreed!\n\n> |These triggers cannot change the destination partition. \n> => Maybe say \"cannot change which partition is the destination\"\n> \n> |Allow incremental sorting (James Coleman, Alexander Korotkov) \n> s/Allow/Implement/ ?\n\nAgreed.\n\n> |If a result is already sorted by several keys,\n> s/keys/leading keys/\n\nAgreed.\n\n> | Allow hash aggregation to use disk storage for large aggregation result sets (Jeff Davis) \n> | Previously, hash aggregation was not used if it was expected to use more than work_mem memory. This is controlled by enable_hashagg_disk. 
\n> => enable_hashagg_disk doesn't behave like other enable_* parameters.\n> As I understand, disabling it only \"opportunisitically\" avoids plans which are\n> *expected* to overflow work_mem. I think we should specifically say that, and\n> maybe suggest recalibrating work_mem.\n\nI went with \"avoided\":\n\n\tPreviously, hash aggregation was avoided if it was expected to use more\n\tthan work_mem memory. This is controlled by enable_hashagg_disk.\n\n> |This new behavior sets pages as all-visible\n> I think it should say \"can allow setting pages all-visible\"\n> It doesn't do anything special to force anything to be allvisible.\n\nOK, updated to:\n\n\tThis new behavior allows pages to be set as all-visible, which then\n\tallows index-only scans, and reduces the work necessary when the table\n\tneeds to be frozen.\n\n> | This is controlled by GUC wal_skip_threshold. \n> I think you should say that's a size threshold which determines which strategy\n> to use (WAL or fsync).\n\nI went with:\n\n\tThe WAL write amount where this happens is controlled by wal_skip_threshold.\n\nThey can use the doc link if they want more detail.\n\n> | Improve the performance of replay of DROP DATABASE commands that use many tablespaces (Fujii Masao) \n> \"when replaying DROP DATABASE commands if many tablespaces are in use\"\n\nOK, updated to:\n\n\tImprove the performance when replaying DROP DATABASE commands when many\n\ttablespaces are in use (Fujii Masao)\n\n> \n> |Improve performance for truncation of very larger relations (Kirk Jamison) \n> *large\n\nFixed.\n\n> |Server variable backtrace_functions specifies which C functions should generate backtraces on error. \n> Could you say \"GUC\" so it's easy to search for ?\n\nUh, I kind of back and forth on whether GUC or server variable is\nbetter. I kind of mix them up so people will know what GUC is.\n\n> | This is controlled by ssl_min_protocol_version. \n> | This behavior can be enabled using wal_receiver_create_temp_slot. 
\n> | This is controlled by logical_decoding_work_mem. \n> | This is enabled using ignore_invalid_pages. \n> Say GUC in these places, too ?\n\nOh, uh, I am not sure. These will all have links too.\n\n> |Previously, server restart was required to change primary_conninfo and primary_slot_name. \n> *a* server restart\n\nAgreed.\n\n> | Speedup recovery by prefetching pages (Thomas Munro) \n> \"Speed up\" or accelerate or \"Improve speed of\".\n> Speedup (one word) sounds like a noun.\n\nUh, someone requested I remove that, so I have.\n> \n> | Fix bugs in ALTER TABLE where later clauses overlap changes made by earlier clauses in the same command (Tom Lane) \n> s/where/when/ ?\n\nAgreed.\n\n> | The new function, jsonb_set_lax(), allows null new values to either set the specified key to JSON null, delete the key, raise exception, or ignore operation. IS 'return_target' CLEAR? \n> ignore *the* operation\n\nAgreed.\n\n> | time zone-aware output. \n> timezone-aware ?\n\nUh, time zone is two words. We can do time-zone-aware but that looks\nodd.\n\n> | This makes \\gx equivalent to \\g (expanded=on). \n> I would say: \"this allows syntax like \\g (expand=on) which is equivalent to \\gx\"\n\nI like that.\n\n> |Allow pgbench to partition its 'accounts' table (Fabien COELHO) \n> Sometimes Fabien's name is/not capitalized.\n\nI have already committed a fix for that.\n\n> | This is enable using the -c/--restore-target-wal option. \n> *enabled*\n\nAgreed.\n\n> | These long-supported options for this are called --superuser and --no-superuser. 
\n> \"The supported\" not \"These long-supported\" ?\n\nYep.\n\n> I'm not sure, but maybe these patches of mine should be documented?\n> \n> commit 24f62e93f314c107b4fa679869e5ba9adb2d545f\n> Improve psql's \\d output for partitioned indexes.\n> \n> commit c33869cc3bfc42bce822251f2fa1a2a346f86cc5\n> psql \\d: Display table where trigger is defined, if inherited\n> \n> => Alvaro said the functionality could conceivably be backpatched\n> (nontrivially), which suggests this doesn't need to be documented, but I think\n> backpatch would be a bad idea, and I think it should be documented.\n\nI looked at these and they looked like incremental changes to psql\ndisplay in a way that people would say \"nice\", but not really want to\nknow about it before-hand. Maybe I am wrong.\n\nThanks for all your fixes, all committed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 13:18:09 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 05, 2020 at 01:18:09PM -0400, Bruce Momjian wrote:\n> > |Release date: 2020-05-03\n> > => Should say 2020-XX-XX, before someone like me goes and installs it everywhere in sight.\n> \n> Agreed!\n> \n> > |These triggers cannot change the destination partition. \n> > => Maybe say \"cannot change which partition is the destination\"\n\nLooks like you copied my quote mark :(\n\n> > | Allow hash aggregation to use disk storage for large aggregation result sets (Jeff Davis) \n> > | Previously, hash aggregation was not used if it was expected to use more than work_mem memory. This is controlled by enable_hashagg_disk. 
\n> > => enable_hashagg_disk doesn't behave like other enable_* parameters.\n> > As I understand, disabling it only \"opportunisitically\" avoids plans which are\n> > *expected* to overflow work_mem. I think we should specifically say that, and\n> > maybe suggest recalibrating work_mem.\n> \n> I went with \"avoided\":\n> \n> \tPreviously, hash aggregation was avoided if it was expected to use more\n> \tthan work_mem memory. This is controlled by enable_hashagg_disk.\n\nI think we should expand on this:\n\n|Previously, hash aggregation was avoided if it was expected to use more than\n|work_mem memory, but it wasn't enforced, and hashing could still exceed\n|work_mem. To get back the old behavior, increase work_mem.\n|\n|The parameter enable_hashagg_disk controls whether a plan which is *expected*\n|to spill to disk will be considered. During execution, an aggregate node which\n|exceeds work_mem will spill to disk regardless of this parameter.\n\nI wrote something similar here:\nhttps://www.postgresql.org/message-id/20200407223900.GT2228%40telsasoft.com\n\n> > | This is controlled by GUC wal_skip_threshold. 
\n> > I think you should say that's a size threshold which determines which strategy\n> > to use (WAL or fsync).\n> \n> I went with:\n> \tThe WAL write amount where this happens is controlled by wal_skip_threshold.\n>\n> They can use the doc link if they want more detail.\n\nI guess I would say \"relations larger than wal_skip_threshold will be fsynced\nrather than copied to WAL\"\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 May 2020 12:50:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Do you want to include any of these?\n\n5823677acc Provide pgbench --show-script to dump built-in scripts.\nce8f946764 Report the time taken by pgbench initialization steps.\n751c63cea0 Remove pg_regress' --load-language option.\n33753ac9d7 Add object names to partition integrity violations.\n246f136e76 Improve handling of parameter differences in physical replication\na01e1b8b9d Add new part SQL/MDA to information_schema.sql_parts 33e27c3785c5ce8a3264d6af2550ec5adcebc517\n2fc2a88e67 Remove obsolete information schema tables\nb9b408c487 Record parents of triggers (tgparentid)\n2b2bacdca0 Reset statement_timeout between queries of a multi-query string.\n09f08930f0 initdb: Change authentication defaults Change the defaults for the pg_hba.conf generated by initdb to \"peer\" or \"md5\"\n\n>Improve control of prepared statement parameter logging \n>The GUC setting log_parameter_max_length controls the maximum length of parameter values output during statement logging, and log_parameter_max_length_on_error allows parameters to be output on\nI think the original commit (ba79cb5d) gets lost in the description of the\ndetails and the following commit. 
I would say:\n|Emit parameter values during query bind/execute errors and allow control of the max length logged by statement logging (Alexey Bashtanov, Álvaro Herrera)\n|The GUC setting log_parameter_max_length controls the maximum length of parameter values output during statement logging, and log_parameter_max_length_on_error allows parameters to be output on\n|error.\nOr maybe split that into two items.\n\n> Allow track_activity_query_size to be set to 1MB (Vyacheslav Makarov)\nSay \"*up* to 1MB\".\n\n> super users\nsay \"superuser\" ?\n\n>Allow allow_system_table_mods to be changed after server start (Peter Eisentraut)\n>Disallow non-super users from modifying system tables when allow_system_table_mods is set (Peter Eisentraut) \nI think these two entries can be merged.\n\n>Add parallel control of the vacuumdb command using --parallel (Masahiko Sawada)\n>Allow reindexdb to operate in parallel (Julien Rouhaud)\n>Parallel mode is enabled with the new --jobs option. \n\nIt's probably worth saying that vacuumdb --parallel just passes (parallel N)\noption to the vacuum command, which means that it processes indexes in parallel.\nWhereas reindexdb --jobs is a client-side feature, creating a separate sessions\nfor each.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 May 2020 12:50:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 12:50:01PM -0500, Justin Pryzby wrote:\n> On Tue, May 05, 2020 at 01:18:09PM -0400, Bruce Momjian wrote:\n> > > |Release date: 2020-05-03\n> > > => Should say 2020-XX-XX, before someone like me goes and installs it everywhere in sight.\n> > \n> > Agreed!\n> > \n> > > |These triggers cannot change the destination partition. \n> > > => Maybe say \"cannot change which partition is the destination\"\n> \n> Looks like you copied my quote mark :(\n\nI kind of liked it, but OK, removed. 
;-)\n\n> > > | Allow hash aggregation to use disk storage for large aggregation result sets (Jeff Davis) \n> > > | Previously, hash aggregation was not used if it was expected to use more than work_mem memory. This is controlled by enable_hashagg_disk. \n> > > => enable_hashagg_disk doesn't behave like other enable_* parameters.\n> > > As I understand, disabling it only \"opportunisitically\" avoids plans which are\n> > > *expected* to overflow work_mem. I think we should specifically say that, and\n> > > maybe suggest recalibrating work_mem.\n> > \n> > I went with \"avoided\":\n> > \n> > \tPreviously, hash aggregation was avoided if it was expected to use more\n> > \tthan work_mem memory. This is controlled by enable_hashagg_disk.\n> \n> I think we should expand on this:\n> \n> |Previously, hash aggregation was avoided if it was expected to use more than\n> |work_mem memory, but it wasn't enforced, and hashing could still exceed\n> |work_mem. To get back the old behavior, increasing work_mem.\n\nI think work_mem has too many other effects to recommend just changing\nit for this.\n\n> |The parameter enable_hashagg_disk controls whether a plan which is *expected*\n> |to spill to disk will be considered. During execution, an aggregate node which\n> |exceeding work_mem will spill to disk regardless of this parameter.\n> \n> I wrote something similar here:\n> https://www.postgresql.org/message-id/20200407223900.GT2228%40telsasoft.com\n\nI think this kind of information should be in our docs, not really the\nrelease notes.\n\n> > > | This is controlled by GUC wal_skip_threshold. 
\n> > > I think you should say that's a size threshold which determines which strategy\n> > > to use (WAL or fsync).\n> > \n> > I went with:\n> > \tThe WAL write amount where this happens is controlled by wal_skip_threshold.\n> >\n> > They can use the doc link if they want more detail.\n> \n> I guess I would say \"relations larger than wal_skip_threshold will be fsynced\n> rather than copied to WAL\"\n\nHow is this?\n\n\tRelations larger than wal_skip_threshold will have their files fynsced\n\trather than writing their WAL records.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 14:10:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 12:50:11PM -0500, Justin Pryzby wrote:\n> Do you want to include any of these?\n> \n> 5823677acc Provide pgbench --show-script to dump built-in scripts.\n> ce8f946764 Report the time taken by pgbench initialization steps.\n\nI am kind of unclear how much of pgbench changes to put in the release\nnotes since the use seems so specialized, but maybe that is wrong.\n\n> 751c63cea0 Remove pg_regress' --load-language option.\n\nWell, for the same reasons, that is our regression tests, which I assume\nis more for internal use.\n\n> 33753ac9d7 Add object names to partition integrity violations.\n\nImproving error messages is not something I usually cover. 
People like\nto see the better error messages, but don't really value knowing about\nthem before-hand, I am guessing.\n\n> 246f136e76 Improve handling of parameter differences in physical replication\n\nThat seems to fall in the category above.\n\n> a01e1b8b9d Add new part SQL/MDA to information_schema.sql_parts 33e27c3785c5ce8a3264d6af2550ec5adcebc517\n\nUh, that didn't seem significant.\n\n> 2fc2a88e67 Remove obsolete information schema tables\n> b9b408c487 Record parents of triggers (tgparentid)\n> 2b2bacdca0 Reset statement_timeout between queries of a multi-query string.\n\nThat seemed like a bug fix for unusual usage.\n\n> 09f08930f0 initdb: Change authentication defaults Change the defaults for the pg_hba.conf generated by initdb to \"peer\" or \"md5\"\n\nUh, that was reverted:\n\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2019-07-22 [796188658] Revert \"initdb: Change authentication defaults\"\n\t\n\t Revert \"initdb: Change authentication defaults\"\n\t\n\t This reverts commit 09f08930f0f6fd4a7350ac02f29124b919727198.\n\t\n\t The buildfarm client needs some adjustments first.\n\n> >Improve control of prepared statement parameter logging \n> >The GUC setting log_parameter_max_length controls the maximum length of parameter values output during statement logging, and log_parameter_max_length_on_error allows parameters to be output on\n> I think the original commit (ba79cb5d) gets lost in the description of the\n> details and the following commit. 
I would say:\n> |Emit parameter values during query bind/execute errors and allow control of the max length logged by statement logging (Alexey Bashtanov, Álvaro Herrera)\n> |The GUC setting log_parameter_max_length controls the maximum length of parameter values output during statement logging, and log_parameter_max_length_on_error allows parameters to be output on\n> |error.\n> Or maybe split that into two items.\n\nI struggled on this one because we are both limiting parameter length\nwhen logging of normal queries _and_ adding parameter logging of error\nqueries, also with a length limit.\n\nI tried a few approaches but ended up with this:\n\n\tImprove control of prepared statement parameter logging (Alexey\n\tBashtanov, &Aacute;lvaro Herrera)\n\n\tThe GUC setting log_parameter_max_length controls the maximum\n\tlength of parameter values output during statement non-error\n\tlogging, and log_parameter_max_length_on_error does the same\n\tfor error statement logging. Previously, prepared statement\n\tparameters were not logged during errors.\n\n> > Allow track_activity_query_size to be set to 1MB (Vyacheslav Makarov)\n> Say \"*up* to 1MB\".\n\nAgreed.\n\n> > super users\n> say \"superuser\" ?\n\nAll fixed, thanks.\n\n> >Allow allow_system_table_mods to be changed after server start (Peter Eisentraut)\n> >Disallow non-super users from modifying system tables when allow_system_table_mods is set (Peter Eisentraut) \n> I think these two entries can be merged.\n\nUh, they are quite different. The first one is about not requiring a\nreboot, while the second is a fix for allowing users to do things when it\nis set that they should not be able to do.\n\n> >Add parallel control of the vacuumdb command using --parallel (Masahiko Sawada)\n> >Allow reindexdb to operate in parallel (Julien Rouhaud)\n> >Parallel mode is enabled with the new --jobs option. 
\n> \n> It's probably worth saying that vacuumdb --parallel just passes (parallel N)\n> option to the vacuum command, which means that it processes indexes in parallel.\n> Whereas reindexdb --jobs is a client-side feature, creating a separate sessions\n> for each.\n\nOh, wow, good point, and very subtle. I used this text:\n\n\tAllow vacuum commands run by vacuumdb to operate in parallel mode\n\t(Masahiko Sawada)\n\t\n\tThis is enabled with the new --parallel option.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 14:37:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, 2020-05-04 at 23:16 -0400, Bruce Momjian wrote:\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n\nRegarding grouping sets:\n\nJust like hash aggregation, grouping sets could exceed work_mem by\nlarge amounts in v12. Now, in v13, they both use disk when appropriate.\n\nThere's an open item where we will consider tweaking the GUCs, so the\ndescriptions might change slightly.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 05 May 2020 11:44:33 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 05, 2020 at 02:10:24PM -0400, Bruce Momjian wrote:\n> > > > | This is controlled by GUC wal_skip_threshold. 
\n> > > > I think you should say that's a size threshold which determines which strategy\n> > > > to use (WAL or fsync).\n> > > \n> > > I went with:\n> > > \tThe WAL write amount where this happens is controlled by wal_skip_threshold.\n> > >\n> > > They can use the doc link if they want more detail.\n> > \n> > I guess I would say \"relations larger than wal_skip_threshold will be fsynced\n> > rather than copied to WAL\"\n> \n> How is this?\n> \n> \tRelations larger than wal_skip_threshold will have their files fynsced\n> \trather than writing their WAL records.\n\nI see I was too late, but:\n\nFix typo (fynsc) and maybe add parens().\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 May 2020 13:45:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-May-05, Bruce Momjian wrote:\n\n> > |Allow incremental sorting (James Coleman, Alexander Korotkov) \n> > s/Allow/Implement/ ?\n> \n> Agreed.\n\nFWIW I think Tomas Vondra should be credited as coauthor of this\nfeature. 
He didn't list himself as author in the commit message but I'm\npretty sure that's out of modesty, not lack of contribution.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 May 2020 15:21:00 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-May-05, Bruce Momjian wrote:\n\n> On Tue, May 5, 2020 at 12:50:11PM -0500, Justin Pryzby wrote:\n> > Do you want to include any of these?\n> > \n> > 5823677acc Provide pgbench --show-script to dump built-in scripts.\n> > ce8f946764 Report the time taken by pgbench initialization steps.\n> \n> I am kind of unclear how much of pgbench changes to put in the release\n> notes since the use seems so specialized, but maybe that is wrong.\n\nMaybe it would make sense to group all pgbench changes in a subsection\nof their own?\n\n> > a01e1b8b9d Add new part SQL/MDA to information_schema.sql_parts 33e27c3785c5ce8a3264d6af2550ec5adcebc517\n> > 2fc2a88e67 Remove obsolete information schema tables\n> \n> Uh, that didn't seem significant.\n\nMaybe have one item \"modernize information_schema\", and then describe\nall the changes together in a single item.\n\n> I tried a few approaches but ended up with this:\n> \n> \tImprove control of prepared statement parameter logging (Alexey\n> \tBashtanov, &Aacute;lvaro Herrera)\n> \n> \tThe GUC setting log_parameter_max_length controls the maximum\n> \tlength of parameter values output during statement non-error\n> \tlogging, and log_parameter_max_length_on_error does the same\n> \tfor error statement logging. Previously, prepared statement\n> \tparameters were not logged during errors.\n\nThis seems good to me. 
I think Tom Lane should be listed as coauthor of\nthis item.\n\n> I used this text:\n> \n> \tAllow vacuum commands run by vacuumdb to operate in parallel mode\n> \t(Masahiko Sawada)\n> \t\n> \tThis is enabled with the new --parallel option.\n\nI think the vacuumdb item should be merged with the item for 40d964ec9,\nsince this is just about vacuumdb gaining control of the new VACUUM\nfeature. It's not something you can use separately from that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 May 2020 15:44:37 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 5/5/20 3:21 PM, Alvaro Herrera wrote:\n> On 2020-May-05, Bruce Momjian wrote:\n> \n>>> |Allow incremental sorting (James Coleman, Alexander Korotkov)\n>>> s/Allow/Implement/ ?\n>>\n>> Agreed.\n> \n> FWIW I think Tomas Vondra should be credited as coauthor of this\n> feature. 
He didn't list himself as author in the commit message but I'm\n> pretty sure that's out of modesty, not lack of contribution.\n\n+1.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 5 May 2020 15:44:41 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 05, 2020 at 03:44:37PM -0400, Alvaro Herrera wrote:\n> On 2020-May-05, Bruce Momjian wrote:\n> > On Tue, May 5, 2020 at 12:50:11PM -0500, Justin Pryzby wrote:\n> > > Do you want to include any of these?\n> > > \n> > > 5823677acc Provide pgbench --show-script to dump built-in scripts.\n> > > ce8f946764 Report the time taken by pgbench initialization steps.\n> > \n> > I am kind of unclear how much of pgbench changes to put in the release\n> > notes since the use seems so specialized, but maybe that is wrong.\n> \n> Maybe it would make sense to group all pgbench changes in a subsection\n> of their own?\n> \n> > > a01e1b8b9d Add new part SQL/MDA to information_schema.sql_parts 33e27c3785c5ce8a3264d6af2550ec5adcebc517\n> > > 2fc2a88e67 Remove obsolete information schema tables\n> > \n> > Uh, that didn't seem significant.\n> \n> Maybe have one item \"modernize information_schema\", and then describe\n> all the changes together in a single item.\n\nFYI, I did the same as last year, so I found these from output of something\nlike git log --cherry-pick REL_12_STABLE...master\nI asked to avoid false negatives, not because I specifically want those commits\nmentioned.\n\n> > I used this text:\n> > \n> > \tAllow vacuum commands run by vacuumdb to operate in parallel mode\n> > \t(Masahiko Sawada)\n> > \t\n> > \tThis is enabled with the new --parallel option.\n> \n> I think the vacuumdb item should be merged with the item for 40d964ec9,\n> since this is just about vacuumdb gaining control of the new VACUUM\n> feature. 
It's not something you can use separately from that.\n\n+1\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 May 2020 14:53:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 11:44:33AM -0700, Jeff Davis wrote:\n> On Mon, 2020-05-04 at 23:16 -0400, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-13.html\n> > \n> > It still needs markup, word wrap, and indenting. The community doc\n> > build should happen in a few hours.\n> \n> Regarding grouping sets:\n> \n> Just like hash aggregation, grouping sets could exceed work_mem by\n> large amounts in v12. Now, in v13, they both use disk when appropriate.\n\nOh, another place I should change \"not used\" to \"avoided\". Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 16:16:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 01:45:37PM -0500, Justin Pryzby wrote:\n> On Tue, May 05, 2020 at 02:10:24PM -0400, Bruce Momjian wrote:\n> > > > > | This is controlled by GUC wal_skip_threshold. 
\n> > > > > I think you should say that's a size threshold which determines which strategy\n> > > > > to use (WAL or fsync).\n> > > > \n> > > > I went with:\n> > > > \tThe WAL write amount where this happens is controlled by wal_skip_threshold.\n> > > >\n> > > > They can use the doc link if they want more detail.\n> > > \n> > > I guess I would say \"relations larger than wal_skip_threshold will be fsynced\n> > > rather than copied to WAL\"\n> > \n> > How is this?\n> > \n> > \tRelations larger than wal_skip_threshold will have their files fynsced\n> > \trather than writing their WAL records.\n> \n> I see I was too late, but:\n> \n> Fix typo (fynsc) and maybe add parens().\n\nAh, I was looking for fsync to fix that and could not find it. Now I\nfound it with that spelling, I ended up using \"fsync'ed\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 16:18:28 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 03:21:00PM -0400, Alvaro Herrera wrote:\n> On 2020-May-05, Bruce Momjian wrote:\n> \n> > > |Allow incremental sorting (James Coleman, Alexander Korotkov) \n> > > s/Allow/Implement/ ?\n> > \n> > Agreed.\n> \n> FWIW I think Tomas Vondra should be credited as coauthor of this\n> feature. He didn't list himself as author in the commit message but I'm\n> pretty sure that's out of modesty, not lack of contribution.\n\nThanks, added. We will hunt that modesty down! LOL\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 16:22:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 03:44:37PM -0400, Alvaro Herrera wrote:\n> On 2020-May-05, Bruce Momjian wrote:\n> \n> > On Tue, May 5, 2020 at 12:50:11PM -0500, Justin Pryzby wrote:\n> > > Do you want to include any of these?\n> > > \n> > > 5823677acc Provide pgbench --show-script to dump built-in scripts.\n> > > ce8f946764 Report the time taken by pgbench initialization steps.\n> > \n> > I am kind of unclear how much of pgbench changes to put in the release\n> > notes since the use seems so specialized, but maybe that is wrong.\n> \n> Maybe it would make sense to group all pgbench changes in a subsection\n> of their own?\n\npgbench already has its own section in the docs:\n\n\tE.1.3.8.2. pgbench\n\nI would be glad to expand it since it is easy to pick out pgbench items\nfrom the commit logs.\n\n> > > a01e1b8b9d Add new part SQL/MDA to information_schema.sql_parts 33e27c3785c5ce8a3264d6af2550ec5adcebc517\n> > > 2fc2a88e67 Remove obsolete information schema tables\n> > \n> > Uh, that didn't seem significant.\n> \n> Maybe have one item \"modernize information_schema\", and then describe\n> all the changes together in a single item.\n\nUh, so I am unclear when we are adding items to information_schema\nbecause we now support them, and when they are new features of\ninformation_schema.\n\n> > I tried a few approaches but ended up with this:\n> > \n> > \tImprove control of prepared statement parameter logging (Alexey\n> > \tBashtanov, &Aacute;lvaro Herrera)\n> > \n> > \tThe GUC setting log_parameter_max_length controls the maximum\n> > \tlength of parameter values output during statement non-error\n> > \tlogging, and log_parameter_max_length_on_error does the same\n> > \tfor error statement logging. 
Previously, prepared statement\n> > \tparameters were not logged during errors.\n> \n> This seems good to me. I think Tom Lane should be listed as coauthor of\n> this item.\n\nAdded.\n\n> > I used this text:\n> > \n> > \tAllow vacuum commands run by vacuumdb to operate in parallel mode\n> > \t(Masahiko Sawada)\n> > \t\n> > \tThis is enabled with the new --parallel option.\n> \n> I think the vacuumdb item should be merged with the item for 40d964ec9,\n> since this is just about vacuumdb gaining control of the new VACUUM\n> feature. It's not something you can use separately from that.\n\nWell, it is under Server Commands and has a new flag, so I thought it\nshould be here.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 16:29:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-May-05, Bruce Momjian wrote:\n>> I tried a few approaches but ended up with this:\n>> \n>> Improve control of prepared statement parameter logging (Alexey\n>> Bashtanov, &Aacute;lvaro Herrera)\n>> \n>> The GUC setting log_parameter_max_length controls the maximum\n>> length of parameter values output during statement non-error\n>> logging, and log_parameter_max_length_on_error does the same\n>> for error statement logging. Previously, prepared statement\n>> parameters were not logged during errors.\n\n> This seems good to me. 
I think Tom Lane should be listed as coauthor of\n> this item.\n\nNot necessary, I didn't do much on that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 May 2020 17:31:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 02:37:16PM -0400, Bruce Momjian wrote:\n> On Tue, May 5, 2020 at 12:50:11PM -0500, Justin Pryzby wrote:\n> > Do you want to include any of these?\n> > \n> > 5823677acc Provide pgbench --show-script to dump built-in scripts.\n\nI have added the above entry to the release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 17:34:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 05:31:26PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-May-05, Bruce Momjian wrote:\n> >> I tried a few approaches but ended up with this:\n> >> \n> >> Improve control of prepared statement parameter logging (Alexey\n> >> Bashtanov, &Aacute;lvaro Herrera)\n> >> \n> >> The GUC setting log_parameter_max_length controls the maximum\n> >> length of parameter values output during statement non-error\n> >> logging, and log_parameter_max_length_on_error does the same\n> >> for error statement logging. Previously, prepared statement\n> >> parameters were not logged during errors.\n> \n> > This seems good to me. I think Tom Lane should be listed as coauthor of\n> > this item.\n> \n> Not necessary, I didn't do much on that.\n\nOK, removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 17:35:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 05, 2020 at 05:34:26PM -0400, Bruce Momjian wrote:\n> On Tue, May 5, 2020 at 02:37:16PM -0400, Bruce Momjian wrote:\n> > On Tue, May 5, 2020 at 12:50:11PM -0500, Justin Pryzby wrote:\n> > > Do you want to include any of these?\n> > > \n> > > 5823677acc Provide pgbench --show-script to dump built-in scripts.\n> \n> I have added the above entry to the release notes.\n\n+2019-07-16 [ce8f94676] Report the time taken by pgbench initialization steps.\n+Allow pgbench to dump script contents using --show-script (Fabien Coelho)\n\nI think you confused these?\n\nce8f946764 Report the time taken by pgbench initialization steps.\n5823677acc Provide pgbench --show-script to dump built-in scripts.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 May 2020 16:39:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 04:39:58PM -0500, Justin Pryzby wrote:\n> On Tue, May 05, 2020 at 05:34:26PM -0400, Bruce Momjian wrote:\n> > On Tue, May 5, 2020 at 02:37:16PM -0400, Bruce Momjian wrote:\n> > > On Tue, May 5, 2020 at 12:50:11PM -0500, Justin Pryzby wrote:\n> > > > Do you want to include any of these?\n> > > > \n> > > > 5823677acc Provide pgbench --show-script to dump built-in scripts.\n> > \n> > I have added the above entry to the release notes.\n> \n> +2019-07-16 [ce8f94676] Report the time taken by pgbench initialization steps.\n> +Allow pgbench to dump script contents using --show-script (Fabien Coelho)\n> \n> I think you confused these?\n> \n> ce8f946764 Report the time taken by pgbench initialization steps.\n> 5823677acc Provide pgbench --show-script to dump built-in scripts.\n\nYou are correct, fixed.\n\n-- \n 
Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 May 2020 17:43:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hello Bruce,\n\n>> * \"DOCUMENT THE DEFAULT GENERATION METHOD\"\n>> => The default is still to generate data client-side.\n>\n> My point is that the docs are not clear about this.\n\nIndeed.\n\n> Can you fix it?\n\nSure. Attached patch adds an explicit sentence about it, as it was only \nhinted about in the default initialization command string, and removes a \nspurious empty paragraph found nearby.\n\n-- \nFabien.", "msg_date": "Wed, 6 May 2020 07:36:21 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\n\n> On 5 May 2020, at 08:16, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n\nI'm not sure, but it is probably worth mentioning in the \"General performance\" section that TOAST (and everything pglz-compressed) decompression should be significantly faster in v13.\nhttps://github.com/postgres/postgres/commit/c60e520f6e0e8db9618cad042df071a6752f3c06\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 6 May 2020 11:17:54 +0500", "msg_from": "\"Andrey M. 
Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 04, 2020 at 11:16:00PM -0400, Bruce Momjian wrote:\n> Allow skipping of WAL for new tables and indexes if wal_level is 'minimal' (Noah Misch)\n\nKyotaro Horiguchi authored that one. (I committed it.) The commit message\nnoted characteristics, some of which may deserve mention in the notes:\n\n- Crash recovery was losing tuples written via COPY TO. This fixes the bug.\n- Out-of-tree table access methods will require changes.\n- Users setting a timeout on COMMIT may need to adjust that timeout, and\n log_min_duration_statement analysis will reflect time consumption moving to\n COMMIT from commands like COPY.\n- COPY has worked this way for awhile; this extends it to all modifications.\n\n\n", "msg_date": "Tue, 5 May 2020 23:39:10 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 6:16 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n>\n> https://momjian.us/pgsql_docs/release-13.html\n\nGreat, thank you!\n\nIt seems that opclass parameters (911e702077) are not reflected in the\nrelease notes.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 6 May 2020 15:18:54 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hi Bruce,\n\nOn Tue, May 5, 2020 at 12:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n>\n> https://momjian.us/pgsql_docs/release-13.html\n>\n> It still needs markup, word wrap, and indenting. 
The community doc\n> build should happen in a few hours.\n\nThanks for this as always.\n\n+Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n+2019-08-07 [4e85642d9] Apply constraint exclusion more generally in partitionin\n+Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n+2019-08-13 [815ef2f56] Don't constraint-exclude partitioned tables as much\n+-->\n+\n+<para>\n+Improve cases where pruning of partitions can happen (Amit Langote,\nYuzuko Hosoya, Álvaro Herrera)\n+</para>\n\nThe following commit should be included with this item:\n\ncommit 489247b0e615592111226297a0564e11616361a5\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Sun Aug 4 11:18:45 2019 -0400\n\n Improve pruning of a default partition\n\nPrimary author for this commit and the person who raised various\nproblems that led to these improvements is Yuzuko Hosoya. So I think\nher name should go first.\n\n+Author: Etsuro Fujita <efujita@postgresql.org>\n+2020-04-08 [c8434d64c] Allow partitionwise joins in more cases.\n+Author: Tom Lane <tgl@sss.pgh.pa.us>\n+2020-04-07 [981643dcd] Allow partitionwise join to handle nested FULL JOIN USIN\n+-->\n+\n+<para>\n+Allow partitionwise joins to happen in more cases (Ashutosh Bapat,\nEtsuro Fujita, Amit Langote)\n+</para>\n\nMaybe it would be better to break this into two items, because while\nc8434d64c is significant new functionality that I only contributed a\nfew review comments towards, 981643dcd is relatively minor surgery of\npartitionwise join code to handle FULL JOINs correctly. 
Tom's rewrite\nof my patch for the latter was pretty significant too, so maybe better\nto list his name as well.\n\n+<!--\n+Author: Peter Eisentraut <peter@eisentraut.org>\n+2020-04-06 [f1ac27bfd] Add logical replication support to replicate into partit\n+-->\n+\n+<para>\n+Allow logical replication to replicate partitioned tables (Amit Langote)\n+</para>\n+\n+</listitem>\n+\n+<listitem>\n+<!--\n+Author: Peter Eisentraut <peter@eisentraut.org>\n+2020-03-10 [17b9e7f9f] Support adding partitioned tables to publication\n+-->\n+\n+<para>\n+Allow partitioned tables to be added to replicated publications (Amit Langote)\n+</para>\n+\n+<para>\n+Partition additions/removals are replicated as well. Previously,\npartitions had to be replicated individually. HOW IS THIS DIFFERENT\nFROM THE ITEM ABOVE?\n+</para>\n\nThe former is subscription-side new functionality and the latter is\npublication-side and the two are somewhat independent.\n\nTill now, logical replication could only occur between relkind 'r'\nrelations. 
So the only way to keep a given partitioned table in sync\non the two servers was to manually add the leaf partitions (of relkind\n'r') to a publication and also manually keep the list of replicated\ntables up to date as partitions come and go, that is, by\nadding/removing them to/from the publication.\n\n17b9e7f9f (the second item) makes it possible for the partitioned\ntable (relkind 'p') to be added to the publication so that individual\nleaf partitions need not be manually kept in the publication.\nReplication still flows between the leaf partitions (relkind 'r'\nrelations) though.\n\nf1ac27bfd (the first item) makes it possible to replicate from a\nregular table (relkind 'r') into a partitioned table (relkind 'p').\nIf a given row is replicated into a partitioned table, the\nsubscription worker will route it to the correct leaf partition of\nthat partitioned table.\n\n+<listitem>\n+<!--\n+Author: Peter Eisentraut <peter@eisentraut.org>\n+2020-04-08 [83fd4532a] Allow publishing partition changes via ancestors\n+-->\n+\n+<para>\n+Allow CREATE PUBLICATION to control whether partitioned tables are\npublished as themselves or their ancestors (Amit Langote)\n+</para>\n+\n+<para>\n+The option is publish_via_partition_root.\n+</para>\n\nAnd this allows replication to optionally originate from relkind 'p'\nrelations on the publication server, whereas it could previously only\noriginate from relkind 'r' relations. Combined with the first item,\nusers can now replicate between partitioned tables that have a\ndifferent set of partitions on the two servers.\n\nMaybe it would make sense to combine the three into one item:\n\n<para>\nAdd support for logical replication of partitioned tables\n</para>\n\n<para>\nLogical replication can now occur between partitioned tables, where\npreviously it would only be allowed between regular tables. 
A new\npublication option <literal>publish_via_partition_root</literal>\ncontrols whether a leaf partition's changes are published as its own\nor as that of the ancestor that's actually published.\n</para>\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 May 2020 00:06:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 05/05/20 10:31, Bruce Momjian wrote:\n> On Tue, May 5, 2020 at 09:20:39PM +0800, John Naylor wrote:\n>> ... This patch is\n>> about the server encoding, which formerly needed to be utf-8 for\n>> non-ascii characters. (I think the client encoding doesn't matter as\n>> long as ascii bytes are represented.)\n>>\n>> +<para>\n>> +The UTF-8 characters must be available in the server encoding.\n>> +</para>\n>>\n>> Same here, s/UTF-8/Unicode/.\n> \n> OK, new text is:\n> \n> \tAllow Unicode escapes, e.g., E'\\u####', in clients that don't use UTF-8\n> \tencoding (Tom Lane)\n> \t\n> \tThe Unicode characters must be available in the server encoding.\n> \n> I kept the \"UTF-8 encoding\" since that is the only Unicode encoding we\n> support.\n\nMy understanding also was that it matters little to this change what the\n/client's/ encoding is.\n\nThere used to be a limitation of the server's lexer that would reject\nUnicode escapes whenever the /server's/ encoding wasn't UTF-8 (even\nif the server's encoding contained the characters the escapes represent).\nI think that limitation is what was removed.\n\nI don't think the client encoding comes into it at all. 
Sure, you could\njust include the characters literally if they are in the client encoding,\nbut you might still choose to express them as escapes, and if you do they\nget passed that way to the server for interpretation.\n\nI had assumed the patch applied to all of the forms U&'\\####',\nU&'\\+######', E'\\u####', and E'\\U######' but I don't think I read\nthe patch to be sure of that.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 6 May 2020 16:01:44 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 05/06/20 16:01, Chapman Flack wrote:\n> I had assumed the patch applied to all of the forms U&'\\####',\n> U&'\\+######', E'\\u####', and E'\\U######' ...\n\nannnd that last form needs to have eight #s. (Can't be respelled with 4 ♭s.)\n\n\n-Chap\n\n\n", "msg_date": "Wed, 6 May 2020 16:07:19 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 6, 2020 at 07:36:21AM +0200, Fabien COELHO wrote:\n> \n> Hello Bruce,\n> \n> > > * \"DOCUMENT THE DEFAULT GENERATION METHOD\"\n> > > => The default is still to generate data client-side.\n> > \n> > My point is that the docs are not clear about this.\n> \n> Indeed.\n> \n> > Can you fix it?\n> \n> Sure. Attached patch adds an explicit sentence about it, as it was only\n> hinted about in the default initialization command string, and removes a\n> spurious empty paragraph found nearby.\n\nThanks, patch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 6 May 2020 19:07:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 6, 2020 at 11:17:54AM +0500, Andrey M. Borodin wrote:\n> \n> \n> > On 5 May 2020, at 08:16, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-13.html\n> > \n> > It still needs markup, word wrap, and indenting. The community doc\n> > build should happen in a few hours.\n> \n> I'm not sure, but probably it worth mentioning in \"General performance\" section that TOAST (and everything pglz-compressed) decompression should be significantly faster in v13.\n> https://github.com/postgres/postgres/commit/c60e520f6e0e8db9618cad042df071a6752f3c06\n\nOK, I read the thread mentioned in the commit message and I now see the\nvalue of this change. Attached is the release note diff. Let me know\nif it needs improvement.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +", "msg_date": "Wed, 6 May 2020 19:35:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 11:39:10PM -0700, Noah Misch wrote:\n> On Mon, May 04, 2020 at 11:16:00PM -0400, Bruce Momjian wrote:\n> > Allow skipping of WAL for new tables and indexes if wal_level is 'minimal' (Noah Misch)\n> \n> Kyotaro Horiguchi authored that one. (I committed it.) The commit message\n> noted characteristics, some of which may deserve mention in the notes:\n\nFixed.\n\n> - Crash recovery was losing tuples written via COPY TO.
This fixes the bug.\n\nThis was not backpatched?\n\n> - Out-of-tree table access methods will require changes.\n\nUh, I don't think we mention those.\n\n> - Users setting a timeout on COMMIT may need to adjust that timeout, and\n> log_min_duration_statement analysis will reflect time consumption moving to\n> COMMIT from commands like COPY.\n\nUh, not sure how to say that but I don't think we would normally mention that.\n\n> - COPY has worked this way for awhile; this extends it to all modifications.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 6 May 2020 19:40:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 6, 2020 at 03:18:54PM +0300, Alexander Korotkov wrote:\n> On Tue, May 5, 2020 at 6:16 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have committed the first draft of the PG 13 release notes. 
You can\n> > see them here:\n> >\n> > https://momjian.us/pgsql_docs/release-13.html\n> \n> Great, thank you!\n> \n> It seems that opclass parameters (911e702077) are not reflected in the\n> release notes.\n\nUh, I have these items, just not that commit id:\n\n\t<listitem>\n\t<!--\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2020-03-30 [911e70207] Implement operator class parameters\n\t-->\n\t\n\t<para>\n\tAllow index operator classes to take parameters (Nikita Glukhov)\n\t</para>\n\t\n\t</listitem>\n\t\n\t<listitem>\n\t<!--\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2020-03-30 [911e70207] Implement operator class parameters\n\t-->\n\t\n\t<para>\n\tAllow CREATE INDEX to specify the GiST signature length and maximum number of integer ranges (Nikita Glukhov)\n\t</para>\n\t\n\t</listitem>\n\nIs that OK?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 6 May 2020 19:46:52 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 06, 2020 at 07:35:34PM -0400, Bruce Momjian wrote:\n> On Wed, May 6, 2020 at 11:17:54AM +0500, Andrey M. Borodin wrote:\n> > I'm not sure, but probably it worth mentioning in \"General performance\" section that TOAST (and everything pglz-compressed) decompression should be significantly faster in v13.\n> > https://github.com/postgres/postgres/commit/c60e520f6e0e8db9618cad042df071a6752f3c06\n> \n> OK, I read the thread mentioned in the commit message and I now see the\n> value of this change. Attached is the release note diff. 
Let me know\n> if it needs improvement.\n\nSorry I didn't see it earlier, but:\n\n> -Improve retrieving of only the leading bytes of TOAST values (Binguo Bao)\n> +Improve speed of TOAST decompression and the retrievel of only the leading bytes of TOAST values (Binguo Bao, Andrey Borodin)\n\nretrieval\n\nI will include this with my running doc patch if you don't want to make a\nseparate commit.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 6 May 2020 19:31:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 06, 2020 at 07:40:25PM -0400, Bruce Momjian wrote:\n> On Tue, May 5, 2020 at 11:39:10PM -0700, Noah Misch wrote:\n> > On Mon, May 04, 2020 at 11:16:00PM -0400, Bruce Momjian wrote:\n> > > Allow skipping of WAL for new tables and indexes if wal_level is 'minimal' (Noah Misch)\n> > \n> > Kyotaro Horiguchi authored that one. (I committed it.) The commit message\n> > noted characteristics, some of which may deserve mention in the notes:\n> \n> Fixed.\n\nI don't see that change pushed (but it's not urgent).\n\n> > - Crash recovery was losing tuples written via COPY TO. This fixes the bug.\n> \n> This was not backpatched?\n\nRight.\n\n> > - Out-of-tree table access methods will require changes.\n> \n> Uh, I don't think we mention those.\n\nOkay. This point is relatively-important. On the other hand, the table\naccess methods known to me have maintainers who follow -hackers. 
They may\nlearn that way.\n\n> > - Users setting a timeout on COMMIT may need to adjust that timeout, and\n> > log_min_duration_statement analysis will reflect time consumption moving to\n> > COMMIT from commands like COPY.\n> \n> Uh, not sure how to say that but I don't think we would normally mention that.\n\nOkay.\n\n\n", "msg_date": "Wed, 6 May 2020 22:20:57 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\n\n> On 7 May 2020, at 04:35, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Wed, May 6, 2020 at 11:17:54AM +0500, Andrey M. Borodin wrote:\n>> \n>> \n>>> On 5 May 2020, at 08:16, Bruce Momjian <bruce@momjian.us> wrote:\n>>> \n>>> I have committed the first draft of the PG 13 release notes. You can\n>>> see them here:\n>>> \n>>> \thttps://momjian.us/pgsql_docs/release-13.html\n>>> \n>>> It still needs markup, word wrap, and indenting. The community doc\n>>> build should happen in a few hours.\n>> \n>> I'm not sure, but probably it worth mentioning in \"General performance\" section that TOAST (and everything pglz-compressed) decompression should be significantly faster in v13.\n>> https://github.com/postgres/postgres/commit/c60e520f6e0e8db9618cad042df071a6752f3c06\n> \n> OK, I read the thread mentioned in the commit message and I now see the\n> value of this change. Attached is the release note diff. Let me know\n> if it needs improvement.\n\nThere is one minor typo retrievel -> retrieval.\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 7 May 2020 11:25:47 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hello Bruce,\n\n>>>> * \"DOCUMENT THE DEFAULT GENERATION METHOD\"\n>>>> => The default is still to generate data client-side.\n>>>\n>>> My point is that the docs are not clear about this.\n>>\n>> Indeed.\n>>\n>>> Can you fix it?\n>>\n>> Sure. Attached patch adds an explicit sentence about it, as it was only\n>> hinted about in the default initialization command string, and removes a\n>> spurious empty paragraph found nearby.\n>\n> Thanks, patch applied.\n\nOk.\n\nYou might remove the \"DOCUMENT THE DEFAULT…\" in the release note.\n\nI'm wondering about the commit comment: \"Reported-by: Fabien COELHO\", \nactually you reported it, not me!\n\nAfter looking again at the release notes, I do really think that \nsignificant documentation changes do not belong to the \"Source code\" \nsection but should be in separate \"Documentation\" section, and that more \nitems should be listed there, because they represent a lot of not-so-fun \nwork, especially Tom's restructuration of tables, and possibly others.\n\nAbout pgbench, ISTM that d37ddb745be07502814635585cbf935363c8a33d is worth \nmentioning because it is a user-visible change.\n\n-- \nFabien.", "msg_date": "Thu, 7 May 2020 08:29:55 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 12:06:33AM +0900, Amit Langote wrote:\n> Hi Bruce,\n> \n> On Tue, May 5, 2020 at 12:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> >\n> > https://momjian.us/pgsql_docs/release-13.html\n> >\n> > It still needs markup, word wrap, and indenting.
The community doc\n> > build should happen in a few hours.\n> \n> Thanks for this as always.\n> \n> +Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> +2019-08-07 [4e85642d9] Apply constraint exclusion more generally in partitionin\n> +Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> +2019-08-13 [815ef2f56] Don't constraint-exclude partitioned tables as much\n> +-->\n> +\n> +<para>\n> +Improve cases where pruning of partitions can happen (Amit Langote,\n> Yuzuko Hosoya, Álvaro Herrera)\n> +</para>\n> \n> The following commit should be included with this item:\n> \n> commit 489247b0e615592111226297a0564e11616361a5\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: Sun Aug 4 11:18:45 2019 -0400\n> \n> Improve pruning of a default partition\n> \n> Primary author for this commit and the person who raised various\n> problems that led to these improvements is Yuzuko Hosoya. So I think\n> her name should go first.\n\nOK, I have moved her name to be first. FYI, this commit was backpatched\nback through PG 11, though the commit message doesn't mention that.\n\n\tcommit 8654407148\n\tAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\n\tDate: Sun Aug 4 11:18:45 2019 -0400\n\t\n\t Improve pruning of a default partition\n\t\n\t When querying a partitioned table containing a default partition, we\n\t were wrongly deciding to include it in the scan too early in the\n\t process, failing to exclude it in some cases. If we reinterpret the\n\t PruneStepResult.scan_default flag slightly, we can do a better job at\n\t detecting that it can be excluded.
The change is that we avoid setting\n\t the flag for that pruning step unless the step absolutely requires the\n\t default partition to be scanned (in contrast with the previous\n\t arrangement, which was to set it unless the step was able to prune it).\n\t So get_matching_partitions() must explicitly check the partition that\n\t each returned bound value corresponds to in order to determine whether\n\t the default one needs to be included, rather than relying on the flag\n\t from the final step result.\n\t\n\t Author: Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp>\n\t Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>\n\t Discussion: https://postgr.es/m/00e601d4ca86$932b8bc0$b982a340$@lab.ntt.co.jp\n\nFYI, I don't see backpatched commits when creating the release notes.\n\n> +Author: Etsuro Fujita <efujita@postgresql.org>\n> +2020-04-08 [c8434d64c] Allow partitionwise joins in more cases.\n> +Author: Tom Lane <tgl@sss.pgh.pa.us>\n> +2020-04-07 [981643dcd] Allow partitionwise join to handle nested FULL JOIN USIN\n> +-->\n> +\n> +<para>\n> +Allow partitionwise joins to happen in more cases (Ashutosh Bapat,\n> Etsuro Fujita, Amit Langote)\n> +</para>\n> \n> Maybe it would be better to break this into two items, because while\n> c8434d64c is significant new functionality that I only contributed a\n> few review comments towards, 981643dcd is relatively minor surgery of\n\nWhat text would we use for the new item? I thought FULL JOIN was just\nanother case that matched the description I had.\n\n> partitionwise join code to handle FULL JOINs correctly. 
Tom's rewrite\n> of my patch for the latter was pretty significant too, so maybe better\n> to list his name as well.\n\nOK, I have added Tom's name.\n\n> +<!--\n> +Author: Peter Eisentraut <peter@eisentraut.org>\n> +2020-04-06 [f1ac27bfd] Add logical replication support to replicate into partit\n> +-->\n> +\n> +<para>\n> +Allow logical replication to replicate partitioned tables (Amit Langote)\n> +</para>\n> +\n> +</listitem>\n> +\n> +<listitem>\n> +<!--\n> +Author: Peter Eisentraut <peter@eisentraut.org>\n> +2020-03-10 [17b9e7f9f] Support adding partitioned tables to publication\n> +-->\n> +\n> +<para>\n> +Allow partitioned tables to be added to replicated publications (Amit Langote)\n> +</para>\n> +\n> +<para>\n> +Partition additions/removals are replicated as well. Previously,\n> partitions had to be replicated individually. HOW IS THIS DIFFERENT\n> FROM THE ITEM ABOVE?\n> +</para>\n> \n> The former is subscription-side new functionality and the latter is\n> publication-side and the two are somewhat independent.\n> \n> Till now, logical replication could only occur between relkind 'r'\n> relations. 
So the only way to keep a given partitioned table in sync\n> on the two servers was to manually add the leaf partitions (of relkind\n> 'r') to a publication and also manually keep the list of replicated\n> tables up to date as partitions come and go, that is, by\n> adding/removing them to/from the publication.\n> \n> 17b9e7f9f (the second item) makes it possible for the partitioned\n> table (relkind 'p') to be added to the publication so that individual\n> leaf partitions need not be manually kept in the publication.\n> Replication still flows between the leaf partitions (relkind 'r'\n> relations) though.\n> \n> f1ac27bfd (the first item) makes is possible to replicate from a\n> regular table (relkind 'r') into a partitioned table (relkind 'p').\n> If a given row is replicated into a partitioned table, the\n> subscription worker will route it to the correct leaf partition of\n> that partitioned table.\n\nWow, that is complicated.\n\n> +<listitem>\n> +<!--\n> +Author: Peter Eisentraut <peter@eisentraut.org>\n> +2020-04-08 [83fd4532a] Allow publishing partition changes via ancestors\n> +-->\n> +\n> +<para>\n> +Allow CREATE PUBLICATION to control whether partitioned tables are\n> published as themselves or their ancestors (Amit Langote)\n> +</para>\n> +\n> +<para>\n> +The option is publish_via_partition_root.\n> +</para>\n> \n> And this allows replication to optionally originate from relkind 'p'\n> relations on the publication server, whereas it could previously only\n> originate from relkind 'r' relations. Combined with the first item,\n> users can now replicate between partitioned tables that have a\n> different set of partitions on the two servers.\n> \n> Maybe it would make sense to combine the three into one item:\n> \n> <para>\n> Add support for logical replication of partitioned tables\n> </para>\n> \n> <para>\n> Logical replication can now occur between partitioned tables, where\n> previously it would only be allowed between regular tables. 
A new\n> publication option <literal>publish_via_partition_root</literal>\n> controls whether a leaf partition's changes are published as its own\n> or as that of the ancestor that's actually published.\n> </para>\n\nI think trying to put this all into one item is too complex, but I did\nmerge two of the items together, so we have two items now:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2020-03-10 [17b9e7f9f] Support adding partitioned tables to publication\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2020-04-08 [83fd4532a] Allow publishing partition changes via ancestors\n\t-->\n\t\n\t<para>\n\tAllow partitioned tables to be replicated via publications (Amit Langote)\n\t</para>\n\t\n\t<para>\n\tPreviously, partitions had to be replicated individually. Now\n\tpartitioned tables can be published explicitly causing all partitions\n\tto be automatically published. Addition/removal of partitions from\n\tpartitioned tables are automatically added/removed on subscribers.\n\tThe CREATE PUBLICATION option publish_via_partition_root controls whether\n\tpartitioned tables are published as themselves or their ancestors.\n\t</para>\n\t\n\t</listitem>\n\t\n\t<listitem>\n\t<!--\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2020-04-06 [f1ac27bfd] Add logical replication support to replicate into partit\n\t-->\n\t\n\t<para>\n\tAllow non-partitioned tables to be logically replicated to subscribers\n\tthat receive the rows into partitioned tables (Amit Langote)\n\t</para>\n\t\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 08:48:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 08:48:49AM -0400, Bruce Momjian wrote:\n> I think trying to put this all into one item is too complex, but I did\n> merge two of the items together, so we have two items now:\n\nI ended up adjusting the wording again, so please review the commit or\nthe website:\n\n\thttps://momjian.us/pgsql_docs/release-13.html\n\nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 09:02:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 08:29:55AM +0200, Fabien COELHO wrote:\n> \n> Hello Bruce,\n> \n> > > > > * \"DOCUMENT THE DEFAULT GENERATION METHOD\"\n> > > > > => The default is still to generate data client-side.\n> > > > \n> > > > My point is that the docs are not clear about this.\n> > > \n> > > Indeed.\n> > > \n> > > > Can you fix it?\n> > > \n> > > Sure. Attached patch adds an explicit sentence about it, as it was only\n> > > hinted about in the default initialization command string, and removes a\n> > > spurious empty paragraph found nearby.\n> > \n> > Thanks, patch applied.\n> \n> Ok.\n> \n> You might remove the \"DOCUMENT THE DEFAULT…\" in the release note.\n\nOh, yes, of course.\n\n> I'm wondering about the commit comment: \"Reported-by: Fabien COELHO\",\n> actually you reported it, not me!\n\nUh, kind of, yeah, but via email, you did. 
;-)\n\n> After looking again at the release notes, I do really think that significant\n> documentation changes do not belong to the \"Source code\" section but should\n> be in separate \"Documentation\" section, and that more items should be listed\n> there, because they represent a lot of not-so-fun work, especially Tom's\n> restructuration of tables, and possibly others.\n\nUh, can someone else give an opinion on this? I am not sure how hard or\nun-fun an item is should be used as criteria.\n\n> About pgbench, ISTM that d37ddb745be07502814635585cbf935363c8a33d is worth\n> mentionning because it is a user-visible change.\n\nUh, that is not usually something I mention because, like error message\nchanges, it is nice, but few people need to know about it before they\nsee it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 09:08:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 6, 2020 at 07:31:14PM -0500, Justin Pryzby wrote:\n> On Wed, May 06, 2020 at 07:35:34PM -0400, Bruce Momjian wrote:\n> > On Wed, May 6, 2020 at 11:17:54AM +0500, Andrey M. Borodin wrote:\n> > > I'm not sure, but probably it worth mentioning in \"General performance\" section that TOAST (and everything pglz-compressed) decompression should be significantly faster in v13.\n> > > https://github.com/postgres/postgres/commit/c60e520f6e0e8db9618cad042df071a6752f3c06\n> > \n> > OK, I read the thread mentioned in the commit message and I now see the\n> > value of this change. Attached is the release note diff. 
Let me know\n> > if it needs improvement.\n> \n> Sorry I didn't see it earlier, but:\n> \n> > -Improve retrieving of only the leading bytes of TOAST values (Binguo Bao)\n> > +Improve speed of TOAST decompression and the retrievel of only the leading bytes of TOAST values (Binguo Bao, Andrey Borodin)\n> \n> retrieval\n> \n> I will include this with my running doc patch if you don't want to make a\n> separate commit.\n\nFixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 09:09:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 11:25:47AM +0500, Andrey M. Borodin wrote:\n> \n> \n> > On 7 May 2020, at 04:35, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > On Wed, May 6, 2020 at 11:17:54AM +0500, Andrey M. Borodin wrote:\n> >> \n> >> \n> >>> On 5 May 2020, at 08:16, Bruce Momjian <bruce@momjian.us> wrote:\n> >>> \n> >>> I have committed the first draft of the PG 13 release notes. You can\n> >>> see them here:\n> >>> \n> >>> \thttps://momjian.us/pgsql_docs/release-13.html\n> >>> \n> >>> It still needs markup, word wrap, and indenting. The community doc\n> >>> build should happen in a few hours.\n> >> \n> >> I'm not sure, but probably it worth mentioning in \"General performance\" section that TOAST (and everything pglz-compressed) decompression should be significantly faster in v13.\n> >> https://github.com/postgres/postgres/commit/c60e520f6e0e8db9618cad042df071a6752f3c06\n> > \n> > OK, I read the thread mentioned in the commit message and I now see the\n> > value of this change. Attached is the release note diff.
Let me know\n> > if it needs improvement.\n> \n> There is one minor typo retrievel -> retrieval.\n> Thanks!\n\nGot it, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 09:09:20 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 6, 2020 at 10:20:57PM -0700, Noah Misch wrote:\n> On Wed, May 06, 2020 at 07:40:25PM -0400, Bruce Momjian wrote:\n> > On Tue, May 5, 2020 at 11:39:10PM -0700, Noah Misch wrote:\n> > > On Mon, May 04, 2020 at 11:16:00PM -0400, Bruce Momjian wrote:\n> > > > Allow skipping of WAL for new tables and indexes if wal_level is 'minimal' (Noah Misch)\n> > > \n> > > Kyotaro Horiguchi authored that one. (I committed it.) The commit message\n> > > noted characteristics, some of which may deserve mention in the notes:\n> > \n> > Fixed.\n> \n> I don't see that change pushed (but it's not urgent).\n\nI got stuck on Amit's partition items and my head couldn't process any\nmore, so I went to bed, and just committed it now. I was afraid to have\npending stuff uncommitted, but I am also hesitant to do a commit for\neach change.\n\n> > > - Crash recovery was losing tuples written via COPY TO. This fixes the bug.\n> > \n> > This was not backpatched?\n> \n> Right.\n\nOh. So you are saying we could lose COPY data on a crash, even after a\ncommit. That seems bad. Can you show me the commit info? I can't find\nit.\n\n> > > - Out-of-tree table access methods will require changes.\n> > \n> > Uh, I don't think we mention those.\n> \n> Okay. This point is relatively-important. On the other hand, the table\n> access methods known to me have maintainers who follow -hackers. 
They may\n> learn that way.\n\nThat was my thought.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 09:38:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 6, 2020 at 04:01:44PM -0400, Chapman Flack wrote:\n> On 05/05/20 10:31, Bruce Momjian wrote:\n> > On Tue, May 5, 2020 at 09:20:39PM +0800, John Naylor wrote:\n> >> ... This patch is\n> >> about the server encoding, which formerly needed to be utf-8 for\n> >> non-ascii characters. (I think the client encoding doesn't matter as\n> >> long as ascii bytes are represented.)\n> >>\n> >> +<para>\n> >> +The UTF-8 characters must be available in the server encoding.\n> >> +</para>\n> >>\n> >> Same here, s/UTF-8/Unicode/.\n> > \n> > OK, new text is:\n> > \n> > \tAllow Unicode escapes, e.g., E'\\u####', in clients that don't use UTF-8\n> > \tencoding (Tom Lane)\n> > \t\n> > \tThe Unicode characters must be available in the server encoding.\n> > \n> > I kept the \"UTF-8 encoding\" since that is the only Unicode encoding we\n> > support.\n> \n> My understanding also was that it matters little to this change what the\n> /client's/ encoding is.\n> \n> There used to be a limitation of the server's lexer that would reject\n> Unicode escapes whenever the /server's/ encoding wasn't UTF-8 (even\n> if the server's encoding contained the characters the escapes represent).\n> I think that limitation is what was removed.\n> \n> I don't think the client encoding comes into it at all. Sure, you could\n> just include the characters literally if they are in the client encoding,\n> but you might still choose to express them as escapes, and if you do they\n> get passed that way to the server for interpretation.\n\nAh, very good point. 
New text is:\n\n\tAllow Unicode escapes, e.g., E'\\u####', in databases that do not\n\tuse UTF-8 encoding (Tom Lane)\n\n\tThe Unicode characters must be available in the database encoding.\n\n> I had assumed the patch applied to all of the forms U&'\\####',\n> U&'\\+######', E'\\u####', and E'\\U######' but I don't think I read\n> the patch to be sure of that.\n\nI am only using E'\\u####' as an example.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 09:46:57 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> OK, I have moved her name to be first. FYI, this commit was backpatched\n> back through PG 11, though the commit message doesn't mention that.\n\nIf it was back-patched, it should not be appearing in the v13 release\nnotes at all, surely?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 May 2020 09:49:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, May 7, 2020 at 08:29:55AM +0200, Fabien COELHO wrote:\n>> After looking again at the release notes, I do really think that significant\n>> documentation changes do not belong to the \"Source code\" section but should\n>> be in separate \"Documentation\" section, and that more items should be listed\n>> there, because they represent a lot of not-so-fun work, especially Tom's\n>> restructuration of tables, and possibly others.\n\n> Uh, can someone else give an opinion on this? 
I am not sure how hard or\n> un-fun an item is should be used as criteria.\n\nHistorically we don't document documentation changes at all, do we?\nIt seems (a) pointless and (b) circular.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 May 2020 09:52:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 09:49:49AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > OK, I have moved her name to be first. FYI, this commit was backpatched\n> > back through PG 11, though the commit message doesn't mention that.\n> \n> If it was back-patched, it should not be appearing in the v13 release\n> notes at all, surely?\n\nWell, her name was there already for a later commit that was not\nbackpatched, so I just moved her name earlier. The fact that her name\nwas moved earlier because of something that was backpatched is\ninconsistent, but I don't know enough about the work that went into the\nitem to comment on that. I will need someone to tell me, of the commits\nthat only appear in PG 13, what should be the name order.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 10:12:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 11:12 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, May 7, 2020 at 09:49:49AM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > OK, I have moved her name to be first. 
FYI, this commit was backpatched\n> > > back through PG 11, though the commit message doesn't mention that.\n> >\n> > If it was back-patched, it should not be appearing in the v13 release\n> > notes at all, surely?\n>\n> Well, her name was there already for a later commit that was not\n> backpatched, so I just moved her name earlier. The fact that her name\n> was moved earlier because of something that was backpatched is\n> inconsistent, but I don't know enough about the work that went into the\n> item to comment on that. I will need someone to tell me, of the commits\n> that only appear in PG 13, what should be the name order.\n\nSorry, I misremembered that the patch to make default partition\npruning more aggressive was not backpatched, because I thought at the\ntime that the patch had turned somewhat complex, but indeed it was\nbackpatched; in 11.5 release notes:\n\n Prune a partitioned table's default partition (that is, avoid\nuselessly scanning it) in more cases (Yuzuko Hosoya)\n\nSorry for the noise.\n\nI think it's okay for her name to appear first even considering the\ncommits that only appear in PG 13, because my role was mainly\nreviewing the work and perhaps posting an updated version of her\npatch.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 May 2020 23:30:40 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 9:48 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, May 7, 2020 at 12:06:33AM +0900, Amit Langote wrote:\n> > +Author: Etsuro Fujita <efujita@postgresql.org>\n> > +2020-04-08 [c8434d64c] Allow partitionwise joins in more cases.\n> > +Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > +2020-04-07 [981643dcd] Allow partitionwise join to handle nested FULL JOIN USIN\n> > +-->\n> > +\n> > +<para>\n> > +Allow partitionwise joins to happen in more cases (Ashutosh Bapat,\n> > 
Etsuro Fujita, Amit Langote)\n> > +</para>\n> >\n> > Maybe it would be better to break this into two items, because while\n> > c8434d64c is significant new functionality that I only contributed a\n> > few review comments towards, 981643dcd is relatively minor surgery of\n>\n> What text would we use for the new item? I thought FULL JOIN was just\n> another case that matched the description I had.\n\nc8434d64c implements a new feature whereby, to use partitionwise join,\npartition bounds of the tables being joined no longer have to match\nexactly. I think it might be better to mention this explicitly\nbecause it enables partitionwise joins to be used in more partitioning\nsetups.\n\n981643dcd fixes things so that 3-way and higher FULL JOINs can now be\nperformed partitionwise. I am okay with even omitting this if it\ndoesn't sound big enough to be its own item.\n\n> I think trying to put this all into one item is too complex, but I did\n> merge two of the items together, so we have two items now:\n>\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> 2020-03-10 [17b9e7f9f] Support adding partitioned tables to publication\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> 2020-04-08 [83fd4532a] Allow publishing partition changes via ancestors\n> -->\n>\n> <para>\n> Allow partitioned tables to be replicated via publications (Amit Langote)\n> </para>\n>\n> <para>\n> Previously, partitions had to be replicated individually. Now\n> partitioned tables can be published explicitly causing all partitions\n> to be automatically published. Addition/removal of partitions from\n> partitioned tables are automatically added/removed on subscribers.\n> The CREATE PUBLICATION option publish_via_partition_root controls whether\n> partitioned tables are published as themselves or their ancestors.\n> </para>\n\nThanks. 
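For context, a minimal sketch of how the option above is used (table and publication names here are hypothetical, not taken from the patch):

```sql
-- A hypothetical partitioned table with one partition.
CREATE TABLE measurement (logdate date, peaktemp int) PARTITION BY RANGE (logdate);
CREATE TABLE measurement_y2020 PARTITION OF measurement
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');

-- Publish the partitioned table itself.  With the option set to true,
-- changes are published using the root table's identity and schema
-- rather than those of the individual leaf partitions.
CREATE PUBLICATION pub_root FOR TABLE measurement
    WITH (publish_via_partition_root = true);
```
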
Sounds good except I think the last sentence should read:\n\n...controls whether partition changes are published as their own or as\ntheir ancestor's.\n\n> </listitem>\n>\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> 2020-04-06 [f1ac27bfd] Add logical replication support to replicate into partit\n> -->\n>\n> <para>\n> Allow non-partitioned tables to be logically replicated to subscribers\n> that receive the rows into partitioned tables (Amit Langote)\n> </para>\n\nHmm, why it make it sound like this works only if the source table is\nnon-partitioned? The source table can be anything, a regular\nnon-partitioned table, or a partitioned one.\n\nHow about:\n\nAllow logical replication into partitioned tables on subscribers\n\nPreviously, it was allowed only into regular [ non-partitioned ] tables.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 May 2020 00:32:16 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 11:30:40PM +0900, Amit Langote wrote:\n> On Thu, May 7, 2020 at 11:12 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Well, her name was there already for a later commit that was not\n> > backpatched, so I just moved her name earlier. The fact that her name\n> > was moved earlier because of something that was backpatched is\n> > inconsistent, but I don't know enough about the work that went into the\n> > item to comment on that. 
I will need someone to tell me, of the commits\n> > that only appear in PG 13, what should be the name order.\n> \n> Sorry, I misremembered that the patch to make default partition\n> pruning more aggressive was not backpatched, because I thought at the\n> time that the patch had turned somewhat complex, but indeed it was\n> backpatched; in 11.5 release notes:\n> \n> Prune a partitioned table's default partition (that is, avoid\n> uselessly scanning it) in more cases (Yuzuko Hosoya)\n> \n> Sorry for the noise.\n> \n> I think it's okay for her name to appear first even considering the\n> commits that only appear in PG 13, because my role was mainly\n> reviewing the work and perhaps posting an updated version of her\n> patch.\n\nOK, confirmed, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 12:38:41 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, May 8, 2020 at 12:32:16AM +0900, Amit Langote wrote:\n> On Thu, May 7, 2020 at 9:48 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > On Thu, May 7, 2020 at 12:06:33AM +0900, Amit Langote wrote:\n> > > +Author: Etsuro Fujita <efujita@postgresql.org>\n> > > +2020-04-08 [c8434d64c] Allow partitionwise joins in more cases.\n> > > +Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > > +2020-04-07 [981643dcd] Allow partitionwise join to handle nested FULL JOIN USIN\n> > > +-->\n> > > +\n> > > +<para>\n> > > +Allow partitionwise joins to happen in more cases (Ashutosh Bapat,\n> > > Etsuro Fujita, Amit Langote)\n> > > +</para>\n> > >\n> > > Maybe it would be better to break this into two items, because while\n> > > c8434d64c is significant new functionality that I only contributed a\n> > > few review comments towards, 981643dcd is relatively 
minor surgery of\n> >\n> > What text would we use for the new item? I thought FULL JOIN was just\n> > another case that matched the description I had.\n> \n> c8434d64c implements a new feature whereby, to use partitionwise join,\n> partition bounds of the tables being joined no longer have to match\n> exactly. I think it might be better to mention this explicitly\n> because it enables partitionwise joins to be used in more partitioning\n> setups.\n\nWell, the text says:\n\n\tAllow partitionwise joins to happen in more cases (Ashutosh Bapat,\n\tEtsuro Fujita, Amit Langote, Tom Lane)\n\nIsn't that what you just said? I just added this paragraph:\n\n\tFor example, partitionwise joins can now happen between partitioned\n\ttables where the ancestors do not exactly match.\n\nDoes that help?\n\n> 981643dcd fixes things so that 3-way and higher FULL JOINs can now be\n> performed partitionwise. I am okay with even omitting this if it\n> doesn't sound big enough to be its own item.\n> \n> > I think trying to put this all into one item is too complex, but I did\n> > merge two of the items together, so we have two items now:\n> >\n> > <listitem>\n> > <!--\n> > Author: Peter Eisentraut <peter@eisentraut.org>\n> > 2020-03-10 [17b9e7f9f] Support adding partitioned tables to publication\n> > Author: Peter Eisentraut <peter@eisentraut.org>\n> > 2020-04-08 [83fd4532a] Allow publishing partition changes via ancestors\n> > -->\n> >\n> > <para>\n> > Allow partitioned tables to be replicated via publications (Amit Langote)\n> > </para>\n> >\n> > <para>\n> > Previously, partitions had to be replicated individually. Now\n> > partitioned tables can be published explicitly causing all partitions\n> > to be automatically published. 
Addition/removal of partitions from\n> > partitioned tables are automatically added/removed on subscribers.\n> > The CREATE PUBLICATION option publish_via_partition_root controls whether\n> > partitioned tables are published as themselves or their ancestors.\n> > </para>\n> \n> Thanks. Sounds good except I think the last sentence should read:\n> \n> ...controls whether partition changes are published as their own or as\n> their ancestor's.\n\nOK, done.\n\n> > </listitem>\n> >\n> > <listitem>\n> > <!--\n> > Author: Peter Eisentraut <peter@eisentraut.org>\n> > 2020-04-06 [f1ac27bfd] Add logical replication support to replicate into partit\n> > -->\n> >\n> > <para>\n> > Allow non-partitioned tables to be logically replicated to subscribers\n> > that receive the rows into partitioned tables (Amit Langote)\n> > </para>\n> \n> Hmm, why it make it sound like this works only if the source table is\n> non-partitioned? The source table can be anything, a regular\n> non-partitioned table, or a partitioned one.\n\nWell, we already covered the publish partitioned case in the above item.\n\n> How about:\n> \n> Allow logical replication into partitioned tables on subscribers\n> \n> Previously, it was allowed only into regular [ non-partitioned ] tables.\n\nOK, I used this wording:\n\n\tAllow logical replication into partitioned tables on subscribers (Amit\n\tLangote)\n\t\n\tPreviously, subscribers could only receive rows into non-partitioned\n\ttables.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 May 2020 13:06:46 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 2:46 AM Bruce Momjian <bruce@momjian.us> wrote:\n> <para>\n> Allow CREATE INDEX to specify the GiST signature length and maximum number of integer ranges (Nikita Glukhov)\n> </para>\n\nShould we specify which particular opclasses are affected? Or at\nleast mention it affects core and particular contribs?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 7 May 2020 20:12:30 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 05/07/20 09:46, Bruce Momjian wrote:\n> Ah, very good point. New text is:\n> \n> \tAllow Unicode escapes, e.g., E'\\u####', in databases that do not\n> \tuse UTF-8 encoding (Tom Lane)\n> \n> \tThe Unicode characters must be available in the database encoding.\n> ...\n> \n> I am only using E'\\u####' as an example.\n\nHmm, how about:\n\n\tAllow Unicode escapes, e.g., E'\\u####' or U&'\\####', to represent\n\tany character available in the database encoding, even when that\n\tencoding is not UTF-8.\n\nwhich I suggest as I recall more clearly that the former condition\nwas not that such escapes were always rejected in other encodings; it was\nthat they were rejected if they represented characters outside of ASCII.\n(Yossarian let out a respectful whistle.)\n\nMy inclination is to give at least one example each of the E and U&\nform, if only so the casual reader of the notes may think \"say! I hadn't\nheard of that other form!\" and be inspired to find out about it. 
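To make the two forms concrete, a short sketch (this assumes the character is actually available in the database encoding, e.g. é in a LATIN1 database):

```sql
-- Both escapes denote U+00E9 (é); previously these were rejected in
-- non-UTF-8 databases whenever the code point was outside ASCII.
SELECT E'\u00e9' AS e_form,
       U&'\00e9' AS u_form;
```
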
But\nperhaps it seems too much.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 7 May 2020 14:08:58 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hi Bruce,\n\nOn Mon, May 4, 2020 at 8:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n>\n> https://momjian.us/pgsql_docs/release-13.html\n\nI see that you have an entry for the deduplication feature:\n\n\"More efficiently store duplicates in btree indexes (Anastasia\nLubennikova, Peter Geoghegan)\"\n\nI would like to provide some input on this. Fortunately it's much\neasier to explain than the B-Tree work that went into Postgres 12. I\nthink that you should point out that deduplication works by storing\nthe duplicates in the obvious way: Only storing the key once per\ndistinct value (or once per distinct combination of values in the case\nof multi-column indexes), followed by an array of TIDs (i.e. a posting\nlist). Each TID points to a separate row in the table.\n\nIt won't be uncommon for this to make indexes as much as 3x smaller\n(it depends on a number of different factors that you can probably\nguess). I wrote a summary of how it works for power users in the\nB-Tree documentation chapter, which you might want to link to in the\nrelease notes:\n\nhttps://www.postgresql.org/docs/devel/btree-implementation.html#BTREE-DEDUPLICATION\n\nUsers that pg_upgrade will have to REINDEX to actually use the\nfeature, regardless of which version they've upgraded from. There are\nalso some limited caveats about the data types that can use\ndeduplication, and stuff like that -- see the documentation section I\nlinked to.\n\nFinally, you might want to note that the feature is enabled by\ndefault, and can be disabled by setting the \"deduplicate_items\" index\nstorage option to \"off\". 
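For example, the storage option is used like this (table and index names are hypothetical):

```sql
-- Deduplication is on by default for new B-tree indexes.
CREATE INDEX orders_cust_idx ON orders (customer_id);

-- It can be disabled per index via the storage option...
CREATE INDEX orders_status_idx ON orders (status)
    WITH (deduplicate_items = off);

-- ...or toggled later; a REINDEX rewrites the index under the new setting.
ALTER INDEX orders_status_idx SET (deduplicate_items = on);
REINDEX INDEX orders_status_idx;
```
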
(We have yet to make a final decision on\nwhether the feature should be enabled before the first stable release\nof Postgres 13, though -- I have an open item for that.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 May 2020 11:54:12 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 04, 2020 at 11:16:00PM -0400, Bruce Momjian wrote:\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n\nThanks Bruce for compiling the release notes. Here are some comments\nfrom me, after looking at the state of the notes as of f2ff203.\n\nShould e2e02191 be added to the notes? This commit means that we\nactually dropped support for Windows 2000 (finally) at run-time.\n\nAt the same time I see no mention of 79dfa8af, which added better\nerror handling when backends the SSL context with incorrect bounds.\n\nWhat about fc8cb94, which basically means that vacuumlo and oid2name\nare able to now support coloring output for their logging?\n\n<para>\nDocument color support (Peter Eisentraut)\n</para>\n[...]\n<para>\nTHIS WAS NOT DOCUMENTED BEFORE?\n</para>\nNot sure that there is a point to add that to the release notes.\n--\nMichael", "msg_date": "Fri, 8 May 2020 11:55:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, May 8, 2020 at 2:06 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Fri, May 8, 2020 at 12:32:16AM +0900, Amit Langote wrote:\n> > c8434d64c implements a new feature whereby, to use partitionwise join,\n> > partition bounds of the tables being joined no longer have to match\n> > exactly. 
I think it might be better to mention this explicitly\n> > because it enables partitionwise joins to be used in more partitioning\n> > setups.\n>\n> Well, the text says:\n>\n> Allow partitionwise joins to happen in more cases (Ashutosh Bapat,\n> Etsuro Fujita, Amit Langote, Tom Lane)\n>\n> Isn't that what you just said? I just added this paragraph:\n>\n> For example, partitionwise joins can now happen between partitioned\n> tables where the ancestors do not exactly match.\n>\n> Does that help?\n\nYes, although \"ancestors do not exactly match\" doesn't make clear what\nabout partitioned tables doesn't match. \"partition bounds do not\nexactly match\" would.\n\n> > > <para>\n> > > Previously, partitions had to be replicated individually. Now\n> > > partitioned tables can be published explicitly causing all partitions\n> > > to be automatically published. Addition/removal of partitions from\n> > > partitioned tables are automatically added/removed on subscribers.\n> > > The CREATE PUBLICATION option publish_via_partition_root controls whether\n> > > partitioned tables are published as themselves or their ancestors.\n> > > </para>\n> >\n> > Thanks. Sounds good except I think the last sentence should read:\n> >\n> > ...controls whether partition changes are published as their own or as\n> > their ancestor's.\n>\n> OK, done.\n\nHmm, I see that you only took \"as their own\".\n\n- ...controls whether partitioned tables are published as themselves\nor their ancestors.\n+ ...controls whether partitioned tables are published as their own or\ntheir ancestors.\n\nand that makes the new sentence sound less clear. I mainly wanted\n\"partitioned table\" replaced by \"partition\", because only then the\nphrase \"as their own or their ancestor's\" would make sense.\n\nI know our partitioning terminology can be very confusing with many\nterms including at least \"partitioned table\", \"partition\", \"ancestor\",\n\"leaf partition\", \"parent\", \"child\", etc. 
that I see used.\n\n> > > </listitem>\n> > >\n> > > <listitem>\n> > > <!--\n> > > Author: Peter Eisentraut <peter@eisentraut.org>\n> > > 2020-04-06 [f1ac27bfd] Add logical replication support to replicate into partit\n> > > -->\n> > >\n> > > <para>\n> > > Allow non-partitioned tables to be logically replicated to subscribers\n> > > that receive the rows into partitioned tables (Amit Langote)\n> > > </para>\n> >\n> > Hmm, why it make it sound like this works only if the source table is\n> > non-partitioned? The source table can be anything, a regular\n> > non-partitioned table, or a partitioned one.\n>\n> Well, we already covered the publish partitioned case in the above item.\n>\n> > How about:\n> >\n> > Allow logical replication into partitioned tables on subscribers\n> >\n> > Previously, it was allowed only into regular [ non-partitioned ] tables.\n>\n> OK, I used this wording:\n>\n> Allow logical replication into partitioned tables on subscribers (Amit\n> Langote)\n>\n> Previously, subscribers could only receive rows into non-partitioned\n> tables.\n\nThis is fine, thanks.\n\nI have attached a patch with my suggestions above.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 8 May 2020 12:07:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 07, 2020 at 09:38:34AM -0400, Bruce Momjian wrote:\n> On Wed, May 6, 2020 at 10:20:57PM -0700, Noah Misch wrote:\n> > On Wed, May 06, 2020 at 07:40:25PM -0400, Bruce Momjian wrote:\n> > > On Tue, May 5, 2020 at 11:39:10PM -0700, Noah Misch wrote:\n> > > > On Mon, May 04, 2020 at 11:16:00PM -0400, Bruce Momjian wrote:\n> > > > > Allow skipping of WAL for new tables and indexes if wal_level is 'minimal' (Noah Misch)\n> > > > \n> > > > Kyotaro Horiguchi authored that one. (I committed it.) 
The commit message\n> > > > noted characteristics, some of which may deserve mention in the notes:\n> > > \n> > > Fixed.\n> > \n> > I don't see that change pushed (but it's not urgent).\n> \n> I got stuck on Amit's partition items and my head couldn't process any\n> more, so I went to bed, and just committed it now. I was afraid to have\n> pending stuff uncommitted, but I am also hesitant to do a commit for\n> each change.\n\nGot it, +1 for batching such changes.\n\n> > > > - Crash recovery was losing tuples written via COPY TO. This fixes the bug.\n> > > \n> > > This was not backpatched?\n> > \n> > Right.\n> \n> Oh. So you are saying we could lose COPY data on a crash, even after a\n> commit. That seems bad. Can you show me the commit info? I can't find\n> it.\n\ncommit c6b9204\nAuthor: Noah Misch <noah@leadboat.com>\nAuthorDate: Sat Apr 4 12:25:34 2020 -0700\nCommit: Noah Misch <noah@leadboat.com>\nCommitDate: Sat Apr 4 12:25:34 2020 -0700\n\n Skip WAL for new relfilenodes, under wal_level=minimal.\n \n Until now, only selected bulk operations (e.g. COPY) did this. If a\n given relfilenode received both a WAL-skipping COPY and a WAL-logged\n operation (e.g. INSERT), recovery could lose tuples from the COPY. See\n src/backend/access/transam/README section \"Skipping WAL for New\n RelFileNode\" for the new coding rules. Maintainers of table access\n methods should examine that section.\n \n To maintain data durability, just before commit, we choose between an\n fsync of the relfilenode and copying its contents to WAL. A new GUC,\n wal_skip_threshold, guides that choice. If this change slows a workload\n that creates small, permanent relfilenodes under wal_level=minimal, try\n adjusting wal_skip_threshold. 
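By way of illustration, the relevant postgresql.conf settings might look like this (the threshold shown is just the default, not a recommendation):

```
wal_level = minimal          # prerequisite for the WAL-skipping optimization
max_wal_senders = 0          # required when wal_level = minimal
wal_skip_threshold = 2MB     # new relfilenodes smaller than this are copied
                             # to WAL at commit; larger ones are fsync'd
```
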
Users setting a timeout on COMMIT may\n need to adjust that timeout, and log_min_duration_statement analysis\n will reflect time consumption moving to COMMIT from commands like COPY.\n \n Internally, this requires a reliable determination of whether\n RollbackAndReleaseCurrentSubTransaction() would unlink a relation's\n current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the\n specification of rd_createSubid such that the field is zero when a new\n rel has an old rd_node. Make relcache.c retain entries for certain\n dropped relations until end of transaction.\n \n Bump XLOG_PAGE_MAGIC, since this introduces XLOG_GIST_ASSIGN_LSN.\n Future servers accept older WAL, so this bump is discretionary.\n \n Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert\n Haas. Heikki Linnakangas and Michael Paquier implemented earlier\n designs that materially clarified the problem. Reviewed, in earlier\n designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane,\n Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout.\n \n Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org\n\n\n", "msg_date": "Thu, 7 May 2020 21:22:02 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\nHello Tom,\n\n>> Uh, can someone else give an opinion on this? I am not sure how hard or\n>> un-fun an item is should be used as criteria.\n\n> Historically we don't document documentation changes at all, do we?\n\nISTM that the \"we did not do it previously\" is as weak an argument as \nun-fun-ness:-)\n\n> It seems (a) pointless\n\nI disagree, on the very principle of free software values as a social \nmovement.\n\nDocumentation improvements should be encouraged, and recognizing these in \nthe release notes contributes to do that for what is a lot of unpaid work \ngiven freely by many people. 
I do not see this as \"pointless\", on the \ncontrary, having something \"free\" in a mostly mercantile world is odd \nenough to deserve some praise.\n\nHow many hours have you spent on the function operator table improvements? \nIf someone else had contributed that and only that to a release, would it \nnot justify two lines of implicit thanks somewhere down in the release \nnotes?\n\nMoreover adding a documentation section costs next to nothing, so what is \nthe actual point of not doing it? Also, having some documentation \nimprovements listed under \"source code\" does not make sense: writing good, \nprecise and structured English is not \"source code\".\n\n> and (b) circular.\n\nMeh. The whole documentation is \"circular\" by construction, with \nreferences from one section to the next and back, indexes, glossary, \nacronyms, tutorials, whatever.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 8 May 2020 09:14:01 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-05-05 22:29, Bruce Momjian wrote:\n>>>> a01e1b8b9d Add new part SQL/MDA to information_schema.sql_parts 33e27c3785c5ce8a3264d6af2550ec5adcebc517\n>>>> 2fc2a88e67 Remove obsolete information schema tables\n>>> Uh, that didn't seem significant.\n>> Maybe have one item \"modernize information_schema\", and then describe\n>> all the changes together in a single item.\n> Uh, so I am unclear when we are adding items to information_schema\n> because we now support them, and when they are new features of\n> information_schema.\n\nThe addition was because it's a new SQL standard part that was published \nin the meantime.\n\nThe removals were because they no longer exist in the current standard \nversion and keeping them otherwise didn't make sense.\n\nNeither of these need to be mentioned in the release notes IMO.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 
Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 May 2020 11:40:00 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 5, 2020 at 8:46 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n>\n> https://momjian.us/pgsql_docs/release-13.html\n>\n\nThanks for the work. I was today going through the release notes and\nwas wondering whether we should consider adding information about some\nother work done for PG13.\n1. We have allowed an (auto)vacuum to display additional information\nabout heap or index in case of an error in commit b61d161c14 [1].\nNow, in general, it might not be worth saying much about error\ninformation but I think this one could help users in case they have\nsome corruption. For example, if one of the indexes on a relation has\nsome corrupted data (due to bad hardware or some bug), it will let the\nuser know the index information, and the user can take appropriate\naction like either Reindex or maybe drop and recreate the index to\novercome the problem.\n2. In the \"Source Code\" section, we can add information about\ninfrastructure enhancement for parallelism. Basically, \"Allow\nrelation extension and page lock to conflict among parallel-group\nmembers\" [2][3]. 
This will allow improving the parallelism further in\nmany cases like (a) we can allow multiple workers to operate on a heap\nand index in a parallel vacuum, (b) we can allow parallel Inserts,\netc.\n\n\n[1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b61d161c146328ae6ba9ed937862d66e5c8b035a\n[2] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=85f6b49c2c53fb1e08d918ec9305faac13cf7ad6\n[3] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3ba59ccc896e8877e2fbfb8d4f148904cad5f9b0\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 9 May 2020 11:16:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, May 8, 2020 at 12:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, May 8, 2020 at 2:06 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > On Fri, May 8, 2020 at 12:32:16AM +0900, Amit Langote wrote:\n> > > c8434d64c implements a new feature whereby, to use partitionwise join,\n> > > partition bounds of the tables being joined no longer have to match\n> > > exactly. I think it might be better to mention this explicitly\n> > > because it enables partitionwise joins to be used in more partitioning\n> > > setups.\n> >\n> > Well, the text says:\n> >\n> > Allow partitionwise joins to happen in more cases (Ashutosh Bapat,\n> > Etsuro Fujita, Amit Langote, Tom Lane)\n> >\n> > Isn't that what you just said? I just added this paragraph:\n> >\n> > For example, partitionwise joins can now happen between partitioned\n> > tables where the ancestors do not exactly match.\n> >\n> > Does that help?\n>\n> Yes, although \"ancestors do not exactly match\" doesn't make clear what\n> about partitioned tables doesn't match. 
\"partition bounds do not\n> exactly match\" would.\n\n+1 for that change.\n\nThank you for taking the time to this!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sat, 9 May 2020 20:35:04 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "> In ltree, when using adjacent asterisks with braces, e.g. \"*{2}.*{3}\", properly interpret that as \"*{5}\" (Nikita Glukhov)\n\nI think that should say \".*\" not \"*\", as in:\n\n> In ltree, when using adjacent asterisks with braces, e.g. \".*{2}.*{3}\", properly interpret that as \"*{5}\" (Nikita Glukhov)\n\nThe existing text clearly came from the commit message, which (based on its\nregression tests) I think was the source of the missing dot.\n\ncommit 9950c8aadf0edd31baec74a729d47d94af636c06\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Sat Mar 28 18:31:05 2020 -0400\n\n Fix lquery's behavior for consecutive '*' items.\n \n Something like \"*{2}.*{3}\" should presumably mean the same as\n \"*{5}\", but it didn't. Improve that.\n ...\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 10 May 2020 15:09:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft (ltree dot star)" }, { "msg_contents": "On Sat, May 9, 2020 at 11:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 5, 2020 at 8:46 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> >\n> > https://momjian.us/pgsql_docs/release-13.html\n> >\n>\n> Thanks for the work. I was today going through the release notes and\n> was wondering whether we should consider adding information about some\n> other work done for PG13.\n> 1. 
We have allowed an (auto)vacuum to display additional information\n> about heap or index in case of an error in commit b61d161c14 [1].\n> Now, in general, it might not be worth saying much about error\n> information but I think this one could help users in case they have\n> some corruption. For example, if one of the indexes on a relation has\n> some corrupted data (due to bad hardware or some bug), it will let the\n> user know the index information, and the user can take appropriate\n> action like either Reindex or maybe drop and recreate the index to\n> overcome the problem.\n> 2. In the \"Source Code\" section, we can add information about\n> infrastructure enhancement for parallelism. Basically, \"Allow\n> relation extension and page lock to conflict among parallel-group\n> members\" [2][3]. This will allow improving the parallelism further in\n> many cases like (a) we can allow multiple workers to operate on a heap\n> and index in a parallel vacuum, (b) we can allow parallel Inserts,\n> etc.\n>\n\nOne more observation:\n\nAllow inserts to trigger autovacuum activity (Laurenz Albe, Darafei\nPraliaskouski)\nThis new behavior allows pages to be set as all-visible, which then\nallows index-only scans, ...\n\nThe above sentence sounds to mean that this feature allows index-only\nscans in more number of cases after this feature. Is that what you\nintend to say? If so, is that correct? Because I think this will\nallow index-only scans to skip \"Heap Fetches\" in more cases.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 May 2020 10:52:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hi Bruce,\n\nOn 2020/05/05 12:16, Bruce Momjian wrote:\n> I have committed the first draft of the PG 13 release notes. 
You can\n> see them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. The community doc\n> build should happen in a few hours.\n\nThanks for working on this! :-D\n\nCould you add \"Vinayak Pokale\" as a co-author of the following feature since\nI sometimes read his old patch to create a patch [1] ?\n\n=======================\nE.1.3.1.6. System Views\n\n- Add system view pg_stat_progress_analyze to report analyze progress (Álvaro Herrera, Tatsuro Yamada)\n\n+ Add system view pg_stat_progress_analyze to report analyze progress (Álvaro Herrera, Tatsuro Yamada, Vinayak Pokale)\n=======================\n\n[1]\nhttps://www.postgresql.org/message-id/20190813140127.GA4933%40alvherre.pgsql\n\nThanks,\nTatsuro Yamada\n\n\n\n", "msg_date": "Mon, 11 May 2020 15:19:50 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 08:12:30PM +0300, Alexander Korotkov wrote:\n> On Thu, May 7, 2020 at 2:46 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > <para>\n> > Allow CREATE INDEX to specify the GiST signature length and maximum number of integer ranges (Nikita Glukhov)\n> > </para>\n> \n> Should we specify which particular opclasses are affected? Or at\n> least mention it affects core and particular contribs?\n\nSorry for the delay in replying. Yes, I agree we should list all of\nthose operator class cases where we now take arguments. I looked at the\npatch but got confused and missed the doc changes that clearly need to\nbe in the release notes. 
I see these operator classes now taking\nparameters, as you helpfully listed in your commit message:\n\n\ttsvector_ops\n\tgist_ltree_ops\n\tgist__ltree_ops\n\tgist_trgm_ops\n\tgist_hstore_ops\n\tgist__int_ops\n\tgist__intbig_ops\n\nI assume the double-underscore is because the first underscore is to\nseparate words, and the second one is for the array designation, right?\n\nSo my big question is whether people will understand when they are using\nthese operator classes, since many of them are defaults. Can you use an\noperator class parameter when you are just using the default operator\nclass and not specifying its name? What I thinking that just saying\nthe operator class take arguments might not be helpful. I think I see\nenough detail in the documentation to write release note items for\nthese, but I will have to point out they need to specify the operator\nclass, even if it is the default, right?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 09:20:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 4:20 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Sorry for the delay in replying. Yes, I agree we should list all of\n> those operator class cases where we now take arguments. I looked at the\n> patch but got confused and missed the doc changes that clearly need to\n> be in the release notes. 
I see these operator classes now taking\n> parameters, as you helpfully listed in your commit message:\n>\n> tsvector_ops\n> gist_ltree_ops\n> gist__ltree_ops\n> gist_trgm_ops\n> gist_hstore_ops\n> gist__int_ops\n> gist__intbig_ops\n>\n> I assume the double-underscore is because the first underscore is to\n> separate words, and the second one is for the array designation, right?\n\nYes, this is true.\n\n> So my big question is whether people will understand when they are using\n> these operator classes, since many of them are defaults. Can you use an\n> operator class parameter when you are just using the default operator\n> class and not specifying its name?\n\nActually no. Initial version of patch allowed to explicitly specify\nDEFAULT keyword instead of opclass name. But I didn't like idea to\nallow keyword instead of name there.\n\nI've tried to implement syntax allowing specifying parameters without\nboth new keyword and opclass name, but that causes a lot of grammar\nproblems.\n\nFinally, I've decided to provide parameters functionality only when\nspecifying opclass name. My motivation is that opclass parameters is\nfunctionality for advanced users, who are deeply concerned into what\nopclass do. For this category of users it's natural to know the\nopclass name.\n\n> What I thinking that just saying\n> the operator class take arguments might not be helpful. I think I see\n> enough detail in the documentation to write release note items for\n> these, but I will have to point out they need to specify the operator\n> class, even if it is the default, right?\n\nMy point was that we should specify where to look to find new\nfunctionality. We can don't write opclass names, because those names\nmight be confusing for users who are not aware of them. 
We may\nbriefly say that new parameters are introduced for GiST for tsvector,\ncontrib/intarray, contrib/pg_trgm, contrib/ltree, contrib/hstore.\nWhat do you think?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 11 May 2020 20:41:00 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hello\n\n> +<!--\n> +Author: Alexander Korotkov <akorotkov@postgresql.org>\n> +2020-03-08 [b0b5e20cd] Show opclass and opfamily related information in psql\n> +-->\n> +\n> +<para>\n> +Add psql commands to report operator classes and operator families (Sergey Cherkashin, Nikita Glukhov, Alexander Korotkov)\n> +</para>\n\nI think this item should list the commands in question:\n\\dA, \\dAc, \\dAf, \\dAo, \\dAp\n(All the other psql entries in the relnotes do that).\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 11 May 2020 16:50:50 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-May-11, Alvaro Herrera wrote:\n\n> Hello\n> \n> > +<!--\n> > +Author: Alexander Korotkov <akorotkov@postgresql.org>\n> > +2020-03-08 [b0b5e20cd] Show opclass and opfamily related information in psql\n> > +-->\n> > +\n> > +<para>\n> > +Add psql commands to report operator classes and operator families (Sergey Cherkashin, Nikita Glukhov, Alexander Korotkov)\n> > +</para>\n> \n> I think this item should list the commands in question:\n> \\dA, \\dAc, \\dAf, \\dAo, \\dAp\n> (All the other psql entries in the relnotes do that).\n\nSorry, it's the last four only -- \\dA is an older command.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote 
DBA, Training & Services\n\n\n", "msg_date": "Mon, 11 May 2020 17:09:54 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 08:41:00PM +0300, Alexander Korotkov wrote:\n> On Mon, May 11, 2020 at 4:20 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Sorry for the delay in replying. Yes, I agree we should list all of\n> > those operator class cases where we now take arguments. I looked at the\n> > patch but got confused and missed the doc changes that clearly need to\n> > be in the release notes. I see these operator classes now taking\n> > parameters, as you helpfully listed in your commit message:\n> >\n> > tsvector_ops\n> > gist_ltree_ops\n> > gist__ltree_ops\n> > gist_trgm_ops\n> > gist_hstore_ops\n> > gist__int_ops\n> > gist__intbig_ops\n> >\n> > I assume the double-underscore is because the first underscore is to\n> > separate words, and the second one is for the array designation, right?\n> \n> Yes, this is true.\n\nOK.\n\n> > So my big question is whether people will understand when they are using\n> > these operator classes, since many of them are defaults. Can you use an\n> > operator class parameter when you are just using the default operator\n> > class and not specifying its name?\n> \n> Actually no. Initial version of patch allowed to explicitly specify\n> DEFAULT keyword instead of opclass name. But I didn't like idea to\n> allow keyword instead of name there.\n> \n> I've tried to implement syntax allowing specifying parameters without\n> both new keyword and opclass name, but that causes a lot of grammar\n> problems.\n> \n> Finally, I've decided to provide parameters functionality only when\n> specifying opclass name. My motivation is that opclass parameters is\n> functionality for advanced users, who are deeply concerned into what\n> opclass do. 
For this category of users it's natural to know the\n> opclass name.\n\nYes, that is good analysis, and your final decision was probably\ncorrect. I now see that the complexity is not a big problem.\n\n> > What I thinking that just saying\n> > the operator class take arguments might not be helpful. I think I see\n> > enough detail in the documentation to write release note items for\n> > these, but I will have to point out they need to specify the operator\n> > class, even if it is the default, right?\n> \n> My point was that we should specify where to look to find new\n> functionality. We can don't write opclass names, because those names\n> might be confusing for users who are not aware of them. We may\n> briefly say that new parameters are introduced for GiST for tsvector,\n> contrib/intarray, contrib/pg_trgm, contrib/ltree, contrib/hstore.\n> What do you think?\n\nOK, I have applied the attached patch, which I now think is the right\nlevel of detail, given your information above. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +", "msg_date": "Mon, 11 May 2020 18:52:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 02:08:58PM -0400, Chapman Flack wrote:\n> On 05/07/20 09:46, Bruce Momjian wrote:\n> > Ah, very good point. 
New text is:\n> > \n> > \tAllow Unicode escapes, e.g., E'\\u####', in databases that do not\n> > \tuse UTF-8 encoding (Tom Lane)\n> > \n> > \tThe Unicode characters must be available in the database encoding.\n> > ...\n> > \n> > I am only using E'\\u####' as an example.\n> \n> Hmm, how about:\n> \n> \tAllow Unicode escapes, e.g., E'\\u####' or U&'\\####', to represent\n> \tany character available in the database encoding, even when that\n> \tencoding is not UTF-8.\n> \n> which I suggest as I recall more clearly that the former condition\n> was not that such escapes were always rejected in other encodings; it was\n> that they were rejected if they represented characters outside of ASCII.\n> (Yossarian let out a respectful whistle.)\n\nI like your wording, but the \"that encoding\" wasn't clear enough for me,\nso I reworded it to:\n\n\tAllow Unicode escapes, e.g., E'\\u####', U&'\\####', to represent any\n\tcharacter available in the database encoding, even when the database\n\tencoding is not UTF-8 (Tom Lane)\n\n> My inclination is to give at least one example each of the E and U&\n> form, if only so the casual reader of the notes may think \"say! I hadn't\n> heard of that other form!\" and be inspired to find out about it. But\n> perhaps it seems too much.\n\nSure, works for me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 19:00:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 11:54:12AM -0700, Peter Geoghegan wrote:\n> Hi Bruce,\n> \n> On Mon, May 4, 2020 at 8:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I have committed the first draft of the PG 13 release notes. 
You can\n> > see them here:\n> >\n> > https://momjian.us/pgsql_docs/release-13.html\n> \n> I see that you have an entry for the deduplication feature:\n> \n> \"More efficiently store duplicates in btree indexes (Anastasia\n> Lubennikova, Peter Geoghegan)\"\n> \n> I would like to provide some input on this. Fortunately it's much\n> easier to explain than the B-Tree work that went into Postgres 12. I\n -----------------\n\nWell, that's good! :-)\n\n> think that you should point out that deduplication works by storing\n> the duplicates in the obvious way: Only storing the key once per\n> distinct value (or once per distinct combination of values in the case\n> of multi-column indexes), followed by an array of TIDs (i.e. a posting\n> list). Each TID points to a separate row in the table.\n\nThese are not details that should be in the release notes since the\ninternal representation is not important for its use.\n\n> It won't be uncommon for this to make indexes as much as 3x smaller\n> (it depends on a number of different factors that you can probably\n> guess). I wrote a summary of how it works for power users in the\n> B-Tree documentation chapter, which you might want to link to in the\n> release notes:\n> \n> https://www.postgresql.org/docs/devel/btree-implementation.html#BTREE-DEDUPLICATION\n> \n> Users that pg_upgrade will have to REINDEX to actually use the\n> feature, regardless of which version they've upgraded from. There are\n> also some limited caveats about the data types that can use\n> deduplication, and stuff like that -- see the documentation section I\n> linked to.\n\nI have added text to this about pg_upgrade:\n\n\tUsers upgrading with pg_upgrade will need to use REINDEX to make\n\tuse of this feature.\n\n> Finally, you might want to note that the feature is enabled by\n> default, and can be disabled by setting the \"deduplicate_items\" index\n> storage option to \"off\". 
(We have yet to make a final decision on\n> whether the feature should be enabled before the first stable release\n> of Postgres 13, though -- I have an open item for that.)\n\nWell, again, I don't think the average user needs to know this can be\ndisabled. They can look at the docs of this feature to see that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 19:10:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, May 8, 2020 at 11:55:33AM +0900, Michael Paquier wrote:\n> On Mon, May 04, 2020 at 11:16:00PM -0400, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-13.html\n> > \n> > It still needs markup, word wrap, and indenting. The community doc\n> > build should happen in a few hours.\n> \n> Thanks Bruce for compiling the release notes. Here are some comments\n> from me, after looking at the state of the notes as of f2ff203.\n> \n> Should e2e02191 be added to the notes? This commit means that we\n> actually dropped support for Windows 2000 (finally) at run-time.\n\nOh, yes. This is much more important than the removal of support for\nnon-ELF BSD systems, which I already listed. 
The new text is:\n\n\tRemove support for Windows 2000 (Michael Paquier)\n\n> At the same time I see no mention of 79dfa8af, which added better\n> error handling when backends the SSL context with incorrect bounds.\n\nI skipped that commit since people don't normally care about better\nerror messages until they see the error message, and then they are happy\nit is there, unless this is some chronic error message problem we are\nfixing.\n\n> What about fc8cb94, which basically means that vacuumlo and oid2name\n> are able to now support coloring output for their logging?\n\nI thought this fell into the previous category about error messages, but\ncoloring is different. Can we say these utilities now honor the color\nenvironment variables? Are these the only new ones?\n\n> <para>\n> Document color support (Peter Eisentraut)\n> </para>\n> [...]\n> <para>\n> THIS WAS NOT DOCUMENTED BEFORE?\n> </para>\n> Not sure that there is a point to add that to the release notes.\n\nOK, removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 19:18:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, May 8, 2020 at 12:07:09PM +0900, Amit Langote wrote:\n> > OK, I used this wording:\n> >\n> > Allow logical replication into partitioned tables on subscribers (Amit\n> > Langote)\n> >\n> > Previously, subscribers could only receive rows into non-partitioned\n> > tables.\n> \n> This is fine, thanks.\n> \n> I have attached a patch with my suggestions above.\n\nOK, I slightly modified the wording of your first change, patch\nattached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +", "msg_date": "Mon, 11 May 2020 19:51:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 4:10 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > think that you should point out that deduplication works by storing\n> > the duplicates in the obvious way: Only storing the key once per\n> > distinct value (or once per distinct combination of values in the case\n> > of multi-column indexes), followed by an array of TIDs (i.e. a posting\n> > list). Each TID points to a separate row in the table.\n>\n> These are not details that should be in the release notes since the\n> internal representation is not important for its use.\n\nI am not concerned about describing the specifics of the on-disk\nrepresentation, and I don't feel too strongly about the storage\nparameter (leave it out). I only ask that the wording convey the fact\nthat the deduplication feature is not just a quantitative improvement\n-- it's a qualitative behavioral change, that will help data\nwarehousing in particular. This wasn't the case with the v12 work on\nB-Tree duplicates (as I said last year, I thought of the v12 stuff as\nfixing a problem more than an enhancement).\n\nWith the deduplication feature added to Postgres v13, the B-Tree code\ncan now gracefully deal with low cardinality data by compressing the\nduplicates as needed. This is comparable to bitmap indexes in\nproprietary database systems, but without most of their disadvantages\n(in particular around heavyweight locking, deadlocks that abort\ntransactions, etc). It's easy to imagine this making a big difference\nwith analytics workloads. The v12 work made indexes with lots of\nduplicates 15%-30% smaller (compared to v11), but the v13 work can\nmake them 60% - 80% smaller in many common cases (compared to v12). 
In\nextreme cases indexes might even be ~12x smaller (though that will be\nrare).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 11 May 2020 17:05:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 7, 2020 at 09:22:02PM -0700, Noah Misch wrote:\n> On Thu, May 07, 2020 at 09:38:34AM -0400, Bruce Momjian wrote:\n> > > > > - Crash recovery was losing tuples written via COPY TO. This fixes the bug.\n> > > > \n> > > > This was not backpatched?\n> > > \n> > > Right.\n> > \n> > Oh. So you are saying we could lose COPY data on a crash, even after a\n> > commit. That seems bad. Can you show me the commit info? I can't find\n> > it.\n> \n> commit c6b9204\n> Author: Noah Misch <noah@leadboat.com>\n> AuthorDate: Sat Apr 4 12:25:34 2020 -0700\n> Commit: Noah Misch <noah@leadboat.com>\n> CommitDate: Sat Apr 4 12:25:34 2020 -0700\n> \n> Skip WAL for new relfilenodes, under wal_level=minimal.\n> \n> Until now, only selected bulk operations (e.g. COPY) did this. If a\n> given relfilenode received both a WAL-skipping COPY and a WAL-logged\n> operation (e.g. INSERT), recovery could lose tuples from the COPY. See\n> src/backend/access/transam/README section \"Skipping WAL for New\n> RelFileNode\" for the new coding rules. Maintainers of table access\n> methods should examine that section.\n\nOK, so how do we want to document this? Do I mention in the text below\nthe WAL skipping item that this fixes a bug where a mix of simultaneous\nCOPY and INSERT into a table could lose rows during crash recovery, or\ncreate a new item?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 20:12:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 05:05:29PM -0700, Peter Geoghegan wrote:\n> On Mon, May 11, 2020 at 4:10 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > think that you should point out that deduplication works by storing\n> > > the duplicates in the obvious way: Only storing the key once per\n> > > distinct value (or once per distinct combination of values in the case\n> > > of multi-column indexes), followed by an array of TIDs (i.e. a posting\n> > > list). Each TID points to a separate row in the table.\n> >\n> > These are not details that should be in the release notes since the\n> > internal representation is not important for its use.\n> \n> I am not concerned about describing the specifics of the on-disk\n> representation, and I don't feel too strongly about the storage\n> parameter (leave it out). I only ask that the wording convey the fact\n> that the deduplication feature is not just a quantitative improvement\n> -- it's a qualitative behavioral change, that will help data\n> warehousing in particular. This wasn't the case with the v12 work on\n> B-Tree duplicates (as I said last year, I thought of the v12 stuff as\n> fixing a problem more than an enhancement).\n> \n> With the deduplication feature added to Postgres v13, the B-Tree code\n> can now gracefully deal with low cardinality data by compressing the\n> duplicates as needed. This is comparable to bitmap indexes in\n> proprietary database systems, but without most of their disadvantages\n> (in particular around heavyweight locking, deadlocks that abort\n> transactions, etc). It's easy to imagine this making a big difference\n> with analytics workloads. 
The v12 work made indexes with lots of\n> duplicates 15%-30% smaller (compared to v11), but the v13 work can\n> make them 60% - 80% smaller in many common cases (compared to v12). In\n> extreme cases indexes might even be ~12x smaller (though that will be\n> rare).\n\nAgreed. How is this?\n\n\tThis allows efficient btree indexing of low cardinality columns.\n\tUsers upgrading with pg_upgrade will need to use REINDEX to make use of\n\tthis feature.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 20:14:01 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, May 8, 2020 at 09:14:01AM +0200, Fabien COELHO wrote:\n> > It seems (a) pointless\n> \n> I disagree, on the very principle of free software values as a social\n> movement.\n> \n> Documentation improvements should be encouraged, and recognizing these in\n> the release notes contributes to do that for what is a lot of unpaid work\n> given freely by many people. I do not see this as \"pointless\", on the\n> contrary, having something \"free\" in a mostly mercantile world is odd enough\n> to deserve some praise.\n> \n> How many hours have you spent on the function operator table improvements?\n> If someone else had contributed that and only that to a release, would it\n> not justify two lines of implicit thanks somewhere down in the release\n> notes?\n> \n> Moreover adding a documentation section costs next to nothing, so what is\n> the actual point of not doing it? 
Also, having some documentation\n> improvements listed under \"source code\" does not make sense: writing good,\n> precise and structured English is not \"source code\".\n\nWe have long discussed how much of the release notes is to reward\nbehavior, and we have settled on having the names on the items, and the\nAcknowledgments section at the bottom. If you want to revisit that\ndecision, you should start a new thread because doing it for just this\nitem doesn't make sense.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 20:17:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Sat, May 9, 2020 at 11:16:27AM +0530, Amit Kapila wrote:\n> On Tue, May 5, 2020 at 8:46 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> >\n> > https://momjian.us/pgsql_docs/release-13.html\n> >\n> \n> Thanks for the work. I was today going through the release notes and\n> was wondering whether we should consider adding information about some\n> other work done for PG13.\n> 1. We have allowed an (auto)vacuum to display additional information\n> about heap or index in case of an error in commit b61d161c14 [1].\n> Now, in general, it might not be worth saying much about error\n> information but I think this one could help users in case they have\n> some corruption. 
For example, if one of the indexes on a relation has\n> some corrupted data (due to bad hardware or some bug), it will let the\n> user know the index information, and the user can take appropriate\n> action like either Reindex or maybe drop and recreate the index to\n> overcome the problem.\n\nI mentioned my approach to error message changes in a previous email\ntoday:\n\n\tI skipped that commit since people don't normally care about\n\tbetter error messages until they see the error message, and then\n\tthey are happy it is there, unless this is some chronic error\n\tmessage problem we are fixing.\n\n> 2. In the \"Source Code\" section, we can add information about\n> infrastructure enhancement for parallelism. Basically, \"Allow\n> relation extension and page lock to conflict among parallel-group\n> members\" [2][3]. This will allow improving the parallelism further in\n> many cases like (a) we can allow multiple workers to operate on a heap\n> and index in a parallel vacuum, (b) we can allow parallel Inserts,\n> etc.\n\nUh, if there is no user-visible behavior change, this seems too low\nlevel for the release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 20:20:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 10:52:41AM +0530, Amit Kapila wrote:\n> One more observation:\n> \n> Allow inserts to trigger autovacuum activity (Laurenz Albe, Darafei\n> Praliaskouski)\n> This new behavior allows pages to be set as all-visible, which then\n> allows index-only scans, ...\n> \n> The above sentence sounds to mean that this feature allows index-only\n> scans in more number of cases after this feature. Is that what you\n> intend to say? 
If so, is that correct? Because I think this will\n\nYes.\n\n> allow index-only scans to skip \"Heap Fetches\" in more cases.\n\nUh, by definition an index-only scan only scans the index, not the heap,\nright? It is true there are fewer heap fetches, but fewer heap fetches\nI thought mean more index-only scans.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EnterpriseDB                             https://enterprisedb.com\n\n+ As you are, so once was I.  As I am, so you will be. +\n+                      Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 20:22:46 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Sun, May 10, 2020 at 03:09:47PM -0500, Justin Pryzby wrote:\n> > In ltree, when using adjacent asterisks with braces, e.g. \"*{2}.*{3}\", properly interpret that as \"*{5}\" (Nikita Glukhov)\n> \n> I think that should say \".*\" not \"*\", as in:\n> \n> > In ltree, when using adjacent asterisks with braces, e.g. \".*{2}.*{3}\", properly interpret that as \"*{5}\" (Nikita Glukhov)\n> \n> The existing text clearly came from the commit message, which (based on its\n> regression tests) I think was the source of the missing dot.\n> \n> commit 9950c8aadf0edd31baec74a729d47d94af636c06\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date:   Sat Mar 28 18:31:05 2020 -0400\n> \n>     Fix lquery's behavior for consecutive '*' items.\n>     \n>     Something like \"*{2}.*{3}\" should presumably mean the same as\n>     \"*{5}\", but it didn't.  Improve that.\n>     ...\n\nOK, fixed to be:\n\n\tIn ltree, when using adjacent asterisks with braces, e.g. \".*{2}.*{3}\",\n\tproperly interpret that as \".*{5}\" (Nikita Glukhov)\n\nI added two dots.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EnterpriseDB                             https://enterprisedb.com\n\n+ As you are, so once was I.  As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 20:33:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft (ltree dot star)" }, { "msg_contents": "On Mon, May 11, 2020 at 5:14 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Agreed. How is this?\n>\n> This allows efficient btree indexing of low cardinality columns.\n> Users upgrading with pg_upgrade will need to use REINDEX to make use of\n> this feature.\n\nI still think that the release notes should say that the key is only\nstored once, while TIDs that identify table rows are stored together\nas an array. Everything that's helpful (or harmful) about the feature\nhappens as a consequence of that. This isn't hard to grasp\nintuitively, and is totally in line with how things like Oracle bitmap\nindexes are presented to ordinary users.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 11 May 2020 17:33:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 03:19:50PM +0900, Tatsuro Yamada wrote:\n> Hi Bruce,\n> \n> On 2020/05/05 12:16, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-13.html\n> > \n> > It still needs markup, word wrap, and indenting. The community doc\n> > build should happen in a few hours.\n> \n> Thanks for working on this! :-D\n> \n> Could you add \"Vinayak Pokale\" as a co-author of the following feature since\n> I sometimes read his old patch to create a patch [1] ?\n> \n> =======================\n> E.1.3.1.6. 
System Views\n> \n> - Add system view pg_stat_progress_analyze to report analyze progress (Álvaro Herrera, Tatsuro Yamada)\n> \n> + Add system view pg_stat_progress_analyze to report analyze progress (Álvaro Herrera, Tatsuro Yamada, Vinayak Pokale)\n> =======================\n\nDone.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 20:34:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-May-11, Bruce Momjian wrote:\n\n> We have long discussed how much of the release notes is to reward\n> behavior, and we have settled on having the names on the items, and the\n> Acknowledgments section at the bottom.\n\nYes, we had to touch the source code in order to add documentation; but\nso what? Everything touches the source code, but that doesn't mean it\nshould be listed there. I don't see what's the problem with having a\nnew subsection in the relnotes entitled \"Documentation\" where these two\nitems (glossary + new doc table format) are listed. It's not\nlike it's going to cost us a lot of space or anything.\n\nI don't think there is any circularity argument BTW -- we're not going\nto document that we added release notes. 
And changing the table format\nis not entirely pointless, given that we've historically had trouble\nwith these tables (read: they're totally unusable in PDF).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 11 May 2020 20:34:56 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "|Allow function call backtraces of errors to be logged (Peter Eisentraut, Álvaro Herrera)\n|Server variable backtrace_functions specifies which C functions should generate backtraces on error.\n\nI think the details in the description are eclipsing the most important thing:\nbacktraces on Assert(). I would say \"Support for showing backtraces on error\".\n\nRegarding this one:\n|Add system view pg_shmem_allocations to display shared memory usage (Andres Freund, Robert Haas)\n|WHAT IS THE ENTRY WITH NO NAME?\n\nThere seem to be two special, \"unnamed\" cases:\nsrc/backend/storage/ipc/shmem.c- /* output shared memory allocated but not counted via the shmem index */\nsrc/backend/storage/ipc/shmem.c: values[0] = CStringGetTextDatum(\"<anonymous>\");\n...\nsrc/backend/storage/ipc/shmem.c- /* output as-of-yet unused shared memory */\nsrc/backend/storage/ipc/shmem.c- nulls[0] = true;\n\nThat seems to be adequately documented:\nhttps://www.postgresql.org/docs/devel/view-pg-shmem-allocations.html\n|NULL for unused memory and <anonymous> for anonymous allocations.\n\nI would remove this part:\n\"Previously, this could only be set at server start.\"\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 11 May 2020 19:41:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 10:52:41AM +0530, Amit Kapila wrote:\n> > 1. 
We have allowed an (auto)vacuum to display additional information\n> > about heap or index in case of an error in commit b61d161c14 [1].\n> > Now, in general, it might not be worth saying much about error\n> > information but I think this one could help users in case they have\n> > some corruption. For example, if one of the indexes on a relation has\n> > some corrupted data (due to bad hardware or some bug), it will let the\n> > user know the index information, and the user can take appropriate\n> > action like either Reindex or maybe drop and recreate the index to\n> > overcome the problem.\n\nI'm not opposed to including it, but I think it's still true that the user\ndoesn't need to know in advance that the error message will be additionally\nhelpful in the event of corruption. If we were to include more \"error\" items,\nwe might also include these:\n\n71a8a4f6e36547bb060dbcc961ea9b57420f7190 Add backtrace support for error reporting\n17a28b03645e27d73bf69a95d7569b61e58f06eb Improve the internal implementation of ereport().\n05f18c6b6b6e4b44302ee20a042cedc664532aa2 Added relation name in error messages for constraint checks.\n33753ac9d7bc83dd9ccee9d5e678ed78a0725b4e Add object names to partition integrity violations.\n\n> One more observation:\n> \n> Allow inserts to trigger autovacuum activity (Laurenz Albe, Darafei\n> Praliaskouski)\n> This new behavior allows pages to be set as all-visible, which then\n> allows index-only scans, ...\n\n> The above sentence sounds to mean that this feature allows index-only\n> scans in more number of cases after this feature. Is that what you\n> intend to say? If so, is that correct? Because I think this will\n> allow index-only scans to skip \"Heap Fetches\" in more cases.\n\nI think what you mean is that the autovacuum feature, in addition to\nencouraging the *planner* to choose an indexonly scan, will *also* allow (at\nexecution time) fewer heap fetches for a plan which would have\nalready/previously used IOS. Right ? 
So maybe it should say \"allows OR\nIMPROVES index-only scans\" or \"allows plans which use IOS to run more\nefficiently\".\n\nSeparate from Amit's comment, I suggest to say:\n| This new behavior allows autovacuum to set pages as all-visible, which then\n| allows index-only scans, ...\n\n..otherwise it sounds like this feature implemented the concept of\n\"all-visible\".\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 11 May 2020 19:41:55 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 04:50:50PM -0400, Alvaro Herrera wrote:\n> Hello\n> \n> > +<!--\n> > +Author: Alexander Korotkov <akorotkov@postgresql.org>\n> > +2020-03-08 [b0b5e20cd] Show opclass and opfamily related information in psql\n> > +-->\n> > +\n> > +<para>\n> > +Add psql commands to report operator classes and operator families (Sergey Cherkashin, Nikita Glukhov, Alexander Korotkov)\n> > +</para>\n> \n> I think this item should list the commands in question:\n> \\dA, \\dAc, \\dAf, \\dAo, \\dAp\n> (All the other psql entries in the relnotes do that).\n\nGood idea. I added this paragraph:\n\n\tThe new commands are \\dAc, \\dAf, \\dAo, and \\dAp.\n\nI didn't see any changes to \\dA except regression tests.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 20:54:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 08:34:56PM -0400, Alvaro Herrera wrote:\n> On 2020-May-11, Bruce Momjian wrote:\n> \n> > We have long discussed how much of the release notes is to reward\n> > behavior, and we have settled on having the names on the items, and the\n> > Acknowledgments section at the bottom.\n> \n> Yes, we had to touch the source code in order to add documentation; but\n> so what? Everything touches the source code, but that doesn't mean it\n> should be listed there. I don't see what's the problem with having a\n> new subsection in the relnotes entitled \"Documentation\" where these two\n> items appears (glossary + new doc table format) are listed. It's not\n> like it's going to cost us a lot of space or anything.\n> \n> I don't think there is any circularity argument BTW -- we're not going\n> to document that we added release notes. And changing the table format\n> is not entirely pointless, given that we've historically had trouble\n> with these tables (read: they're totally unusable in PDF).\n\nWell, are you suggesting a new section because the glossary shouldn't be\nlisted under source code, or because you want the function reformatting\nadded? We just need to understand what the purpose is. We already have\nthe glossary listed, since that is new and user-visible.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 20:57:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 07:41:55PM -0500, Justin Pryzby wrote:\n> On Mon, May 11, 2020 at 10:52:41AM +0530, Amit Kapila wrote:\n> > One more observation:\n> > \n> > Allow inserts to trigger autovacuum activity (Laurenz Albe, Darafei\n> > Praliaskouski)\n> > This new behavior allows pages to be set as all-visible, which then\n> > allows index-only scans, ...\n> \n> > The above sentence sounds to mean that this feature allows index-only\n> > scans in more number of cases after this feature. Is that what you\n> > intend to say? If so, is that correct? Because I think this will\n> > allow index-only scans to skip \"Heap Fetches\" in more cases.\n> \n> I think what you mean is that the autovacuum feature, in addition to\n> encouraging the *planner* to choose an indexonly scan, will *also* allow (at\n> execution time) fewer heap fetches for a plan which would have\n> already/previously used IOS. Right ? So maybe it should say \"allows OR\n> IMPROVES index-only scans\" or \"allows plans which use IOS to run more\n> efficiently\".\n\nYes, I see your point now. New text is:\n\n\tThis new behavior reduces the work necessary when the table\n\tneeds to be frozen and allows pages to be set as all-visible.\n\tAll-visible pages allow index-only scans to access fewer heap rows.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 21:06:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 05:09:54PM -0400, Alvaro Herrera wrote:\n> On 2020-May-11, Alvaro Herrera wrote:\n> \n> > Hello\n> > \n> > > +<!--\n> > > +Author: Alexander Korotkov <akorotkov@postgresql.org>\n> > > +2020-03-08 [b0b5e20cd] Show opclass and opfamily related information in psql\n> > > +-->\n> > > +\n> > > +<para>\n> > > +Add psql commands to report operator classes and operator families (Sergey Cherkashin, Nikita Glukhov, Alexander Korotkov)\n> > > +</para>\n> > \n> > I think this item should list the commands in question:\n> > \\dA, \\dAc, \\dAf, \\dAo, \\dAp\n> > (All the other psql entries in the relnotes do that).\n> \n> Sorry, it's the last four only -- \\dA is an older command.\n\nOK, confirmed, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 21:06:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 07:41:01PM -0500, Justin Pryzby wrote:\n> |Allow function call backtraces of errors to be logged (Peter Eisentraut, �lvaro Herrera)\n> |Server variable backtrace_functions specifies which C functions should generate backtraces on error.\n> \n> I think the details in the description are eclipsing the most important thing:\n> backtraces on Assert(). I would say \"Support for showing backtraces on error\".\n\nUh, you mean this adds backtraces for errors and asserts? 
Are\nnon-developers running assert builds?\n\n> Regarding this one:\n> |Add system view pg_shmem_allocations to display shared memory usage (Andres Freund, Robert Haas)\n> |WHAT IS THE ENTRY WITH NO NAME?\n> \n> There seems to be two special, \"unnamed\" cases:\n> src/backend/storage/ipc/shmem.c- /* output shared memory allocated but not counted via the shmem index */\n> src/backend/storage/ipc/shmem.c: values[0] = CStringGetTextDatum(\"<anonymous>\");\n> ...\n> src/backend/storage/ipc/shmem.c- /* output as-of-yet unused shared memory */\n> src/backend/storage/ipc/shmem.c- nulls[0] = true;\n> \n> That seems to be adequately documented:\n> https://www.postgresql.org/docs/devel/view-pg-shmem-allocations.html\n> |NULL for unused memory and <anonymous> for anonymous allocations.\n\nOK, thanks. Comment removed.\n\n> I would remove this part:\n> \"Previously, this could only be set at server start.\"\n\nOK, you are saying it is already clear, agreed, removed.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 21:14:58 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 05:33:40PM -0700, Peter Geoghegan wrote:\n> On Mon, May 11, 2020 at 5:14 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Agreed. How is this?\n> >\n> > This allows efficient btree indexing of low cardinality columns.\n> > Users upgrading with pg_upgrade will need to use REINDEX to make use of\n> > this feature.\n> \n> I still think that the release notes should say that the key is only\n> stored once, while TIDs that identify table rows are stored together\n> as an array. Everything that's helpful (or harmful) about the feature\n> happens as a consequence of that. 
This isn't hard to grasp\n> intuitively, and is totally in line with how things like Oracle bitmap\n> indexes are presented to ordinary users.\n\nI still don't think these details belong in the release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 21:15:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 09:15:43PM -0400, Bruce Momjian wrote:\n> On Mon, May 11, 2020 at 05:33:40PM -0700, Peter Geoghegan wrote:\n> > On Mon, May 11, 2020 at 5:14 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > Agreed. How is this?\n> > >\n> > > This allows efficient btree indexing of low cardinality columns.\n> > > Users upgrading with pg_upgrade will need to use REINDEX to make use of\n> > > this feature.\n> > \n> > I still think that the release notes should say that the key is only\n> > stored once, while TIDs that identify table rows are stored together\n> > as an array. Everything that's helpful (or harmful) about the feature\n> > happens as a consequence of that. This isn't hard to grasp\n> > intuitively, and is totally in line with how things like Oracle bitmap\n> > indexes are presented to ordinary users.\n> \n> I still don't think these details belong in the release notes.\n\nOK, I was able to add some of it cleanly:\n\n\tThis allows efficient btree indexing of low cardinality columns by\n\tstoring duplicate keys only once. Users upgrading with pg_upgrade\n\twill need to use REINDEX to make use of this feature.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 21:23:30 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 12, 2020 at 8:51 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Fri, May 8, 2020 at 12:07:09PM +0900, Amit Langote wrote:\n> > I have attached a patch with my suggestions above.\n>\n> OK, I slightly modified the wording of your first change, patch\n> attached.\n\nThanks. I checked that what you committed looks fine.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 May 2020 10:28:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 6:23 PM Bruce Momjian <bruce@momjian.us> wrote:\n> OK, I was able to add some of it cleanly:\n>\n> This allows efficient btree indexing of low cardinality columns by\n> storing duplicate keys only once. Users upgrading with pg_upgrade\n> will need to use REINDEX to make use of this feature.\n\nThat seems like a reasonable compromise. Thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 11 May 2020 18:40:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 07:18:56PM -0400, Bruce Momjian wrote:\n> On Fri, May 8, 2020 at 11:55:33AM +0900, Michael Paquier wrote:\n>> Should e2e02191 be added to the notes? This commit means that we\n>> actually dropped support for Windows 2000 (finally) at run-time.\n> \n> Oh, yes. This is much more important than the removal of support for\n> non-ELF BSD systems, which I already listed. 
The new text is:\n> \n> \tRemove support for Windows 2000 (Michael Paquier)\n\nSounds fine to me.\n\n>> At the same time I see no mention of 79dfa8af, which added better\n>> error handling when backends the SSL context with incorrect bounds.\n> \n> I skipped that commit since people don't normally care about better\n> error messages until they see the error message, and then they are happy\n> it is there, unless this is some chronic error message problem we are\n> fixing.\n\nOkay.\n\n> I thought this fell into the previous category about error messages, but\n> coloring is different. Can we say these utilities now honor the color\n> environment variables?\n\nExactly, I actually became aware of that possibility after plugging\nin the common logging APIs to oid2name and vacuumlo as of fc8cb94b so\nthis was not mentioned in the log message. And anything using\nsrc/common/logging.c can make use of the colorized output with\nPG_COLOR[S] set.\n\n> Are these the only new ones?\n\nI can recall an extra one in this case: pgbench as of 30a3e77. And I\ndon't see any new callers of pg_logging_init() in the stuff that\nalready existed in ~12.\n--\nMichael", "msg_date": "Tue, 12 May 2020 10:54:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I like your wording, but the \"that encoding\" wasn't clear enough for me,\n> so I reworded it to:\n\n> \tAllow Unicode escapes, e.g., E'\\u####', U&'\\####', to represent any\n> \tcharacter available in the database encoding, even when the database\n> \tencoding is not UTF-8 (Tom Lane)\n\nHow about \"to be used for\" instead of \"to represent\"? 
\"Represent\" kind\nof sounds like we're using these on output, which we aren't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 22:07:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-May-11, Bruce Momjian wrote:\n\n> On Mon, May 11, 2020 at 08:34:56PM -0400, Alvaro Herrera wrote:\n\n> > Yes, we had to touch the source code in order to add documentation; but\n> > so what? Everything touches the source code, but that doesn't mean it\n> > should be listed there. I don't see what's the problem with having a\n> > new subsection in the relnotes entitled \"Documentation\" where these two\n> > items appears (glossary + new doc table format) are listed. It's not\n> > like it's going to cost us a lot of space or anything.\n\n> Well, are you suggesting a new section because the glossary shouldn't be\n> listed under source code, or because you want the function reformatting\n> added? We just need to understand what the purpose is. We already have\n> the glossary listed, since that is new and user-visible.\n\nIMO the table reformatting change is significant enough to be\nnoteworthy. I'm suggesting that a new Documentation subsection would\nlist both that and the glossary, separately from Source Code -- so it'd\nbe E.1.3.10 and Source Code would be E.1.3.11.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 11 May 2020 22:16:53 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Well, are you suggesting a new section because the glossary shouldn't be\n> listed under source code, or because you want the function reformatting\n> added? We just need to understand what the purpose is. 
We already have\n> the glossary listed, since that is new and user-visible.\n\nThe implication of what you say here is that \"is it user-visible?\"\nis a criterion for whether to release-note something. By that logic\nwe probably *should* relnote the function table layout changes, because\nthey sure as heck are user-visible. People might or might not notice\naddition of a glossary, but I think just about every user consults\nthe function/operator tables regularly.\n\nI concur with Alvaro's position that if we are listing documentation\nchanges, pushing them under \"Source Code\" is not the way to do it.\nThat subsection has always been understood to be \"stuff you don't\ncare about if you're not a hacker\".\n\nSo that sort of leads me to the conclusion that \"major documentation\nchanges\" might be a reasonable sub-heading for the release notes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 22:23:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 12, 2020 at 10:54:52AM +0900, Michael Paquier wrote:\n> On Mon, May 11, 2020 at 07:18:56PM -0400, Bruce Momjian wrote:\n> > On Fri, May 8, 2020 at 11:55:33AM +0900, Michael Paquier wrote:\n> >> Should e2e02191 be added to the notes? This commit means that we\n> >> actually dropped support for Windows 2000 (finally) at run-time.\n> > \n> > Oh, yes. This is much more important than the removal of support for\n> > non-ELF BSD systems, which I already listed. 
The new text is:\n> > \n> > \tRemove support for Windows 2000 (Michael Paquier)\n> \n> Sounds fine to me.\n> \n> >> At the same time I see no mention of 79dfa8af, which added better\n> >> error handling when backends the SSL context with incorrect bounds.\n> > \n> > I skipped that commit since people don't normally care about better\n> > error messages until they see the error message, and then they are happy\n> > it is there, unless this is some chronic error message problem we are\n> > fixing.\n> \n> Okay.\n> \n> > I thought this fell into the previous category about error messages, but\n> > coloring is different. Can we say these utilities now honor the color\n> > environment variables?\n> \n> Exactly, I actually became aware of that possibility after plugging\n> in the common logging APIs to oid2name and vacuumlo as of fc8cb94b so\n> this was not mentioned in the log message. And anything using\n> src/common/logging.c can make use of the colorized output with\n> PG_COLOR[S] set.\n> \n> > Are these the only new ones?\n> \n> I can recall an extra one in this case: pgbench as of 30a3e77. And I\n> don't see any new callers of pg_logging_init() in the stuff that\n> already existed in ~12.\n\nI am not sure we even mentioned this in 12. Should we document this\nsomewhere? Maybe a blog posting?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 22:46:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 10:07:23PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I like your wording, but the \"that encoding\" wasn't clear enough for me,\n> > so I reworded it to:\n> \n> > \tAllow Unicode escapes, e.g., E'\\u####', U&'\\####', to represent any\n> > \tcharacter available in the database encoding, even when the database\n> > \tencoding is not UTF-8 (Tom Lane)\n> \n> How about \"to be used for\" instead of \"to represent\"? \"Represent\" kind\n> of sounds like we're using these on output, which we aren't.\n\nUh, I think \"used for\" is worse though, since we are not using it. I\ndon't get the \"output\" feel of the word at all.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 22:48:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 12, 2020 at 6:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, May 11, 2020 at 07:41:55PM -0500, Justin Pryzby wrote:\n> > On Mon, May 11, 2020 at 10:52:41AM +0530, Amit Kapila wrote:\n> > > One more observation:\n> > >\n> > > Allow inserts to trigger autovacuum activity (Laurenz Albe, Darafei\n> > > Praliaskouski)\n> > > This new behavior allows pages to be set as all-visible, which then\n> > > allows index-only scans, ...\n> >\n> > > The above sentence sounds to mean that this feature allows index-only\n> > > scans in more number of cases after this feature. Is that what you\n> > > intend to say? If so, is that correct? 
Because I think this will\n> > > allow index-only scans to skip \"Heap Fetches\" in more cases.\n> >\n> > I think what you mean is that the autovacuum feature, in addition to\n> > encouraging the *planner* to choose an indexonly scan, will *also* allow (at\n> > execution time) fewer heap fetches for a plan which would have\n> > already/previously used IOS. Right ? So maybe it should say \"allows OR\n> > IMPROVES index-only scans\" or \"allows plans which use IOS to run more\n> > efficiently\".\n>\n> Yes, I see your point now. New text is:\n>\n> This new behavior reduces the work necessary when the table\n> needs to be frozen and allows pages to be set as all-visible.\n> All-visible pages allow index-only scans to access fewer heap rows.\n>\n\nThe next text LGTM. Thanks.\n\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 May 2020 08:19:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 10:23:53PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Well, are you suggesting a new section because the glossary shouldn't be\n> > listed under source code, or because you want the function reformatting\n> > added? We just need to understand what the purpose is. We already have\n> > the glossary listed, since that is new and user-visible.\n> \n> The implication of what you say here is that \"is it user-visible?\"\n> is a criterion for whether to release-note something. By that logic\n> we probably *should* relnote the function table layout changes, because\n> they sure as heck are user-visible. 
People might or might not notice\n> addition of a glossary, but I think just about every user consults\n> the function/operator tables regularly.\n> \n> I concur with Alvaro's position that if we are listing documentation\n> changes, pushing them under \"Source Code\" is not the way to do it.\n> That subsection has always been understood to be \"stuff you don't\n> care about if you're not a hacker\".\n> \n> So that sort of leads me to the conclusion that \"major documentation\n> changes\" might be a reasonable sub-heading for the release notes.\n\nOK, section and item added, patch attached,\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +", "msg_date": "Mon, 11 May 2020 22:59:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020/05/12 9:34, Bruce Momjian wrote:\n>> Could you add \"Vinayak Pokale\" as a co-author of the following feature since\n>> I sometimes read his old patch to create a patch [1] ?\n>>\n>> =======================\n>> E.1.3.1.6. System Views\n>>\n>> - Add system view pg_stat_progress_analyze to report analyze progress (Álvaro Herrera, Tatsuro Yamada)\n>>\n>> + Add system view pg_stat_progress_analyze to report analyze progress (Álvaro Herrera, Tatsuro Yamada, Vinayak Pokale)\n>> =======================\n> \n> Done.\n\n\nHi Bruce,\n\nThank you!\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n\n\n", "msg_date": "Tue, 12 May 2020 12:00:14 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 12, 2020 at 6:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, May 11, 2020 at 10:52:41AM +0530, Amit Kapila wrote:\n> > > 1. 
We have allowed an (auto)vacuum to display additional information\n> > > about heap or index in case of an error in commit b61d161c14 [1].\n> > > Now, in general, it might not be worth saying much about error\n> > > information but I think this one could help users in case they have\n> > > some corruption. For example, if one of the indexes on a relation has\n> > > some corrupted data (due to bad hardware or some bug), it will let the\n> > > user know the index information, and the user can take appropriate\n> > > action like either Reindex or maybe drop and recreate the index to\n> > > overcome the problem.\n>\n> I'm not opposed to including it, but I think it's still true that the user\n> doesn't need to know in advance that the error message will be additionally\n> helpful in the event of corruption. If we were to include more \"error\" items,\n> we might also include these:\n>\n> 71a8a4f6e36547bb060dbcc961ea9b57420f7190 Add backtrace support for error reporting\n> 17a28b03645e27d73bf69a95d7569b61e58f06eb Improve the internal implementation of ereport().\n> 05f18c6b6b6e4b44302ee20a042cedc664532aa2 Added relation name in error messages for constraint checks.\n> 33753ac9d7bc83dd9ccee9d5e678ed78a0725b4e Add object names to partition integrity violations.\n>\n\nI think the first one (Add backtrace support for error reporting)\nseems to be quite useful as it can help to detect the problems faster.\nI think having a simple rule as Bruce has w.r.t \"error messages\" makes\nit easier to decide whether to take a particular commit or not but I\nfeel some of these could help users to know the new functionality and\nmight encourage them to upgrade to the new version. 
Sure, nobody is\ngoing to move due to only these features but along with other things,\nimproved error handling is a good thing to know.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 May 2020 08:36:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 05/11/20 22:48, Bruce Momjian wrote:\n> On Mon, May 11, 2020 at 10:07:23PM -0400, Tom Lane wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>>> \tAllow Unicode escapes, e.g., E'\\u####', U&'\\####', to represent any\n>>> \tcharacter available in the database encoding, even when the database\n>>> \tencoding is not UTF-8 (Tom Lane)\n>>\n>> How about \"to be used for\" instead of \"to represent\"? \"Represent\" kind\n>> of sounds like we're using these on output, which we aren't.\n> \n> Uh, I think \"used for\" is worse though, since we are not using it. I\n> don't get the \"output\" feel of the word at all.\n\n'specify' ?\n\n-Chap\n\n\n", "msg_date": "Mon, 11 May 2020 23:08:35 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 11:08:35PM -0400, Chapman Flack wrote:\n> On 05/11/20 22:48, Bruce Momjian wrote:\n> > On Mon, May 11, 2020 at 10:07:23PM -0400, Tom Lane wrote:\n> >> Bruce Momjian <bruce@momjian.us> writes:\n> >>> \tAllow Unicode escapes, e.g., E'\\u####', U&'\\####', to represent any\n> >>> \tcharacter available in the database encoding, even when the database\n> >>> \tencoding is not UTF-8 (Tom Lane)\n> >>\n> >> How about \"to be used for\" instead of \"to represent\"? \"Represent\" kind\n> >> of sounds like we're using these on output, which we aren't.\n> > \n> > Uh, I think \"used for\" is worse though, since we are not using it. 
I\n> > don't get the \"output\" feel of the word at all.\n> \n> 'specify' ?\n\nI like that word if Tom prefers it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 11 May 2020 23:15:46 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, May 11, 2020 at 11:08:35PM -0400, Chapman Flack wrote:\n>> 'specify' ?\n\n> I like that word if Tom prefers it.\n\n'specify' works for me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 23:38:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "At Mon, 11 May 2020 20:12:04 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Thu, May 7, 2020 at 09:22:02PM -0700, Noah Misch wrote:\n> > On Thu, May 07, 2020 at 09:38:34AM -0400, Bruce Momjian wrote:\n> > > > > > - Crash recovery was losing tuples written via COPY TO. This fixes the bug.\n> > > > > \n> > > > > This was not backpatched?\n> > > > \n> > > > Right.\n> > > \n> > > Oh. So you are saying we could lose COPY data on a crash, even after a\n> > > commit. That seems bad. Can you show me the commit info? I can't find\n> > > it.\n> > \n> > commit c6b9204\n> > Author: Noah Misch <noah@leadboat.com>\n> > AuthorDate: Sat Apr 4 12:25:34 2020 -0700\n> > Commit: Noah Misch <noah@leadboat.com>\n> > CommitDate: Sat Apr 4 12:25:34 2020 -0700\n> > \n> > Skip WAL for new relfilenodes, under wal_level=minimal.\n> > \n> > Until now, only selected bulk operations (e.g. COPY) did this. If a\n> > given relfilenode received both a WAL-skipping COPY and a WAL-logged\n> > operation (e.g. INSERT), recovery could lose tuples from the COPY. 
See\n> > src/backend/access/transam/README section \"Skipping WAL for New\n> > RelFileNode\" for the new coding rules. Maintainers of table access\n> > methods should examine that section.\n> \n> OK, so how do we want to document this? Do I mention in the text below\n> the WAL skipping item that this fixes a bug where a mix of simultaneous\n> COPY and INSERT into a table could lose rows during crash recovery, or\n> create a new item?\n\nFWIW, as discussed upthread, I suppose that the API change is not going\nto be in relnotes.\n\nsomething like this?\n\n- Fix bug of WAL-skipping optimization \n\nPreviously a transaction doing both of COPY and a WAL-logged operations\nlike INSERT while wal_level=minimal can lead to loss of COPY'ed rows\nthrough crash recovery. Also this fix extends the WAL-skipping\noptimization to all kinds of bulk insert operations.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 12 May 2020 13:09:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\nHello Bruce,\n\n> OK, section and item added, patch attached,\n\nThanks!\n\nSome items that might be considered for the added documentation section:\n\n * e1ff780485\n * 34a0a81bfb\n * e829337d42\n\n * \"Document color support (Peter Eisentraut)\"\n THIS WAS NOT DOCUMENTED BEFORE?\n\nNot as such, AFAICR it was vaguely hinted about in the documentation of \ncommand that would use it, but not even all of them. Now there is a new \nspecific section.\n\n-- \nFabien!\n\n\n", "msg_date": "Tue, 12 May 2020 09:43:10 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-May-07, Bruce Momjian wrote:\n\n> OK, I have moved her name to be first. 
FYI, this commit was backpatched\n> back through PG 11, though the commit message doesn't mention that.\n\nAt some point I became an avid user of our src/tools/git_changelog, and\nthen it stopped making sense for me to highlight in the commit message\nthe fact that the commit is back-patched, since it's so obvious there.\nMaybe that's wrong and I should get back in the habit of mentioning it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 12 May 2020 13:47:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 11, 2020 at 11:38:33PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Mon, May 11, 2020 at 11:08:35PM -0400, Chapman Flack wrote:\n> >> 'specify' ?\n> \n> > I like that word if Tom prefers it.\n> \n> 'specify' works for me.\n\nSure, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 12 May 2020 16:23:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 12, 2020 at 01:09:08PM +0900, Kyotaro Horiguchi wrote:\n> > > commit c6b9204\n> > > Author: Noah Misch <noah@leadboat.com>\n> > > AuthorDate: Sat Apr 4 12:25:34 2020 -0700\n> > > Commit: Noah Misch <noah@leadboat.com>\n> > > CommitDate: Sat Apr 4 12:25:34 2020 -0700\n> > > \n> > > Skip WAL for new relfilenodes, under wal_level=minimal.\n> > > \n> > > Until now, only selected bulk operations (e.g. COPY) did this. If a\n> > > given relfilenode received both a WAL-skipping COPY and a WAL-logged\n> > > operation (e.g. INSERT), recovery could lose tuples from the COPY. 
See\n> > > src/backend/access/transam/README section \"Skipping WAL for New\n> > > RelFileNode\" for the new coding rules. Maintainers of table access\n> > > methods should examine that section.\n> > \n> > OK, so how do we want to document this? Do I mention in the text below\n> > the WAL skipping item that this fixes a bug where a mix of simultaneous\n> > COPY and INSERT into a table could lose rows during crash recovery, or\n> > create a new item?\n> \n> FWIW, as discussed upthread, I suppose that the API change is not going\n> to be in relnotes.\n> \n> something like this?\n> \n> - Fix bug of WAL-skipping optimization \n> \n> Previously a transaction doing both of COPY and a WAL-logged operations\n> like INSERT while wal_level=minimal can lead to loss of COPY'ed rows\n> through crash recovery. Also this fix extends the WAL-skipping\n> optimization to all kinds of bulk insert operations.\n\nUh, that kind of mixes the bug fix and the feature in a way that it is\nhard to understand. How about this?\n\n\tAllow skipping of WAL for new tables and indexes if wal_level is\n\t'minimal' (Kyotaro Horiguchi)\n\n\tRelations larger than wal_skip_threshold will have their files\n\tfsync'ed rather than writing their WAL records. Previously this\n\twas done only for COPY operations, but the implementation had a\n\tbug that could cause data loss during crash recovery.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 12 May 2020 16:38:09 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 12, 2020 at 09:43:10AM +0200, Fabien COELHO wrote:\n> \n> Hello Bruce,\n> \n> > OK, section and item added, patch attached,\n> \n> Thanks!\n> \n> Some items that might be considered for the added documentation section:\n> \n> * e1ff780485\n\nI was told in this email thread to not include that one.\n\n> * 34a0a81bfb\n\nWe already have:\n\n\tReformat tables containing function information for better\n\tclarity (Tom Lane)\n\nso it seems it is covered as part of this.\n\n> * e829337d42\n\nUh, this is a doc link formatting addition. I think this falls into the\nerror message logic, where it is nice when people want it, but they\ndon't need to know about it ahead of time.\n\n> * \"Document color support (Peter Eisentraut)\"\n> THIS WAS NOT DOCUMENTED BEFORE?\n> \n> Not as such, AFAICR it was vaguely hinted about in the documentation of\n> command that would use it, but not even all of them. Now there is a new\n> specific section.\n\nAgain, this is the first hash you gave.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 12 May 2020 17:15:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 12, 2020 at 01:47:38PM -0400, Alvaro Herrera wrote:\n> On 2020-May-07, Bruce Momjian wrote:\n> \n> > OK, I have moved her name to be first. 
FYI, this commit was backpatched\n> > back through PG 11, though the commit message doesn't mention that.\n> \n> At some point I became an avid user of our src/tools/git_changelog, and\n> then it stopped making sense for me to highlight in the commit message\n> the fact that the commit is back-patched, since it's so obvious there.\n> Maybe that's wrong and I should get back in the habit of mentioning it.\n\nUh, not sure. I don't need it since, as you said,\nsrc/tools/git_changelog covers it, but someone got confused since they\nlooked at just the commit message without looking at\nsrc/tools/git_changelog.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 12 May 2020 17:16:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "At Tue, 12 May 2020 16:38:09 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Tue, May 12, 2020 at 01:09:08PM +0900, Kyotaro Horiguchi wrote:\n> > > > commit c6b9204\n> > > > Author: Noah Misch <noah@leadboat.com>\n> > > > AuthorDate: Sat Apr 4 12:25:34 2020 -0700\n> > > > Commit: Noah Misch <noah@leadboat.com>\n> > > > CommitDate: Sat Apr 4 12:25:34 2020 -0700\n> > > > \n> > > > Skip WAL for new relfilenodes, under wal_level=minimal.\n> > > > \n> > > > Until now, only selected bulk operations (e.g. COPY) did this. If a\n> > > > given relfilenode received both a WAL-skipping COPY and a WAL-logged\n> > > > operation (e.g. INSERT), recovery could lose tuples from the COPY. See\n> > > > src/backend/access/transam/README section \"Skipping WAL for New\n> > > > RelFileNode\" for the new coding rules. Maintainers of table access\n> > > > methods should examine that section.\n> > > \n> > > OK, so how do we want to document this? 
Do I mention in the text below\n> > > the WAL skipping item that this fixes a bug where a mix of simultaneous\n> > > COPY and INSERT into a table could lose rows during crash recovery, or\n> > > create a new item?\n> > \n> > FWIW, as discussed upthread, I suppose that the API change is not going\n> > to be in relnotes.\n> > \n> > something like this?\n> > \n> > - Fix bug of WAL-skipping optimization \n> > \n> > Previously a transaction doing both of COPY and a WAL-logged operations\n> > like INSERT while wal_level=minimal can lead to loss of COPY'ed rows\n> > through crash recovery. Also this fix extends the WAL-skipping\n> > optimization to all kinds of bulk insert operations.\n> \n> Uh, that kind of mixes the bug fix and the feature in a way that it is\n> hard to understand. How about this?\n> \n> \tAllow skipping of WAL for new tables and indexes if wal_level is\n> \t'minimal' (Kyotaro Horiguchi)\n> \n> \tRelations larger than wal_skip_threshold will have their files\n> \tfsync'ed rather than writing their WAL records. Previously this\n> \twas done only for COPY operations, but the implementation had a\n> \tbug that could cause data loss during crash recovery.\n\nI see it. It is giving weight on improvement. Looks good the overall\nstructure of the description above. However, wal-skipping is always\ndone regardless of table size. wal_skip_threshold is an optimization\nto choose which to use fsync or FPI records (that is, not WAL records\nin the common sense) at commit for speed.\n\nSo how about the following?\n\nAll kinds of bulk-insertion are not WAL-logged then fsync'ed at\ncommit. Using FPI WAL records instead of fsync for relations smaller\nthan wal_skip_threshold. 
Previously this was done only for COPY\noperations and always using fsync, but the implementation had a bug\nthat could cause data loss during crash recovery.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 13 May 2020 11:56:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\nHello Bruce,\n\n>> * e1ff780485\n>\n> I was told in this email thread to not include that one.\n\nOk.\n\n>> * 34a0a81bfb\n>\n> We already have:\n>\n> \tReformat tables containing function information for better\n> \tclarity (Tom Lane)\n>\n> so it seems it is covered as part of this.\n\nAFAICR this one is not by the same author, and although the point was about\nbetter clarity, it was not about formatting but rather about restructuring\ntext vs binary string function documentations. Then Tom reformatted the\nresult.\n\n>> * e829337d42\n>\n> Uh, this is a doc link formatting addition. I think this falls into the\n> error message logic, where it is nice when people want it, but they\n> don't need to know about it ahead of time.\n\nHmmm. ISTM that this is not really about \"error message logic\", it is about\nnavigating to libpq functions when one is referenced in the description of\nanother to check what it does, which I had to do a lot while developing some\nperformance testing code for a project.\n\n>> * \"Document color support (Peter Eisentraut)\"\n>> THIS WAS NOT DOCUMENTED BEFORE?\n>>\n>> Not as such, AFAICR it was vaguely hinted about in the documentation of\n>> command that would use it, but not even all of them. Now there is a new\n>> specific section.\n>\n> Again, this is the first hash you gave.\n\nPossibly, but as the \"THIS WAS NOT DOCUMENTED BEFORE?\" question seemed to\nstill be in the release notes, I gathered that the information had not\nreached its destination, hence the possible repetition. 
But maybe the \nissue is that this answer is not satisfactory. Sorry for the \ninconvenience.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 13 May 2020 07:18:38 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 13, 2020 at 11:56:33AM +0900, Kyotaro Horiguchi wrote:\n> > > \n> > > \tAllow skipping of WAL for new tables and indexes if wal_level is\n> > > \t'minimal' (Kyotaro Horiguchi)\n> > > \n> > > \tRelations larger than wal_skip_threshold will have their files\n> > > \tfsync'ed rather than writing their WAL records. Previously this\n> > > \twas done only for COPY operations, but the implementation had a\n> > > \tbug that could cause data loss during crash recovery.\n> \n> I see it. It is giving weight on improvement. Looks good the overall\n> structure of the description above. However, wal-skipping is always\n> done regardless of table size. wal_skip_threshold is an optimization\n> to choose which to use fsync or FPI records (that is, not WAL records\n> in the common sense) at commit for speed.\n\nWell, as far as users are concerned, everything written to WAL is a WAL\nrecord.\n\n> So how about the following?\n> \n> All kinds of bulk-insertion are not WAL-logged then fsync'ed at\n> commit. Using FPI WAL records instead of fsync for relations smaller\n> than wal_skip_threshold. Previously this was done only for COPY\n> operations and always using fsync, but the implementation had a bug\n> that could cause data loss during crash recovery.\n\nThat is too much detail for the release notes. We already will link to\nthe docs. Why put it here?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 13 May 2020 11:15:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, May 13, 2020 at 07:18:38AM +0200, Fabien COELHO wrote:\n> \n> Hello Bruce,\n> \n> > > * e1ff780485\n> > \n> > I was told in this email thread to not include that one.\n> \n> Ok.\n> \n> > > * 34a0a81bfb\n> > \n> > We already have:\n> > \n> > \tReformat tables containing function information for better\n> > \tclarity (Tom Lane)\n> > \n> > so it seems it is covered as part of this.\n> \n> AFAICR this one is not by the same author, and although the point was about\n> better clarity, it was not about formating but rather about restructuring\n> text vs binary string function documentations. Then Tom reformatted the\n> result.\n\nWell, we were not even clear we should document changes in the functions\nsection, so going into details of all the changes seems unwise.\n\n> > > * e829337d42\n> > \n> > Uh, this is a doc link formatting addition. I think this falls into the\n> > error message logic, where it is nice when people want it, but they\n> > don't need to know about it ahead of time.\n> \n> Hmmm. ISTM that this is not really about \"error message logic\", it is about\n> navigating to libpq functions when one is reference in the description of\n> another to check what it does, which I had to do a lot while developing some\n> performance testing code for a project.\n\nI don't see it.\n\n> > > * \"Document color support (Peter Eisentraut)\"\n> > > THIS WAS NOT DOCUMENTED BEFORE?\n> > > \n> > > Not as such, AFAICR it was vaguely hinted about in the documentation of\n> > > command that would use it, but not even all of them. 
Now there is a new\n> > > specific section.\n> > \n> > Again, this is the first hash you gave.\n> \n> Possibly, but as the \"THIS WAS NOT DOCUMENTED BEFORE?\" question seemed to\n> still be in the release notes, I gathered that the information had not\n> reached its destination, hence the possible repetition. But maybe the issue\n> is that this answer is not satisfactory. Sorry for the inconvenience.\n\nI removed it already based on feedback from someone else.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 13 May 2020 11:22:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "At Wed, 13 May 2020 11:15:18 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Wed, May 13, 2020 at 11:56:33AM +0900, Kyotaro Horiguchi wrote:\n> > > \n> > > \tAllow skipping of WAL for new tables and indexes if wal_level is\n> > > \t'minimal' (Kyotaro Horiguchi)\n> > > \n> > > \tRelations larger than wal_skip_threshold will have their files\n> > > \tfsync'ed rather than writing their WAL records. Previously this\n> > > \twas done only for COPY operations, but the implementation had a\n> > > \tbug that could cause data loss during crash recovery.\n> > \n> > I see it. It is giving weight on improvement. Looks good the overall\n> > structure of the description above. However, wal-skipping is always\n> > done regardless of table size. 
wal_skip_threshold is an optimization\n> > to choose which to use fsync or FPI records (that is, not WAL records\n> > in the common sense) at commit for speed.\n> \n> Well, as far as users are concerned, everything written to WAL is a WAL\n> record.\n\nI think that the significant point here is not that persistence is\nensured by fsync or by WAL, but whether WAL records are written\nfor every insertion. The commit-time WAL write is just an alternative to\nfsync, which is faster than fsync'ing separate files for smaller\nfiles.\n\n> > So how about the following?\n> > \n> > All kinds of bulk-insertion are not WAL-logged then fsync'ed at\n> > commit. Using FPI WAL records instead of fsync for relations smaller\n> > than wal_skip_threshold. Previously this was done only for COPY\n> > operations and always using fsync, but the implementation had a bug\n> > that could cause data loss during crash recovery.\n> \n> That is too much detail for the release notes. We already will link to\n> the docs. Why put it here?\n\nIt is just a more accurate (not a more detailed) version of the\npreviously proposed description. If we simplify that, I would choose to\nremove the explanation of wal_skip_threshold.\n\nHow about this?\n\nWAL-logging is now skipped while all kinds of bulk-insertion, then\nrelations are sync'ed to disk at commit. 
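To make the mechanism being described concrete, a minimal sketch (assumes a cluster with wal_level = minimal and max_wal_senders = 0; the table and the data file path are hypothetical):

```sql
BEGIN;
CREATE TABLE t (x int);
-- Changes to a relation created in the same transaction skip WAL:
COPY t FROM '/tmp/data';       -- no per-row WAL is generated
INSERT INTO t VALUES (1);      -- also WAL-skipped for this new relfilenode
COMMIT;  -- the relation file is fsync'ed here, or written as FPI
         -- records instead if it is smaller than wal_skip_threshold
```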
Previously this was done\nonly for COPY operations, but the implementation had a bug that could\ncause data loss during crash recovery.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 May 2020 09:51:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 14, 2020 at 09:51:41AM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 13 May 2020 11:15:18 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > On Wed, May 13, 2020 at 11:56:33AM +0900, Kyotaro Horiguchi wrote:\n> It is just an more accurate (not an detailed) version of the\n> previously proposed description. If we simplify that, I choose to\n> remove explanation on wal_skip_threshold.\n> \n> How about this?\n> \n> WAL-logging is now skipped while all kinds of bulk-insertion, then\n> relations are sync'ed to disk at commit. Previously this was done\n> only for COPY operations, but the implementation had a bug that could\n> cause data loss during crash recovery.\n\nOK, I went with this text, stating WAL \"generation\" is skipped:\n\n\tAllow skipping of WAL for full table writes if wal_level is 'minimal'\n\t(Kyotaro Horiguchi)\n\t\n\tRelations larger than wal_skip_threshold will have their files\n\tfsync'ed rather than generating WAL. Previously this was done\n\tonly for COPY operations, but the implementation had a bug that\n\tcould cause data loss during crash recovery.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 13 May 2020 22:40:52 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\nHello Bruce,\n\n>>>> * 34a0a81bfb\n>>>\n>>> We already have:\n>>>\n>>> \tReformat tables containing function information for better\n>>> \tclarity (Tom Lane)\n>>>\n>>> so it seems it is covered as part of this.\n>>\n>> AFAICR this one is not by the same author, and although the point was about\n>> better clarity, it was not about formatting but rather about restructuring\n>> text vs binary string function documentations. Then Tom reformatted the\n>> result.\n>\n> Well, we were not even clear we should document changes in the functions\n> section, so going into details of all the changes seems unwise.\n\nThe restructuring was a significant change, and ISTM that another function \nof the release note is also to implicitly thank contributors (their name \nis appended, which does not bring any useful information about the feature \nfrom a release note perspective) hence my suggestion to include this one,\nthe author of which is not Tom Lane.\n\n>>>> * e829337d42\n>>>\n>>> Uh, this is a doc link formatting addition. I think this falls into the\n>>> error message logic, where it is nice when people want it, but they\n>>> don't need to know about it ahead of time.\n>>\n>> [...]\n>\n> I don't see it.\n\nWhile reading again the sequence, ISTM that I did not understand your \nfirst answer, so my answer was kind-of off topic, sorry. This is indeed \n\"link formatting addition\", which helps make the libpq doc more usable.\n\nProbably you do not need to know about it in advance, but I do not think \nthat it is a good reason not to include it: with the same argument, a \nperformance improvement would not need to be advertised, you'll see it when \nyou need it. 
The same holds for all non-functional improvements, and there \nare many which are listed.\n\n>> Possibly, but as the \"THIS WAS NOT DOCUMENTED BEFORE?\" question seemed to\n>> still be in the release notes, I gathered that the information had not\n>> reached its destination, hence the possible repetition. But maybe the issue\n>> is that this answer is not satisfactory. Sorry for the inconvenience.\n>\n> I removed it already based on feedback from someone else.\n\nGood. I looked at the online version which is off the latest commits by a \nfew hours.\n\nI'd consider moving \"Upgrade to use DocBook 4.5 (Peter Eisentraut)\" to the \ndoc section, maybe.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 14 May 2020 07:23:05 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "At Wed, 13 May 2020 22:40:52 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Thu, May 14, 2020 at 09:51:41AM +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 13 May 2020 11:15:18 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > On Wed, May 13, 2020 at 11:56:33AM +0900, Kyotaro Horiguchi wrote:\n> > It is just an more accurate (not an detailed) version of the\n> > previously proposed description. If we simplify that, I choose to\n> > remove explanation on wal_skip_threshold.\n> > \n> > How about this?\n> > \n> > WAL-logging is now skipped while all kinds of bulk-insertion, then\n> > relations are sync'ed to disk at commit. Previously this was done\n> > only for COPY operations, but the implementation had a bug that could\n> > cause data loss during crash recovery.\n> \n> OK, I went with this text, stating WAL \"generation\" is skipped:\n> \n> \tAllow skipping of WAL for full table writes if wal_level is 'minimal'\n> \t(Kyotaro Horiguchi)\n> \t\n> \tRelations larger than wal_skip_threshold will have their files\n> \tfsync'ed rather than generating WAL. 
Previously this was done\n> \tonly for COPY operations, but the implementation had a bug that\n> \tcould cause data loss during crash recovery.\n\nAlthough I can't help feeling it out-of-point a bit, it is right in\nappearance. So, I don't object to it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 May 2020 15:23:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 14, 2020 at 07:23:05AM +0200, Fabien COELHO wrote:\n> \n> Hello Bruce,\n> \n> > > > > * 34a0a81bfb\n> > > > \n> > > > We already have:\n> > > > \n> > > > \tReformat tables containing function information for better\n> > > > \tclarity (Tom Lane)\n> > > > \n> > > > so it seems it is covered as part of this.\n> > > \n> > > AFAICR this one is not by the same author, and although the point was about\n> > > better clarity, it was not about formatting but rather about restructuring\n> > > text vs binary string function documentations. Then Tom reformatted the\n> > > result.\n> > \n> > Well, we were not even clear we should document changes in the functions\n> > section, so going into details of all the changes seems unwise.\n> \n> The restructuring was a significant change, and ISTM that another function\n> of the release note is also to implicitly thank contributors (their name is\n> appended, which does not bring any useful information about the feature from\n> a release note perspective) hence my suggestion to include this one,\n> the author of which is not Tom Lane.\n\nWe list people's names next to items. We don't list items to list\npeople's names, as far as I know of the policy. If you want to change\nthat, you will need to start a new thread and get agreement.\n\n> > > > > * e829337d42\n> > > > \n> > > > Uh, this is a doc link formatting addition. 
I think this falls into the\n> > > > error message logic, where it is nice when people want it, but they\n> > > > don't need to know about it ahead of time.\n> > > \n> > > [...]\n> > \n> > I don't see it.\n> \n> While reading again the sequence, ISTM that I did not understand your first\n> answer, so my answer was kind-of off topic, sorry. This is indeed \"link\n> formatting addition\", which helps make the libpq doc more usable.\n\n> Probably you do not need to know about it in advance, but I do not think\n> that it is a good reason not to include it: with the same argument, a\n> performance improvement would not need to be advertised, you'll see it when\n> you need it. The same holds for all non-functional improvements, and there\n> are many which are listed.\n\nPerformance items are listed only if they will produce a visible change\nin performance, or enable new workloads that were too slow in the past.\n\n> > > Possibly, but as the \"THIS WAS NOT DOCUMENTED BEFORE?\" question seemed to\n> > > still be in the release notes, I gathered that the information had not\n> > > reached its destination, hence the possible repetition. But maybe the issue\n> > > is that this answer is not satisfactory. Sorry for the inconvenience.\n> > \n> > I removed it already based on feedback from someone else.\n> \n> Good. I looked at the online version which is off the latest commits by a\n> few hours.\n> \n> I'd consider moving \"Upgrade to use DocBook 4.5 (Peter Eisentraut)\" to the\n> doc section, maybe.\n\nAgreed, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 14 May 2020 11:50:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-05-12 02:41, Justin Pryzby wrote:\n> I'm not opposed to including it, but I think it's still true that the user\n> doesn't need to know in advance that the error message will be additionally\n> helpful in the event of corruption. If we were to include more \"error\" items,\n> we might also include these:\n> \n> 71a8a4f6e36547bb060dbcc961ea9b57420f7190 Add backtrace support for error reporting\n\nThis is actually a legitimate user-visible feature and should be listed \nsomewhere.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 May 2020 23:01:51 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 14, 2020 at 2:02 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-05-12 02:41, Justin Pryzby wrote:\n> > I'm not opposed to including it, but I think it's still true that the user\n> > doesn't need to know in advance that the error message will be additionally\n> > helpful in the event of corruption. If we were to include more \"error\" items,\n> > we might also include these:\n> >\n> > 71a8a4f6e36547bb060dbcc961ea9b57420f7190 Add backtrace support for error reporting\n>\n> This is actually a legitimate user-visible feature and should be listed\n> somewhere.\n\n+1 -- it's very handy. 
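The knob being referred to is the backtrace_functions server variable; a minimal sketch of exercising it (assumes a superuser session and a build with backtrace support; the target C function here is only illustrative):

```sql
-- backtrace_functions takes a comma-separated list of C function names;
-- errors raised inside those functions get a backtrace in the server log.
SET backtrace_functions = 'float4in';
SELECT 'not a number'::float4;  -- the resulting error is logged
                                -- together with a stack backtrace
```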
Plus it has user-facing knobs.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 14 May 2020 14:02:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, May 14, 2020 at 11:01:51PM +0200, Peter Eisentraut wrote:\n> On 2020-05-12 02:41, Justin Pryzby wrote:\n> > I'm not opposed to including it, but I think it's still true that the user\n> > doesn't need to know in advance that the error message will be additionally\n> > helpful in the event of corruption. If we were to include more \"error\" items,\n> > we might also include these:\n> > \n> > 71a8a4f6e36547bb060dbcc961ea9b57420f7190 Add backtrace support for error reporting\n> \n> This is actually a legitimate user-visible feature and should be listed\n> somewhere.\n\nOn Thu, May 14, 2020 at 02:02:52PM -0700, Peter Geoghegan wrote:\n> +1 -- it's very handy. Plus it has user-facing knobs.\n\nThat's already included:\n\n| Allow function call backtraces of errors to be logged (Peter Eisentraut, �lvaro Herrera)\n| Server variable backtrace_functions specifies which C functions should generate backtraces on error.\n\nI 1) I failed to double check my list; and, 2) intended for that to be\ninterpretted as items which could be moved to a separate \"error reporting\"\nsection of the release notes.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 14 May 2020 16:10:06 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\n\nOn 2020/05/05 12:16, Bruce Momjian wrote:\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. 
The community doc\n> build should happen in a few hours.\n\nMany thanks for working on this!\n\nWhen I did \"make html\", I got the following message.\n\n Link element has no content and no Endterm. Nothing to show in the link to sepgsql\n\n\"Allow <link linkend=\"sepgsql\"/> to control access to the\" in release-13.sgml\nseems to have caused this. Also I found it's converted into \"Allow ??? to\n control access to the\", i.e., ??? was used.\n\n- Allow <link linkend=\"sepgsql\"/> to control access to the\n+ Allow <link linkend=\"sepgsql\">sepgsql</link> to control access to the\n\nShouldn't we change that as the above?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 15 May 2020 15:55:19 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, May 15, 2020 at 03:55:19PM +0900, Fujii Masao wrote:\n> \n> \n> On 2020/05/05 12:16, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-13.html\n> > \n> > It still needs markup, word wrap, and indenting. The community doc\n> > build should happen in a few hours.\n> \n> Many thanks for working on this!\n> \n> When I did \"make html\", I got the following message.\n> \n> Link element has no content and no Endterm. Nothing to show in the link to sepgsql\n> \n> \"Allow <link linkend=\"sepgsql\"/> to control access to the\" in release-13.sgml\n> seems to have caused this. Also I found it's converted into \"Allow ??? to\n> control access to the\", i.e., ??? 
was used.\n> \n> - Allow <link linkend=\"sepgsql\"/> to control access to the\n> + Allow <link linkend=\"sepgsql\">sepgsql</link> to control access to the\n> \n> Shouldn't we change that as the above?\n\nActually, it should be:\n\n\t<xref linkend=\"sepgsql\"/>\n\nbecause we are using the text from the link. See\ndoc/src/sgml/README.links for details on xref links. Release notes\nupdated. Odd I got no warning for this on 'make check'.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 15 May 2020 08:29:28 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\n\nOn 2020/05/15 21:29, Bruce Momjian wrote:\n> On Fri, May 15, 2020 at 03:55:19PM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2020/05/05 12:16, Bruce Momjian wrote:\n>>> I have committed the first draft of the PG 13 release notes. You can\n>>> see them here:\n>>>\n>>> \thttps://momjian.us/pgsql_docs/release-13.html\n>>>\n>>> It still needs markup, word wrap, and indenting. The community doc\n>>> build should happen in a few hours.\n>>\n>> Many thanks for working on this!\n>>\n>> When I did \"make html\", I got the following message.\n>>\n>> Link element has no content and no Endterm. Nothing to show in the link to sepgsql\n>>\n>> \"Allow <link linkend=\"sepgsql\"/> to control access to the\" in release-13.sgml\n>> seems to have caused this. Also I found it's converted into \"Allow ??? to\n>> control access to the\", i.e., ??? 
was used.\n>>\n>> - Allow <link linkend=\"sepgsql\"/> to control access to the\n>> + Allow <link linkend=\"sepgsql\">sepgsql</link> to control access to the\n>>\n>> Shouldn't we change that as the above?\n> \n> Actually, it should be:\n> \n> \t<xref linkend=\"sepgsql\"/>\n> \n> because we are using the text from the link.\n\nYes, this works.\n\n> See\n> doc/src/sgml/README.links for details on xref links. Release notes\n> updated.\n\nThanks!\n\n> Odd I got no warning for this on 'make check'. \n\nI'm not sure why, but btw I got the message when I compiled the document on Mac.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 15 May 2020 21:54:55 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, May 15, 2020 at 09:54:55PM +0900, Fujii Masao wrote:\n> > Actually, it should be:\n> > \n> > \t<xref linkend=\"sepgsql\"/>\n> > \n> > because we are using the text from the link.\n> \n> Yes, this works.\n> \n> > See\n> > doc/src/sgml/README.links for details on xref links. Release notes\n> > updated.\n> \n> Thanks!\n> \n> > Odd I got no warning for this on 'make check'.\n> \n> I'm not sure why, but btw I got the message when I compiled the document on Mac.\n\nI don't think I looked at the HTML build output, only the check one, so\nthat might be the cause.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 15 May 2020 09:08:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "> On 5 May 2020, at 05:16, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n\nSpotted a typo we probably should fix: s/PostgresSQL/PostgreSQL/ =)\n\ncheers ./daniel", "msg_date": "Mon, 18 May 2020 11:18:51 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\nThanks, applied.\n\n---------------------------------------------------------------------------\n\nOn Mon, May 18, 2020 at 11:18:51AM +0200, Daniel Gustafsson wrote:\n> > On 5 May 2020, at 05:16, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > I have committed the first draft of the PG 13 release notes. You can\n> > see them here:\n> \n> Spotted a typo we probably should fix: s/PostgresSQL/PostgreSQL/ =)\n> \n> cheers ./daniel\n> \n\n> diff --git a/doc/src/sgml/release-13.sgml b/doc/src/sgml/release-13.sgml\n> index c39a6ad38e..7f864da162 100644\n> --- a/doc/src/sgml/release-13.sgml\n> +++ b/doc/src/sgml/release-13.sgml\n> @@ -215,7 +215,7 @@ Author: Tom Lane <tgl@sss.pgh.pa.us>\n> \n> <para>\n> Remove support for defining <link linkend=\"sql-createopclass\">operator\n> - classes</link> using pre-<productname>PostgresSQL</productname>\n> + classes</link> using pre-<productname>PostgreSQL</productname>\n> 8.0 syntax (Daniel Gustafsson)\n> </para>\n> </listitem>\n> @@ -228,7 +228,7 @@ Author: Tom Lane <tgl@sss.pgh.pa.us>\n> \n> <para>\n> Remove support for defining <link linkend=\"sql-altertable\">foreign key\n> - constraints</link> using pre-<productname>PostgresSQL</productname>\n> + constraints</link> using pre-<productname>PostgreSQL</productname>\n> 7.3 syntax (Daniel Gustafsson)\n> 
</para>\n> </listitem>\n> @@ -242,7 +242,7 @@ Author: Tom Lane <tgl@sss.pgh.pa.us>\n> <para>\n> Remove support for \"opaque\" <link\n> linkend=\"sql-createtype\">pseudo-types</link> used by\n> - pre-<productname>PostgresSQL</productname> 7.3 servers (Daniel\n> + pre-<productname>PostgreSQL</productname> 7.3 servers (Daniel\n> Gustafsson)\n> </para>\n> </listitem>\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 18 May 2020 10:20:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Spotted this in the release notes:\n\n <para>\n Add extension <application>bool_plperl</application> which transforms\n <acronym>SQL</acronym> booleans to/from PL/Perl booleans (Ivan\n Panchenko) WHERE IS THIS DOCUMENTED?\n </para>\n\nbool_plperl is documented in \"44.1. PL/Perl Functions and Arguments\", but not\nwith a separate section (which fwiw I don't disagree with). Linking there from\nthe release notes entry would require some rewriting to make it fit; I would\njust remove the placeholder question for this one.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 25 May 2020 10:54:03 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, May 25, 2020 at 10:54:03AM +0200, Daniel Gustafsson wrote:\n> Spotted this in the release notes:\n> \n> <para>\n> Add extension <application>bool_plperl</application> which transforms\n> <acronym>SQL</acronym> booleans to/from PL/Perl booleans (Ivan\n> Panchenko) WHERE IS THIS DOCUMENTED?\n> </para>\n> \n> bool_plperl is documented in \"44.1. PL/Perl Functions and Arguments\", but not\n> with a separate section (which fwiw I don't disagree with). 
Linking there from\n> the release notes entry would require some rewriting to make it fit; I would\n> just remove the placeholder question for this one.\n\nThanks, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 25 May 2020 21:54:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hi,\n\nI realized that PG 13 release note still has the following entry:\n\n<!--\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\n2020-03-20 [4e6209134] pg_dump: Add FOREIGN to ALTER statements, if appropriate\n-->\n\n <para>\n Add <literal>FOREIGN</literal> to <command>ALTER</command> statements,\n if appropriate (Luis Carril)\n </para>\n\n <para>\n WHAT IS THIS ABOUT?\n </para>\n </listitem>\n\n </itemizedlist>\n\nIIUC this entry is about that pg_dump adds FOREIGN word to ALTER TABLE\ncommand. 
Please find the attached patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 26 Jun 2020 17:24:16 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, Jun 26, 2020 at 05:24:16PM +0900, Masahiko Sawada wrote:\n> Hi,\n> \n> I realized that PG 13 release note still has the following entry:\n> \n> <!--\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> 2020-03-20 [4e6209134] pg_dump: Add FOREIGN to ALTER statements, if appropriate\n> -->\n> \n> <para>\n> Add <literal>FOREIGN</literal> to <command>ALTER</command> statements,\n> if appropriate (Luis Carril)\n> </para>\n> \n> <para>\n> WHAT IS THIS ABOUT?\n> </para>\n> </listitem>\n> \n> </itemizedlist>\n> \n> IIUC this entry is about that pg_dump adds FOREIGN word to ALTER TABLE\n> command. Please find the attached patch.\n\nOK, so if that is, what used to happen before? Did it still work\nwithout the FOREIGN keyword? 
If so, I am thinking we should just remove\nthis item.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 26 Jun 2020 12:55:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-Jun-26, Bruce Momjian wrote:\n\n> On Fri, Jun 26, 2020 at 05:24:16PM +0900, Masahiko Sawada wrote:\n\n> > Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > 2020-03-20 [4e6209134] pg_dump: Add FOREIGN to ALTER statements, if appropriate\n> > -->\n> > \n> > <para>\n> > Add <literal>FOREIGN</literal> to <command>ALTER</command> statements,\n> > if appropriate (Luis Carril)\n> > </para>\n\n> > IIUC this entry is about that pg_dump adds FOREIGN word to ALTER TABLE\n> > command. Please find the attached patch.\n> \n> OK, so if that is, what used to happen before? Did it still work\n> without the FOREIGN keyword? If so, I am thinking we should just remove\n> this item.\n\nI tend to agree, it's not a change significant enough to be documented\nin the relnotes, i think.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jun 2020 16:20:36 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Reading Luis Carril's other entry in the relnotes,\n\n Allow pg_dump --include-foreign-data to dump data from foreign servers (Luis Carril)\n\nIt seems to suggest that --include-foreign-data existed previously,\nwhich is not true. 
I would have worded it as \"Add --include-foreign-data\noption to pg_dump to allow dumping data from foreign servers\".\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jun 2020 16:23:26 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Fri, Jun 26, 2020 at 04:23:26PM -0400, Alvaro Herrera wrote:\n> Reading Luis Carril's other entry in the relnotes,\n> \n> Allow pg_dump --include-foreign-data to dump data from foreign servers (Luis Carril)\n> \n> It seems to suggest that --include-foreign-data existed previously,\n> which is not true. I would have worded it as \"Add --include-foreign-data\n> option to pg_dump to allow dumping data from foreign servers\".\n\nOK, pg_dump item about FOREIGN keyword removed from PG 13 release notes,\nand the above item clarified. Patch attached and applied to PG 13.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Fri, 26 Jun 2020 18:24:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hi Bruce,\n\nOn Fri, Jun 26, 2020 at 3:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Patch attached and applied to PG 13.\n\nI committed the hash_mem_multiplier GUC to Postgres 13 just now.\n\nThere should be a note about this in the Postgres 13 release notes,\nfor the usual reasons. More importantly, the \"Allow hash aggregation\nto use disk storage for large aggregation result sets\" feature should\nreference the new GUC directly. Users should be advised that the GUC\nmay be useful in cases where they upgrade and experience a performance\nregression linked to slower hash aggregation. 
Just including a\ndocumentation link for the GUC would be very helpful.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Jul 2020 15:34:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, Jul 29, 2020 at 03:34:22PM -0700, Peter Geoghegan wrote:\n> Hi Bruce,\n> \n> On Fri, Jun 26, 2020 at 3:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Patch attached and applied to PG 13.\n> \n> I committed the hash_mem_multiplier GUC to Postgres 13 just now.\n> \n> There should be a note about this in the Postgres 13 release notes,\n> for the usual reasons. More importantly, the \"Allow hash aggregation\n> to use disk storage for large aggregation result sets\" feature should\n> reference the new GUC directly. Users should be advised that the GUC\n> may be useful in cases where they upgrade and experience a performance\n> regression linked to slower hash aggregation. Just including a\n> documentation link for the GUC would be very helpful.\n\nI came up with the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Wed, 29 Jul 2020 21:30:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, Jul 29, 2020 at 6:30 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > There should be a note about this in the Postgres 13 release notes,\n> > for the usual reasons. More importantly, the \"Allow hash aggregation\n> > to use disk storage for large aggregation result sets\" feature should\n> > reference the new GUC directly. Users should be advised that the GUC\n> > may be useful in cases where they upgrade and experience a performance\n> > regression linked to slower hash aggregation. 
Just including a\n> > documentation link for the GUC would be very helpful.\n>\n> I came up with the attached patch.\n\nI was thinking something along like the following (after the existing\nsentence about avoiding hash aggs in the planner):\n\nIf you find that hash aggregation is slower than in previous major\nreleases of PostgreSQL, it may be useful to increase the value of\nhash_mem_multiplier. This allows hash aggregation to use more memory\nwithout affecting competing query operations that are generally less\nlikely to put any additional memory to good use.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Jul 2020 19:00:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, Jul 29, 2020 at 07:00:43PM -0700, Peter Geoghegan wrote:\n> On Wed, Jul 29, 2020 at 6:30 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > There should be a note about this in the Postgres 13 release notes,\n> > > for the usual reasons. More importantly, the \"Allow hash aggregation\n> > > to use disk storage for large aggregation result sets\" feature should\n> > > reference the new GUC directly. Users should be advised that the GUC\n> > > may be useful in cases where they upgrade and experience a performance\n> > > regression linked to slower hash aggregation. Just including a\n> > > documentation link for the GUC would be very helpful.\n> >\n> > I came up with the attached patch.\n> \n> I was thinking something along like the following (after the existing\n> sentence about avoiding hash aggs in the planner):\n> \n> If you find that hash aggregation is slower than in previous major\n> releases of PostgreSQL, it may be useful to increase the value of\n> hash_mem_multiplier. 
This allows hash aggregation to use more memory\n> without affecting competing query operations that are generally less\n> likely to put any additional memory to good use.\n\nWell, that seems to be repeating what is already in the docs for\nhash_mem_multiplier, which I try to avoid. One other direction is to\nput something in the incompatibilities section. Does that make sense?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 29 Jul 2020 22:48:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, Jul 29, 2020 at 7:48 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Well, that seems to be repeating what is already in the docs for\n> hash_mem_multiplier, which I try to avoid. One other direction is to\n> put something in the incompatibilities section. Does that make sense?\n\nI would prefer to put it next to the hash agg item itself. It's more\nlikely to be noticed there, and highlighting it a little seems\nwarranted.\n\nOTOH, this may not be a problem at all for many individual users.\nFraming it as a tip rather than a compatibility item gets that across.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Jul 2020 19:55:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wednesday, July 29, 2020, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Wed, Jul 29, 2020 at 6:30 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > There should be a note about this in the Postgres 13 release notes,\n> > > for the usual reasons. More importantly, the \"Allow hash aggregation\n> > > to use disk storage for large aggregation result sets\" feature should\n> > > reference the new GUC directly. 
Users should be advised that the GUC\n> > > may be useful in cases where they upgrade and experience a performance\n> > > regression linked to slower hash aggregation. Just including a\n> > > documentation link for the GUC would be very helpful.\n> >\n> > I came up with the attached patch.\n>\n> I was thinking something along like the following (after the existing\n> sentence about avoiding hash aggs in the planner):\n>\n> If you find that hash aggregation is slower than in previous major\n> releases of PostgreSQL, it may be useful to increase the value of\n> hash_mem_multiplier. This allows hash aggregation to use more memory\n> without affecting competing query operations that are generally less\n> likely to put any additional memory to good use.\n>\n>\nHow about adding wording for GROUP BY as well to cater to users who are\nmore comfortable thinking in terms of SQL statements instead of execution\nplans?\n\nDavid J.\n
", "msg_date": "Wed, 29 Jul 2020 20:43:24 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Wed, Jul 29, 2020 at 08:43:24PM -0700, David G. Johnston wrote:\n> On Wednesday, July 29, 2020, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Wed, Jul 29, 2020 at 6:30 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > There should be a note about this in the Postgres 13 release notes,\n> > > for the usual reasons. More importantly, the \"Allow hash aggregation\n> > > to use disk storage for large aggregation result sets\" feature should\n> > > reference the new GUC directly. Users should be advised that the GUC\n> > > may be useful in cases where they upgrade and experience a performance\n> > > regression linked to slower hash aggregation. Just including a\n> > > documentation link for the GUC would be very helpful.\n> >\n> > I came up with the attached patch.\n> \n> I was thinking something along like the following (after the existing\n> sentence about avoiding hash aggs in the planner):\n> \n> If you find that hash aggregation is slower than in previous major\n> releases of PostgreSQL, it may be useful to increase the value of\n> hash_mem_multiplier. 
This allows hash aggregation to use more memory\n> without affecting competing query operations that are generally less\n> likely to put any additional memory to good use.\n\nI came up with a more verbose documentation suggestion, attached.\n\n> How about adding wording for GROUP BY as well to cater to users who are more\n> comfortable thinking in terms of SQL statements instead of execution plans?\n\nUh, it is unclear exactly what SQL generates what node types, so I want\nto avoid this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Thu, 30 Jul 2020 13:32:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, Jul 30, 2020 at 10:32 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I came up with a more verbose documentation suggestion, attached.\n\nI'm okay with this.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 30 Jul 2020 10:45:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Thu, Jul 30, 2020 at 10:45 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Jul 30, 2020 at 10:32 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > I came up with a more verbose documentation suggestion, attached.\n>\n> I'm okay with this.\n\nAre you going to push this soon, Bruce?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:35:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, Aug 3, 2020 at 11:35:50AM -0700, Peter Geoghegan wrote:\n> On Thu, Jul 30, 2020 at 10:45 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Thu, Jul 30, 2020 at 10:32 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > I came up with a more verbose 
documentation suggestion, attached.\n> >\n> > I'm okay with this.\n> \n> Are you going to push this soon, Bruce?\n\nDone.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 3 Aug 2020 17:02:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "There's some obvious improvements for which I include a patch.\n\n>Add alternate version of jsonb_setI() with special NULL handling (Andrew Dunstan)\n>The new function, jsonb_set_lax(), allows null new values to either set the specified key to JSON null, delete the key, raise exception, or ignore the operation. \njsonb_set()\nraise *an* exception\nnew null values?\n\n>IS 'return_target' CLEAR?\nI haven't used these before, but following the examples, it seems to return the\n\"target\" argument, unchanged.\n\n| Add function min_scale() that returns the number of digits to the right the decimal point that is required to represent the numeric value with full precision (Pavel Stehule)\nright *of*\nthat *are* required ?\n\n|The old function names were kept for backward compatibility. DO WE HAVE NEW NAMES?\n\n=> I think the docs are clear:\n> In releases of PostgreSQL before 13 there was no xid8 type, so variants of these functions were provided that used bigint to represent a 64-bit XID, with a correspondingly distinct snapshot data type txid_snapshot. These older functions have txid in their names. They are still supported for backward compatibility, but may be removed from a future release. See Table 9.76....\n\n> This improves performance for queries that access many object. 
The internal List API has also been improved.\nmany objects*\n\n|Allow skipping of WAL for full table writes if wal_level is minimal (Kyotaro Horiguchi)\nAllow WAL writes to be skipped...\nAnd: this is not related to full_page_writes.\n\n| Enable Unix-domain sockets support on Windows (Peter Eisentraut)\nEnable support for ..\n\n| Improve the performance when replaying DROP DATABASE commands when many tablespaces are in use (Fujii Masao)\ns/the//\n\n|Allow a sample of statements to be logged (Adrien Nayrat)\nAllow logging a sample of statements\n\n|Allow WAL receivers use a temporary replication slot if a permanent one is not specified (Peter Eisentraut, Sergei Kornilov)\n*to* use\n\n| Add leader_pid to pg_stat_activity to report parallel worker ownership (Julien Rouhaud)\ns/ownership/leader/ ?\n\n|Allow WAL recovery to continue even if invalid pages are referenced (Fujii Masao)\nremove \"WAL\" or say:\n>Allow recovery to continue even if invalid pages are referenced by WAL (Fujii Masao)\n\n\n\n\n\n\n\n\nA few things I have't fixed in my patch:\n\n|Previously, this value was adjusted before effecting the number of concurrent requests. This value is now used directly. Conversion of old values to new ones can be done using:\n|SELECT round(sum(OLD / n::float)) FROM generate_series(1, OLD) s(n);\n\nI think the round() should be aliased, \"AS new\".\nI think \"before effecting\" is confusing, maybe say:\n| Previously, the effective value was computed internally from the user-supplied parameter...\n\n|Allow partitioned tables to be logically replicated via publications (Amit Langote)\n|Previously, partitions had to be replicated individually. Now partitioned tables can be published explicitly causing all partitions to be automatically published. Addition/removal of partitions from partitioned tables are automatically added/removed from publications. 
The CREATE PUBLICATION option publish_via_partition_root controls whether changes to partitions are published as their own or their ancestor's.\n\n=> \"causing any future partitions to be automatically published\".\n\n\"addition/removal .. are automatically\" isn't right\n\n|Implement incremental sorting (James Coleman, Alexander Korotkov, Tomas Vondra)\n|If a result is already sorted by several leading keys, this allows for batch sorting of additional trailing keys because the previous keys are already equal. This is controlled by enable_incremental_sort.\n\ns/several/one or more/\nremove \"additional\" ?\nremove \"batch\" ?\nmaybe \"of ONLY trailing keys\"\n\n|Allow inserts to trigger autovacuum activity (Laurenz Albe, Darafei Praliaskouski)\n|This new behavior reduces the work necessary when the table needs to be frozen and allows pages to be set as all-visible. All-visible pages allow index-only scans to access fewer heap rows.\n\nThere's a lot of \"allow\" here, but it sounds like relaxing a restriction when\nactually this is a new functionality. Maybe:\n| Allow autovacuum to be triggered by INSERTs.\n| ..and allows autovacuum to set pages as all-visible.\n\n\n\n\n\nI already mentioned a couple things back in May that still stand out:\n\n|Add jsonpath .datetime() method (Nikita Glukhov, Teodor Sigaev, Oleg Bartunov, Alexander Korotkov)\n|This allows json values to be converted to timestamps, which can then be processed in jsonpath expressions. This also adds jsonpath functions that support time zone-aware output.\ntimezone-aware or time-zone-aware, if you must.\n\n> Allow vacuum commands run by vacuumdb to operate in parallel mode (Masahiko Sawada)\n=> I think this is still going to be lost/misunderstood/confuse some people.\nvacuumdb already supports -j. This allows it to run vacuum(parallel N). 
So\nmaybe say \"...to process indexes in parallel\".\n\n-- \nJustin", "msg_date": "Mon, 7 Sep 2020 08:40:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, May 12, 2020 at 10:28 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Tue, May 12, 2020 at 8:51 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > On Fri, May 8, 2020 at 12:07:09PM +0900, Amit Langote wrote:\n> > > I have attached a patch with my suggestions above.\n> >\n> > OK, I slightly modified the wording of your first change, patch\n> > attached.\n>\n> Thanks. I checked that what you committed looks fine.\n\nSorry about not having reported this earlier, but I had noticed that\nthe wording of the partitioned tables logical replication item isn't\ncorrect grammatically, which I noticed again while going through the\nwebpage. Attached fixes it as follows:\n\n- to be automatically published. Addition/removal of partitions from\n- partitioned tables are automatically added/removed from publications.\n+ to be automatically published. Adding/removing partitions from\n+ a partitioned table automatically adds/removes them from publications.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 9 Sep 2020 18:46:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Hi,\n\nOn 5/4/20 11:16 PM, Bruce Momjian wrote:\n> I have committed the first draft of the PG 13 release notes. You can\n> see them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-13.html\n> \n> It still needs markup, word wrap, and indenting. 
The community doc\n> build should happen in a few hours.\n\nThank you again for compiling and maintaining the release notes through\nanother major release cycle, I know it's no small undertaking!\n\nAttached is a proposal for the \"major enhancements\" section. I borrowed\nfrom the press release[1] but tried to stay true to the release notes\nformat and listed out the enhancements that way.\n\nOpen to suggestion, formatting changes, etc.\n\nThanks!\n\nJonathan\n\n[1]\nhttps://www.postgresql.org/message-id/3bd579f8-438a-ed1a-ee20-738292099aae%40postgresql.org", "msg_date": "Wed, 9 Sep 2020 16:57:11 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Attached is a proposal for the \"major enhancements\" section. I borrowed\n> from the press release[1] but tried to stay true to the release notes\n> format and listed out the enhancements that way.\n\nPushed with some very minor wording tweaks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Sep 2020 13:14:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 9/10/20 1:14 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> Attached is a proposal for the \"major enhancements\" section. I borrowed\n>> from the press release[1] but tried to stay true to the release notes\n>> format and listed out the enhancements that way.\n> \n> Pushed with some very minor wording tweaks.\n\nThanks! The tweaks were minor enough that it took a few readthroughs to\ncatch them.\n\nOne thing that did not make it through was this:\n\n- <para>2020-XX-XX, CURRENT AS OF 2020-08-09</para>\n+ <para>2020-09-24, CURRENT AS OF 2020-09-09</para>\n\nIs the plan to update that at a later date? 
Understandable if so, but\nwanted to check.\n\nThanks,\n\nJonathan", "msg_date": "Thu, 10 Sep 2020 14:33:25 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> One thing that did not make it through was this:\n\n> - <para>2020-XX-XX, CURRENT AS OF 2020-08-09</para>\n> + <para>2020-09-24, CURRENT AS OF 2020-09-09</para>\n\nYeah, that's a placeholder to recall how far back to look for additional\nchanges to the relnotes, so unless you already read the git history that\nfar back and concluded nothing needed documenting, that was premature.\n\n(I've just about finished making those updates and making an editorial\npass over the notes, so I will change it in a little bit.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Sep 2020 16:31:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Mon, Sep 07, 2020 at 08:40:26AM -0500, Justin Pryzby wrote:\n\nRebasing onto 3965de54e718600a4703233936e56a3202caf73f, I'm left with:\n\ndiff --git a/doc/src/sgml/release-13.sgml b/doc/src/sgml/release-13.sgml\nindex 8fffc6fe0a..69d143e10c 100644\n--- a/doc/src/sgml/release-13.sgml\n+++ b/doc/src/sgml/release-13.sgml\n@@ -131,7 +131,7 @@ Author: Thomas Munro <tmunro@postgresql.org>\n </para>\n \n <programlisting>\n-SELECT round(sum(<replaceable>OLDVALUE</replaceable> / n::float)) FROM generate_series(1, <replaceable>OLDVALUE</replaceable>) s(n);\n+SELECT round(sum(<replaceable>OLDVALUE</replaceable> / n::float)) AS newvalue FROM generate_series(1, <replaceable>OLDVALUE</replaceable>) s(n);\n </programlisting>\n </listitem>\n \n@@ -776,8 +776,8 @@ Author: Noah Misch <noah@leadboat.com>\n -->\n \n <para>\n- Allow skipping of <acronym>WAL</acronym> for <link\n- linkend=\"guc-full-page-writes\">full table 
writes</link> if <xref\n+ Allow <acronym>WAL</acronym> writes to be skipped during a transaction\n+ which creates or rewrites a relation if <xref\n linkend=\"guc-wal-level\"/> is <literal>minimal</literal> (Kyotaro\n Horiguchi)\n </para>\n@@ -1007,7 +1007,7 @@ Author: Michael Paquier <michael@paquier.xyz>\n \n <para>\n Add <structfield>leader_pid</structfield> to <xref\n- linkend=\"pg-stat-activity-view\"/> to report parallel worker ownership\n+ linkend=\"pg-stat-activity-view\"/> to report parallel worker's leader process\n (Julien Rouhaud)\n </para>\n </listitem>\n@@ -1262,8 +1262,8 @@ Author: Peter Eisentraut <peter@eisentraut.org>\n -->\n \n <para>\n- Enable <link linkend=\"client-authentication\">Unix-domain sockets</link>\n- support on Windows (Peter Eisentraut)\n+ Enable support for <link linkend=\"client-authentication\">Unix-domain sockets</link>\n+ on Windows (Peter Eisentraut)\n </para>\n </listitem>\n \n@@ -1391,8 +1391,8 @@ Author: Fujii Masao <fujii@postgresql.org>\n -->\n \n <para>\n- Allow <acronym>WAL</acronym> recovery to continue even if invalid\n- pages are referenced (Fujii Masao)\n+ Allow recovery to continue even if invalid\n+ pages are referenced by <acronym>WAL</acronym> (Fujii Masao)\n </para>\n \n <para>\n\n\n", "msg_date": "Thu, 10 Sep 2020 17:27:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Rebasing onto 3965de54e718600a4703233936e56a3202caf73f, I'm left with:\n\nSorry, I hadn't seen that you submitted more updates. I pushed\nthese with minor additional wordsmithing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Sep 2020 18:44:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 2020-09-09 22:57, Jonathan S. 
Katz wrote:\n> + <listitem>\n> + <para>\n> + Parallelized vacuuming of B-tree indexes\n> + </para>\n> + </listitem>\n\nI don't think B-tree indexes are relevant here. AFAICT, this feature \napplies to all indexes.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 15 Sep 2020 06:57:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, 15 Sep 2020 at 13:56, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-09-09 22:57, Jonathan S. Katz wrote:\n> > + <listitem>\n> > + <para>\n> > + Parallelized vacuuming of B-tree indexes\n> > + </para>\n> > + </listitem>\n>\n> I don't think B-tree indexes are relevant here. AFAICT, this feature\n> applies to all indexes.\n>\n\nYes, parallel vacuum applies to all types of indexes provided by\nPostgreSQL binary, and other types of indexes also can use it.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 15 Sep 2020 18:22:26 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 9/15/20 5:22 AM, Masahiko Sawada wrote:\n> On Tue, 15 Sep 2020 at 13:56, Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> On 2020-09-09 22:57, Jonathan S. Katz wrote:\n>>> + <listitem>\n>>> + <para>\n>>> + Parallelized vacuuming of B-tree indexes\n>>> + </para>\n>>> + </listitem>\n>>\n>> I don't think B-tree indexes are relevant here. AFAICT, this feature\n>> applies to all indexes.\n>>\n> \n> Yes, parallel vacuum applies to all types of indexes provided by\n> PostgreSQL binary, and other types of indexes also can use it.\n\nI'm not sure where I got B-tree from. 
I've attached a correction.\n\nThanks,\n\nJonathan", "msg_date": "Tue, 15 Sep 2020 09:49:02 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/15/20 5:22 AM, Masahiko Sawada wrote:\n>> On Tue, 15 Sep 2020 at 13:56, Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>> I don't think B-tree indexes are relevant here. AFAICT, this feature\n>>> applies to all indexes.\n\n> I'm not sure where I got B-tree from. I've attached a correction.\n\nRight, pushed. I clarified the main entry for this a tad, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 10:59:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 9/15/20 9:49 AM, Jonathan S. Katz wrote:\n> On 9/15/20 5:22 AM, Masahiko Sawada wrote:\n>> On Tue, 15 Sep 2020 at 13:56, Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>>\n>>> On 2020-09-09 22:57, Jonathan S. Katz wrote:\n>>>> + <listitem>\n>>>> + <para>\n>>>> + Parallelized vacuuming of B-tree indexes\n>>>> + </para>\n>>>> + </listitem>\n>>>\n>>> I don't think B-tree indexes are relevant here. AFAICT, this feature\n>>> applies to all indexes.\n>>>\n>>\n>> Yes, parallel vacuum applies to all types of indexes provided by\n>> PostgreSQL binary, and other types of indexes also can use it.\n> \n> I'm not sure where I got B-tree from. I've attached a correction.\n\nOn a different note, I became aware of this[1] and noticed that dropping\n\"CREATE EXTENSION ... FROM\" was not listed in the incompatibilities\nsection, so proposing the attached. 
I have no strong opinions on the\nfinal wording, mainly wanted to get it listed.\n\nThanks,\n\nJonathan\n\n[1] https://trac.osgeo.org/postgis/ticket/4753", "msg_date": "Tue, 15 Sep 2020 11:00:50 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On a different note, I became aware of this[1] and noticed that dropping\n> \"CREATE EXTENSION ... FROM\" was not listed in the incompatibilities\n> section, so proposing the attached. I have no strong opinions on the\n> final wording, mainly wanted to get it listed.\n\nIt is listed already in the \"Additional Modules\" section (about line 2940\nin release-13.sgml as of right now). I don't know Bruce's exact reasoning\nfor not including it in the \"Incompatibilities\" section, but I tend to\nagree that it shouldn't be significant to any real-world user. I think\nthat the postgis testing issue you reference is just an ancient test\ncase that they should drop as no longer relevant.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 11:45:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 9/15/20 11:45 AM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On a different note, I became aware of this[1] and noticed that dropping\n>> \"CREATE EXTENSION ... FROM\" was not listed in the incompatibilities\n>> section, so proposing the attached. I have no strong opinions on the\n>> final wording, mainly wanted to get it listed.\n> \n> It is listed already in the \"Additional Modules\" section (about line 2940\n> in release-13.sgml as of right now).\n\n...sort of. 
It talks about the feature, but does not talk about the\nsyntax removal, which is what I was originally searching for in the\nrelease notes.\n\n> I don't know Bruce's exact reasoning\n> for not including it in the \"Incompatibilities\" section, but I tend to\n> agree that it shouldn't be significant to any real-world user. \n\nI do tend to agree with this intuitively but don't have any data to back it\nup either way. That said, we did modify the command and it would be good\nto at least mention the fact we dropped \"FROM\" somewhere in the release\nnotes. It provides a good reference in case someone reports an \"issue\"\nin the future stemming from dropping the \"FROM\" keyword: we can say \"oh\nit changed in PG13, see XYZ.\"\n\n(Speaking from having used the release notes to perform similar\ntroubleshooting).\n\n> I think\n> that the postgis testing issue you reference is just an ancient test\n> case that they should drop as no longer relevant.\n\nSure, and AIUI they're going to do that, mostly that was a reference\npoint to the changed syntax.\n\nJonathan
How about\n(not bothering with markup yet)\n\n Remove support for upgrading unpackaged (pre-9.1) extensions (Tom Lane)\n+\n+The FROM option of CREATE EXTENSION is no longer supported.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 12:05:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 9/15/20 12:05 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 9/15/20 11:45 AM, Tom Lane wrote:\n>>> It is listed already in the \"Additional Modules\" section (about line 2940\n>>> in release-13.sgml as of right now).\n> \n>> ...sort of. It talks about the feature, but does not talk about the\n>> syntax removal, which is what I was originally searching for in the\n>> release notes.\n> \n> Ah. OK, we can certainly extend it to mention that. How about\n> (not bothering with markup yet)\n> \n> Remove support for upgrading unpackaged (pre-9.1) extensions (Tom Lane)\n> +\n> +The FROM option of CREATE EXTENSION is no longer supported.\n\n+1.\n\nWith that in place, I'm more ambivalent to whether or not it's mentioned\nin the incompatibilities section as well, though would lean towards\nhaving a mention of it there as it technically is one. But I don't feel\ntoo strongly about it.\n\nJonathan", "msg_date": "Tue, 15 Sep 2020 12:12:16 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/15/20 12:05 PM, Tom Lane wrote:\n>> Ah. OK, we can certainly extend it to mention that. 
How about\n>> (not bothering with markup yet)\n>> \n>> Remove support for upgrading unpackaged (pre-9.1) extensions (Tom Lane)\n>> +\n>> +The FROM option of CREATE EXTENSION is no longer supported.\n\n> +1.\n\n> With that in place, I'm more ambivalent to whether or not it's mentioned\n> in the incompatibilities section as well, though would lean towards\n> having a mention of it there as it technically is one. But I don't feel\n> too strongly about it.\n\nAfter thinking a bit, I'm inclined to agree that we should move it\nto \"Incompatibilities\". It is a core server change (third-party\nextensions don't have a choice to opt out, as per postgis' issue),\nand even if it's trivial, we have some even-more-trivial issues\nlisted there, like command tag changes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 13:05:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 9/15/20 1:05 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 9/15/20 12:05 PM, Tom Lane wrote:\n>>> Ah. OK, we can certainly extend it to mention that. How about\n>>> (not bothering with markup yet)\n>>>\n>>> Remove support for upgrading unpackaged (pre-9.1) extensions (Tom Lane)\n>>> +\n>>> +The FROM option of CREATE EXTENSION is no longer supported.\n> \n>> +1.\n> \n>> With that in place, I'm more ambivalent to whether or not it's mentioned\n>> in the incompatibilities section as well, though would lean towards\n>> having a mention of it there as it technically is one. But I don't feel\n>> too strongly about it.\n> \n> After thinking a bit, I'm inclined to agree that we should move it\n> to \"Incompatibilities\". 
It is a core server change (third-party\n> extensions don't have a choice to opt out, as per postgis' issue),\n> and even if it's trivial, we have some even-more-trivial issues\n> listed there, like command tag changes.\n\nHow about this?\n\nJonathan", "msg_date": "Tue, 15 Sep 2020 14:05:24 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/15/20 1:05 PM, Tom Lane wrote:\n>> After thinking a bit, I'm inclined to agree that we should move it\n>> to \"Incompatibilities\". It is a core server change (third-party\n>> extensions don't have a choice to opt out, as per postgis' issue),\n>> and even if it's trivial, we have some even-more-trivial issues\n>> listed there, like command tag changes.\n\n> How about this?\n\nThe other incompatibilities are only listed once, if they're minor.\nI was just about to commit the attached.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 15 Sep 2020 14:11:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 9/15/20 2:11 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 9/15/20 1:05 PM, Tom Lane wrote:\n>>> After thinking a bit, I'm inclined to agree that we should move it\n>>> to \"Incompatibilities\". It is a core server change (third-party\n>>> extensions don't have a choice to opt out, as per postgis' issue),\n>>> and even if it's trivial, we have some even-more-trivial issues\n>>> listed there, like command tag changes.\n> \n>> How about this?\n> \n> The other incompatibilities are only listed once, if they're minor.\n> I was just about to commit the attached.\n\nEven better. +1\n\nJonathan", "msg_date": "Tue, 15 Sep 2020 14:25:27 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/15/20 2:11 PM, Tom Lane wrote:\n>> The other incompatibilities are only listed once, if they're minor.\n>> I was just about to commit the attached.\n\n> Even better. +1\n\nPushed, thanks for looking.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 14:30:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On 9/15/20 2:30 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 9/15/20 2:11 PM, Tom Lane wrote:\n>>> The other incompatibilities are only listed once, if they're minor.\n>>> I was just about to commit the attached.\n> \n>> Even better. +1\n> \n> Pushed, thanks for looking.\n\nThanks for modifying...though I have a gripe about it being labeled a\ngripe[1] ;) Though it gave me a good laugh...\n\nJonathan\n\n[1]\nhttps://git.postgresql.org/pg/commitdiff/d42c6176446440b185fcb95c214b7e40d5758b60", "msg_date": "Tue, 15 Sep 2020 14:37:43 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" }, { "msg_contents": "On Tue, Sep 15, 2020 at 11:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Pushed, thanks for looking.\n\nI think that the Postgres 13 release notes should mention the\nenhancement to contrib/amcheck made by Alexander's commit d114cc53.\n\nI suggest something along the lines of: \"Make the cross-level\nverification checks performed by contrib/amcheck's\nbt_index_parent_check() function more robust\".\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Sep 2020 14:58:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 13 release notes, first draft" } ]
[ { "msg_contents": "Hi,\n\nin the Postgres documentation it says: \"PostgreSQL provides the index \nmethods B-tree, hash, GiST, SP-GiST, GIN, and BRIN. Users can also \ndefine their own index methods, but that is fairly complicated.\" \n(https://www.postgresql.org/docs/12/sql-createindex.html)\n\nEven though it's described as fairly complicated: If I would want to \ndefine my own index method, what would be a good approach to do so?\n\nBest regards", "msg_date": "Tue, 5 May 2020 14:21:07 +0200", "msg_from": "Benjamin Schaller <benjamin.schaller@s2018.tu-chemnitz.de>", "msg_from_op": true, "msg_subject": "Own index methods" }, { "msg_contents": "Benjamin Schaller <benjamin.schaller@s2018.tu-chemnitz.de> writes:\n> Even though it's described as fairly complicated: If I would want to \n> define my own index method, what would be a good approach to do so?\n\ncontrib/bloom would make a sensible template, perhaps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 May 2020 10:10:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Own index methods" }, { "msg_contents": "Hi!\n\n> On 5 May 2020, at 17:21, Benjamin Schaller <benjamin.schaller@s2018.tu-chemnitz.de> wrote:\n> \n> Even though it's described as fairly complicated: If I would want to define my own index method, what would be a good approach to do so?\n\nI'm working on a presentation describing how to fork an AM out of core into an extension. I hope it will be available soon. 
I'll send you a link when it's available.\n\nThis small code copy-pasting helps to narrow focus (postgres codebase is big), makes experiments with new not yet committed features easier and allows to \"specialise\" generic indexes more precisely.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 6 May 2020 11:10:14 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Own index methods" }, { "msg_contents": "On Tue, May 5, 2020 at 5:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Benjamin Schaller <benjamin.schaller@s2018.tu-chemnitz.de> writes:\n> > Even though it's described as fairly complicated: If I would want to\n> > define my own index method, what would be a good approach to do so?\n>\n> contrib/bloom would make a sensible template, perhaps.\n\n+1\n\nYou can also take a look at https://github.com/postgrespro/rum\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 6 May 2020 11:14:49 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Own index methods" }, { "msg_contents": "On Wed, May 06, 2020 at 11:14:49AM +0300, Alexander Korotkov wrote:\n> You can also take a look at https://github.com/postgrespro/rum\n\nPlease note that we have also an extra, mostly-blank, template as of\nsrc/test/modules/dummy_index_am/ which has been added in v13 for\nmainly testing purposes, but you can use it as a base for any new\nstuff you are willing to try.\n--\nMichael", "msg_date": "Wed, 6 May 2020 21:16:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Own index methods" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference:      16419\nLogged by:          Saeed Hubaishan\nEmail address:      dar_alathar@hotmail.com\nPostgreSQL version: 12.2\nOperating system:   Windows 10x64\nDescription:        \n\nselect to_date('-1-01-01','yyyy-mm-dd');\r\nwill get \r\n0002-01-01 BC", "msg_date": "Wed, 06 May 2020 21:26:55 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, May 6, 2020 at 2:58 PM PG Bug reporting form <noreply@postgresql.org>\nwrote:\n\n> The following bug has been logged on the website:\n>\n> Bug reference:      16419\n> Logged by:          Saeed Hubaishan\n> Email address:      dar_alathar@hotmail.com\n> PostgreSQL version: 12.2\n> Operating system:   Windows 10x64\n> Description:\n>\n> select to_date('-1-01-01','yyyy-mm-dd');\n> will get\n> 0002-01-01 BC\n>\n\nYep...\n\nselect to_date('1','YYYY')::text; // Year 1 AD\nselect to_date('0','YYYY')::text; // Year 1 BC (there is no year zero)\nselect to_date('-1','YYYY')::text; // Year 2 BC\n\nto_date tries very hard to not error - if you need to use it make sure your\ndata conforms to the format you specify.\n\nDavid J.", "msg_date": "Wed, 6 May 2020 15:45:14 -0700", 
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Anyone would suppose that these functions return the same:\nmake_date(-1,1,1)\nto_date('-1-01-01','yyyy-mm-dd')\nBut make_date will give 0001-01-01 BC\nAnd to_date will give 0002-01-01 BC\n\nIf you think this is the right behavior, I think it must be documented\n\n\nFrom: David G. Johnston <david.g.johnston@gmail.com>\nSent: Thursday, May 7, 2020 1:45:14 AM\nTo: dar_alathar@hotmail.com <dar_alathar@hotmail.com>; PostgreSQL mailing lists <pgsql-bugs@lists.postgresql.org>\nSubject: Re: BUG #16419: wrong parsing BC year in to_date() function\n\nOn Wed, May 6, 2020 at 2:58 PM PG Bug reporting form <noreply@postgresql.org<mailto:noreply@postgresql.org>> wrote:\nThe following bug has been logged on the website:\n\nBug reference: 16419\nLogged by: Saeed Hubaishan\nEmail address: dar_alathar@hotmail.com<mailto:dar_alathar@hotmail.com>\nPostgreSQL version: 12.2\nOperating system: Windows 10x64\nDescription:\n\nselect to_date('-1-01-01','yyyy-mm-dd');\nwill get\n0002-01-01 BC\n\nYep...\n\nselect to_date('1','YYYY')::text; // Year 1 AD\nselect to_date('0','YYYY')::text; // Year 1 BC (there is no year zero)\nselect to_date('-1','YYYY')::text; // Year 2 BC\n\nto_date tries very hard to not error - if you need to use it make sure your data conforms to the format you specify.\n\nDavid J.\n\nFrom: David G. 
Johnston<mailto:david.g.johnston@gmail.com>\nSent: Thursday, 14 Ramadan 1441, 01:45 AM\nSubject: Re: BUG #16419: wrong parsing BC year in to_date() function\n\nOn Wed, May 6, 2020 at 2:58 PM PG Bug reporting form <noreply@postgresql.org<mailto:noreply@postgresql.org>> wrote:\nThe following bug has been logged on the website:\n\nBug reference: 16419\nLogged by: Saeed Hubaishan\nEmail address: dar_alathar@hotmail.com<mailto:dar_alathar@hotmail.com>\nPostgreSQL version: 12.2\nOperating system: Windows 10x64\nDescription:\n\nselect to_date('-1-01-01','yyyy-mm-dd');\nwill get\n0002-01-01 BC\n\nYep...\n\nselect to_date('1','YYYY')::text; // Year 1 AD\nselect to_date('0','YYYY')::text; // Year 1 BC (there is no year zero)\nselect to_date('-1','YYYY')::text; // Year 2 BC\n\nto_date tries very hard to not error - if you need to use it make sure your data conforms to the format you specify.\n\nDavid J.", "msg_date": "Thu, 7 May 2020 00:59:28 +0000", "msg_from": "\n =?windows-1256?B?z8fRIMfhwsvH0SDh4eTU0SDmx+HK5tLt2i3V5NrHwSBEYXIgQWxhdGhh?=\n =?windows-1256?Q?r-Yemen?= <dar_alathar@hotmail.com>", "msg_from_op": false, "msg_subject": "\n =?windows-1256?Q?=D1=CF:_BUG_#16419:_wrong_parsing_BC_year_in_to=5Fdate()?=\n =?windows-1256?Q?_function?=" }, { "msg_contents": "‪On Wed, May 6, 2020 at 6:31 PM ‫دار الآثار للنشر والتوزيع-صنعاء Dar\nAlathar-Yemen‬‎ <dar_alathar@hotmail.com> wrote:‬\n\n> Any one suppose that these functions return the same:\n> make_date(-1,1,1)\n> to_date('-1-01-01','yyyy-mm-dd')\n>\n> But make_date will give 0001-01-01 BC\n>\n> And to_date will give 0002-01-01 BC\n>\n>\n>\nInteresting...and a fair point.\n\nWhat seems to be happening here is that to_date is trying to be helpful by\ndoing:\n\nselect to_date('0000','YYYY'); // 0001-01-01 BC\n\nIt does this seemingly by subtracting one from the year, making it\npositive, then (I infer) appending \"BC\" to the result. 
Thus for the year\n\"-1\" it yields \"0002-01-01 BC\"\n\nmake_date just chooses to reject the year 0 and treat the negative as an\nalternative to specifying BC\n\nThere seems to be zero tests for to_date involving negative years, and the\ndocumentation doesn't talk of them.\n\nI'll let the -hackers speak up as to how they want to go about handling\nto_date (research how it behaves in the other database it tries to emulate\nand either document or possibly change the behavior in v14) but do suggest\nthat a simple explicit description of how to_date works in the presence of\nnegative years be back-patched. A bullet in the usage notes section\nprobably suffices:\n\n\"If a YYYY format string captures a negative year, or 0000, it will treat\nit as a BC year after decreasing the value by one. So 0000 maps to 1 BC\nand -1 maps to 2 BC and so on.\"\n\nSo, no, make_date and to_date do not agree on this point; and they do not\nhave to. There is no way to specify \"BC\" in make_date function so using\nnegative there makes sense. You can specify BC in the input string for\nto_date and indeed that is the only supported (documented) way to do so.\n\nDavid J.", "msg_date": "Wed, 6 May 2020 20:12:20 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, May 6, 2020 at 8:12 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> It does this seemingly by subtracting one from the year, making it\n> positive, then (I infer) appending \"BC\" to the result. Thus for the year\n> \"-1\" it yields \"0002-01-01 BC\"\n>\n>\nSpecifically:\n\nhttps://github.com/postgres/postgres/blob/fb544735f11480a697fcab791c058adc166be1fa/src/backend/utils/adt/formatting.c#L236\n\n/*\n * There is no 0 AD. Years go from 1 BC to 1 AD, so we make it\n * positive and map year == -1 to year zero, and shift all negative\n * years up one. 
For interval years, we just return the year.\n */\n#define ADJUST_YEAR(year, is_interval) ((is_interval) ? (year) : ((year) <=\n0 ? -((year) - 1) : (year)))\n\nThe code comment took me a bit to process - seems like the following would\nbe better (if its right - I don't know why interval is a pure no-op while\nnon-interval normalizes to a positive integer).\n\nYears go from 1 BC to 1 AD, so we adjust the year zero, and all negative\nyears, by shifting them away one year, We then return the positive value\nof the result because the caller tracks the BC/AD aspect of the year\nseparately and only deals with positive year values coming out of this\nmacro. Intervals denote the distance away from 0 a year is so we can\nsimply take the supplied value and return it. Interval processing code\nexpects a negative result for intervals going into BC.\n\nDavid J.", "msg_date": "Wed, 6 May 2020 22:05:10 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "To make \"to_date\" work as \"make_date\" with negative years these lines:\nhttps://github.com/postgres/postgres/blob/fb544735f11480a697fcab791c058adc166be1fa/src/backend/utils/adt/formatting.c#L4559-L4560 :\n    if (tmfc.bc && tm->tm_year > 0)\n        tm->tm_year = -(tm->tm_year - 1);\nmust be changed to:\n    if (tmfc.bc && tm->tm_year > 0)\n    {\n        tm->tm_year = -(tm->tm_year - 1);\n    }\n    else if (tm->tm_year < 0) {\n        tm->tm_year ++;\n    }", "msg_date": "Thu, 7 May 2020 10:23:35 +0000", "msg_from": "\n =?iso-8859-6?B?z8fRIMfkwsvH0SDk5ObU0SDox+TK6NLq2S3V5tnHwSBEYXIgQWxhdGhh?=\n =?iso-8859-6?Q?r-Yemen?= <dar_alathar@hotmail.com>", "msg_from_op": false, "msg_subject": "\n =?iso-8859-6?Q?=D1=CF:_BUG_#16419:_wrong_parsing_BC_year_in_to=5Fdate()_f?=\n =?iso-8859-6?Q?unction?=" }, { "msg_contents": "research how it behaves in the other database it tries to emulate and either document or possibly change the behavior in v14\nAs in https://stackoverflow.com/questions/6779521/how-do-i-insert-a-bc-date-into-oracle and http://rwijk.blogspot.com/2008/10/year-zero.html\nIn Oracle\n\nto_date('-4700/01/01','syyyy/mm/dd')\nreturns\n\n01/01/4700 BC\nIn documents https://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_commands_1029.htm#OLADM780\n\nYEAR / SYEAR: Year, spelled out; S prefixes BC dates with a minus sign (-).\nYYYY / SYYYY: 4-digit year; S prefixes BC dates with a minus sign.", "msg_date": "Thu, 7 May 2020 11:48:40 +0000", "msg_from": "\n
=?iso-8859-6?B?z8fRIMfkwsvH0SDk5ObU0SDox+TK6NLq2S3V5tnHwSBEYXIgQWxhdGhh?=\n =?iso-8859-6?Q?r-Yemen?= <dar_alathar@hotmail.com>", "msg_from_op": false, "msg_subject": "\n =?iso-8859-6?Q?=D1=CF:_BUG_#16419:_wrong_parsing_BC_year_in_to=5Fdate()_f?=\n =?iso-8859-6?Q?unction?=" }, { "msg_contents": "Redirecting to -hackers for visibility. I feel there needs to be something\ndone here, even if just documentation (a bullet in the usage notes section\n- and a code comment update for the macro) pointing this out and not\nchanging any behavior.\n\nDavid J.\n\nOn Wed, May 6, 2020 at 8:12 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> ‪On Wed, May 6, 2020 at 6:31 PM ‫دار الآثار للنشر والتوزيع-صنعاء Dar\n> Alathar-Yemen‬‎ <dar_alathar@hotmail.com> wrote:‬\n>\n>> Any one suppose that these functions return the same:\n>> make_date(-1,1,1)\n>> to_date('-1-01-01','yyyy-mm-dd')\n>>\n>> But make_date will give 0001-01-01 BC\n>>\n>> And to_date will give 0002-01-01 BC\n>>\n>>\n>>\n> Interesting...and a fair point.\n>\n> What seems to be happening here is that to_date is trying to be helpful by\n> doing:\n>\n> select to_date('0000','YYYY'); // 0001-01-01 BC\n>\n> It does this seemingly by subtracting one from the year, making it\n> positive, then (I infer) appending \"BC\" to the result. Thus for the year\n> \"-1\" it yields \"0002-01-01 BC\"\n>\n> make_date just chooses to reject the year 0 and treat the negative as an\n> alternative to specifying BC\n>\n> There seems to be zero tests for to_date involving negative years, and the\n> documentation doesn't talk of them.\n>\n> I'll let the -hackers speak up as to how they want to go about handling\n> to_date (research how it behaves in the other database it tries to emulate\n> and either document or possibly change the behavior in v14) but do suggest\n> that a simple explicit description of how to_date works in the presence of\n> negative years be back-patched. 
A bullet in the usage notes section\n> probably suffices:\n>\n> \"If a YYYY format string captures a negative year, or 0000, it will treat\n> it as a BC year after decreasing the value by one. So 0000 maps to 1 BC\n> and -1 maps to 2 BC and so on.\"\n>\n> So, no, make_date and to_date do not agree on this point; and they do not\n> have to. There is no way to specify \"BC\" in make_date function so using\n> negative there makes sense. You can specify BC in the input string for\n> to_date and indeed that is the only supported (documented) way to do so.\n>\n>\n[and the next email]\n\n\n> Specifically:\n>\n>\n> https://github.com/postgres/postgres/blob/fb544735f11480a697fcab791c058adc166be1fa/src/backend/utils/adt/formatting.c#L236\n>\n> /*\n> * There is no 0 AD. Years go from 1 BC to 1 AD, so we make it\n> * positive and map year == -1 to year zero, and shift all negative\n> * years up one. For interval years, we just return the year.\n> */\n> #define ADJUST_YEAR(year, is_interval) ((is_interval) ? (year) : ((year)\n> <= 0 ? -((year) - 1) : (year)))\n>\n> The code comment took me a bit to process - seems like the following would\n> be better (if its right - I don't know why interval is a pure no-op while\n> non-interval normalizes to a positive integer).\n>\n> Years go from 1 BC to 1 AD, so we adjust the year zero, and all negative\n> years, by shifting them away one year, We then return the positive value\n> of the result because the caller tracks the BC/AD aspect of the year\n> separately and only deals with positive year values coming out of this\n> macro. Intervals denote the distance away from 0 a year is so we can\n> simply take the supplied value and return it. Interval processing code\n> expects a negative result for intervals going into BC.\n>\n> David J.\n>\n", "msg_date": "Tue, 12 May 2020 18:09:39 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Tue, 2020-05-12 at 18:09 -0700, David G. Johnston wrote:\n> Redirecting to -hackers for visibility. I feel there needs to be something done here, even if just documentation (a bullet in the usage notes section - and a code comment update for the macro)\n> pointing this out and not changing any behavior.\n> \n> David J.\n> \n> On Wed, May 6, 2020 at 8:12 PM David G.
Johnston <david.g.johnston@gmail.com> wrote:\n> > ‪On Wed, May 6, 2020 at 6:31 PM ‫دار الآثار للنشر والتوزيع-صنعاء Dar Alathar-Yemen‬‎ <dar_alathar@hotmail.com> wrote:‬\n> > > Any one suppose that these functions return the same:\n> > > make_date(-1,1,1)\n> > > to_date('-1-01-01','yyyy-mm-dd')\n> > > \n> > > But make_date will give 0001-01-01 BC\n> > > \n> > > And to_date will give 0002-01-01 BC\n> > > \n> > > \n> > > \n> > > \n> > \n> > Interesting...and a fair point.\n> > \n> > What seems to be happening here is that to_date is trying to be helpful by doing:\n> > \n> > select to_date('0000','YYYY'); // 0001-01-01 BC\n> > \n> > It does this seemingly by subtracting one from the year, making it positive, then (I infer) appending \"BC\" to the result. Thus for the year \"-1\" it yields \"0002-01-01 BC\"\n> > \n> > make_date just chooses to reject the year 0 and treat the negative as an alternative to specifying BC\n> > \n> > There seems to be zero tests for to_date involving negative years, and the documentation doesn't talk of them.\n> > \n> > I'll let the -hackers speak up as to how they want to go about handling to_date (research how it behaves in the other database it tries to emulate and either document or possibly change the\n> > behavior in v14) but do suggest that a simple explicit description of how to_date works in the presence of negative years be back-patched. A bullet in the usage notes section probably suffices:\n> > \n> > \"If a YYYY format string captures a negative year, or 0000, it will treat it as a BC year after decreasing the value by one. So 0000 maps to 1 BC and -1 maps to 2 BC and so on.\"\n> > \n> > So, no, make_date and to_date do not agree on this point; and they do not have to. There is no way to specify \"BC\" in make_date function so using negative there makes sense. 
You can specify BC\n> > in the input string for to_date and indeed that is the only supported (documented) way to do so.\n> > \n> > \n> \n> \n> [and the next email]\n> \n> > Specifically:\n> > \n> > https://github.com/postgres/postgres/blob/fb544735f11480a697fcab791c058adc166be1fa/src/backend/utils/adt/formatting.c#L236\n> > \n> > /*\n> > * There is no 0 AD. Years go from 1 BC to 1 AD, so we make it\n> > * positive and map year == -1 to year zero, and shift all negative\n> > * years up one. For interval years, we just return the year.\n> > */\n> > #define ADJUST_YEAR(year, is_interval) ((is_interval) ? (year) : ((year) <= 0 ? -((year) - 1) : (year)))\n> > \n> > The code comment took me a bit to process - seems like the following would be better (if its right - I don't know why interval is a pure no-op while non-interval normalizes to a positive integer).\n> > \n> > Years go from 1 BC to 1 AD, so we adjust the year zero, and all negative years, by shifting them away one year, We then return the positive value of the result because the caller tracks the BC/AD\n> > aspect of the year separately and only deals with positive year values coming out of this macro. Intervals denote the distance away from 0 a year is so we can simply take the supplied value and\n> > return it. 
Interval processing code expects a negative result for intervals going into BC.\n> > \n> > David J.\n\nSince \"to_date\" is an Oracle compatibility function, here is what Oracle 18.4 has to say to that:\n\nSQL> SELECT to_date('0000', 'YYYY') FROM dual;\nSELECT to_date('0000', 'YYYY') FROM dual\n *\nERROR at line 1:\nORA-01841: (full) year must be between -4713 and +9999, and not be 0\n\n\nSQL> SELECT to_date('-0001', 'YYYY') FROM dual;\nSELECT to_date('-0001', 'YYYY') FROM dual\n *\nERROR at line 1:\nORA-01841: (full) year must be between -4713 and +9999, and not be 0\n\n\nSQL> SELECT to_date('-0001', 'SYYYY') FROM dual;\n\nTO_DATE('-0001','SYYYY\n----------------------\n0001-05-01 00:00:00 BC\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 13 May 2020 05:56:18 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Tue, May 12, 2020 at 8:56 PM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Tue, 2020-05-12 at 18:09 -0700, David G. Johnston wrote:\n> > Redirecting to -hackers for visibility. 
I feel there needs to be\n> something done here, even if just documentation (a bullet in the usage\n> notes section - and a code comment update for the macro)\n> > pointing this out and not changing any behavior.\n>\n> Since \"to_date\" is an Oracle compatibility function, here is what Oracle\n> 18.4 has to say to that:\n>\n> SQL> SELECT to_date('0000', 'YYYY') FROM dual;\n> SELECT to_date('0000', 'YYYY') FROM dual\n> *\n> ERROR at line 1:\n> ORA-01841: (full) year must be between -4713 and +9999, and not be 0\n>\n>\nAttached is a concrete patch (back-patchable hopefully) documenting the\ncurrent reality.\n\nAs noted in the patch commit message (commentary really):\n\nmake_timestamp not agreeing with make_date on how to handle negative years\nshould probably just be fixed - but that is for someone else to handle.\n\nWhether to actually change the behavior of to_date is up for debate though\nI would presume it would not be back-patched.\n\nDavid J.", "msg_date": "Wed, 15 Jul 2020 09:26:53 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, Jul 15, 2020 at 09:26:53AM -0700, David G. Johnston wrote:\n> On Tue, May 12, 2020 at 8:56 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> \n> On Tue, 2020-05-12 at 18:09 -0700, David G. 
Johnston wrote:\n> > Redirecting to -hackers for visibility. I feel there needs to be\n> something done here, even if just documentation (a bullet in the usage\n> notes section - and a code comment update for the macro)\n> > pointing this out and not changing any behavior.\n> \n> Since \"to_date\" is an Oracle compatibility function, here is what Oracle\n> 18.4 has to say to that:\n> \n> SQL> SELECT to_date('0000', 'YYYY') FROM dual;\n> SELECT to_date('0000', 'YYYY') FROM dual\n>                *\n> ERROR at line 1:\n> ORA-01841: (full) year must be between -4713 and +9999, and not be 0\n> \n> \n> \n> Attached is a concrete patch (back-patchable hopefully) documenting the current\n> reality.\n> \n> As noted in the patch commit message (commentary really):\n> \n> make_timestamp not agreeing with make_date on how to handle negative years\n> should probably just be fixed - but that is for someone else to handle.\n> \n> Whether to actually change the behavior of to_date is up for debate though I\n> would presume it would not be back-patched.\n\nOK, so, looking at this thread, we have to_date() treating -1 as -2 BC,\nmake_date() treating -1 as 1 BC, and we have Oracle, which to_date() is\nsupposed to match, making -1 as 1 BC.\n\nBecause we already have the to_date/make_date inconsistency, and the -1\nto -2 BC mapping is confusing, and doesn't match Oracle, I think the\nclean solution is to change PG 14 to treat -1 as 1 BC, and document the\nincompatibility in the release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 3 Sep 2020 21:21:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Thu, Sep 3, 2020 at 6:21 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Jul 15, 2020 at 09:26:53AM
-0700, David G. Johnston wrote:\n> \n> > Whether to actually change the behavior of to_date is up for debate\n> though I\n> > would presume it would not be back-patched.\n> \n> OK, so, looking at this thread, we have to_date() treating -1 as -2 BC,\n> make_date() treating -1 as 1 BC, and we have Oracle, which to_date() is\n> supposed to match, making -1 as 1 BC.\n> \n> Because we already have the to_date/make_date inconsistency, and the -1\n> to -2 BC mapping is confusing, and doesn't match Oracle, I think the\n> clean solution is to change PG 14 to treat -1 as 1 BC, and document the\n> incompatibility in the release notes.\n> \n\nI agree that someone else should write another patch to fix the behavior\nfor v14.  Still suggest committing the proposed patch to master and all\nsupported versions to document the existing behavior correctly.  The fix\npatch can work from that.\n\nDavid J.", "msg_date": "Fri, 4 Sep 2020 12:45:36 -0700", "msg_from": "\"David G.
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Fri, Sep 4, 2020 at 12:45:36PM -0700, David G. Johnston wrote:\n> On Thu, Sep 3, 2020 at 6:21 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Wed, Jul 15, 2020 at 09:26:53AM -0700, David G. Johnston wrote:\n> \n> > Whether to actually change the behavior of to_date is up for debate\n> though I\n> > would presume it would not be back-patched.\n> \n> OK, so, looking at this thread, we have to_date() treating -1 as -2 BC,\n> make_date() treating -1 as 1 BC, and we have Oracle, which to_date() is\n> supposed to match, making -1 as 1 BC.\n> \n> Because we already have the to_date/make_date inconsistency, and the -1\n> to -2 BC mapping is confusing, and doesn't match Oracle, I think the\n> clean solution is to change PG 14 to treat -1 as 1 BC, and document the\n> incompatibility in the release notes.\n> \n> \n> I agree that someone else should write another patch to fix the behavior for\n> v14.� Still suggest committing the proposed patch to master and all supported\n> versions to document the existing behavior correctly.� The fix patch can work\n> from that.\n\nI think we need to apply the patches to all branches at the same time. \nI am not sure we want to document a behavior we know will change in PG\n14.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 4 Sep 2020 16:12:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On 2020-09-04 21:45, David G. Johnston wrote:\n> On Thu, Sep 3, 2020 at 6:21 PM Bruce Momjian <bruce@momjian.us \n> <mailto:bruce@momjian.us>> wrote:\n> \n> On Wed, Jul 15, 2020 at 09:26:53AM -0700, David G. 
Johnston wrote:\n> \n> > Whether to actually change the behavior of to_date is up for\n> debate though I\n> > would presume it would not be back-patched.\n> \n> OK, so, looking at this thread, we have to_date() treating -1 as -2 BC,\n> make_date() treating -1 as 1 BC, and we have Oracle, which to_date() is\n> supposed to match, making -1 as 1 BC.\n> \n> Because we already have the to_date/make_date inconsistency, and the -1\n> to -2 BC mapping is confusing, and doesn't match Oracle, I think the\n> clean solution is to change PG 14 to treat -1 as 1 BC, and document the\n> incompatibility in the release notes.\n> \n> \n> I agree that someone else should write another patch to fix the behavior \n> for v14.  Still suggest committing the proposed patch to master and all \n> supported versions to document the existing behavior correctly.  The fix \n> patch can work from that.\n\nAdding support for negative years in make_timestamp seems pretty \nstraightforward; see attached patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 4 Sep 2020 23:05:28 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nPatch looks good to me.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 25 Sep 2020 11:39:46 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Fri, Sep 4, 2020 at 12:45:36PM -0700, David G. 
Johnston wrote:\n>>> Because we already have the to_date/make_date inconsistency, and the -1\n>>> to -2 BC mapping is confusing, and doesn't match Oracle, I think the\n>>> clean solution is to change PG 14 to treat -1 as 1 BC, and document the\n>>> incompatibility in the release notes.\n\n>> I agree that someone else should write another patch to fix the behavior for\n>> v14.  Still suggest committing the proposed patch to master and all supported\n>> versions to document the existing behavior correctly.  The fix patch can work\n>> from that.\n\n> I think we need to apply the patches to all branches at the same time. \n> I am not sure we want to document a behavior we know will change in PG\n> 14.\n\nI think this is nuts. The current behavior is obviously broken;\nwe should just treat it as a bug and fix it, including back-patching.\nI do not think there is a compatibility problem of any significance.\nWho out there is going to have an application that is relying on the\nability to insert BC dates in this way?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Sep 2020 13:26:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Adding support for negative years in make_timestamp seems pretty \n> straightforward; see attached patch.\n\nIn hopes of moving things along, I pushed that, along with documentation\nadditions. I couldn't quite convince myself that it was a bug fix\nthough, so no back-patch. (I don't think we really need any doc\nchanges about it in the back branches, either.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Sep 2020 13:50:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "I wrote:\n> I think this is nuts. 
The current behavior is obviously broken;\n> we should just treat it as a bug and fix it, including back-patching.\n> I do not think there is a compatibility problem of any significance.\n> Who out there is going to have an application that is relying on the\n> ability to insert BC dates in this way?\n\nConcretely, I propose the attached. This adjusts Dar Alathar-Yemen's\npatch (it didn't do the right thing IMO for the combination of bc\nand year < 0) and adds test cases and docs.\n\nOracle would have us throw an error for year zero, but our historical\nbehavior has been to read it as 1 BC. That's not so obviously wrong\nthat I'd want to change it in the back branches. Maybe it could be\ndone as a follow-up change in HEAD.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 29 Sep 2020 14:18:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Tue, Sep 29, 2020 at 01:26:29PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Fri, Sep 4, 2020 at 12:45:36PM -0700, David G. Johnston wrote:\n> >>> Because we already have the to_date/make_date inconsistency, and the -1\n> >>> to -2 BC mapping is confusing, and doesn't match Oracle, I think the\n> >>> clean solution is to change PG 14 to treat -1 as 1 BC, and document the\n> >>> incompatibility in the release notes.\n> \n> >> I agree that someone else should write another patch to fix the behavior for\n> >> v14.  Still suggest committing the proposed patch to master and all supported\n> >> versions to document the existing behavior correctly.  The fix patch can work\n> >> from that.\n> \n> >> I think this is nuts.
The current behavior is obviously broken;\n> we should just treat it as a bug and fix it, including back-patching.\n> I do not think there is a compatibility problem of any significance.\n> Who out there is going to have an application that is relying on the\n> ability to insert BC dates in this way?\n\nYou are agreeing with what I am suggesting then?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 30 Sep 2020 13:56:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Sep 29, 2020 at 01:26:29PM -0400, Tom Lane wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>>> On Fri, Sep 4, 2020 at 12:45:36PM -0700, David G. Johnston wrote:\n>>>> Because we already have the to_date/make_date inconsistency, and the -1\n>>>> to -2 BC mapping is confusing, and doesn't match Oracle, I think the\n>>>> clean solution is to change PG 14 to treat -1 as 1 BC, and document the\n>>>> incompatibility in the release notes.\n\n>>> I agree that someone else should write another patch to fix the behavior for\n>>> v14.  Still suggest committing the proposed patch to master and all supported\n>>> versions to document the existing behavior correctly.  The fix patch can work\n>>> from that.\n\n>> I think this is nuts. 
The current behavior is obviously broken;\n>> we should just treat it as a bug and fix it, including back-patching.\n>> I do not think there is a compatibility problem of any significance.\n>> Who out there is going to have an application that is relying on the\n>> ability to insert BC dates in this way?\n\n> You are agreeing with what I am suggesting then?\n\nHm, I read your reference to \"the release notes\" as suggesting that\nwe should change it only in a major release, ie HEAD only (and it\nlooks like David read it the same). If you meant minor release notes,\nthen we're on the same page.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Sep 2020 14:11:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, Sep 30, 2020 at 02:11:54PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, Sep 29, 2020 at 01:26:29PM -0400, Tom Lane wrote:\n> >> Bruce Momjian <bruce@momjian.us> writes:\n> >>> On Fri, Sep 4, 2020 at 12:45:36PM -0700, David G. Johnston wrote:\n> >>>> Because we already have the to_date/make_date inconsistency, and the -1\n> >>>> to -2 BC mapping is confusing, and doesn't match Oracle, I think the\n> >>>> clean solution is to change PG 14 to treat -1 as 1 BC, and document the\n> >>>> incompatibility in the release notes.\n> \n> >>> I agree that someone else should write another patch to fix the behavior for\n> >>> v14.  Still suggest committing the proposed patch to master and all supported\n> >>> versions to document the existing behavior correctly.  The fix patch can work\n> >>> from that.\n> \n> >> I think this is nuts.
The current behavior is obviously broken;\n> >> we should just treat it as a bug and fix it, including back-patching.\n> >> I do not think there is a compatibility problem of any significance.\n> >> Who out there is going to have an application that is relying on the\n> >> ability to insert BC dates in this way?\n> \n> > You are agreeing with what I am suggesting then?\n> \n> Hm, I read your reference to \"the release notes\" as suggesting that\n> we should change it only in a major release, ie HEAD only (and it\n> looks like David read it the same). If you meant minor release notes,\n> then we're on the same page.\n\nYes, I was thinking just the major release notes. What are you\nsuggesting, and what did you ultimately decide to do? What I didn't\nwant to do was to document the old behavior in the old docs and change\nit in PG 14.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 30 Sep 2020 14:42:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Wed, Sep 30, 2020 at 02:11:54PM -0400, Tom Lane wrote:\n>> Hm, I read your reference to \"the release notes\" as suggesting that\n>> we should change it only in a major release, ie HEAD only (and it\n>> looks like David read it the same). If you meant minor release notes,\n>> then we're on the same page.\n\n> Yes, I was thinking just the major release notes. What are you\n> suggesting, and what did you ultimately decide to do? What I didn't\n> want to do was to document the old behavior in the old docs and change\n> it in PG 14.\n\nActually, I was just finishing up back-patching the patch I posted\nyesterday. 
I think we should just fix it, not document that it's\nbroken.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Sep 2020 14:50:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, Sep 30, 2020 at 02:50:31PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Wed, Sep 30, 2020 at 02:11:54PM -0400, Tom Lane wrote:\n> >> Hm, I read your reference to \"the release notes\" as suggesting that\n> >> we should change it only in a major release, ie HEAD only (and it\n> >> looks like David read it the same). If you meant minor release notes,\n> >> then we're on the same page.\n> \n> > Yes, I was thinking just the major release notes. What are you\n> > suggesting, and what did you ultimately decide to do? What I didn't\n> > want to do was to document the old behavior in the old docs and change\n> > it in PG 14.\n> \n> Actually, I was just finishing up back-patching the patch I posted\n> yesterday. I think we should just fix it, not document that it's\n> broken.\n\nAgreed, that's what I wanted. You stated in a later email you couldn't\nconvince yourself of the backpatch, which is why I asked.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 30 Sep 2020 15:05:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Wed, Sep 30, 2020 at 02:50:31PM -0400, Tom Lane wrote:\n>> Actually, I was just finishing up back-patching the patch I posted\n>> yesterday. I think we should just fix it, not document that it's\n>> broken.\n\n> Agreed, that's what I wanted. 
You stated in a later email you couldn't\n> convince yourself of the backpatch, which is why I asked.\n\nOh, I see where our wires are crossed. I meant that I couldn't\nconvince myself to back-patch the make_timestamp() change.\n(I'm still willing to listen to an argument to do so, if anyone\nwants to make one --- but that part feels more like a feature\naddition than a bug fix.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Sep 2020 15:11:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, Sep 30, 2020 at 03:11:06PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Wed, Sep 30, 2020 at 02:50:31PM -0400, Tom Lane wrote:\n> >> Actually, I was just finishing up back-patching the patch I posted\n> >> yesterday. I think we should just fix it, not document that it's\n> >> broken.\n> \n> > Agreed, that's what I wanted. You stated in a later email you couldn't\n> > convince yourself of the backpatch, which is why I asked.\n> \n> Oh, I see where our wires are crossed. I meant that I couldn't\n> convince myself to back-patch the make_timestamp() change.\n> (I'm still willing to listen to an argument to do so, if anyone\n> wants to make one --- but that part feels more like a feature\n> addition than a bug fix.)\n\nOK, at least this is addressed fully in PG 14 and beyond.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 30 Sep 2020 16:20:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Tue, Sep 29, 2020 at 1:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think this is nuts. 
The current behavior is obviously broken;\n> we should just treat it as a bug and fix it, including back-patching.\n> I do not think there is a compatibility problem of any significance.\n> Who out there is going to have an application that is relying on the\n> ability to insert BC dates in this way?\n\nI think that's entirely the wrong way to look at it. If nobody is\nusing the feature, then it will not break anything to change the\nbehavior, but on the other hand there is no reason to fix the bug\neither. But if people are using the feature, making it behave\ndifferently in the next minor release is going to break their\napplications. I disagree *strongly* with making such changes in stable\nbranches and feel that the change to those branches should be\nreverted.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 30 Sep 2020 16:49:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Sep 29, 2020 at 1:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think this is nuts. The current behavior is obviously broken;\n>> we should just treat it as a bug and fix it, including back-patching.\n>> I do not think there is a compatibility problem of any significance.\n>> Who out there is going to have an application that is relying on the\n>> ability to insert BC dates in this way?\n\n> I think that's entirely the wrong way to look at it. If nobody is\n> using the feature, then it will not break anything to change the\n> behavior, but on the other hand there is no reason to fix the bug\n> either. But if people are using the feature, making it behave\n> differently in the next minor release is going to break their\n> applications. 
I disagree *strongly* with making such changes in stable\n> branches and feel that the change to those branches should be\n> reverted.\n\nBy that logic, we should never fix any bug in a back branch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Sep 2020 17:35:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, Sep 30, 2020 at 5:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> By that logic, we should never fix any bug in a back branch.\n\nNo, by that logic, we should not change any behavior in a back-branch\nupon which a customer is plausibly relying. No one relies on a certain\nquery causing a server crash, for example, or a cache lookup failure,\nso fixing those things can only help people. But there is no reason at\nall why someone shouldn't be relying on this very old and\nlong-established behavior not to change in a minor release.\n\nOne reason they might do that is because there was a discussion about\nwhat I believe to this exact same case 4 years ago in which you and I\nboth endorsed the position you are now claiming is so unreasonable\nthat nobody will mind if we change it in a minor release.\n\nhttps://www.postgresql.org/message-id/flat/CAKOSWNmwCH0wx6MApc1A8ww%2B%2BEQmG07AZ3t6w_XjRrV1xeZpTA%40mail.gmail.com\n\nSo you now think this should be back-patched when previously you\ndidn't even think it was be good enough for master.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 30 Sep 2020 18:10:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Sep 30, 2020 at 5:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> By that logic, we should never fix any bug in a back 
branch.\n\n> No, by that logic, we should not change any behavior in a back-branch\n> upon which a customer is plausibly relying.\n\nI guess where we differ here is on the idea that somebody is plausibly\nrelying on to_date() to parse a BC date inaccurately.\n\n> One reason they might do that is because there was a discussion about\n> what I believe to this exact same case 4 years ago in which you and I\n> both endorsed the position you are now claiming is so unreasonable\n> that nobody will mind if we change it in a minor release.\n> https://www.postgresql.org/message-id/flat/CAKOSWNmwCH0wx6MApc1A8ww%2B%2BEQmG07AZ3t6w_XjRrV1xeZpTA%40mail.gmail.com\n\nWhat I complained about in that thread was mainly that that\npatch was simultaneously trying to get stricter (throw error for\nyear zero) and laxer (parse negative years as BC).\n\nAlso, we did not in that thread have the information that Oracle\ntreats negative years as BC. Now that we do, the situation is\ndifferent, and I'm willing to change my mind about it. Admittedly,\nOracle seems to require an \"S\" in the format to parse a leading\ndash as meaning a negative year. But given that our code is willing\nto read the case as a negative year without that, it seems pretty\nsilly to decide that it should read it as an off-by-one negative\nyear.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Sep 2020 18:36:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, Sep 30, 2020 at 06:10:38PM -0400, Robert Haas wrote:\n> On Wed, Sep 30, 2020 at 5:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > By that logic, we should never fix any bug in a back branch.\n> \n> No, by that logic, we should not change any behavior in a back-branch\n> upon which a customer is plausibly relying. 
No one relies on a certain\n> query causing a server crash, for example, or a cache lookup failure,\n> so fixing those things can only help people. But there is no reason at\n> all why someone shouldn't be relying on this very old and\n> long-established behavior not to change in a minor release.\n\nThat is an interesting distinction.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 30 Sep 2020 18:36:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Wed, Sep 30, 2020 at 06:10:38PM -0400, Robert Haas wrote:\n>> On Wed, Sep 30, 2020 at 5:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> By that logic, we should never fix any bug in a back branch.\n\n>> No, by that logic, we should not change any behavior in a back-branch\n>> upon which a customer is plausibly relying. No one relies on a certain\n>> query causing a server crash, for example, or a cache lookup failure,\n>> so fixing those things can only help people. But there is no reason at\n>> all why someone shouldn't be relying on this very old and\n>> long-established behavior not to change in a minor release.\n\n> That is an interesting distinction.\n\nI don't want to sound like I'm totally without sympathy for Robert's\nargument. But I do say it's a judgment call, and my judgment remains\nthat this patch is appropriate to back-patch.\n\nWe do not have, and never have had, a project policy against\nback-patching non-crash-related behavioral changes. If we did,\nwe would not for example put timezone database updates into the\nback branches. It's not terribly hard to imagine such updates\nbreaking applications that expected the meaning of, say,\n'2022-04-01 12:34 Europe/Paris' to hold still. 
But we do it\nanyway.\n\nAs another not-too-old example, I'll cite Robert's own commits\n0278d3f79/a08bfe742. The argument for a back-patch there was\npretty much only that we were writing an alleged tar file that\ndidn't conform to the letter of the POSIX spec. It's possible\nto imagine that somebody had written bespoke archive-reading\ncode that failed after we changed the output; but that didn't\nseem probable enough to justify continuing to violate the standard.\n\nIn this case the \"standard\" in question is the Oracle-derived\nexpectation that to_date will read negative years as BC, and\nI'd argue that the possibility that someone already has code\nthat relies on getting an off-by-one result is outweighed by the\nlikelihood that the misbehavior will hurt somebody in the future.\n\nThis calculus would obviously change if we knew of such code\nor thought it was really probable for it to exist. That's\nwhat makes it a judgment call.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Sep 2020 19:26:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, Sep 30, 2020 at 07:26:55PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Wed, Sep 30, 2020 at 06:10:38PM -0400, Robert Haas wrote:\n> >> On Wed, Sep 30, 2020 at 5:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> By that logic, we should never fix any bug in a back branch.\n> \n> >> No, by that logic, we should not change any behavior in a back-branch\n> >> upon which a customer is plausibly relying. No one relies on a certain\n> >> query causing a server crash, for example, or a cache lookup failure,\n> >> so fixing those things can only help people. 
But there is no reason at\n> >> all why someone shouldn't be relying on this very old and\n> >> long-established behavior not to change in a minor release.\n> \n> > That is an interesting distinction.\n> \n> I don't want to sound like I'm totally without sympathy for Robert's\n> argument. But I do say it's a judgment call, and my judgment remains\n> that this patch is appropriate to back-patch.\n\nAgreed. I was just thinking it was an interesting classification that\nno one relies on crashes, or query failures.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 30 Sep 2020 19:41:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, Sep 30, 2020 at 7:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We do not have, and never have had, a project policy against\n> back-patching non-crash-related behavioral changes. If we did,\n> we would not for example put timezone database updates into the\n> back branches. It's not terribly hard to imagine such updates\n> breaking applications that expected the meaning of, say,\n> '2022-04-01 12:34 Europe/Paris' to hold still. But we do it\n> anyway.\n>\n> As another not-too-old example, I'll cite Robert's own commits\n> 0278d3f79/a08bfe742. The argument for a back-patch there was\n> pretty much only that we were writing an alleged tar file that\n> didn't conform to the letter of the POSIX spec. It's possible\n> to imagine that somebody had written bespoke archive-reading\n> code that failed after we changed the output; but that didn't\n> seem probable enough to justify continuing to violate the standard.\n\nRight. Ultimately, this comes down to a judgement call about what you\nthink people are likely to rely on, and what you think they are\nunlikely to rely on. 
If I recall correctly, I thought that case was a\nclose call, and back-patched because you argued for it. Either way, it\ndoes seem very unlikely that someone would write archive-reading code\nthat relies on the presence of an extra 511 zero bytes, because (1) it\nwould be a lot easier to just use 'tar', (2) such code would fail if\nused with a tar archive generated by anything other than PostgreSQL,\nand (3) such code would fail on a tar archive generated by PostgreSQL\nbut without using -R. It is just barely plausible that someone has a\nversion of 'tar' that fails on the bogus archive and will work with\nthat fix, though I would guess that's also pretty unlikely.\n\nBut the present case does not seem to me to be comparable. If someone\nis using to_date() to construct date values, I can't see why they\nwouldn't test it, find out how it works with BC values, and then make\nthe application that generates the input to that function do the right\nthing for the actual behavior of the function. There are discussions\nof the behavior of to_date with YYYY = 0 going back at least 10 years\non this mailing list, and more recent discussions of the behavior of\nnegative numbers. Point being: I knew about the behavior that was here\nreported as a bug and have known about it for years, and if I were\nstill an application developer I can easily imagine having coded to\nit. I don't know why someone else should not have done the same. 
The\nfact that we've suddenly discovered that this is not what Oracle does\ndoesn't mean that no users have discovered that it is what PostgreSQL\ndoes.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 30 Sep 2020 20:24:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "On Wed, Sep 30, 2020 at 5:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> The\n> fact that we've suddenly discovered that this is not what Oracle does\n> doesn't mean that no users have discovered that it is what PostgreSQL\n> does.\n>\n\nPresently I cannot seem to make up my mind so I'm going to go with my\noriginal opinion which was to only change the behavior in v14. In part\nbecause it seems appropriate given our generally laissez-faire attitude\ntoward this particular feature.\n\nDavid J.", "msg_date": "Wed, 30 Sep 2020 17:38:05 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Right.
Ultimately, this comes down to a judgement call about what you\n> think people are likely to rely on, and what you think they are\n> unlikely to rely on.\n\nGood, so at least we agree on that principle.\n\n> But the present case does not seem to me to be comparable. If someone\n> is using to_date() to construct date values, I can't see why they\n> wouldn't test it, find out how it works with BC values, and then make\n> the application that generates the input to that function do the right\n> thing for the actual behavior of the function. There are discussions\n> of the behavior of to_date with YYYY = 0 going back at least 10 years\n> on this mailing list, and more recent discussions of the behavior of\n> negative numbers.\n\nSure, we have at least two bug reports proving that people have\ninvestigated this. What I'm saying is unlikely is that there are any\nproduction applications in which it matters. I doubt that, say, the\nItalian government has a citizenry database in which they've recorded\nJulius Caesar's birthday; and even if they do, they're probably not\nsquirting the data through to_date; and even if they are, they're more\nlikely using the positive-year-with-BC representation, because that's\nthe only one that PG will emit. Even if they've got code that somehow\nrelies on to_date working this way, it's almost certainly getting zero\nuse in practice.\n\nI probably wouldn't have taken an interest in this at all, were it\nnot for the proposal that we document the misbehavior. Doing that\nrather than fixing it just seems silly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Sep 2020 20:40:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16419: wrong parsing BC year in to_date() function" } ]
[ { "msg_contents": "If pg_basebackup is not able to read BLCKSZ content from file, then it\njust emits a warning \"could not verify checksum in file \"____\" block\nX: read buffer size X and page size 8192 differ\" currently but misses\nto error with \"checksum error occurred\". Only if it can read 8192 and\nchecksum mismatch happens will it error in the end.\n\nRepro is pretty simple:\n/usr/local/pgsql/bin/initdb -k /tmp/postgres\n/usr/local/pgsql/bin/pg_ctl -D /tmp/postgres -l /tmp/logfile start\n# just create random file of size not in multiple of 8192\necho \"corruption\" > /tmp/postgres/base/12696/44444\n\nWithout the fix:\n$ /usr/local/pgsql/bin/pg_basebackup -D /tmp/dummy\nWARNING: could not verify checksum in file \"./base/12696/44444\", block 0:\nread buffer size 11 and page size 8192 differ\n$ echo $?\n0\n\nWith the fix:\n$ /usr/local/pgsql/bin/pg_basebackup -D /tmp/dummy\nWARNING: could not verify checksum in file \"./base/12696/44444\", block 0:\nread buffer size 11 and page size 8192 differ\npg_basebackup: error: checksum error occurred\n$ echo $?\n1\n\n\nI think it's an important case to be handled and should not be silently\nskipped,\nunless I am missing something. This one liner should fix it:\n\ndiff --git a/src/backend/replication/basebackup.c\nb/src/backend/replication/basebackup.c\nindex fbdc28ec39..68febbedf0 100644\n--- a/src/backend/replication/basebackup.c\n+++ b/src/backend/replication/basebackup.c\n@@ -1641,6 +1641,7 @@ sendFile(const char *readfilename, const char\n*tarfilename,\n \"differ\",\n readfilename,\nblkno, (int) cnt, BLCKSZ)));\n verify_checksum = false;\n+ checksum_failures++;\n }\n\n if (verify_checksum)", "msg_date": "Wed, 6 May 2020 14:48:20 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "pg_basebackup misses to report checksum error" }, { "msg_contents": "On Wed, May 6, 2020 at 5:48 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> If pg_basebackup is not able to read BLCKSZ content from file, then it\n> just emits a warning \"could not verify checksum in file \"____\" block\n> X: read buffer size X and page size 8192 differ\" currently but
misses\n> to error with \"checksum error occurred\". Only if it can read 8192 and\n> checksum mismatch happens will it error in the end.\n\nI don't think it's a good idea to conflate \"hey, we can't checksum\nthis because the size is strange\" with \"hey, the checksum didn't\nmatch\". Suppose a file has 1000 full blocks and a partial block.\nAll 1000 blocks have good checksums. With your change, ISTM that we'd\nfirst emit a warning saying that the checksum couldn't be verified,\nand then we'd emit a second warning saying that there was 1 checksum\nverification failure, which would also be reported to the stats\nsystem. I don't think that's what we want. There might be an argument\nfor making this code trigger...\n\n ereport(ERROR,\n (errcode(ERRCODE_DATA_CORRUPTED),\n errmsg(\"checksum verification failure during base backup\")));\n\n...but I wouldn't for that reason inflate the number of blocks that\nare reported as having failures.\n\nYMMV, of course.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 6 May 2020 18:02:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup misses to report checksum error" }, { "msg_contents": "On Wed, May 6, 2020 at 3:02 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, May 6, 2020 at 5:48 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> > If pg_basebackup is not able to read BLCKSZ content from file, then it\n> > just emits a warning \"could not verify checksum in file \"____\" block\n> > X: read buffer size X and page size 8192 differ\" currently but misses\n> > to error with \"checksum error occurred\". Only if it can read 8192 and\n> > checksum mismatch happens will it error in the end.\n>\n> I don't think it's a good idea to conflate \"hey, we can't checksum\n> this because the size is strange\" with \"hey, the checksum didn't\n> match\".
Suppose the a file has 1000 full blocks and a partial block.\n> All 1000 blocks have good checksums. With your change, ISTM that we'd\n> first emit a warning saying that the checksum couldn't be verified,\n> and then we'd emit a second warning saying that there was 1 checksum\n> verification failure, which would also be reported to the stats\n> system. I don't think that's what we want.\n\n\nI feel the intent of reporting \"total checksum verification failure\" is to\nreport corruption. Which way is the secondary piece of the puzzle. Not\nbeing able to read checksum itself to verify is also corruption and is\nchecksum verification failure I think. WARNINGs will provide fine grained\nclarity on what type of checksum verification failure it is, so I am not\nsure we really need fine grained clarity in \"total numbers\" to\ndifferentiate these two types.\n\nNot reporting anything to the stats system and at end reporting there is\nchecksum failure would be more weird, right? Or will say\nERRCODE_DATA_CORRUPTED with some other message and not checksum\nverification failure.\n\nThere might be an argument\n> for making this code trigger...\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_DATA_CORRUPTED),\n> errmsg(\"checksum verification failure during base\n> backup\")));\n>\n> ...but I wouldn't for that reason inflate the number of blocks that\n> are reported as having failures.\n>\n\nWhen checksum verification is turned on and the issue is detected, I\nstrongly feel ERROR must be triggered as silently reporting success doesn't\nseem right.\nI can introduce one more variable just to capture and track files with such\ncases. But will we report them separately to stats? How? 
Also, do we want\nto have separate WARNING for the total number of files in this category?\nThose all seem slight complications but if wind is blowing in that\ndirection, I am ready to fly that way.", "msg_date": "Thu, 7 May 2020 10:15:45 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup misses to report checksum error" }, { "msg_contents": "Greetings,\n\n* Ashwin Agrawal (aagrawal@pivotal.io) wrote:\n> On Wed, May 6, 2020 at 3:02 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, May 6, 2020 at 5:48 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> > > If pg_basebackup is not able to read BLCKSZ content from file, then it\n> > > just emits a warning \"could not verify checksum in file \"____\" block\n> > > X: read buffer size X and page size 8192 differ\" currently but misses\n> > > to error with \"checksum error occurred\". Only if it can read 8192 and\n> > > checksum mismatch happens will it error in the end.\n> >\n> > I don't think it's a good idea to conflate \"hey, we can't checksum\n> > this because the size is strange\" with \"hey, the checksum didn't\n> > match\". Suppose the a file has 1000 full blocks and a partial block.\n> > All 1000 blocks have good checksums. 
With your change, ISTM that we'd\n> > first emit a warning saying that the checksum couldn't be verified,\n> > and then we'd emit a second warning saying that there was 1 checksum\n> > verification failure, which would also be reported to the stats\n> > system. I don't think that's what we want.\n> \n> I feel the intent of reporting \"total checksum verification failure\" is to\n> report corruption. Which way is the secondary piece of the puzzle. Not\n> being able to read checksum itself to verify is also corruption and is\n> checksum verification failure I think. WARNINGs will provide fine grained\n> clarity on what type of checksum verification failure it is, so I am not\n> sure we really need fine grained clarity in \"total numbers\" to\n> differentiate these two types.\n\nAre we absolutely sure that there's no way for a partial block to end up\nbeing seen by pg_basebackup, which is just doing routine filesystem\nread() calls, during normal operation though..? Across all platforms?\n\nWe certainly don't want to end up reporting a false positive by saying\nthat there's been corruption when it was just that the file was getting\nextended and a read() happened to catch an incomplete write(), or\nsomething along those lines.\n\nThanks,\n\nStephen", "msg_date": "Thu, 7 May 2020 13:24:59 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup misses to report checksum error" }, { "msg_contents": "On Thu, May 7, 2020 at 10:25 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Ashwin Agrawal (aagrawal@pivotal.io) wrote:\n> > On Wed, May 6, 2020 at 3:02 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > > On Wed, May 6, 2020 at 5:48 PM Ashwin Agrawal <aagrawal@pivotal.io>\n> wrote:\n> > > > If pg_basebackup is not able to read BLCKSZ content from file, then\n> it\n> > > > just emits a warning \"could not verify checksum in file \"____\" block\n> > > > X: read buffer size X and page size 8192 
differ\" currently but misses\n> > > > to error with \"checksum error occurred\". Only if it can read 8192 and\n> > > > checksum mismatch happens will it error in the end.\n> > >\n> > > I don't think it's a good idea to conflate \"hey, we can't checksum\n> > > this because the size is strange\" with \"hey, the checksum didn't\n> > > match\". Suppose the a file has 1000 full blocks and a partial block.\n> > > All 1000 blocks have good checksums. With your change, ISTM that we'd\n> > > first emit a warning saying that the checksum couldn't be verified,\n> > > and then we'd emit a second warning saying that there was 1 checksum\n> > > verification failure, which would also be reported to the stats\n> > > system. I don't think that's what we want.\n> >\n> > I feel the intent of reporting \"total checksum verification failure\" is\n> to\n> > report corruption. Which way is the secondary piece of the puzzle. Not\n> > being able to read checksum itself to verify is also corruption and is\n> > checksum verification failure I think. WARNINGs will provide fine grained\n> > clarity on what type of checksum verification failure it is, so I am not\n> > sure we really need fine grained clarity in \"total numbers\" to\n> > differentiate these two types.\n>\n> Are we absolutely sure that there's no way for a partial block to end up\n> being seen by pg_basebackup, which is just doing routine filesystem\n> read() calls, during normal operation though..? Across all platforms?\n>\n\nOkay, that's a good point, I didn't think about it. This comment to skip\nverifying checksum, I suppose convinces, can't be sure and hence can't\nreport partial blocks as corruption.\n\n/*\n * Only check pages which have not been modified since the\n * start of the base backup. Otherwise, they might have been\n * written only halfway and the checksum would not be valid.\n * However, replaying WAL would reinstate the correct page in\n * this case. 
We also skip completely new pages, since they\n * don't have a checksum yet.\n */\n\nMight be nice to have a similar comment for the partial block case to\ndocument why we can't report it as corruption. Thanks.", "msg_date": "Thu, 7 May 2020 11:18:48 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup misses to report checksum error" } ]
[ { "msg_contents": "I see that commit 33a94bae605edf3ceda6751916f0b1af3e88630a removed\nsmgrdounlinkfork() because it was dead code. Should we also remove\nsmgrdounlink() now? It also appears to be dead code.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 May 2020 20:03:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Should smgrdounlink() be removed?" }, { "msg_contents": "On Thu, May 7, 2020 at 8:33 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> I see that commit 33a94bae605edf3ceda6751916f0b1af3e88630a removed\n> smgrdounlinkfork() because it was dead code. Should we also remove\n> smgrdounlink() now? It also appears to be dead code.\n>\n\nI could not find any code reference to smgrdounlink, I feel it can be\nremoved.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 May 2020 09:48:35 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should smgrdounlink() be removed?" }, { "msg_contents": "On Thu, May 07, 2020 at 09:48:35AM +0530, vignesh C wrote:\n> I could not find any code reference to smgrdounlink, I feel it can be\n> removed.\n\nThe last use of smgrdounlink() was b416691. I have just looked at\nDebian Code Search and github, and could not find a hit with the\nfunction being used in some custom extension code, so it feels like a\nsafe bet to remove it.\n--\nMichael", "msg_date": "Thu, 7 May 2020 13:49:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Should smgrdounlink() be removed?" 
}, { "msg_contents": "\n\nOn 2020/05/07 13:49, Michael Paquier wrote:\n> On Thu, May 07, 2020 at 09:48:35AM +0530, vignesh C wrote:\n>> I could not find any code reference to smgrdounlink, I feel it can be\n>> removed.\n> \n> The last use of smgrdounlink() was b416691. I have just looked at\n> Debian Code Search and github, and could not find a hit with the\n> function being used in some custom extension code, so it feels like a\n> safe bet to remove it.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 7 May 2020 16:57:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Should smgrdounlink() be removed?" }, { "msg_contents": "On Thu, May 07, 2020 at 04:57:00PM +0900, Fujii Masao wrote:\n> On 2020/05/07 13:49, Michael Paquier wrote:\n>> On Thu, May 07, 2020 at 09:48:35AM +0530, vignesh C wrote:\n>> > I could not find any code reference to smgrdounlink, I feel it can be\n>> > removed.\n>> \n>> The last use of smgrdounlink() was b416691. I have just looked at\n>> Debian Code Search and github, and could not find a hit with the\n>> function being used in some custom extension code, so it feels like a\n>> safe bet to remove it.\n> \n> +1\n\nSo this gives the attached. Any thoughts?\n--\nMichael", "msg_date": "Thu, 7 May 2020 20:33:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Should smgrdounlink() be removed?" }, { "msg_contents": "On Thu, May 7, 2020 at 4:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n> So this gives the attached. Any thoughts?\n\nThat seems fine.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 May 2020 09:18:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Should smgrdounlink() be removed?" 
}, { "msg_contents": "On Thu, May 07, 2020 at 09:18:52AM -0700, Peter Geoghegan wrote:\n> On Thu, May 7, 2020 at 4:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> So this gives the attached. Any thoughts?\n> \n> That seems fine.\n\nThanks for the review. If there are no objections, I would like to\napply that by tomorrow. So please let me know.\n--\nMichael", "msg_date": "Sat, 9 May 2020 13:17:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Should smgrdounlink() be removed?" }, { "msg_contents": "Fine with me.\n\nPeter Geoghegan\n(Sent from my phone)", "msg_date": "Fri, 8 May 2020 21:21:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Should smgrdounlink() be removed?" }, { "msg_contents": "On Fri, May 08, 2020 at 09:21:25PM -0700, Peter Geoghegan wrote:\n> Fine with me.\n\nThanks, Peter. Done.\n--\nMichael", "msg_date": "Sun, 10 May 2020 11:00:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Should smgrdounlink() be removed?" } ]
[ { "msg_contents": "Hi,\n\nI am looking for feedback on the possibility of adding a table expansion\nhook to PostgreSQL (see attached patch). The motivation for this is to\nallow extensions to optimize table expansion. In particular, TimescaleDB\ndoes its own table expansion in order to apply a number of optimizations,\nincluding partition pruning (note that TimescaleDB uses inheritance since\nPostgreSQL 9.6 rather than declarative partitioning ). There's currently no\nofficial hook for table expansion, but TimescaleDB has been using the\nget_relation_info hook for this purpose. Unfortunately, PostgreSQL 12 broke\nthis for us since it moved expansion to a later stage where we can no\nlonger control it without some pretty bad hacks. Given that PostgreSQL 12\nchanged the expansion state of a table for the get_relation_info hook, we\nare thinking about this as a regression and are wondering if this could be\nconsidered against the head of PG 12 or maybe even PG 13 (although we\nrealize feature freeze has been reached)?\n\nThe attached patch is against PostgreSQL master (commit fb544735) and is\nabout ~10 lines of code. It doesn't change any existing behavior; it only\nallows getting control of expand_inherited_rtentry, which would make a huge\ndifference for TimescaleDB.\n\nBest regards,\n\nErik\nEngineering team lead\nTimescale", "msg_date": "Thu, 7 May 2020 10:11:14 +0200", "msg_from": "=?UTF-8?Q?Erik_Nordstr=C3=B6m?= <erik@timescale.com>", "msg_from_op": true, "msg_subject": "Feedback on table expansion hook (including patch)" }, { "msg_contents": "that sounds really really useful.\ni can see a ton of use cases for that.\nwe also toyed with the idea recently of having pluggable FSM strategies.\nthat one could be quite useful as well.\n\n\tregards,\n\n\t\thans\n\n\n> On 07.05.2020, at 10:11, Erik Nordström <erik@timescale.com> wrote:\n> \n> Hi,\n> \n> I am looking for feedback on the possibility of adding a table expansion hook to PostgreSQL (see attached patch). 
The motivation for this is to allow extensions to optimize table expansion. In particular, TimescaleDB does its own table expansion in order to apply a number of optimizations, including partition pruning (note that TimescaleDB uses inheritance since PostgreSQL 9.6 rather than declarative partitioning ). There's currently no official hook for table expansion, but TimescaleDB has been using the get_relation_info hook for this purpose. Unfortunately, PostgreSQL 12 broke this for us since it moved expansion to a later stage where we can no longer control it without some pretty bad hacks. Given that PostgreSQL 12 changed the expansion state of a table for the get_relation_info hook, we are thinking about this as a regression and are wondering if this could be considered against the head of PG 12 or maybe even PG 13 (although we realize feature freeze has been reached)?\n> \n> The attached patch is against PostgreSQL master (commit fb544735) and is about ~10 lines of code. It doesn't change any existing behavior; it only allows getting control of expand_inherited_rtentry, which would make a huge difference for TimescaleDB.\n> \n> Best regards,\n> \n> Erik\n> Engineering team lead\n> Timescale\n> \n> \n> <table-expansion-hook.patch>\n\n--\nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26, A-2700 Wiener Neustadt\nWeb: https://www.cybertec-postgresql.com <https://www.cybertec-postgresql.com/>", "msg_date": "Thu, 7 May 2020 10:26:10 +0200", "msg_from": "=?utf-8?B?IkhhbnMtSsO8cmdlbiBTY2jDtm5pZyAoUG9zdGdyZVNRTCki?=\n <postgres@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "On Thu, 7 May 2020 at 05:11, Erik Nordström <erik@timescale.com> wrote:\n\n>\n> I am looking for feedback on the possibility of adding a table expansion\n> hook to PostgreSQL (see attached patch). 
In particular, TimescaleDB\n> does its own table expansion in order to apply a number of optimizations,\n> including partition pruning (note that TimescaleDB uses inheritance since\n> PostgreSQL 9.6 rather than declarative partitioning ). There's currently no\n> official hook for table expansion, but TimescaleDB has been using the\n> get_relation_info hook for this purpose. Unfortunately, PostgreSQL 12 broke\n> this for us since it moved expansion to a later stage where we can no\n> longer control it without some pretty bad hacks. Given that PostgreSQL 12\n> changed the expansion state of a table for the get_relation_info hook, we\n> are thinking about this as a regression and are wondering if this could be\n> considered against the head of PG 12 or maybe even PG 13 (although we\n> realize feature freeze has been reached)?\n>\n>\nI reviewed your patch and it looks good to me. You mentioned that it would\nbe useful for partitioning using table inheritance but it could also be\nused for declarative partitioning (at least until someone decides to detach\nit from inheritance infrastructure).\n\nUnfortunately, you showed up late here. Even though the hook is a\nstraightforward feature, Postgres does not add new features to released\nversions or after the feature freeze.\n\nThe only point that I noticed was that you chose \"control over\" but similar\ncode uses \"control in\".\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Thu, 7 May 2020 at 05:11, Erik Nordström <erik@timescale.com> wrote:I am looking for feedback on the possibility of adding a table expansion hook to PostgreSQL (see attached patch). The motivation for this is to allow extensions to optimize table expansion. 
In particular, TimescaleDB does its own table expansion in order to apply a number of optimizations, including partition pruning (note that TimescaleDB uses inheritance since PostgreSQL 9.6 rather than declarative partitioning ). There's currently no official hook for table expansion, but TimescaleDB has been using the get_relation_info hook for this purpose. Unfortunately, PostgreSQL 12 broke this for us since it moved expansion to a later stage where we can no longer control it without some pretty bad hacks. Given that PostgreSQL 12 changed the expansion state of a table for the get_relation_info hook, we are thinking about this as a regression and are wondering if this could be considered against the head of PG 12 or maybe even PG 13 (although we realize feature freeze has been reached)?I reviewed your patch and it looks good to me. You mentioned that it would be useful for partitioning using table inheritance but it could also be used for declarative partitioning (at least until someone decides to detach it from inheritance infrastructure).Unfortunately, you showed up late here. Even though the hook is a straightforward feature, Postgres does not add new features to released versions or after the feature freeze.The only point that I noticed was that you chose \"control over\" but similar code uses \"control in\".-- Euler Taveira                 http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 15 Sep 2020 23:14:56 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "Status update for a commitfest entry.\r\n\r\nThis patch implements useful improvement and the reviewer approved the code. It lacks a test, but looking at previously committed hooks, I think it is not mandatory. 
\r\nSo, I move it to RFC.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 02 Nov 2020 15:55:39 +0000", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "On 07.05.20 10:11, Erik Nordström wrote:\n> I am looking for feedback on the possibility of adding a table expansion \n> hook to PostgreSQL (see attached patch). The motivation for this is to \n> allow extensions to optimize table expansion. In particular, TimescaleDB \n> does its own table expansion in order to apply a number of \n> optimizations, including partition pruning (note that TimescaleDB uses \n> inheritance since PostgreSQL 9.6 rather than declarative partitioning ). \n> There's currently no official hook for table expansion, but TimescaleDB \n> has been using the get_relation_info hook for this purpose. \n> Unfortunately, PostgreSQL 12 broke this for us since it moved expansion \n> to a later stage where we can no longer control it without some pretty \n> bad hacks.\n\nUnlike the get_relation_info_hook, your proposed hook would *replace* \nexpand_inherited_rtentry() rather than just tack on additional actions. \nThat seems awfully fragile. 
Could you do with a hook that does \nadditional things rather than replace a whole chunk of built-in code?\n\n\n\n", "msg_date": "Thu, 4 Mar 2021 15:56:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 07.05.20 10:11, Erik Nordström wrote:\n>> I am looking for feedback on the possibility of adding a table expansion \n>> hook to PostgreSQL (see attached patch).\n\n> Unlike the get_relation_info_hook, your proposed hook would *replace* \n> expand_inherited_rtentry() rather than just tack on additional actions. \n> That seems awfully fragile. Could you do with a hook that does \n> additional things rather than replace a whole chunk of built-in code?\n\nI suppose Erik is assuming that he could call expand_inherited_rtentry\n(or better, the previous hook occupant) when his special case doesn't\napply. But I'm suspicious that he'd still end up duplicating large\nchunks of optimizer/util/inherit.c in order to carry out the special\ncase, since almost all of that is private/static functions. It\ndoes seem like a more narrowly-scoped hook might be better.\n\nWould it be unreasonable of us to ask for a worked-out example making\nuse of the proposed hook? That'd go a long way towards resolving the\nquestion of whether you can do anything useful without duplicating\nlots of code.\n\nI've also been wondering, given the table-AM projects that are\ngoing on, whether we shouldn't refactor things to give partitioned\ntables a special access method, and then shove most of the planner\nand executor's hard-wired partitioning logic into access method\ncallbacks. That would make it a lot more feasible for extensions\nto implement custom partitioning-like behavior ... 
or so I guess.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Mar 2021 13:09:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "On Sat, Mar 06, 2021 at 01:09:10PM -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 07.05.20 10:11, Erik Nordstr�m wrote:\n> >> I am looking for feedback on the possibility of adding a table expansion \n> >> hook to PostgreSQL (see attached patch).\n> \n> > Unlike the get_relation_info_hook, your proposed hook would *replace* \n> > expand_inherited_rtentry() rather than just tack on additional actions. \n> > That seems awfully fragile. Could you do with a hook that does \n> > additional things rather than replace a whole chunk of built-in code?\n> \n> I suppose Erik is assuming that he could call expand_inherited_rtentry\n> (or better, the previous hook occupant) when his special case doesn't\n> apply. But I'm suspicious that he'd still end up duplicating large\n> chunks of optimizer/util/inherit.c in order to carry out the special\n> case, since almost all of that is private/static functions. It\n> does seem like a more narrowly-scoped hook might be better.\n> \n> Would it be unreasonable of us to ask for a worked-out example making\n> use of the proposed hook? That'd go a long way towards resolving the\n> question of whether you can do anything useful without duplicating\n> lots of code.\n> \n> I've also been wondering, given the table-AM projects that are\n> going on, whether we shouldn't refactor things to give partitioned\n> tables a special access method, and then shove most of the planner\n> and executor's hard-wired partitioning logic into access method\n> callbacks. That would make it a lot more feasible for extensions\n> to implement custom partitioning-like behavior ... or so I guess.\n\nThat seems pretty reasonable. 
I suspect that that process will expose\nbits of the planning and execution machinery that have gotten less\nisolated than they should be.\n\nMore generally, and I'll start a separate thread on this, we should be\nworking up to including a reference implementation, however tiny, of\nevery extension point we supply in order to ensure that our APIs are\nat a minimum reasonably usable and remain so.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sat, 6 Mar 2021 18:37:59 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "Thank you all for the feedback and insights.\n\nYes, the intention is to *replace* expand_inherited_rtentry() in the same\nway planner_hook replaces standard_planner().\n\nSome background: TimescaleDB implements its own partitioning based on\ninheritance that predates declarative partitioning. The extension would use\nthe table expansion hook to do its own table expansion based on\nextension-specific metadata. There was a pretty clean (but still hacky) way\nto do it via the get_relation_info_hook(), but PostgreSQL 12 changed the\norder of events so that this hook no longer works for this purpose. For\nPostgreSQL 12+, we'd have to copy/replace a lot of PostgreSQL code to make\nour custom expansion still work, and the proposed hook would allow us to\nget rid of this ugliness.\n\nWith the proposed table expansion hook, you could of course also first call\nexpand_inherited_rtentry() yourself and then modify the result or do\nadditional things. However, the way we'd like to use this in TimescaleDB is\nto more-or-less replace the current expansion code since we do not rely on\ndeclarative partitioning. 
I am not sure a more narrowly-scoped hook makes\nsense, because it would tie you to a certain way of doing things. That\nwould defeat the purpose of the hook. Note that expand_inherited_rtenry()\nimmediately branches off based on type of relation: one branch for\ninheritance, one for partitioning, and so on. So, doing this in a more\nnarrow scope would probably require one hook per relation type or at least\na common hook with some extra info on where you are in that expansion.\nAnother way of looking at this is to view TimescaleDB as offering a new\nrelation type for partitioning, so it is natural that it would have its own\nexpansion branch, just like inheritance and partitioning. There are a\ncouple of functions that might be useful to expose publicly, however, like\nexpand_single_inheritance_child() since it is called from both the\ninheritance and the partitioning branch.\n\nLooking at this problem more generally, though, it does seem to me that\nPostgreSQL would benefit from a more general and official\ntable-partitioning API that would allow custom partitioning\nimplementations, or make it part of the table access method API. However,\nwhile that is an interesting thing to explore, I think this table expansion\nhook goes a long way until such an API is available.\n\nI can provide a code example of how we'd use the table expansion hook in\nTimescaleDB. 
A more standalone example probably requires a bit more work\nthough.\n\nBest,\n\nErik\n\nOn Sat, Mar 6, 2021 at 7:38 PM David Fetter <david@fetter.org> wrote:\n\n> On Sat, Mar 06, 2021 at 01:09:10PM -0500, Tom Lane wrote:\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > > On 07.05.20 10:11, Erik Nordström wrote:\n> > >> I am looking for feedback on the possibility of adding a table\n> expansion\n> > >> hook to PostgreSQL (see attached patch).\n> >\n> > > Unlike the get_relation_info_hook, your proposed hook would *replace*\n> > > expand_inherited_rtentry() rather than just tack on additional\n> actions.\n> > > That seems awfully fragile. Could you do with a hook that does\n> > > additional things rather than replace a whole chunk of built-in code?\n> >\n> > I suppose Erik is assuming that he could call expand_inherited_rtentry\n> > (or better, the previous hook occupant) when his special case doesn't\n> > apply. But I'm suspicious that he'd still end up duplicating large\n> > chunks of optimizer/util/inherit.c in order to carry out the special\n> > case, since almost all of that is private/static functions. It\n> > does seem like a more narrowly-scoped hook might be better.\n> >\n> > Would it be unreasonable of us to ask for a worked-out example making\n> > use of the proposed hook? That'd go a long way towards resolving the\n> > question of whether you can do anything useful without duplicating\n> > lots of code.\n> >\n> > I've also been wondering, given the table-AM projects that are\n> > going on, whether we shouldn't refactor things to give partitioned\n> > tables a special access method, and then shove most of the planner\n> > and executor's hard-wired partitioning logic into access method\n> > callbacks. That would make it a lot more feasible for extensions\n> > to implement custom partitioning-like behavior ... or so I guess.\n>\n> That seems pretty reasonable. 
I suspect that that process will expose\n> bits of the planning and execution machinery that have gotten less\n> isolated than they should be.\n>\n> More generally, and I'll start a separate thread on this, we should be\n> working up to including a reference implementation, however tiny, of\n> every extension point we supply in order to ensure that our APIs are\n> at a minimum reasonably usable and remain so.\n>\n> Best,\n> David.\n> --\n> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n> Phone: +1 415 235 3778\n>\n> Remember to vote!\n> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>\n", "msg_date": "Mon, 29 Mar 2021 10:18:20 +0200", "msg_from": "=?UTF-8?Q?Erik_Nordstr=C3=B6m?= <erik@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "Hi Erik,\n\n> Thank you all for the feedback and insights.\n>\n> Yes, the intention is to *replace* expand_inherited_rtentry() in the same way planner_hook replaces standard_planner().\n\nThis patch probably doesn't need yet another reviewer, but since there\nis a little controversy about if the hook should replace a procedure\nor be called after it, I decided to put my two cents in. The proposed\napproach is very flexible - it allows to modify the arguments, the\nresult, to completely replace the procedure, etc. 
I don't think that\ncalling a hook after the procedure was called (or before) will be very\nuseful.\n\nThe patch applies to `master` branch (6d177e28) and passes all the\ntests on MacOS.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 11 May 2021 15:29:45 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "Hello,\n\n> Thank you all for the feedback and insights.\n>\n> Yes, the intention is to *replace* expand_inherited_rtentry() in the same way planner_hook replaces standard_planner().\n>\n\nThis patch is really useful. We are working on developing hypothetical\npartitioning as a feature of HypoPG[1][2], but we hit the same problem\nas TimescaleDB. Therefore we would also be thrilled to have that hook.\n\nHypothetical partitioning allows users to define multiple partitioning\nschemes on real tables and real data hypothetically, and shows resulting\nqueries' plan/cost with EXPLAIN using hypothetical partitioning schemes.\nUsers can quickly check how their queries would behave if some tables\nwere partitioned, and try different partitioning schemes. HypoPG does\ntable expansion again according to the defined hypothetical partitioning\nschemes. For this purpose, we used get_relation_info hook, but in PG12,\ntable expansion was moved, so we cannot do that using\nget_relation_info hook. 
This is exactly the same problem Erik has.\nTherefore the proposed hook would allow us to support hypothetical partitioning.\n\n\n[1] https://github.com/HypoPG/hypopg/tree/REL1_STABLE\n[2] https://github.com/HypoPG/hypopg/tree/master\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 12 May 2021 16:48:29 +0900", "msg_from": "yuzuko <yuzukohosoya@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "On Wed, May 12, 2021 at 04:48:29PM +0900, yuzuko wrote:\n> Hello,\n> \n> > Thank you all for the feedback and insights.\n> >\n> > Yes, the intention is to *replace* expand_inherited_rtentry() in the same way planner_hook replaces standard_planner().\n> >\n> \n> This patch is really useful. We are working on developing hypothetical\n> partitioning as a feature of HypoPG[1][2], but we hit the same problem\n> as TimescaleDB. Therefore we would also be thrilled to have that hook.\n> \n> Hypothetical partitioning allows users to define multiple partitioning\n> schemes on real tables and real data hypothetically, and shows resulting\n> queries' plan/cost with EXPLAIN using hypothetical partitioning schemes.\n> Users can quickly check how their queries would behave if some tables\n> were partitioned, and try different partitioning schemes. HypoPG does\n> table expansion again according to the defined hypothetical partitioning\n> schemes. For this purpose, we used get_relation_info hook, but in PG12,\n> table expansion was moved, so we cannot do that using\n> get_relation_info hook. This is exactly the same problem Erik has.\n> Therefore the proposed hook would allow us to support hypothetical partitioning.\n\nSorry for missing that thread until now. And yes as Hosoya-san just mentioned,\nwe faced the exact same problem when implementing hypothetical partitioning,\nand eventually had to stop as the changes in pg12 prevented it. 
So +1 for\nintroducing such a hook, it would also be useful for that usecase.\n\n\n", "msg_date": "Wed, 12 May 2021 16:01:04 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "(Sorry about being very late to this thread.)\n\nOn Sun, Mar 7, 2021 at 3:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 07.05.20 10:11, Erik Nordström wrote:\n> >> I am looking for feedback on the possibility of adding a table expansion\n> >> hook to PostgreSQL (see attached patch).\n>\n> > Unlike the get_relation_info_hook, your proposed hook would *replace*\n> > expand_inherited_rtentry() rather than just tack on additional actions.\n> > That seems awfully fragile. Could you do with a hook that does\n> > additional things rather than replace a whole chunk of built-in code?\n>\n> I suppose Erik is assuming that he could call expand_inherited_rtentry\n> (or better, the previous hook occupant) when his special case doesn't\n> apply. But I'm suspicious that he'd still end up duplicating large\n> chunks of optimizer/util/inherit.c in order to carry out the special\n> case, since almost all of that is private/static functions. It\n> does seem like a more narrowly-scoped hook might be better.\n\nYeah, I do wonder if all of the things that are now done under\nexpand_inherited_rtentry() are not necessary for Timescale child\nrelations for the queries to work correctly? In 428b260f87, the\ncommit in v12 responsible for this discussion AFAICS, and more\nrecently in 86dc9005, we introduced a bunch of logic in the\nexapnd_inherited_rtentry() path to do with adding junk columns to the\ntop-level query targetlist that was not there earlier. 
So I'd think\nthat expand_inherited_rtentry()'s job used to be much simpler pre-v12\nso that an extension dealing with inheritance child relations could\nmore easily replicate its functionality, but that may not necessarily\nbe true anymore. Granted, a lot of that new logic is to account for\nforeign table children, which perhaps doesn't matter to most\nextensions. But I'd be more careful about the stuff added in\n86dc9005, like add_row_identity_var/columns().\n\n> Would it be unreasonable of us to ask for a worked-out example making\n> use of the proposed hook? That'd go a long way towards resolving the\n> question of whether you can do anything useful without duplicating\n> lots of code.\n>\n> I've also been wondering, given the table-AM projects that are\n> going on, whether we shouldn't refactor things to give partitioned\n> tables a special access method, and then shove most of the planner\n> and executor's hard-wired partitioning logic into access method\n> callbacks. That would make it a lot more feasible for extensions\n> to implement custom partitioning-like behavior ... or so I guess.\n\nInteresting proposition...\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 22:19:17 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" }, { "msg_contents": "On Wed, May 12, 2021 at 10:19:17PM +0900, Amit Langote wrote:\n> (Sorry about being very late to this thread.)\n> \n> > Would it be unreasonable of us to ask for a worked-out example making\n> > use of the proposed hook? 
That'd go a long way towards resolving the\n> > question of whether you can do anything useful without duplicating\n> > lots of code.\n> >\n> > I've also been wondering, given the table-AM projects that are\n> > going on, whether we shouldn't refactor things to give partitioned\n> > tables a special access method, and then shove most of the planner\n> > and executor's hard-wired partitioning logic into access method\n> > callbacks. That would make it a lot more feasible for extensions\n> > to implement custom partitioning-like behavior ... or so I guess.\n> \n> Interesting proposition...\n> \n\nSince there is no clear definition here, we seems to be expecting an\nexample of how the hook will be used and there have been no activity\nsince may.\n\nI suggest we move this to Returned with feedback. Which I'll do in a\ncouple hours.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Thu, 9 Sep 2021 13:26:00 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Feedback on table expansion hook (including patch)" } ]
[ { "msg_contents": "Hi,\n\nThe following description in pg_buffercace is no longer true.\n\nWhen the pg_buffercache view is accessed, internal buffer manager\nlocks are taken for long enough to copy all the buffer state data that\nthe view will display. This ensures that the view produces a\nconsistent set of results, while not blocking normal buffer activity\nlonger than necessary. Nonetheless there could be some impact on\ndatabase performance if this view is read often.\n\nWe changed pg_buffercache_page so that it doesn't take buffer manager\nlocks in commit 6e654546fb6. Therefore from version 10,\npg_buffercache_page has less impact on normal buffer activity less but\nmight not return a consistent snapshot across all buffers instead.\n\nI've attached a patch to fix the documentation.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 7 May 2020 17:52:40 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Fix pg_buffercache document" }, { "msg_contents": "On Thu, May 7, 2020 at 2:23 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> The following description in pg_buffercace is no longer true.\n>\n> When the pg_buffercache view is accessed, internal buffer manager\n> locks are taken for long enough to copy all the buffer state data that\n> the view will display. This ensures that the view produces a\n> consistent set of results, while not blocking normal buffer activity\n> longer than necessary. Nonetheless there could be some impact on\n> database performance if this view is read often.\n>\n> We changed pg_buffercache_page so that it doesn't take buffer manager\n> locks in commit 6e654546fb6. 
Therefore from version 10,\n> pg_buffercache_page has less impact on normal buffer activity less but\n> might not return a consistent snapshot across all buffers instead.\n>\n\n+1.\n\nThere is a typo in the patch (queris/queries). How about if slightly\nreword it as \"Since buffer manager locks are not taken to copy the\nbuffer state data that the view will display, accessing\n<structname>pg_buffercache</structname> view has less impact on normal\nbuffer activity but it doesn't provide a consistent set of results\nacross all buffers. However, we ensure that the information of each\nbuffer is self-consistent.\"?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 May 2020 14:42:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_buffercache document" }, { "msg_contents": "On Thu, 7 May 2020 at 18:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 7, 2020 at 2:23 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Hi,\n> >\n> > The following description in pg_buffercace is no longer true.\n> >\n> > When the pg_buffercache view is accessed, internal buffer manager\n> > locks are taken for long enough to copy all the buffer state data that\n> > the view will display. This ensures that the view produces a\n> > consistent set of results, while not blocking normal buffer activity\n> > longer than necessary. Nonetheless there could be some impact on\n> > database performance if this view is read often.\n> >\n> > We changed pg_buffercache_page so that it doesn't take buffer manager\n> > locks in commit 6e654546fb6. Therefore from version 10,\n> > pg_buffercache_page has less impact on normal buffer activity less but\n> > might not return a consistent snapshot across all buffers instead.\n> >\n>\n> +1.\n>\n> There is a typo in the patch (queris/queries). 
How about if slightly\n> reword it as \"Since buffer manager locks are not taken to copy the\n> buffer state data that the view will display, accessing\n> <structname>pg_buffercache</structname> view has less impact on normal\n> buffer activity but it doesn't provide a consistent set of results\n> across all buffers. However, we ensure that the information of each\n> buffer is self-consistent.\"?\n\nThank you for your idea. Agreed.\n\nAttached the updated version patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 7 May 2020 18:22:42 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix pg_buffercache document" }, { "msg_contents": "On Thu, May 7, 2020 at 2:53 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 7 May 2020 at 18:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, May 7, 2020 at 2:23 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > The following description in pg_buffercace is no longer true.\n> > >\n> > > When the pg_buffercache view is accessed, internal buffer manager\n> > > locks are taken for long enough to copy all the buffer state data that\n> > > the view will display. This ensures that the view produces a\n> > > consistent set of results, while not blocking normal buffer activity\n> > > longer than necessary. Nonetheless there could be some impact on\n> > > database performance if this view is read often.\n> > >\n> > > We changed pg_buffercache_page so that it doesn't take buffer manager\n> > > locks in commit 6e654546fb6. Therefore from version 10,\n> > > pg_buffercache_page has less impact on normal buffer activity less but\n> > > might not return a consistent snapshot across all buffers instead.\n> > >\n> >\n> > +1.\n> >\n> > There is a typo in the patch (queris/queries). 
How about if slightly\n> > reword it as \"Since buffer manager locks are not taken to copy the\n> > buffer state data that the view will display, accessing\n> > <structname>pg_buffercache</structname> view has less impact on normal\n> > buffer activity but it doesn't provide a consistent set of results\n> > across all buffers. However, we ensure that the information of each\n> > buffer is self-consistent.\"?\n>\n> Thank you for your idea. Agreed.\n>\n> Attached the updated version patch.\n>\n\nLGTM. I will commit this tomorrow unless there are more comments.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 May 2020 16:02:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_buffercache document" }, { "msg_contents": "On Thu, May 7, 2020 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 7, 2020 at 2:53 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > >\n> > > There is a typo in the patch (queris/queries). How about if slightly\n> > > reword it as \"Since buffer manager locks are not taken to copy the\n> > > buffer state data that the view will display, accessing\n> > > <structname>pg_buffercache</structname> view has less impact on normal\n> > > buffer activity but it doesn't provide a consistent set of results\n> > > across all buffers. However, we ensure that the information of each\n> > > buffer is self-consistent.\"?\n> >\n> > Thank you for your idea. Agreed.\n> >\n> > Attached the updated version patch.\n> >\n>\n> LGTM. 
I will commit this tomorrow unless there are more comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 May 2020 10:06:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_buffercache document" }, { "msg_contents": "On Fri, 8 May 2020 at 13:36, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 7, 2020 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, May 7, 2020 at 2:53 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > >\n> > > > There is a typo in the patch (queris/queries). How about if slightly\n> > > > reword it as \"Since buffer manager locks are not taken to copy the\n> > > > buffer state data that the view will display, accessing\n> > > > <structname>pg_buffercache</structname> view has less impact on normal\n> > > > buffer activity but it doesn't provide a consistent set of results\n> > > > across all buffers. However, we ensure that the information of each\n> > > > buffer is self-consistent.\"?\n> > >\n> > > Thank you for your idea. Agreed.\n> > >\n> > > Attached the updated version patch.\n> > >\n> >\n> > LGTM. I will commit this tomorrow unless there are more comments.\n> >\n>\n> Pushed.\n>\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 May 2020 14:28:01 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix pg_buffercache document" } ]
[ { "msg_contents": "Hello,\n\nIt is possible to startup an instance with min > max, without the system \ncomplaining:\n\nmrechte=# show min_wal_size ;\n\n2020-05-07 11:12:11.422 CEST [11098] LOG: durée : 0.279 ms\n\n min_wal_size\n\n--------------\n\n 128MB\n\n(1 ligne)\n\n\n\nmrechte=# show max_wal_size ;\n\n2020-05-07 11:12:12.814 CEST [11098] LOG: durée : 0.275 ms\n\n max_wal_size\n\n--------------\n\n 64MB\n\n(1 ligne)\n\n\nThis could be an issue ?\n\n\n", "msg_date": "Thu, 7 May 2020 11:12:56 +0200", "msg_from": "=?UTF-8?Q?Marc_Recht=c3=a9?= <marc4@rechte.fr>", "msg_from_op": true, "msg_subject": "min_wal_size > max_wal_size is accepted" }, { "msg_contents": "Hi,\n\nLe jeu. 7 mai 2020 à 11:13, Marc Rechté <marc4@rechte.fr> a écrit :\n\n> Hello,\n>\n> It is possible to startup an instance with min > max, without the system\n> complaining:\n>\n> mrechte=# show min_wal_size ;\n>\n> 2020-05-07 11:12:11.422 CEST [11098] LOG: durée : 0.279 ms\n>\n> min_wal_size\n>\n> --------------\n>\n> 128MB\n>\n> (1 ligne)\n>\n>\n>\n> mrechte=# show max_wal_size ;\n>\n> 2020-05-07 11:12:12.814 CEST [11098] LOG: durée : 0.275 ms\n>\n> max_wal_size\n>\n> --------------\n>\n> 64MB\n>\n> (1 ligne)\n>\n>\n> This could be an issue ?\n>\n>\nI don't see how this could be an issue. You'll get a checkpoint every time\n64MB have been written before checkpoint_timeout kicked in. And WAL files\nwill be removed if you have more than 128MB of them.\n\nNot the smartest configuration, but not a damaging one either.\n\n\n-- \nGuillaume.", "msg_date": "Thu, 7 May 2020 15:09:07 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": false, "msg_subject": "Re: min_wal_size > max_wal_size is accepted" }, { "msg_contents": "> Hi,\n> \n> Le jeu. 7 mai 2020 à 11:13, Marc Rechté <marc4@rechte.fr \n> <mailto:marc4@rechte.fr>> a écrit :\n> \n> Hello,\n> \n> It is possible to startup an instance with min > max, without the\n> system\n> complaining:\n> \n> mrechte=# show min_wal_size ;\n> \n> 2020-05-07 11:12:11.422 CEST [11098] LOG: durée : 0.279 ms\n> \n> min_wal_size\n> \n> --------------\n> \n> 128MB\n> \n> (1 ligne)\n> \n> \n> \n> mrechte=# show max_wal_size ;\n> \n> 2020-05-07 11:12:12.814 CEST [11098] LOG: durée : 0.275 ms\n> \n> max_wal_size\n> \n> --------------\n> \n> 64MB\n> \n> (1 ligne)\n> \n> \n> This could be an issue ?\n> \n> \n> I don't see how this could be an issue. You'll get a checkpoint every \n> time 64MB have been written before checkpoint_timeout kicked in. 
And WAL \n> files will be removed if you have more than 128MB of them.\n> \n> Not the smartest configuration, but not a damaging one either.\n> \n> \n> -- \n> Guillaume.\n\nI have some doubts when I see such code in \nbackend/access/transam/xlog.c:2334\n\n\n\n\tif (recycleSegNo < minSegNo)\n\n\t\trecycleSegNo = minSegNo;\n\n\tif (recycleSegNo > maxSegNo)\n\n\t\trecycleSegNo = maxSegNo;\n\n\n\n", "msg_date": "Thu, 7 May 2020 15:18:23 +0200", "msg_from": "=?UTF-8?Q?Marc_Recht=c3=a9?= <marc4@rechte.fr>", "msg_from_op": true, "msg_subject": "Re: min_wal_size > max_wal_size is accepted" } ]
[ { "msg_contents": "Hi, all\n\nIt appeared that GIN index sometimes lose results if simultaneously:\n\n\n1 if query operand contains weight marks\n\n2 if weight-marked operand is negated by ! operator\n\n3 if there are only logical (not phrase) operators from this negation\ntowards the root of query tree.\n\n\ne.g. '!crew:A'::tsquery refuse to find 'crew:BCD'::tsvector\n\n\nSeems it is in all versions of PG.\n\n\nThe patch is intended to deal with the issue. Also it contains tests for\nthese rare condition.\n\n\nPavel Borisov.", "msg_date": "Thu, 7 May 2020 17:26:17 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> It appeared that GIN index sometimes lose results if simultaneously:\n> 1 if query operand contains weight marks\n> 2 if weight-marked operand is negated by ! operator\n> 3 if there are only logical (not phrase) operators from this negation\n> towards the root of query tree.\n\nNice catch ... but if you try it with a GIST index, that fails too.\n\nEven if it were only GIN indexes, this patch is an utter hack.\nIt might accidentally work for the specific case of NOT with\na single QI_VAL node as argument, but not for anything more\ncomplicated.\n\nI think the root of the problem is that if we have a query using\nweights, and we are testing tsvector data that lacks positions/weights,\nwe can never say there's definitely a match. I don't see any decently\nclean way to fix this without redefining the TSExecuteCallback API\nto return a tri-state YES/NO/MAYBE result, because really we need to\ndecide that it's MAYBE at the level of processing the QI_VAL node,\nnot later on. 
I'd tried to avoid that in e81e5741a, but maybe we\nshould just bite that bullet, and not worry about whether there's\nany third-party code providing its own TSExecuteCallback routine.\ncodesearch.debian.net suggests that there are no external callers\nof TS_execute, so maybe we can get away with that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 May 2020 17:15:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "I wrote:\n> I think the root of the problem is that if we have a query using\n> weights, and we are testing tsvector data that lacks positions/weights,\n> we can never say there's definitely a match. I don't see any decently\n> clean way to fix this without redefining the TSExecuteCallback API\n> to return a tri-state YES/NO/MAYBE result, because really we need to\n> decide that it's MAYBE at the level of processing the QI_VAL node,\n> not later on. I'd tried to avoid that in e81e5741a, but maybe we\n> should just bite that bullet, and not worry about whether there's\n> any third-party code providing its own TSExecuteCallback routine.\n> codesearch.debian.net suggests that there are no external callers\n> of TS_execute, so maybe we can get away with that.\n\n0001 attached is a proposed patch that does it that way. Given the\nAPI break involved, it's not quite clear what to do with this.\nISTM we have three options:\n\n1. Ignore the API issue and back-patch. Given the apparent lack of\nexternal callers of TS_execute, maybe we can get away with that;\nbut I wonder if we'd get pushback from distros that have automatic\nABI-break detectors in place.\n\n2. Assume we can't backpatch, but it's still OK to slip this into\nv13. (This option clearly has a limited shelf life, but I think\nwe could get away with it until late beta.)\n\n3. 
Assume we'd better hold this till v14.\n\nI find #3 unduly conservative, seeing that this is clearly a bug\nfix, but on the other hand #1 is a bit scary. Aside from the API\nissue, it's not impossible that this has introduced some corner\ncase behavioral changes that we'd consider to be new bugs rather\nthan bug fixes.\n\nAnyway, some notes for reviewers:\n\n* The core idea of the patch is to make the TS_execute callbacks\nhave ternary results and to insist they return TS_MAYBE in any\ncase where the correct result is uncertain.\n\n* That fixes the bug at hand, and it also allows getting rid of\nsome kluges at higher levels. The GIN code no longer needs its\nown TS_execute_ternary implementation, and the GIST code no longer\nneeds to suppose that it can't trust NOT results.\n\n* I put some effort into not leaking memory within tsvector_op.c's\ncheckclass_str and checkcondition_str. (The final output array\ncan still get leaked, I believe. Fixing that seems like material\nfor a different patch, and it might not be worth any trouble.)\n\n* The new test cases in tstypes.sql are to verify that we didn't\nchange behavior of the basic tsvector @@ tsquery code. There wasn't\nany coverage of these cases before, and the logic for checkclass_str\nwithout position info had to be tweaked to preserve this behavior.\n\n* The new cases in tsearch verify that the GIN and GIST code gives\nthe same results as the basic operator.\n\nNow, as for the 0002 patch attached: after 0001, the only TS_execute()\ncallers that are not specifying TS_EXEC_CALC_NOT are hlCover(),\nwhich I'd already complained is probably a bug, and the first of\nthe two calls in tsrank.c's Cover(). 
It seems difficult to me to\nargue that it's not a bug for Cover() to process NOT in one call\nbut not the other --- moreover, if there was any argument for that\nonce upon a time, it probably falls to the ground now that (a) we\nhave a less buggy implementation of NOT and (b) the presence of\nphrase queries significantly raises the importance of not taking\nshort-cuts. Therefore, 0002 attached rips out the TS_EXEC_CALC_NOT\nflag and has TS_execute compute NOT expressions accurately all the\ntime.\n\nAs it stands, 0002 changes no regression test results, which I'm\nafraid speaks more to our crummy test coverage than anything else;\ntests that exercise those two functions with NOT-using queries\nwould easily show that there is a difference.\n\nEven if we decide to back-patch 0001, I would not suggest\nback-patching 0002, as it's more nearly a definitional change\nthan a bug fix. But I think it's a good idea anyway.\n\nI'll stick this in the queue for the July commitfest, in case\nanybody wants to review it.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 16 May 2020 19:14:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "Hi, all!\nBelow is my variant of how to patch the GIN-GiST weights issue:\n1. First of all I propose to shift away from GIN's own previous TS_execute\nvariant and leave only two: TS_execute with a bool result and bool type\ncallback, and ternary TS_execute_recurse with a ternary callback. I suppose\nall legacy consistent callers can still use bool via the provided wrapper.\n2. I integrated the logic for indexes which do not support weights and\npositions (which gives MAYBE in certain cases on negation) into the\nprevious TS_execute_recurse function, called with an additional flag for this\nclass of indexes.\n3. The check function for GIST and GIN now gives a ternary result and is called\nwith a ternary type callback. 
I think in future nothing prevents smoothly\nshifting callback functions, check functions and even TS_execute result to\nternary.\n\nSo I also send my variant patch for review and discussion.\n\nRegards,\nPavel Borisov\n\nвс, 17 мая 2020 г. в 03:14, Tom Lane <tgl@sss.pgh.pa.us>:\n\n> I wrote:\n> > I think the root of the problem is that if we have a query using\n> > weights, and we are testing tsvector data that lacks positions/weights,\n> > we can never say there's definitely a match. I don't see any decently\n> > clean way to fix this without redefining the TSExecuteCallback API\n> > to return a tri-state YES/NO/MAYBE result, because really we need to\n> > decide that it's MAYBE at the level of processing the QI_VAL node,\n> > not later on. I'd tried to avoid that in e81e5741a, but maybe we\n> > should just bite that bullet, and not worry about whether there's\n> > any third-party code providing its own TSExecuteCallback routine.\n> > codesearch.debian.net suggests that there are no external callers\n> > of TS_execute, so maybe we can get away with that.\n>\n> 0001 attached is a proposed patch that does it that way. Given the\n> API break involved, it's not quite clear what to do with this.\n> ISTM we have three options:\n>\n> 1. Ignore the API issue and back-patch. Given the apparent lack of\n> external callers of TS_execute, maybe we can get away with that;\n> but I wonder if we'd get pushback from distros that have automatic\n> ABI-break detectors in place.\n>\n> 2. Assume we can't backpatch, but it's still OK to slip this into\n> v13. (This option clearly has a limited shelf life, but I think\n> we could get away with it until late beta.)\n>\n> 3. Assume we'd better hold this till v14.\n>\n> I find #3 unduly conservative, seeing that this is clearly a bug\n> fix, but on the other hand #1 is a bit scary. 
Aside from the API\n> issue, it's not impossible that this has introduced some corner\n> case behavioral changes that we'd consider to be new bugs rather\n> than bug fixes.\n>\n> Anyway, some notes for reviewers:\n>\n> * The core idea of the patch is to make the TS_execute callbacks\n> have ternary results and to insist they return TS_MAYBE in any\n> case where the correct result is uncertain.\n>\n> * That fixes the bug at hand, and it also allows getting rid of\n> some kluges at higher levels. The GIN code no longer needs its\n> own TS_execute_ternary implementation, and the GIST code no longer\n> needs to suppose that it can't trust NOT results.\n>\n> * I put some effort into not leaking memory within tsvector_op.c's\n> checkclass_str and checkcondition_str. (The final output array\n> can still get leaked, I believe. Fixing that seems like material\n> for a different patch, and it might not be worth any trouble.)\n>\n> * The new test cases in tstypes.sql are to verify that we didn't\n> change behavior of the basic tsvector @@ tsquery code. There wasn't\n> any coverage of these cases before, and the logic for checkclass_str\n> without position info had to be tweaked to preserve this behavior.\n>\n> * The new cases in tsearch verify that the GIN and GIST code gives\n> the same results as the basic operator.\n>\n> Now, as for the 0002 patch attached: after 0001, the only TS_execute()\n> callers that are not specifying TS_EXEC_CALC_NOT are hlCover(),\n> which I'd already complained is probably a bug, and the first of\n> the two calls in tsrank.c's Cover(). It seems difficult to me to\n> argue that it's not a bug for Cover() to process NOT in one call\n> but not the other --- moreover, if there was any argument for that\n> once upon a time, it probably falls to the ground now that (a) we\n> have a less buggy implementation of NOT and (b) the presence of\n> phrase queries significantly raises the importance of not taking\n> short-cuts. 
Therefore, 0002 attached rips out the TS_EXEC_CALC_NOT\n> flag and has TS_execute compute NOT expressions accurately all the\n> time.\n>\n> As it stands, 0002 changes no regression test results, which I'm\n> afraid speaks more to our crummy test coverage than anything else;\n> tests that exercise those two functions with NOT-using queries\n> would easily show that there is a difference.\n>\n> Even if we decide to back-patch 0001, I would not suggest\n> back-patching 0002, as it's more nearly a definitional change\n> than a bug fix. But I think it's a good idea anyway.\n>\n> I'll stick this in the queue for the July commitfest, in case\n> anybody wants to review it.\n>\n> regards, tom lane\n>\n>", "msg_date": "Sun, 17 May 2020 23:53:21 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "1. Really, if it's possible to avoid bool callbacks at all and shift\neverywhere to ternary, it makes the code quite beautiful and even. But I also\nthink we are still not obliged to drop support for (legacy or otherwise)\nbool callbacks and also consistent functions from some old extensions (I\ndon't know for sure whether they exist) which expect the old style bool result\nfrom TS_execute.\n\nIn my patch I used ternary logic from TS_execute_recurse on, which can be\ncalled by \"new\" ternary consistent callers, and leave bool TS_execute, which\nworks as earlier. It also wraps the callback function to allow some\nhypothetical old extension to enjoy the boolean behavior. I am not sure it is\nvery much necessary, but as it is not hard I'd propose to somehow keep this\nfeature by combining the patches.\n\n2. Overall I see two reasons to consider when choosing ternary/boolean\ncalls in TS_execute: speed and compatibility. 
I'd like to make some\nperformance tests for different types of queries (plain without weights,\nand containing weights in some or all operands) to evaluate first of these\neffects in both cases.\n\nThen we'll have reasons to commit a certain type of patch or maybe some\ncombination of them.\n\nBest regards,\nPavel Borisov.\n\nвс, 17 мая 2020 г. в 23:53, Pavel Borisov <pashkin.elfe@gmail.com>:\n\n> Hi, all!\n> Below is my variant how to patch Gin-Gist weights issue:\n> 1. First of all I propose to shift from previously Gin's own TS_execute\n> variant and leave only two: TS_execute with bool result and bool type\n> callback and ternary TS_execute_recurse with ternary callback. I suppose\n> all legacy consistent callers can still use bool via provided wrapper.\n> 2. I integrated logic for indexes which do not support weights and\n> positions inside (which gives MAYBE in certain cases on negation) inside\n> previous TS_execute_recurse function called with additional flag for this\n> class of indexes.\n> 3. Check function for GIST and GIN now gives ternary result and is called\n> with ternary type callback. I think in future nothing prevents smoothly\n> shifting callback functions, check functions and even TS_execute result to\n> ternary.\n>\n> So I also send my variant patch for review and discussion.\n>\n> Regards,\n> Pavel Borisov\n>\n> вс, 17 мая 2020 г. в 03:14, Tom Lane <tgl@sss.pgh.pa.us>:\n>\n>> I wrote:\n>> > I think the root of the problem is that if we have a query using\n>> > weights, and we are testing tsvector data that lacks positions/weights,\n>> > we can never say there's definitely a match. I don't see any decently\n>> > clean way to fix this without redefining the TSExecuteCallback API\n>> > to return a tri-state YES/NO/MAYBE result, because really we need to\n>> > decide that it's MAYBE at the level of processing the QI_VAL node,\n>> > not later on. 
I'd tried to avoid that in e81e5741a, but maybe we\n>> > should just bite that bullet, and not worry about whether there's\n>> > any third-party code providing its own TSExecuteCallback routine.\n>> > codesearch.debian.net suggests that there are no external callers\n>> > of TS_execute, so maybe we can get away with that.\n>>\n>> 0001 attached is a proposed patch that does it that way. Given the\n>> API break involved, it's not quite clear what to do with this.\n>> ISTM we have three options:\n>>\n>> 1. Ignore the API issue and back-patch. Given the apparent lack of\n>> external callers of TS_execute, maybe we can get away with that;\n>> but I wonder if we'd get pushback from distros that have automatic\n>> ABI-break detectors in place.\n>>\n>> 2. Assume we can't backpatch, but it's still OK to slip this into\n>> v13. (This option clearly has a limited shelf life, but I think\n>> we could get away with it until late beta.)\n>>\n>> 3. Assume we'd better hold this till v14.\n>>\n>> I find #3 unduly conservative, seeing that this is clearly a bug\n>> fix, but on the other hand #1 is a bit scary. Aside from the API\n>> issue, it's not impossible that this has introduced some corner\n>> case behavioral changes that we'd consider to be new bugs rather\n>> than bug fixes.\n>>\n>> Anyway, some notes for reviewers:\n>>\n>> * The core idea of the patch is to make the TS_execute callbacks\n>> have ternary results and to insist they return TS_MAYBE in any\n>> case where the correct result is uncertain.\n>>\n>> * That fixes the bug at hand, and it also allows getting rid of\n>> some kluges at higher levels. The GIN code no longer needs its\n>> own TS_execute_ternary implementation, and the GIST code no longer\n>> needs to suppose that it can't trust NOT results.\n>>\n>> * I put some effort into not leaking memory within tsvector_op.c's\n>> checkclass_str and checkcondition_str. (The final output array\n>> can still get leaked, I believe. 
Fixing that seems like material\n>> for a different patch, and it might not be worth any trouble.)\n>>\n>> * The new test cases in tstypes.sql are to verify that we didn't\n>> change behavior of the basic tsvector @@ tsquery code. There wasn't\n>> any coverage of these cases before, and the logic for checkclass_str\n>> without position info had to be tweaked to preserve this behavior.\n>>\n>> * The new cases in tsearch verify that the GIN and GIST code gives\n>> the same results as the basic operator.\n>>\n>> Now, as for the 0002 patch attached: after 0001, the only TS_execute()\n>> callers that are not specifying TS_EXEC_CALC_NOT are hlCover(),\n>> which I'd already complained is probably a bug, and the first of\n>> the two calls in tsrank.c's Cover(). It seems difficult to me to\n>> argue that it's not a bug for Cover() to process NOT in one call\n>> but not the other --- moreover, if there was any argument for that\n>> once upon a time, it probably falls to the ground now that (a) we\n>> have a less buggy implementation of NOT and (b) the presence of\n>> phrase queries significantly raises the importance of not taking\n>> short-cuts. Therefore, 0002 attached rips out the TS_EXEC_CALC_NOT\n>> flag and has TS_execute compute NOT expressions accurately all the\n>> time.\n>>\n>> As it stands, 0002 changes no regression test results, which I'm\n>> afraid speaks more to our crummy test coverage than anything else;\n>> tests that exercise those two functions with NOT-using queries\n>> would easily show that there is a difference.\n>>\n>> Even if we decide to back-patch 0001, I would not suggest\n>> back-patching 0002, as it's more nearly a definitional change\n>> than a bug fix. But I think it's a good idea anyway.\n>>\n>> I'll stick this in the queue for the July commitfest, in case\n>> anybody wants to review it.\n>>\n>> regards, tom lane\n>>\n>>\n\n1. 
Really if it's possible to avoid bool callbacks at all and shift everywhere to ternary it makes code quite beautiful and even. But I also think we are still not obliged to drop support for (legacy or otherwise) bool callbacks and also consistent functions form some old extensions (I don't know for sur, whether they exist) which expect old style bool result from TS_execute. In my patch I used ternary logic from TS_execute_recurse on, which can be called by \"new\" ternary consistent callers and leave bool TS_execute, which works as earlier. It also makes callback function wrapping to allow some hypothetical old extension enjoy binary behavior. I am not sure it is very much necessary but as it is not hard I'd propose somewhat leave this feature by combining patches.2. Overall I see two reasons to consider when choosing ternary/boolean calls in TS_execute: speed and compatibility. I'd like to make some performance tests for different types of queries (plain without weights, and containing weights in some or all operands) to evaluate first of these effects in both cases.Then we'll have reasons to commit a certain type of patch or maybe some combination of them.Best regards,Pavel Borisov. вс, 17 мая 2020 г. в 23:53, Pavel Borisov <pashkin.elfe@gmail.com>:Hi, all!Below is my variant how to patch Gin-Gist weights issue:1. First of all I propose to shift from previously Gin's own TS_execute variant and leave only two: TS_execute with bool result and bool type callback and ternary TS_execute_recurse with ternary callback. I suppose all legacy consistent callers can still use bool via provided wrapper.2. I integrated logic for indexes which do not support weights and positions inside (which gives MAYBE in certain cases on negation) inside previous TS_execute_recurse function called with additional flag for this class of indexes.3. Check function for GIST and GIN now gives ternary result and is called with ternary type callback. 
I think in future nothing prevents smoothly shifting callback functions, check functions and even TS_execute result to ternary.So I also send my variant patch for review and discussion.Regards,Pavel Borisovвс, 17 мая 2020 г. в 03:14, Tom Lane <tgl@sss.pgh.pa.us>:I wrote:\n> I think the root of the problem is that if we have a query using\n> weights, and we are testing tsvector data that lacks positions/weights,\n> we can never say there's definitely a match.  I don't see any decently\n> clean way to fix this without redefining the TSExecuteCallback API\n> to return a tri-state YES/NO/MAYBE result, because really we need to\n> decide that it's MAYBE at the level of processing the QI_VAL node,\n> not later on.  I'd tried to avoid that in e81e5741a, but maybe we\n> should just bite that bullet, and not worry about whether there's\n> any third-party code providing its own TSExecuteCallback routine.\n> codesearch.debian.net suggests that there are no external callers\n> of TS_execute, so maybe we can get away with that.\n\n0001 attached is a proposed patch that does it that way.  Given the\nAPI break involved, it's not quite clear what to do with this.\nISTM we have three options:\n\n1. Ignore the API issue and back-patch.  Given the apparent lack of\nexternal callers of TS_execute, maybe we can get away with that;\nbut I wonder if we'd get pushback from distros that have automatic\nABI-break detectors in place.\n\n2. Assume we can't backpatch, but it's still OK to slip this into\nv13.  (This option clearly has a limited shelf life, but I think\nwe could get away with it until late beta.)\n\n3. Assume we'd better hold this till v14.\n\nI find #3 unduly conservative, seeing that this is clearly a bug\nfix, but on the other hand #1 is a bit scary.  
Aside from the API\nissue, it's not impossible that this has introduced some corner\ncase behavioral changes that we'd consider to be new bugs rather\nthan bug fixes.\n\nAnyway, some notes for reviewers:\n\n* The core idea of the patch is to make the TS_execute callbacks\nhave ternary results and to insist they return TS_MAYBE in any\ncase where the correct result is uncertain.\n\n* That fixes the bug at hand, and it also allows getting rid of\nsome kluges at higher levels.  The GIN code no longer needs its\nown TS_execute_ternary implementation, and the GIST code no longer\nneeds to suppose that it can't trust NOT results.\n\n* I put some effort into not leaking memory within tsvector_op.c's\ncheckclass_str and checkcondition_str.  (The final output array\ncan still get leaked, I believe.  Fixing that seems like material\nfor a different patch, and it might not be worth any trouble.)\n\n* The new test cases in tstypes.sql are to verify that we didn't\nchange behavior of the basic tsvector @@ tsquery code.  There wasn't\nany coverage of these cases before, and the logic for checkclass_str\nwithout position info had to be tweaked to preserve this behavior.\n\n* The new cases in tsearch verify that the GIN and GIST code gives\nthe same results as the basic operator.\n\nNow, as for the 0002 patch attached: after 0001, the only TS_execute()\ncallers that are not specifying TS_EXEC_CALC_NOT are hlCover(),\nwhich I'd already complained is probably a bug, and the first of\nthe two calls in tsrank.c's Cover().  It seems difficult to me to\nargue that it's not a bug for Cover() to process NOT in one call\nbut not the other --- moreover, if there was any argument for that\nonce upon a time, it probably falls to the ground now that (a) we\nhave a less buggy implementation of NOT and (b) the presence of\nphrase queries significantly raises the importance of not taking\nshort-cuts.  
Therefore, 0002 attached rips out the TS_EXEC_CALC_NOT\nflag and has TS_execute compute NOT expressions accurately all the\ntime.\n\nAs it stands, 0002 changes no regression test results, which I'm\nafraid speaks more to our crummy test coverage than anything else;\ntests that exercise those two functions with NOT-using queries\nwould easily show that there is a difference.\n\nEven if we decide to back-patch 0001, I would not suggest\nback-patching 0002, as it's more nearly a definitional change\nthan a bug fix.  But I think it's a good idea anyway.\n\nI'll stick this in the queue for the July commitfest, in case\nanybody wants to review it.\n\n                        regards, tom lane", "msg_date": "Wed, 20 May 2020 18:04:24 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "Hi All!\n1. Generally the difference of my patch in comparison to Tom's patch 0001\nis that I tried to move previous logic of GIN's own TS_execute_ternary() to\nthe general logic of TS_execute_recurse and in case we have index without\npositions to avoid diving into phrase operator replacing (only in this\ncase) in by an AND operator. 
The reason for this I suppose is speed and\nI've done testing of some corner cases like phrase operator with big number\nof OR comparisons inside it.\n\n-----------------------------\nBEFORE ANY PATCH:\n Bitmap Heap Scan on pglist (cost=1715.72..160233.31 rows=114545\nwidth=1234) (actual time=652.294..2719.961 rows=4904 loops=1)\n Recheck Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' | ''index'' |\n''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' | ''think'' | ''slow''\n| ''articl'' | ''knowledg'' | ''join'' | ''need'' | ''experi'' |\n''understand'' | ''free'' | ''say'' | ''comment'' | ''littl'' | ''move'' |\n''function'' | ''new'' | ''never'' | ''general'' | ''get'' | ''java'' |\n''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <->\n''start'''::tsquery)\n Rows Removed by Index Recheck: 108191\n Heap Blocks: exact=73789\n -> Bitmap Index Scan on pglist_fts_idx (cost=0.00..1687.09 rows=114545\nwidth=0) (actual time=636.883..636.883 rows=113095 loops=1)\n Index Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' |\n''index'' | ''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' |\n''think'' | ''slow'' | ''articl'' | ''knowledg'' | ''join'' | ''need'' |\n''experi'' | ''understand'' | ''free'' | ''say'' | ''comment'' | ''littl''\n| ''move'' | ''function'' | ''new'' | ''never'' | ''general'' | ''get'' |\n''java'' | ''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <->\n''start'''::tsquery)\n Planning Time: 3.016 ms\n Execution Time: *2721.002 ms*\n-------------------------------\nAFTER TOM's PATCH (0001)\nBitmap Heap Scan on pglist (cost=1715.72..160233.31 rows=114545\nwidth=1234) (actual time=916.640..2960.571 rows=4904 loops=1)\n Recheck Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' | ''index'' |\n''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' | ''think'' | ''slow''\n| ''articl'' | ''knowledg'' | ''join'' | ''need'' | ''experi'' |\n''understand'' | ''free'' | ''say'' | ''comment'' | ''littl'' | ''move'' |\n''function'' | ''new'' | 
''never'' | ''general'' | ''get'' | ''java'' |\n''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <->\n''start'''::tsquery)\n Rows Removed by Index Recheck: 108191\n Heap Blocks: exact=73789\n -> Bitmap Index Scan on pglist_fts_idx (cost=0.00..1687.09 rows=114545\nwidth=0) (actual time=900.472..900.472 rows=113095 loops=1)\n Index Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' |\n''index'' | ''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' |\n''think'' | ''slow'' | ''articl'' | ''knowledg'' | ''join'' | ''need'' |\n''experi'' | ''understand'' | ''free'' | ''say'' | ''comment'' | ''littl''\n| ''move'' | ''function'' | ''new'' | ''never'' | ''general'' | ''get'' |\n''java'' | ''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <->\n''start'''::tsquery)\n Planning Time: 2.688 ms\n Execution Time: *2961.704 ms*\n----------------------------\nAFTER MY PATCH (gin-gist-weight-patch-v3)\nBitmap Heap Scan on pglist (cost=1715.72..160233.31 rows=114545\nwidth=1234) (actual time=616.982..2710.571 rows=4904 loops=1)\n Recheck Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' | ''index'' |\n''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' | ''think'' | ''slow''\n| ''articl'' | ''knowledg'' | ''join'' | ''need'' | ''experi'' |\n''understand'' | ''free'' | ''say'' | ''comment'' | ''littl'' | ''move'' |\n''function'' | ''new'' | ''never'' | ''general'' | ''get'' | ''java'' |\n''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <->\n''start'''::tsquery)\n Rows Removed by Index Recheck: 108191\n Heap Blocks: exact=73789\n -> Bitmap Index Scan on pglist_fts_idx (cost=0.00..1687.09 rows=114545\nwidth=0) (actual time=601.586..601.586 rows=113095 loops=1)\n Index Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' |\n''index'' | ''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' |\n''think'' | ''slow'' | ''articl'' | ''knowledg'' | ''join'' | ''need'' |\n''experi'' | ''understand'' | ''free'' | ''say'' | ''comment'' | ''littl''\n| ''move'' | ''function'' | 
''new'' | ''never'' | ''general'' | ''get'' |\n''java'' | ''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <->\n''start'''::tsquery)\n Planning Time: 3.115 ms\n Execution Time: *2711.533 ms*\n\nI've done the test several times and it seems the difference is a real effect,\nthough not very big (around 7%). So maybe there is some reason to keep the\nPHRASE_AS_AND behavior for GIN-GiST indexes despite the migration from GIN's\nown TS_execute_ternary() to the general TS_execute_recurse.\n\n2. As for shifting from bool to ternary callbacks, I am not quite sure\nwhether it can be useful to save bool callbacks via a bool-ternary wrapper.\nWe can include this for compatibility with old callers, or we can drop it. Any\nideas?\n\n3. As for patch 0002 which removes the TS_EXEC_CALC_NOT flag, I'd like to note\nthat indexes which are written as extensions, like the RUM index (\nhttps://github.com/postgrespro/rum), use this flag, as the default behavior of\nTS_execute was NOT to apply TS_EXEC_CALC_NOT. If we'd like to change this\ndefault it can break the callers. At least I propose to\nleave the TS_EXEC_CALC_NOT definition in ts_utils.h, but in general I'd like to\npreserve the default behaviour of TS_execute and not apply patch 0002. Maybe it is\nonly worth leaving a notice in the code comments that TS_EXEC_CALC_NOT is left\nfor compatibility reasons etc.\n\nI'd appreciate any ideas and review of all the aforementioned patches.\n\nBest regards,\nPavel Borisov.\n\nWed, 20 May 2020 at 18:04, Pavel Borisov <pashkin.elfe@gmail.com>:\n\n> 1. Really if it's possible to avoid bool callbacks at all and shift\n> everywhere to ternary it makes code quite beautiful and even. 
But I also\n> think we are still not obliged to drop support for (legacy or otherwise)\n> bool callbacks and also consistent functions form some old extensions (I\n> don't know for sur, whether they exist) which expect old style bool result\n> from TS_execute.\n>\n> In my patch I used ternary logic from TS_execute_recurse on, which can be\n> called by \"new\" ternary consistent callers and leave bool TS_execute, which\n> works as earlier. It also makes callback function wrapping to allow some\n> hypothetical old extension enjoy binary behavior. I am not sure it is very\n> much necessary but as it is not hard I'd propose somewhat leave this\n> feature by combining patches.\n>\n> 2. Overall I see two reasons to consider when choosing ternary/boolean\n> calls in TS_execute: speed and compatibility. I'd like to make some\n> performance tests for different types of queries (plain without weights,\n> and containing weights in some or all operands) to evaluate first of these\n> effects in both cases.\n>\n> Then we'll have reasons to commit a certain type of patch or maybe some\n> combination of them.\n>\n> Best regards,\n> Pavel Borisov.\n>\n> вс, 17 мая 2020 г. в 23:53, Pavel Borisov <pashkin.elfe@gmail.com>:\n>\n>> Hi, all!\n>> Below is my variant how to patch Gin-Gist weights issue:\n>> 1. First of all I propose to shift from previously Gin's own TS_execute\n>> variant and leave only two: TS_execute with bool result and bool type\n>> callback and ternary TS_execute_recurse with ternary callback. I suppose\n>> all legacy consistent callers can still use bool via provided wrapper.\n>> 2. I integrated logic for indexes which do not support weights and\n>> positions inside (which gives MAYBE in certain cases on negation) inside\n>> previous TS_execute_recurse function called with additional flag for this\n>> class of indexes.\n>> 3. Check function for GIST and GIN now gives ternary result and is called\n>> with ternary type callback. 
I think in future nothing prevents smoothly\n>> shifting callback functions, check functions and even TS_execute result to\n>> ternary.\n>>\n>> So I also send my variant patch for review and discussion.\n>>\n>> Regards,\n>> Pavel Borisov\n>>\n>> вс, 17 мая 2020 г. в 03:14, Tom Lane <tgl@sss.pgh.pa.us>:\n>>\n>>> I wrote:\n>>> > I think the root of the problem is that if we have a query using\n>>> > weights, and we are testing tsvector data that lacks positions/weights,\n>>> > we can never say there's definitely a match. I don't see any decently\n>>> > clean way to fix this without redefining the TSExecuteCallback API\n>>> > to return a tri-state YES/NO/MAYBE result, because really we need to\n>>> > decide that it's MAYBE at the level of processing the QI_VAL node,\n>>> > not later on. I'd tried to avoid that in e81e5741a, but maybe we\n>>> > should just bite that bullet, and not worry about whether there's\n>>> > any third-party code providing its own TSExecuteCallback routine.\n>>> > codesearch.debian.net suggests that there are no external callers\n>>> > of TS_execute, so maybe we can get away with that.\n>>>\n>>> 0001 attached is a proposed patch that does it that way. Given the\n>>> API break involved, it's not quite clear what to do with this.\n>>> ISTM we have three options:\n>>>\n>>> 1. Ignore the API issue and back-patch. Given the apparent lack of\n>>> external callers of TS_execute, maybe we can get away with that;\n>>> but I wonder if we'd get pushback from distros that have automatic\n>>> ABI-break detectors in place.\n>>>\n>>> 2. Assume we can't backpatch, but it's still OK to slip this into\n>>> v13. (This option clearly has a limited shelf life, but I think\n>>> we could get away with it until late beta.)\n>>>\n>>> 3. Assume we'd better hold this till v14.\n>>>\n>>> I find #3 unduly conservative, seeing that this is clearly a bug\n>>> fix, but on the other hand #1 is a bit scary. 
Aside from the API\n>>> issue, it's not impossible that this has introduced some corner\n>>> case behavioral changes that we'd consider to be new bugs rather\n>>> than bug fixes.\n>>>\n>>> Anyway, some notes for reviewers:\n>>>\n>>> * The core idea of the patch is to make the TS_execute callbacks\n>>> have ternary results and to insist they return TS_MAYBE in any\n>>> case where the correct result is uncertain.\n>>>\n>>> * That fixes the bug at hand, and it also allows getting rid of\n>>> some kluges at higher levels. The GIN code no longer needs its\n>>> own TS_execute_ternary implementation, and the GIST code no longer\n>>> needs to suppose that it can't trust NOT results.\n>>>\n>>> * I put some effort into not leaking memory within tsvector_op.c's\n>>> checkclass_str and checkcondition_str. (The final output array\n>>> can still get leaked, I believe. Fixing that seems like material\n>>> for a different patch, and it might not be worth any trouble.)\n>>>\n>>> * The new test cases in tstypes.sql are to verify that we didn't\n>>> change behavior of the basic tsvector @@ tsquery code. There wasn't\n>>> any coverage of these cases before, and the logic for checkclass_str\n>>> without position info had to be tweaked to preserve this behavior.\n>>>\n>>> * The new cases in tsearch verify that the GIN and GIST code gives\n>>> the same results as the basic operator.\n>>>\n>>> Now, as for the 0002 patch attached: after 0001, the only TS_execute()\n>>> callers that are not specifying TS_EXEC_CALC_NOT are hlCover(),\n>>> which I'd already complained is probably a bug, and the first of\n>>> the two calls in tsrank.c's Cover(). 
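[Editorial illustration] The "return TS_MAYBE at the QI_VAL level" idea from the notes above can be sketched like this. A hypothetical helper only, not the real checkcondition_* code; the tsvector is modelled as a bare set of lexemes, as after strip().

```python
# Sketch of the per-item decision: once positions/weights have been
# stripped from a tsvector, a query item that tests weights cannot be
# resolved definitely and must report MAYBE.  Names are invented.
TS_NO, TS_MAYBE, TS_YES = 0, 1, 2

def check_item(stripped_lexemes, query_item):
    lexeme, _, weights = query_item.partition(':')
    if lexeme not in stripped_lexemes:
        return TS_NO       # lexeme absent: a definite non-match
    if weights:
        return TS_MAYBE    # weight test, but the weights are gone
    return TS_YES          # plain presence test: a definite match
```

Reporting TS_MAYBE here, rather than a bogus YES or NO, is what lets the corrected evaluator treat !wd:A over stripped data as "possible match, recheck" instead of silently dropping rows.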
It seems difficult to me to\n>>> argue that it's not a bug for Cover() to process NOT in one call\n>>> but not the other --- moreover, if there was any argument for that\n>>> once upon a time, it probably falls to the ground now that (a) we\n>>> have a less buggy implementation of NOT and (b) the presence of\n>>> phrase queries significantly raises the importance of not taking\n>>> short-cuts. Therefore, 0002 attached rips out the TS_EXEC_CALC_NOT\n>>> flag and has TS_execute compute NOT expressions accurately all the\n>>> time.\n>>>\n>>> As it stands, 0002 changes no regression test results, which I'm\n>>> afraid speaks more to our crummy test coverage than anything else;\n>>> tests that exercise those two functions with NOT-using queries\n>>> would easily show that there is a difference.\n>>>\n>>> Even if we decide to back-patch 0001, I would not suggest\n>>> back-patching 0002, as it's more nearly a definitional change\n>>> than a bug fix. But I think it's a good idea anyway.\n>>>\n>>> I'll stick this in the queue for the July commitfest, in case\n>>> anybody wants to review it.\n>>>\n>>> regards, tom lane\n>>>\n>>>\n\nHi All!\n\n1. Generally the difference of my patch in comparison to Tom's patch 0001 is that I tried to move previous logic of GIN's own TS_execute_ternary() to the general logic of TS_execute_recurse and, in case we have an index without positions, to avoid diving into the phrase operator, replacing it (only in this case) by an AND operator. 
The reason for this I suppose is speed, and I've done testing of some corner cases like a phrase operator with a big number of OR comparisons inside it.\n\n-----------------------------\nBEFORE ANY PATCH:\n Bitmap Heap Scan on pglist  (cost=1715.72..160233.31 rows=114545 width=1234) (actual time=652.294..2719.961 rows=4904 loops=1)\n   Recheck Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' | ''index'' | ''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' | ''think'' | ''slow'' | ''articl'' | ''knowledg'' | ''join'' | ''need'' | ''experi'' | ''understand'' | ''free'' | ''say'' | ''comment'' | ''littl'' | ''move'' | ''function'' | ''new'' | ''never'' | ''general'' | ''get'' | ''java'' | ''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <-> ''start'''::tsquery)\n   Rows Removed by Index Recheck: 108191\n   Heap Blocks: exact=73789\n   ->  Bitmap Index Scan on pglist_fts_idx  (cost=0.00..1687.09 rows=114545 width=0) (actual time=636.883..636.883 rows=113095 loops=1)\n         Index Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' | ''index'' | ''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' | ''think'' | ''slow'' | ''articl'' | ''knowledg'' | ''join'' | ''need'' | ''experi'' | ''understand'' | ''free'' | ''say'' | ''comment'' | ''littl'' | ''move'' | ''function'' | ''new'' | ''never'' | ''general'' | ''get'' | ''java'' | ''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <-> ''start'''::tsquery)\n Planning Time: 3.016 ms\n Execution Time: 2721.002 ms\n\n-------------------------------\nAFTER TOM's PATCH (0001)\n Bitmap Heap Scan on pglist  (cost=1715.72..160233.31 rows=114545 width=1234) (actual time=916.640..2960.571 rows=4904 loops=1)\n   Recheck Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' | ''index'' | ''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' | ''think'' | ''slow'' | ''articl'' | ''knowledg'' | ''join'' | ''need'' | ''experi'' | ''understand'' | ''free'' | ''say'' | ''comment'' | ''littl'' | ''move'' | ''function'' | ''new'' | ''never'' | ''general'' | ''get'' | ''java'' | ''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <-> ''start'''::tsquery)\n   Rows Removed by Index Recheck: 108191\n   Heap Blocks: exact=73789\n   ->  Bitmap Index Scan on pglist_fts_idx  (cost=0.00..1687.09 rows=114545 width=0) (actual time=900.472..900.472 rows=113095 loops=1)\n         Index Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' | ''index'' | ''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' | ''think'' | ''slow'' | ''articl'' | ''knowledg'' | ''join'' | ''need'' | ''experi'' | ''understand'' | ''free'' | ''say'' | ''comment'' | ''littl'' | ''move'' | ''function'' | ''new'' | ''never'' | ''general'' | ''get'' | ''java'' | ''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <-> ''start'''::tsquery)\n Planning Time: 2.688 ms\n Execution Time: 2961.704 ms\n\n----------------------------\nAFTER MY PATCH (gin-gist-weight-patch-v3)\n Bitmap Heap Scan on pglist  (cost=1715.72..160233.31 rows=114545 width=1234) (actual time=616.982..2710.571 rows=4904 loops=1)\n   Recheck Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' | ''index'' | ''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' | ''think'' | ''slow'' | ''articl'' | ''knowledg'' | ''join'' | ''need'' | ''experi'' | ''understand'' | ''free'' | ''say'' | ''comment'' | ''littl'' | ''move'' | ''function'' | ''new'' | ''never'' | ''general'' | ''get'' | ''java'' | ''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <-> ''start'''::tsquery)\n   Rows Removed by Index Recheck: 108191\n   Heap Blocks: exact=73789\n   ->  Bitmap Index Scan on pglist_fts_idx  (cost=0.00..1687.09 rows=114545 width=0) (actual time=601.586..601.586 rows=113095 loops=1)\n         Index Cond: (fts @@ '( ''worth'' | ''good'' | ''result'' | ''index'' | ''anoth'' | ''know'' | ''like'' | ''tool'' | ''job'' | ''think'' | ''slow'' | ''articl'' | ''knowledg'' | ''join'' | ''need'' | ''experi'' | ''understand'' | ''free'' | ''say'' | ''comment'' | ''littl'' | ''move'' | ''function'' | ''new'' | ''never'' | ''general'' | ''get'' | ''java'' | ''postgresql'' | ''notic'' | ''recent'' | ''serious'' ) <-> ''start'''::tsquery)\n Planning Time: 3.115 ms\n Execution Time: 2711.533 ms\n\nI've done the test several times and it seems that the difference is a real effect, though not very big (around 7%). So maybe there is some reason to save PHRASE_AS_AND behavior for GIN-GIST indexes despite migration from GIN's own TS_execute_ternary() to general TS_execute_recurse.\n\n2. As for shifting from bool to ternary callback I am not quite sure whether it can be useful to save bool callbacks via a bool-ternary wrapper. We can include this for compatibility with old callers or can drop it. Any ideas?\n\n3. As for patch 0002 which removes the TS_EXEC_CALC_NOT flag I'd like to note that indexes which are written as extensions, like the RUM index (https://github.com/postgrespro/rum), use this flag, as the default behavior of TS_execute was NOT doing TS_EXEC_CALC_NOT. If we'd like to change this default it can break the callers. At least I propose to leave the TS_EXEC_CALC_NOT definition in ts_utils.h, but in general I'd like to save the default behaviour of TS_execute and not apply patch 0002. Maybe it is only worth leaving a notice in the comments in code that TS_EXEC_CALC_NOT is left for compatibility reasons etc.\n\nI'd appreciate any ideas and review of all aforementioned patches.\n\nBest regards,\nPavel Borisov.", "msg_date": "Thu, 21 May 2020 18:25:46 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> Below is my variant how to patch Gin-Gist weights issue:\n\nI looked at this patch, but I'm unimpressed, because it's buggy.\nYou would have noticed if you'd included the test cases I wrote:\n\n--- /home/postgres/pgsql/src/test/regress/expected/tsearch.out 2020-07-01 14:58\n:56.637627628 -0400\n+++ /home/postgres/pgsql/src/test/regress/results/tsearch.out 2020-07-01 14:59\n:10.996990037 -0400\n@@ -1008,13 +1008,13 @@\n SELECT count(*) FROM test_tsvector WHERE a @@ '!wd:A';\n count \n -------\n- 452\n+ 2\n (1 row)\n \n SELECT count(*) FROM test_tsvector WHERE a @@ '!wd:D';\n count \n -------\n- 450\n+ 0\n (1 row)\n \n -- Test optimization of non-empty GIN_SEARCH_MODE_ALL queries\n\n\n\nIn general, I'm not very convinced by your arguments about preserving the\noption for external TS_execute callers to still use bool flags/results.\nGiven what we've seen so far, it seems almost certain that any such code\nis buggy and needs to be rewritten anyway.  Converting to ternary logic\nis far more likely to produce non-buggy code than if we continue to\ntry to put band-aids on the wounds.\n\nAlso, at this point I feel like it's a bit late to consider putting\nanything API-breaking in v13.  But if this is a HEAD-only patch then\nthe argument for preserving API is even weaker.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 01 Jul 2020 15:16:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "ср, 1 июл. 2020 г. 
в 23:16, Tom Lane <tgl@sss.pgh.pa.us>:\n\n> Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> > Below is my variant how to patch Gin-Gist weights issue:\n>\n> I looked at this patch, but I'm unimpressed, because it's buggy.\n>\n\nThank you, I'd noticed and made minor corrections in the patch. Now it\nshould work\ncorrectly.\n\nAs for preserving the option to use legacy bool-style calls, personally I\nsee much\nvalue of not changing API ad hoc to fix something. This may not harm\nvanilla releases\nbut can break many possible side things like RUM index etc which I think\nare abundant\naround there. Furthermore if we leave legacy bool callback along with\nnewly proposed and\nrecommended for further use it will cost nothing.\n\nSo I've attached a corrected patch. Also I wrote some comments to the code\nand added\nyour test as a part of a patch. Again thank you for sharing your thoughts\nand advice.\n\nAs always I'd appreciate everyone's opinion on the bugfix.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 2 Jul 2020 15:23:13 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "Hello,\n\nOn Thu, Jul 2, 2020 at 8:23 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> ср, 1 июл. 2020 г. в 23:16, Tom Lane <tgl@sss.pgh.pa.us>:\n>>\n>> Pavel Borisov <pashkin.elfe@gmail.com> writes:\n>> > Below is my variant how to patch Gin-Gist weights issue:\n>>\n>> I looked at this patch, but I'm unimpressed, because it's buggy.\n>\n>\n> Thank you, i'd noticed and made minor corrections in the patch. Now it should work\n> correctly,\n>\n> As for preserving the option to use legacy bool-style calls, personally I see much\n> value of not changing API ad hoc to fix something. 
This may not harm vanilla reseases\n> but can break many possible side things like RUM index etc which I think are abundant\n> around there. Furthermore if we leave legacy bool callback along with newly proposed and\n> recommended for further use it will cost nothing.\n>\n> So I've attached a corrected patch. Also I wrote some comments to the code and added\n> your test as a part of apatch. Again thank you for sharing your thoughts and advice.\n>\n> As always I'd appreciate everyone's opinion on the bugfix.\n\nI haven't looked at any of the patches carefully yet. But I tried both of them.\n\nI tried Tom's patch. To compile the RUM extension I've made a few\nchanges to use new\nTS_execute(). Speaking about backward compatibility, I also think that\nit is not so important\nhere. And RUM already has a number of \"#if PG_VERSION_NUM\" directives. API breaks\nfrom time to time and it seems inevitable.\n\nI also tried \"gin-gist-weight-patch-v4.diff\". And it didn't require\nchanges into RUM. But as\nTom said above TS_execute() is broken already. Here is the example with\n\"gin-gist-weight-patch-v4.diff\" and RUM:\n\n=# create extension rum;\n=# create table test (a tsvector);\n=# insert into test values ('wd:1A wr:2D'), ('wd:1A wr:2D');\n=# create index on test using rum (a);\n=# select a from test where a @@ '!wd:D';\n       a\n----------------\n 'wd':1A 'wr':2\n 'wd':1A 'wr':2\n(2 rows)\n=# set enable_seqscan to off;\n=# select a from test where a @@ '!wd:D';\n a\n---\n(0 rows)\n\nSo it seems we are losing some results with RUM as well.\n\n--\nArtur", "msg_date": "Fri, 3 Jul 2020 00:38:40 +0900", "msg_from": "Artur Zakirov <zaartur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "чт, 2 июл. 2020 г. в 19:38, Artur Zakirov <zaartur@gmail.com>:\n\n> Hello,\n>\n> On Thu, Jul 2, 2020 at 8:23 PM Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> >\n> > ср, 1 июл. 2020 г. 
в 23:16, Tom Lane <tgl@sss.pgh.pa.us>:\n> >>\n> >> Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> >> > Below is my variant how to patch Gin-Gist weights issue:\n> >>\n> >> I looked at this patch, but I'm unimpressed, because it's buggy.\n> >\n> >\n> > Thank you, i'd noticed and made minor corrections in the patch. Now it\n> should work\n> > correctly,\n> >\n> > As for preserving the option to use legacy bool-style calls, personally\n> I see much\n> > value of not changing API ad hoc to fix something. This may not harm\n> vanilla reseases\n> > but can break many possible side things like RUM index etc which I think\n> are abundant\n> > around there. Furthermore if we leave legacy bool callback along with\n> newly proposed and\n> > recommended for further use it will cost nothing.\n> >\n> > So I've attached a corrected patch. Also I wrote some comments to the\n> code and added\n> > your test as a part of apatch. Again thank you for sharing your thoughts\n> and advice.\n> >\n> > As always I'd appreciate everyone's opinion on the bugfix.\n>\n> I haven't looked at any of the patches carefully yet. But I tried both of\n> them.\n>\n> I tried Tom's patch. To compile the RUM extension I've made few\n> changes to use new\n> TS_execute(). Speaking about backward compatibility. I also think that\n> it is not so important\n> here. And RUM alreadyhas a number of \"#if PG_VERSION_NUM\" directives. API\n> breaks\n> from time to time and it seems inevitable.\n>\n> I also tried \"gin-gist-weight-patch-v4.diff\". And it didn't require\n> changes into RUM. But as\n> Tom said above TS_execute() is broken already. 
Here is the example with\n> \"gin-gist-weight-patch-v4.diff\" and RUM:\n>\n> =# create extension rum;\n> =# create table test (a tsvector);\n> =# insert into test values ('wd:1A wr:2D'), ('wd:1A wr:2D');\n> =# create index on test using rum (a);\n> =# select a from test where a @@ '!wd:D';\n>        a\n> ----------------\n> 'wd':1A 'wr':2\n> 'wd':1A 'wr':2\n> (2 rows)\n> =# set enable_seqscan to off;\n> =# select a from test where a @@ '!wd:D';\n> a\n> ---\n> (0 rows)\n>\n> So it seems we are losing some results with RUM as well.\n>\n> --\n> Artur\n>\nFor me it is 100% predictable that unmodified RUM is still losing results\nas it is still using binary callback.\nThe main my goal of saving binary legacy callback is that side callers like\nRUM will not break immediately but remain in\nexisting state (i.e. losing results in some queries). To fix the issue\ncompletely it is needed to make ternary logic in\nPostgres Tsearch AND engage this ternary logic in RUM and other side\nmodules.\n\nThank you for your consideration!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 2 Jul 2020 20:23:02 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> чт, 2 июл. 2020 г. в 19:38, Artur Zakirov <zaartur@gmail.com>:\n>> So it seems we are losing some results with RUM as well.\n\n> For me it is 100% predictable that unmodified RUM is still losing results\n> as it is still using binary callback.\n\nRight, that's in line with what I expected as well.\n\n> The main my goal of saving binary legacy callback is that side callers like\n> RUM will not break immediately but remain in\n> existing state (i.e. losing results in some queries).\n\nI don't really see why that should be a goal here.  I think a forced\ncompile error, calling attention to the fact that there's something\nto fix, is a good thing as long as we do it in a major release.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 02 Jul 2020 12:34:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi, all!\r\n\r\nIt seems that as of now we have two sets of patches for this bug:\r\n1. Tom Lane's: 0001-make-callbacks-ternary.patch and 0002-remove-calc-not-flag.patch\r\n2. 
My: gin-gist-weight-patch-v4.diff\r\n\r\nThere was a quite long discussion above and I suppose that despite the difference both of them suit and will do the necessary fix. \r\nSo I decided to make a review of both Tom Lane's patches.\r\n\r\nBoth of them apply cleanly. Checks are successful. There are regression tests included and they cover the bug. Also I made checks on my PgList database and I suppose the bug is indeed fixed.\r\n\r\nFor 0001-make-callbacks-ternary.patch\r\nAs it was mentioned in discussion, the issue was that in certain cases the compare function of a single operand in a query should give the undefined meaning \"MAYBE\" which should propagate towards the root of a tree. So the patch in my opinion addresses the problem in a right way.\r\n\r\nA possible danger of the changed callback from binary to ternary is that any side modules which still use the binary interface will get warnings on compile and will need minor modifications of code to comply with the new interface. I checked it with RUM index and indeed get warnings on compile. In discussion above it was noted that anyway there is no way to get right results in tsearch with NOT without modification of this, so I'd recommend committing patch 0001.\r\n\r\nFor 0002-remove-calc-not-flag.patch\r\nThe patch changes the behavior which is now considered default. This is true in RUM module and maybe in some other tsearch side modules. 
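[Editorial illustration] The compatibility concern above — a binary-interface module keeps working only after small changes, and can never report MAYBE — can be shown with a toy wrapper. Invented names; this is a sketch, not the RUM or core code.

```python
# Toy illustration: adapting an old-style boolean callback to the new
# ternary convention.  The wrapper works, but it can only ever answer a
# definite YES or NO -- it has no way to say MAYBE, which is exactly
# why an unconverted module can still lose results on weight queries.
TS_NO, TS_MAYBE, TS_YES = 0, 1, 2

def wrap_bool_callback(bool_chkcond):
    def ternary_chkcond(query_item):
        return TS_YES if bool_chkcond(query_item) else TS_NO
    return ternary_chkcond
```

For a weight-qualified item like 'wd:A' over stripped data, the wrapped callback is forced to pick NO or YES, so the uncertainty is lost before TS_execute ever sees it.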
Applying the patch can make the code more beautiful but possibly will not give some performance gain, and the bug is anyway fixed by patch 0001.\r\n\r\nOverall I'd recommend patch 0001-make-callbacks-ternary.patch and close the issue.\r\n\r\n--\r\nBest regards,\r\nPavel Borisov\r\n\r\nPostgres Professional: http://postgrespro.com\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 13 Jul 2020 15:32:24 +0000", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> For 0002-remove-calc-not-flag.patch\n> The patch changes the behavior which is now considered default. This is true in RUM module and maybe in some other tsearch side modules. Applying the patch can make code more beautiful but possibly will not give some performance gain and bug is anyway fixed by patch 0001.\n\nI'd be willing to compromise on just adding TS_EXEC_CALC_NOT to the\ncalls that are missing it today.  But I don't see why that's really\na great idea --- it still leaves a risk-of-omission hazard for future\ncallers.  Calculating NOTs correctly really ought to be the default\nbehavior.\n\nWhat do you think of replacing TS_EXEC_CALC_NOT with a different\nflag having the opposite sense, maybe called TS_EXEC_SKIP_NOT?\nIf anyone really does need that behavior, they could still get it,\nbut they'd have to be explicit.\n\n> Overall I'd recommend patch 0001-make-callbacks-ternary.patch and close the issue.\n\nThe other issue we have to agree on is whether we want to sneak this\nfix into v13, or wait another year for it. 
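[Editorial illustration] The inverted-flag proposal can be sketched as follows. A toy model only: the flag value and the behavior of a skipped NOT are assumptions for illustration (the assumption being that the old shortcut effectively treated a NOT subtree as matching), not the committed API.

```python
# Sketch of inverting the flag's sense: computing NOT is the default,
# and a caller must now opt in to the shortcut explicitly.
TS_NO, TS_MAYBE, TS_YES = 0, 1, 2
TS_EXEC_SKIP_NOT = 0x01    # hypothetical flag, opposite sense of CALC_NOT

def execute_not(arg_result, flags=0):
    if flags & TS_EXEC_SKIP_NOT:
        return TS_YES      # assumed shortcut: a skipped NOT counts as a match
    # default: compute NOT accurately, preserving MAYBE
    return {TS_NO: TS_YES, TS_MAYBE: TS_MAYBE, TS_YES: TS_NO}[arg_result]
```

The design point is the risk-of-omission hazard: a caller that forgets to pass any flag now gets the correct behavior, and only an explicit TS_EXEC_SKIP_NOT restores the old shortcut.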
I feel like it's pretty\nlate to be making potentially API-breaking changes, but on the other\nhand this is undoubtedly a bug fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jul 2020 11:10:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "ср, 22 июл. 2020 г. в 19:10, Tom Lane <tgl@sss.pgh.pa.us>:\n\n> Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> > For 0002-remove-calc-not-flag.patch\n> > The patch changes the behavior which is now considered default. This is\n> true in RUM module and maybe in some other tsearch side modules. Applying\n> the patch can make code more beautiful but possibly will not give some\n> performance gain and bug is anyway fixed by patch 0001.\n>\n> I'd be willing to compromise on just adding TS_EXEC_CALC_NOT to the\n> calls that are missing it today. But I don't see why that's really\n> a great idea --- it still leaves a risk-of-omission hazard for future\n> callers. Calculating NOTs correctly really ought to be the default\n> behavior.\n>\n> What do you think of replacing TS_EXEC_CALC_NOT with a different\n> flag having the opposite sense, maybe called TS_EXEC_SKIP_NOT?\n> If anyone really does need that behavior, they could still get it,\n> but they'd have to be explicit.\n>\n> > Overall I'd recommend patch 0001-make-callbacks-ternary.patch and close\n> the issue.\n>\n> The other issue we have to agree on is whether we want to sneak this\n> fix into v13, or wait another year for it. 
I feel like it's pretty\n> late to be making potentially API-breaking changes, but on the other\n> hand this is undoubtedly a bug fix.\n>\n> regards, tom lane\n>\n\nI am convinced patch 0001 is necessary and enough to fix a bug, so I think\nit's very much worth adding it to v13.\n\nAs for 0002 I see the beauty of this change but I also see the value of\nleaving defaults as they were before.\nThe change of CALC_NOT behavior doesn't seem to be a source of big changes,\nthough. I'm just not convinced it is very much needed.\nThe best way I think is to leave 0002 until the next version and add\ncommentary in the code that this default behavior of NOT\ndoing TS_EXEC_CALC_NOT is inherited from past, so basically any caller\nshould set this flag (see patch 0003-add-comments-on-calc-not.\n\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com", "msg_date": "Thu, 23 Jul 2020 00:09:39 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> ср, 22 июл. 2020 г. в 19:10, Tom Lane <tgl@sss.pgh.pa.us>:\n>> The other issue we have to agree on is whether we want to sneak this\n>> fix into v13, or wait another year for it. I feel like it's pretty\n>> late to be making potentially API-breaking changes, but on the other\n>> hand this is undoubtedly a bug fix.\n\n> I am convinced patch 0001 is necessary and enough to fix a bug, so I think\n> it's very much worth adding it to v13.\n\nAgreed, and done.\n\n> As for 0002 I see the beauty of this change but I also see the value of\n> leaving defaults as they were before.\n> The change of CALC_NOT behavior doesn't seem to be a source of big changes,\n> though. 
I'm just not convinced it is very much needed.\n> The best way I think is to leave 0002 until the next version and add\n> commentary in the code that this default behavior of NOT\n> doing TS_EXEC_CALC_NOT is inherited from past, so basically any caller\n> should set this flag (see patch 0003-add-comments-on-calc-not.\n\nI don't think it's a great plan to make these two changes in two\nsuccessive versions. They're going to be affecting basically the\nsame set of outside callers, at least if you assume that every\nTS_execute caller will be supplying its own callback function.\nSo we might as well force people to make both updates at once.\nAlso, if there is anyone who thinks they need \"skip NOT\" behavior,\nthis'd be a great time to reconsider.\n\nI revised 0002 to still define a flag for skipping NOTs, but\nit's not the default and is indeed unused in the core code now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 15:49:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix GIN index search sometimes losing results" } ]
[ { "msg_contents": "Hi,\n\nWhile investigating a pg_restore error, I stumbled upon a message that is\nnot so useful.\n\npg_restore: error: could not close data file: No such file or directory\n\nWhich file? File name should be printed too like in the error check for\ncfopen_read a few lines above.\n\nRegards,\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 7 May 2020 18:54:06 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pg_restore error message" }, { "msg_contents": "On Thu, 7 May 2020 at 18:54, Euler Taveira <\neuler.taveira@2ndquadrant.com> wrote:\n\n> Hi,\n>\n> While investigating a pg_restore error, I stumbled upon a message that is\n> not so useful.\n>\n> pg_restore: error: could not close data file: No such file or directory\n>\n> Which file? File name should be printed too like in the error check for\n> cfopen_read a few lines above.\n>\nCan suggest improvements?\n\n1. free (398 line) must be pg_free(buf)';\n2. %m, is a format to parameter, right?\n But what parameter? Both fatal call, do not pass this parameter, or is\nit implied?\n\nregards,\nRanier Vilela", "msg_date": "Thu, 7 May 2020 19:17:14 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_restore error message" }, { "msg_contents": "On 2020-May-07, Euler Taveira wrote:\n\n> While investigating a pg_restore error, I stumbled upon a message that is\n> not so useful.\n> \n> pg_restore: error: could not close data file: No such file or directory\n> \n> Which file? File name should be printed too like in the error check for\n> cfopen_read a few lines above.\n\nThanks for reporting. Fix pushed to 9.5 and up.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 May 2020 19:42:30 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_restore error message" }, { "msg_contents": "On 2020-May-07, Ranier Vilela wrote:\n\n> Can suggest improvements?\n> \n> 1. free (398 line) must be pg_free(buf)';\n\nYeah, there's a lot of frontend code that uses free() instead of\npg_free(). There are too many of these that worrying about a single one\nwould not improve things much. I guess we could convert them all, but I\ndon't see much point.\n\n> 2. %m, is a format to parameter, right?\n> But what parameter? 
Both fatal call, do not pass this parameter, or is\n> it implied?\n\n%m is an implied \"strerror(errno)\", implemented by our snprintf\nreplacement.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 May 2020 19:45:16 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_restore error message" }, { "msg_contents": "On Fri, May 08, 2020 at 07:45:16PM -0400, Alvaro Herrera wrote:\n> Yeah, there's a lot of frontend code that uses free() instead of\n> pg_free(). There are too many of these that worrying about a single one\n> would not improve things much. I guess we could convert them all, but I\n> don't see much point.\n\nDoing a hard switch would have the disadvantage to create more\nproblems when back-patching. Even if such conflicts would be I guess\nsimple enough to address, that's less to worry about. I think however\nthat there is a point in switching to a more PG-like API if reworking\nan area of the code for a new feature or a refactoring, but this is a\ncase-by-case judgement usually.\n\n>> 2. %m, is a format to parameter, right?\n>> But what parameter? Both fatal call, do not pass this parameter, or is\n>> it implied?\n> \n> %m is an implied \"strerror(errno)\", implemented by our snprintf\n> replacement.\n\nOriginally, %m is a glibc extension, which has been added recently in\nour port in src/port/snprintf.c as of d6c55de.\n--\nMichael", "msg_date": "Sat, 9 May 2020 16:39:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_restore error message" } ]
[ { "msg_contents": "Hi,\n\nI've been re-running the TPC-H benchmark, to remind myself the common\nissues with OLAP workloads, and one of the most annoying problems seems\nto be the misestimates in Q2. The query is not particularly complex,\nalthough it does have a correlated subquery with an aggregate, but it's\none of the queries prone to a cascade of nested loops running forever.\nI wonder if there's something we could do to handle this better.\n\nA raw Q2 looks like this:\n\n select\n s_acctbal,\n s_name,\n n_name,\n p_partkey,\n p_mfgr,\n s_address,\n s_phone,\n s_comment\n from\n part,\n supplier,\n partsupp,\n nation,\n region\n where\n p_partkey = ps_partkey\n and s_suppkey = ps_suppkey\n and p_size = 16\n and p_type like '%NICKEL'\n and s_nationkey = n_nationkey\n and n_regionkey = r_regionkey\n and r_name = 'AMERICA'\n and ps_supplycost = (\n select\n min(ps_supplycost)\n from\n partsupp,\n supplier,\n nation,\n region\n where\n p_partkey = ps_partkey\n and s_suppkey = ps_suppkey\n and s_nationkey = n_nationkey\n and n_regionkey = r_regionkey\n and r_name = 'AMERICA'\n )\n order by\n s_acctbal desc,\n n_name,\n s_name,\n p_partkey;\n\nand the full query plan is attached (q2-original-plan.txt).\n\nThe relevant part of the plan is the final join, which also considers\nthe subplan result (all estimates are for scale 10):\n\n -> Merge Join (cost=638655.36..1901120.61 rows=1 width=192)\n (actual time=7299.121..10993.517 rows=4737 loops=1)\n Merge Cond: (part.p_partkey = partsupp.ps_partkey)\n Join Filter: (partsupp.ps_supplycost = (SubPlan 1))\n Rows Removed by Join Filter: 1661\n\nYeah, this is estimated as 1 row but actually returns 4737 rows. All\nthe other nodes are estimated very accurately, it's just this final join\nthat is entirely wrong.\n\nIf you tweak the costs a bit (e.g. reducing random_page_cost etc.) the\nplan can easily switch to nested loops, with this join much deeper in\nthe plan. 
See the attached q2-nested-loops.txt for an example (I had to\ndisable merge/hash joins to trigger this on scale 10, on larger scales\nit can happen much more easily).\n\nNow, the query seems a bit complex, but we can easily simplify it by\ncreating an extra table and reducing the number of joins:\n\n create table t as select\n *\n from\n partsupp,\n supplier,\n nation,\n region\n where\n s_suppkey = ps_suppkey\n and s_nationkey = n_nationkey\n and n_regionkey = r_regionkey\n and r_name = 'AMERICA';\n\n create index on t (ps_partkey);\n \nwhich allows us to rewrite Q2 like this (this also ditches the ORDER BY\nand LIMIT clauses):\n\n select\n 1\n from\n part,\n t\n where\n p_partkey = ps_partkey\n and p_size = 16\n and p_type like '%NICKEL'\n and ps_supplycost = (\n select\n min(ps_supplycost)\n from t\n where\n p_partkey = ps_partkey\n );\n\nin fact, we can ditch even the conditions on p_size/p_type which makes\nthe issue even more severe:\n\n select\n 1\n from\n part,\n t\n where\n p_partkey = ps_partkey\n and ps_supplycost = (\n select\n min(ps_supplycost)\n from t\n where\n p_partkey = ps_partkey\n );\n\nwith the join estimated like this:\n\n Hash Join (cost=89761.10..1239195.66 rows=17 width=4)\n (actual time=15379.356..29315.436 rows=1182889 loops=1)\n Hash Cond: ((t.ps_partkey = part.p_partkey) AND\n (t.ps_supplycost = (SubPlan 1)))\n\nYeah, that's underestimated by a factor of 70000 :-(\n\nAn interesting observation is that if you remove the condition on supply\ncost (with the correlated subquery), the estimates get perfect again. So\nthis seems to be about this particular condition, or how we combine the\nselectivities ...\n\nI'm not sure I've figured all the details yet, but this seems to be due\nto a dependency between the ps_partkey and ps_supplycost columns.\n\nWhen estimating the second condition, we end up calling eqjoinsel()\nwith SubPlan and Var arguments. 
We clearly won't have ndistinct or MCVs\nfor the SubPlan, so we use\n\n nd1 = 200; /* default */\n nd2 = 94005; /* n_distinct for t.ps_supplycost */\n\nand end up (thanks to eqjoinsel_inner and no NULLs in data) with\n\n selec_inner = 0.00001 = Min(1/nd1, 1/nd2)\n\nBut that's entirely bogus, because while there are ~100k distinct values\nin t.ps_supplycost, those are for *all* ps_partkey values combined. But\neach ps_partkey value has only about ~1.4 distinct ps_supplycost values\non average:\n\n select avg(x) from (select count(distinct ps_supplycost) as x\n from t group by ps_partkey) foo;\n\n avg \n --------------------\n 1.3560712631162481\n (1 row)\n\nWhich I think is the root cause here ...\n\nThe fact that we're using the same table \"t\" in both the main query and\nthe correlated subquery seems rather irrelevant here, because we might\nalso create\n\n create table s as select\n ps_partkey,\n min(ps_supplycost) as min_ps_supplycost\n from t group by ps_partkey;\n\nand use that instead, and we'd still have the same issue. It's just the\nfact that for a given ps_partkey value there's only a couple ps_supplycost\nvalues, not the 100k we have for a table.\n\nI wonder if we could use the ndistinct coefficients to improve this,\nsomehow. 
I suppose eqjoinsel/eqjoinsel_inner could look at\n\n ndistinct(ps_partkey, ps_supplycost) / ndistinct(ps_partkey)\n\nwhen estimating the (SubPlan = Var) condition, and tweak selec_inner\naccordingly.\n\nI do see two challenges with this, though:\n\n1) This probably requires considering all join clauses at once (just\nlike we do for regular clauses), but eqjoinsel and friends seem rather\nheavily designed for inspecting the clauses one by one.\n\n2) I'm not sure what to do about the SubPlan side, for which we may not\nhave any reliable ndistinct estimates at all.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 8 May 2020 01:57:11 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Improving estimates for TPC-H Q2" }, { "msg_contents": "Hi Tomas, there’s an interesting related paper in the April 2020 PVLDB, “Quantifying TPC-H Choke Points and Their Optimizations”: http://www.vldb.org/pvldb/vol13/p1206-dreseler.pdf. \n\nMatt\n\n", "msg_date": "Fri, 8 May 2020 07:58:39 -0400", "msg_from": "Matt Daw <matt@mattdaw.com>", "msg_from_op": false, "msg_subject": "Re: Improving estimates for TPC-H Q2" }, { "msg_contents": "On Fri, May 08, 2020 at 07:58:39AM -0400, Matt Daw wrote:\n>Hi Tomas, there’s an interesting related paper in the April 2020 PVLDB,\n>“Quantifying TPC-H Choke Points and Their Optimizations”:\n>http://www.vldb.org/pvldb/vol13/p1206-dreseler.pdf.\n>\n\nThanks.\n\nSeems like an interesting and new paper, although it seems to focus more\non execution than planning, i.e. it discusses execution strategies but\nnot how to pick the right plan.\n\nE.g. 
the Q2 might also be rewritten to compute the whole subplan only\nonce for all ps_partkey values, and then used for lookup (instead of\nrunning it over and over for each ps_partkey value).\n\nBut this still depends on us being able to decide between those two\nstrategies, which relies on good estimates :-(\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 May 2020 22:04:26 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improving estimates for TPC-H Q2" } ]
[ { "msg_contents": "Seems to me it should, at least conditionally. At least if there's a function\nscan or a relation or ..\n\nI mentioned a bit about our use-case here:\nhttps://www.postgresql.org/message-id/20200219173742.GA30939%40telsasoft.com\n=> I'd prefer our loaders to write their own data rather than dirtying large\nfractions of buffer cache and leaving it around for other backends to clean up.\n\ncommit 7f9e061363e58f30eee0cccc8a0e46f637bf137b\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Fri May 8 02:17:32 2020 -0500\n\n Make INSERT SELECT use a BulkInsertState\n\ndiff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c\nindex 20a4c474cc..6da4325225 100644\n--- a/src/backend/executor/nodeModifyTable.c\n+++ b/src/backend/executor/nodeModifyTable.c\n@@ -578,7 +578,7 @@ ExecInsert(ModifyTableState *mtstate,\n \t\t\ttable_tuple_insert_speculative(resultRelationDesc, slot,\n \t\t\t\t\t\t\t\t\t\t estate->es_output_cid,\n \t\t\t\t\t\t\t\t\t\t 0,\n-\t\t\t\t\t\t\t\t\t\t NULL,\n+\t\t\t\t\t\t\t\t\t\t mtstate->bistate,\n \t\t\t\t\t\t\t\t\t\t specToken);\n \n \t\t\t/* insert index entries for tuple */\n@@ -617,7 +617,7 @@ ExecInsert(ModifyTableState *mtstate,\n \t\t\t/* insert the tuple normally */\n \t\t\ttable_tuple_insert(resultRelationDesc, slot,\n \t\t\t\t\t\t\t estate->es_output_cid,\n-\t\t\t\t\t\t\t 0, NULL);\n+\t\t\t\t\t\t\t 0, mtstate->bistate);\n \n \t\t\t/* insert index entries for tuple */\n \t\t\tif (resultRelInfo->ri_NumIndices > 0)\n@@ -2332,6 +2332,14 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)\n \n \tmtstate->mt_arowmarks = (List **) palloc0(sizeof(List *) * nplans);\n \tmtstate->mt_nplans = nplans;\n+\tmtstate->bistate = NULL;\n+\tif (operation == CMD_INSERT)\n+\t{\n+\t\tPlan *p = linitial(node->plans);\n+\t\tAssert(nplans == 1);\n+\t\tif (!IsA(p, Result) && !IsA(p, ValuesScan))\n+\t\t\tmtstate->bistate = GetBulkInsertState();\n+\t}\n \n \t/* set up epqstate with dummy subplan 
data for the moment */\n \tEvalPlanQualInit(&mtstate->mt_epqstate, estate, NULL, NIL, node->epqParam);\n@@ -2809,6 +2817,9 @@ ExecEndModifyTable(ModifyTableState *node)\n \t */\n \tfor (i = 0; i < node->mt_nplans; i++)\n \t\tExecEndNode(node->mt_plans[i]);\n+\n+\tif (node->bistate)\n+\t\tFreeBulkInsertState(node->bistate);\n }\n \n void\ndiff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h\nindex 4fee043bb2..daf365f181 100644\n--- a/src/include/nodes/execnodes.h\n+++ b/src/include/nodes/execnodes.h\n@@ -14,6 +14,7 @@\n #ifndef EXECNODES_H\n #define EXECNODES_H\n \n+#include \"access/heapam.h\"\n #include \"access/tupconvert.h\"\n #include \"executor/instrument.h\"\n #include \"fmgr.h\"\n@@ -1177,6 +1178,7 @@ typedef struct ModifyTableState\n \tList\t **mt_arowmarks;\t/* per-subplan ExecAuxRowMark lists */\n \tEPQState\tmt_epqstate;\t/* for evaluating EvalPlanQual rechecks */\n \tbool\t\tfireBSTriggers; /* do we need to fire stmt triggers? */\n+\tBulkInsertState\tbistate;\t/* State for bulk insert like INSERT SELECT */\n \n \t/*\n \t * Slot for storing tuples in the root partitioned table's rowtype during\n\n\n", "msg_date": "Fri, 8 May 2020 02:25:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Fri, May 08, 2020 at 02:25:45AM -0500, Justin Pryzby wrote:\n> Seems to me it should, at least conditionally. 
At least if there's a function\n> scan or a relation or ..\n> \n> I mentioned a bit about our use-case here:\n> https://www.postgresql.org/message-id/20200219173742.GA30939%40telsasoft.com\n> => I'd prefer our loaders to write their own data rather than dirtying large\n> fractions of buffer cache and leaving it around for other backends to clean up.\n\nNobody suggested otherwise so I added here and cleaned up to pass tests.\nhttps://commitfest.postgresql.org/28/2553/\n\n-- \nJustin", "msg_date": "Sun, 10 May 2020 10:58:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Fri, May 08, 2020 at 02:25:45AM -0500, Justin Pryzby wrote:\n> Seems to me it should, at least conditionally. At least if there's a function\n> scan or a relation or ..\n> \n> I mentioned a bit about our use-case here:\n> https://www.postgresql.org/message-id/20200219173742.GA30939%40telsasoft.com\n> => I'd prefer our loaders to write their own data rather than dirtying large\n> fractions of buffer cache and leaving it around for other backends to clean up.\n\nDoes it matter in terms of performance and for which cases does it\nactually matter?\n\n> diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h\n> index 4fee043bb2..daf365f181 100644\n> --- a/src/include/nodes/execnodes.h\n> +++ b/src/include/nodes/execnodes.h\n> @@ -14,6 +14,7 @@\n> #ifndef EXECNODES_H\n> #define EXECNODES_H\n> \n> +#include \"access/heapam.h\"\n> #include \"access/tupconvert.h\"\n> #include \"executor/instrument.h\"\n> #include \"fmgr.h\"\n> @@ -1177,6 +1178,7 @@ typedef struct ModifyTableState\n> \tList\t **mt_arowmarks;\t/* per-subplan ExecAuxRowMark lists */\n> \tEPQState\tmt_epqstate;\t/* for evaluating EvalPlanQual rechecks */\n> \tbool\t\tfireBSTriggers; /* do we need to fire stmt triggers? 
*/\n> +\tBulkInsertState\tbistate;\t/* State for bulk insert like INSERT SELECT */\n\nI think that this needs more thoughts. You are introducing a\ndependency between some generic execution-related nodes and heap, a\ntable AM.\n--\nMichael", "msg_date": "Mon, 11 May 2020 15:19:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "Hi,\n\nOn 2020-05-08 02:25:45 -0500, Justin Pryzby wrote:\n> Seems to me it should, at least conditionally. At least if there's a function\n> scan or a relation or ..\n\nWell, the problem is that this can cause very very significant\nregressions. As in 10x slower or more. The ringbuffer can cause constant\nXLogFlush() calls (due to the lsn interlock), and the eviction from\nshared_buffers (regardless of actual available) will mean future vacuums\netc will be much slower. I think this is likely to cause pretty\nwidespread regressions on upgrades.\n\nNow, it sucks that we have this problem in the general facility that's\nsupposed to be used for this kind of bulk operation. But I don't really\nsee it realistic as expanding use of bulk insert strategies unless we\nhave some more fundamental fixes.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Thu, 4 Jun 2020 10:30:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "> On 4 Jun 2020, at 19:30, Andres Freund <andres@anarazel.de> wrote:\n> On 2020-05-08 02:25:45 -0500, Justin Pryzby wrote:\n\n>> Seems to me it should, at least conditionally. At least if there's a function\n>> scan or a relation or ..\n> \n> Well, the problem is that this can cause very very significant\n> regressions. As in 10x slower or more. 
The ringbuffer can cause constant\n> XLogFlush() calls (due to the lsn interlock), and the eviction from\n> shared_buffers (regardless of actual available) will mean future vacuums\n> etc will be much slower. I think this is likely to cause pretty\n> widespread regressions on upgrades.\n> \n> Now, it sucks that we have this problem in the general facility that's\n> supposed to be used for this kind of bulk operation. But I don't really\n> see it realistic as expanding use of bulk insert strategies unless we\n> have some more fundamental fixes.\n\nBased on the above, and the lack of activity in the thread, it sounds like this\npatch should be marked Returned with Feedback; but Justin: you set it to\nWaiting on Author at the start of the commitfest, are you working on a new\nversion?\n\ncheers ./daniel\n\n", "msg_date": "Sun, 12 Jul 2020 22:04:21 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Thu, Jun 04, 2020 at 10:30:47AM -0700, Andres Freund wrote:\n> On 2020-05-08 02:25:45 -0500, Justin Pryzby wrote:\n> > Seems to me it should, at least conditionally. At least if there's a function\n> > scan or a relation or ..\n> \n> Well, the problem is that this can cause very very significant\n> regressions. As in 10x slower or more. The ringbuffer can cause constant\n> XLogFlush() calls (due to the lsn interlock), and the eviction from\n> shared_buffers (regardless of actual available) will mean future vacuums\n> etc will be much slower. I think this is likely to cause pretty\n> widespread regressions on upgrades.\n> \n> Now, it sucks that we have this problem in the general facility that's\n> supposed to be used for this kind of bulk operation. 
But I don't really\n> see it realistic as expanding use of bulk insert strategies unless we\n> have some more fundamental fixes.\n\nI made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.\n\npostgres=# \\t on \\\\ \\set QUIET \\\\ VACUUM FULL t; \\dt+ t \\\\ begin ; \\timing on \\\\ INSERT INTO t SELECT * FROM t; rollback; SELECT COUNT(1), usagecount FROM pg_buffercache GROUP BY 2 ORDER BY 2; \n| public | t | table | pryzbyj | 35 MB | \n|Time: 9497.318 ms (00:09.497)\n| 33 | 1\n| 3 | 2\n| 18 | 3\n| 5 | 4\n| 4655 | 5\n| 11670 | \n\nvs\n\npostgres=# \\t on \\\\ \\set QUIET \\\\ VACUUM FULL t; \\dt+ t \\\\ begin BULK ; \\timing on \\\\ INSERT INTO t SELECT * FROM t; rollback; SELECT COUNT(1), usagecount FROM pg_buffercache GROUP BY 2 ORDER BY 2; \n| public | t | table | pryzbyj | 35 MB | \n|Time: 8268.780 ms (00:08.269)\n| 2080 | 1\n| 3 | 2\n| 19 | 4\n| 234 | 5\n| 14048 | \n\nAnd:\n\npostgres=# begin ; \\x \\\\ \\t \\\\ SELECT statement_timestamp(); \\o /dev/null \\\\ SELECT 'INSERT INTO t VALUES(0)' FROM generate_series(1,999999); \\set ECHO errors \\\\ \\set QUIET on \\\\ \\o \\\\ \\gexec \\\\ SELECT statement_timestamp(); abort; \\x \\\\ SELECT COUNT(1), usagecount FROM pg_buffercache GROUP BY 2 ORDER BY 2; a\n|statement_timestamp | 2020-07-12 20:31:43.717328-05\n|statement_timestamp | 2020-07-12 20:36:16.692469-05\n|\n| 52 | 1\n| 24 | 2\n| 17 | 3\n| 6 | 4\n| 4531 | 5\n| 11754 | \n\nvs\n\npostgres=# begin BULK ; \\x \\\\ \\t \\\\ SELECT statement_timestamp(); \\o /dev/null \\\\ SELECT 'INSERT INTO t VALUES(0)' FROM generate_series(1,999999); \\set ECHO errors \\\\ \\set QUIET on \\\\ \\o \\\\ \\gexec \\\\ SELECT statement_timestamp(); abort; \\x \\\\ SELECT COUNT(1), usagecount FROM pg_buffercache GROUP BY 2 ORDER BY 2; a\n|statement_timestamp | 2020-07-12 20:43:47.089538-05\n|statement_timestamp | 2020-07-12 20:48:04.798138-05\n|\n| 4456 | 1\n| 22 | 2\n| 1 | 3\n| 7 | 4\n| 79 | 5\n| 11819 |\n\n-- \nJustin", "msg_date": "Sun, 12 Jul 2020 
20:57:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Sun, Jul 12, 2020 at 08:57:00PM -0500, Justin Pryzby wrote:\n> On Thu, Jun 04, 2020 at 10:30:47AM -0700, Andres Freund wrote:\n> > On 2020-05-08 02:25:45 -0500, Justin Pryzby wrote:\n> > > Seems to me it should, at least conditionally. At least if there's a function\n> > > scan or a relation or ..\n> > \n> > Well, the problem is that this can cause very very significant\n> > regressions. As in 10x slower or more. The ringbuffer can cause constant\n> > XLogFlush() calls (due to the lsn interlock), and the eviction from\n> > shared_buffers (regardless of actual available) will mean future vacuums\n> > etc will be much slower. I think this is likely to cause pretty\n> > widespread regressions on upgrades.\n> > \n> > Now, it sucks that we have this problem in the general facility that's\n> > supposed to be used for this kind of bulk operation. But I don't really\n> > see it realistic as expanding use of bulk insert strategies unless we\n> > have some more fundamental fixes.\n> \n> I made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.\n\n@cfbot: rebased", "msg_date": "Sat, 19 Sep 2020 08:32:15 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Sat, Sep 19, 2020 at 08:32:15AM -0500, Justin Pryzby wrote:\n> On Sun, Jul 12, 2020 at 08:57:00PM -0500, Justin Pryzby wrote:\n> > On Thu, Jun 04, 2020 at 10:30:47AM -0700, Andres Freund wrote:\n> > > On 2020-05-08 02:25:45 -0500, Justin Pryzby wrote:\n> > > > Seems to me it should, at least conditionally. At least if there's a function\n> > > > scan or a relation or ..\n> > > \n> > > Well, the problem is that this can cause very very significant\n> > > regressions. As in 10x slower or more. 
The ringbuffer can cause constant\n> > > XLogFlush() calls (due to the lsn interlock), and the eviction from\n> > > shared_buffers (regardless of actual available) will mean future vacuums\n> > > etc will be much slower. I think this is likely to cause pretty\n> > > widespread regressions on upgrades.\n> > > \n> > > Now, it sucks that we have this problem in the general facility that's\n> > > supposed to be used for this kind of bulk operation. But I don't really\n> > > see it realistic as expanding use of bulk insert strategies unless we\n> > > have some more fundamental fixes.\n> > \n> > I made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.\n> \n> @cfbot: rebased\n\nagain\n\n-- \nJustin", "msg_date": "Fri, 16 Oct 2020 16:05:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Thu, 4 Jun 2020 at 18:31, Andres Freund <andres@anarazel.de> wrote:\n\n> On 2020-05-08 02:25:45 -0500, Justin Pryzby wrote:\n> > Seems to me it should, at least conditionally. At least if there's a function\n> > scan or a relation or ..\n>\n> Well, the problem is that this can cause very very significant\n> regressions. As in 10x slower or more. The ringbuffer can cause constant\n> XLogFlush() calls (due to the lsn interlock), and the eviction from\n> shared_buffers (regardless of actual available) will mean future vacuums\n> etc will be much slower. I think this is likely to cause pretty\n> widespread regressions on upgrades.\n>\n> Now, it sucks that we have this problem in the general facility that's\n> supposed to be used for this kind of bulk operation. But I don't really\n> see it realistic as expanding use of bulk insert strategies unless we\n> have some more fundamental fixes.\n\nAre you saying that *anything* that uses the BulkInsertState is\ngenerally broken? 
We use it for VACUUM and COPY writes, so you are\nsaying they are broken??\n\nWhen we put that in, the use of the ringbuffer for writes required a\nmuch larger number of blocks to smooth out the extra XLogFlush()\ncalls, but overall it was a clear win in those earlier tests. Perhaps\nthe ring buffer needs to be increased, or made configurable. The\neviction behavior was/is deliberate, to avoid large data loads\nspoiling cache - perhaps that could also be configurable for the case\nwhere data fits in shared buffers.\n\nAnyway, if we can discuss what you see as broken, we can fix that and\nthen extend the usage to other cases, such as INSERT SELECT.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 22 Oct 2020 11:41:14 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Fri, 16 Oct 2020 at 22:05, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> > > I made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.\n\nI think it would be better if this was self-tuning. So that we don't\nallocate a bulkinsert state until we've done say 100 (?) rows\ninserted.\n\nIf there are other conditions under which this is non-optimal\n(Andres?), we can also autodetect that and avoid them.\n\nYou should also use table_multi_insert() since that will give further\nperformance gains by reducing block access overheads. Switching from\nsingle row to multi-row should also only happen once we've loaded a\nfew rows, so we don't introduce overheads for smaller SQL statements.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 22 Oct 2020 13:29:53 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" 
}, { "msg_contents": "On Thu, Oct 22, 2020 at 01:29:53PM +0100, Simon Riggs wrote:\n> On Fri, 16 Oct 2020 at 22:05, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > > > I made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.\n> \n> I think it would be better if this was self-tuning. So that we don't\n> allocate a bulkinsert state until we've done say 100 (?) rows\n> inserted.\n\nI made it an optional, non-default behavior in response to the legitimate\nconcern for performance regression for the cases where a loader needs to be as\nfast as possible - as compared with our case, where we want instead to optimize\nfor our reports by making the loaders responsible for their own writes, rather\nthan leaving behind many dirty pages, and clobbering the cache, too.\n\nAlso, INSERT SELECT doesn't immediately help us (telsasoft), since we use\nINSERT .. VALUES () .. ON CONFLICT. This would handle that case, which is\ngreat, even though that wasn't a design goal. It could also be an integer GUC\nto allow configuring the size of the ring buffer.\n\n> You should also use table_multi_insert() since that will give further\n> performance gains by reducing block access overheads. Switching from\n> single row to multi-row should also only happen once we've loaded a\n> few rows, so we don't introduce overheads for smaller SQL statements.\n\nGood idea...multi_insert (which reduces the overhead of individual inserts) is\nmostly independent from BulkInsert state (which uses a ring-buffer to avoid\ndirtying the cache). I made this 0002.\n\nThis makes INSERT SELECT several times faster, and doesn't clobber the cache, either.\n\nTime: 4700.606 ms (00:04.701)\n 123 | 1\n 37 | 2\n 20 | 3\n 11 | 4\n 4537 | 5\n 11656 | \n\nTime: 1125.302 ms (00:01.125)\n 2171 | 1\n 37 | 2\n 20 | 3\n 11 | 4\n 111 | 5\n 14034 | \n\nWhen enabled, this passes nearly all regression tests, and all but 2 of the\nchanges are easily understood. 
The 2nd patch still needs work.\n\n-- \nJustin", "msg_date": "Thu, 29 Oct 2020 23:51:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On 30.10.20 05:51, Justin Pryzby wrote:\n> On Thu, Oct 22, 2020 at 01:29:53PM +0100, Simon Riggs wrote:\n>> On Fri, 16 Oct 2020 at 22:05, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>>>>> I made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.\n>>\n>> I think it would be better if this was self-tuning. So that we don't\n>> allocate a bulkinsert state until we've done say 100 (?) rows\n>> inserted.\n> \n> I made it an optional, non-default behavior in response to the legitimate\n> concern for performance regression for the cases where a loader needs to be as\n> fast as possible - as compared with our case, where we want instead to optimize\n> for our reports by making the loaders responsible for their own writes, rather\n> than leaving behind many dirty pages, and clobbering the cache, too.\n> \n> Also, INSERT SELECT doesn't immediately help us (telsasoft), since we use\n> INSERT .. VALUES () .. ON CONFLICT. This would handle that case, which is\n> great, even though that wasn't a design goal. It could also be an integer GUC\n> to allow configuring the size of the ring buffer.\n> \n>> You should also use table_multi_insert() since that will give further\n>> performance gains by reducing block access overheads. Switching from\n>> single row to multi-row should also only happen once we've loaded a\n>> few rows, so we don't introduce overahads for smaller SQL statements.\n> \n> Good idea...multi_insert (which reduces the overhead of individual inserts) is\n> mostly independent from BulkInsert state (which uses a ring-buffer to avoid\n> dirtying the cache). 
I made this 0002.\n> \n> This makes INSERT SELECT several times faster, and not clobber the cache too.\n> \n> Time: 4700.606 ms (00:04.701)\n> 123 | 1\n> 37 | 2\n> 20 | 3\n> 11 | 4\n> 4537 | 5\n> 11656 |\n> \n> Time: 1125.302 ms (00:01.125)\n> 2171 | 1\n> 37 | 2\n> 20 | 3\n> 11 | 4\n> 111 | 5\n> 14034 |\n> \n> When enabled, this passes nearly all regression tests, and all but 2 of the\n> changes are easily understood. The 2nd patch still needs work.\n> \n\nHi,\n\nCame across this thread because I'm working on an improvement for the \nrelation extension to improve the speed of the bulkinsert itself in \n(highly) parallel cases and would like to make sure that our approaches \nwork nicely together.\n\nGiven what I've seen and tried so far with various benchmarks I would \nalso really like to see a different approach here. The \"BEGIN BULK\" can \nbe problematic for example if you mix small amounts of inserts and big \namounts in the same transaction, or if your application possibly does a \nbulk insert but otherwise mostly OLTP transactions.\n\nTo me the idea from Simon sounds good to only use a bulk insert state \nafter inserting e.g. a 1000 rows, and this also seems more applicable to \nmost applications compared to requiring a change to any application that \nwishes to have faster ingest.\n\nAnother approach could be to combine this, for example, with a few extra \nrequirements to limit the amount of regressions and first learn more how \nthis behaves in the field.\nWe could, for example, only (just throwing out some ideas), require that:\n- the relation has a certain size\n- a BufferStrategy a maximum certain size is used\n- there is a certain amount of lock waiters on relation extension. (like \nwe do with bulk extend)\n- we have extended the relation for at least e.g. 
4 MB and not used the \nFSM anymore thereby proving that we are doing bulk operations instead of \nrandom small extensions everywhere into the relation that use the FSM.\n\nAnother thing is that we first try to improve the bulk operation \nfacilities in general and then have another shot at this? Not sure if \nthere is some benchmark / query that shows where such a 10x slowdown \nwould appear but maybe that would be worth a look as well possibly.\n\nRegards,\nLuc\n\n\n", "msg_date": "Mon, 2 Nov 2020 07:53:45 +0100", "msg_from": "Luc Vlaming <luc@swarm64.com>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Mon, Nov 02, 2020 at 07:53:45AM +0100, Luc Vlaming wrote:\n> On 30.10.20 05:51, Justin Pryzby wrote:\n> > On Thu, Oct 22, 2020 at 01:29:53PM +0100, Simon Riggs wrote:\n> > > On Fri, 16 Oct 2020 at 22:05, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > \n> > > > > > I made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.\n> > > \n> > > I think it would be better if this was self-tuning. So that we don't\n> > > allocate a bulkinsert state until we've done say 100 (?) rows\n> > > inserted.\n> > \n> > I made it an optional, non-default behavior in response to the legitimate\n> > concern for performance regression for the cases where a loader needs to be as\n> > fast as possible - as compared with our case, where we want instead to optimize\n> > for our reports by making the loaders responsible for their own writes, rather\n> > than leaving behind many dirty pages, and clobbering the cache, too.\n> > \n> > Also, INSERT SELECT doesn't immediately help us (telsasoft), since we use\n> > INSERT .. VALUES () .. ON CONFLICT. This would handle that case, which is\n> > great, even though that wasn't a design goal. 
It could also be an integer GUC\n> > to allow configuring the size of the ring buffer.\n> > \n> > > You should also use table_multi_insert() since that will give further\n> > > performance gains by reducing block access overheads. Switching from\n> > > single row to multi-row should also only happen once we've loaded a\n> > > few rows, so we don't introduce overheads for smaller SQL statements.\n> > \n> > Good idea...multi_insert (which reduces the overhead of individual inserts) is\n> > mostly independent from BulkInsert state (which uses a ring-buffer to avoid\n> > dirtying the cache). I made this 0002.\n> > \n> > This makes INSERT SELECT several times faster, and not clobber the cache too.\n> > \n> > Time: 4700.606 ms (00:04.701)\n> > 123 | 1\n> > 37 | 2\n> > 20 | 3\n> > 11 | 4\n> > 4537 | 5\n> > 11656 | \n> > \n> > Time: 1125.302 ms (00:01.125)\n> > 2171 | 1\n> > 37 | 2\n> > 20 | 3\n> > 11 | 4\n> > 111 | 5\n> > 14034 | \n> > \n> > When enabled, this passes nearly all regression tests, and all but 2 of the\n> > changes are easily understood. The 2nd patch still needs work.\n> > \n> \n> Hi,\n> \n> Came across this thread because I'm working on an improvement for the\n> relation extension to improve the speed of the bulkinsert itself in (highly)\n> parallel cases and would like to make sure that our approaches work nicely\n\nThanks for looking.\n\nSince this is a GUC, I thought it would accommodate users optimizing for either\ninserts vs selects, as well as users who don't want to change their application\n(they can \"ALTER SYSTEM SET bulk_insert=on\"). 
I'm not thrilled about making a\nnew guc, but that seems to be required for \"begin bulk\", which was the obvious\nway to make it an 'opt-in' feature.\n\nI guess it'd be easy to add a counter to ModifyTableState, although it makes\nthe code a bit less clean and conceivably performs \"discontinuously\" - inserts\n100rows/sec for the first 999 rows and then 200rows/sec afterwards.\n\nIf you \"mix\" small inserts and big inserts, it would be a bad strategy to\noptimize for the small ones. Anyway, in a quick test, small inserts were not\nslower.\nhttps://www.postgresql.org/message-id/20200713015700.GA23581%40telsasoft.com\n\nDo you have an example that regresses with bulk insert ?\n\nThe two patches are separate, and it's possible they should be enabled\ndifferently or independently.\n\n-- \nJustin\n\n> Given what I've seen and tried so far with various benchmarks I would also\n> really like to see a different approach here. The \"BEGIN BULK\" can be\n> problematic for example if you mix small amounts of inserts and big amounts\n> in the same transaction, or if your application possibly does a bulk insert\n> but otherwise mostly OLTP transactions.\n\n> To me the idea from Simon sounds good to only use a bulk insert state after\n> inserting e.g. a 1000 rows, and this also seems more applicable to most\n> applications compared to requiring a change to any application that wishes\n> to have faster ingest.\n> \n> Another approach could be to combine this, for example, with a few extra\n> requirements to limit the amount of regressions and first learn more how\n> this behaves in the field.\n> We could, for example, only (just throwing out some ideas), require that:\n> - the relation has a certain size\n> - a BufferStrategy a maximum certain size is used\n> - there is a certain amount of lock waiters on relation extension. (like we\n> do with bulk extend)\n> - we have extended the relation for at least e.g. 
4 MB and not used the FSM\n> anymore thereby proving that we are doing bulk operations instead of random\n> small extensions everywhere into the relation that use the FSM.\n> \n> Another thing is that we first try to improve the bulk operation facilities\n> in general and then have another shot at this? Not sure if there is some\n> benchmark / query that shows where such a 10x slowdown would appear but\n> maybe that would be worth a look as well possibly.\n> \n> Regards,\n> Luc\n\n\n", "msg_date": "Mon, 2 Nov 2020 12:45:51 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Mon, Nov 02, 2020 at 12:45:51PM -0600, Justin Pryzby wrote:\n> On Mon, Nov 02, 2020 at 07:53:45AM +0100, Luc Vlaming wrote:\n> > On 30.10.20 05:51, Justin Pryzby wrote:\n> > > On Thu, Oct 22, 2020 at 01:29:53PM +0100, Simon Riggs wrote:\n> > > > On Fri, 16 Oct 2020 at 22:05, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > \n> > > > > > > I made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.\n> > > > \n> > > > I think it would be better if this was self-tuning. So that we don't\n> > > > allocate a bulkinsert state until we've done say 100 (?) rows\n> > > > inserted.\n> > > \n> > > I made it an optional, non-default behavior in response to the legitimate\n> > > concern for performance regression for the cases where a loader needs to be as\n> > > fast as possible - as compared with our case, where we want instead to optimize\n> > > for our reports by making the loaders responsible for their own writes, rather\n> > > than leaving behind many dirty pages, and clobbering the cache, too.\n> > > \n> > > Also, INSERT SELECT doesn't immediately help us (telsasoft), since we use\n> > > INSERT .. VALUES () .. ON CONFLICT. This would handle that case, which is\n> > > great, even though that wasn't a design goal. 
It could also be an integer GUC\n> > > to allow configuring the size of the ring buffer.\n> > > \n> > > > You should also use table_multi_insert() since that will give further\n> > > > performance gains by reducing block access overheads. Switching from\n> > > > single row to multi-row should also only happen once we've loaded a\n> > > > few rows, so we don't introduce overahads for smaller SQL statements.\n> > > \n> > > Good idea...multi_insert (which reduces the overhead of individual inserts) is\n> > > mostly independent from BulkInsert state (which uses a ring-buffer to avoid\n> > > dirtying the cache). I made this 0002.\n> > > \n> > > This makes INSERT SELECT several times faster, and not clobber the cache too.\n\n - Rebased on Heikki's copy.c split;\n - Rename structures without \"Copy\" prefix;\n - Move MultiInsert* from copyfrom.c to (tentatively) nodeModifyTable.h;\n - Move cur_lineno and transition_capture into MultiInsertInfo;\n\nThis switches to multi insert after a configurable number of tuples.\nIf set to -1, that provides the historic behavior that bulk inserts\ncan leave behind many dirty buffers. Perhaps that should be the default.\n\nI guess this shouldn't be in copy.h or in commands/* at all.\nIt'll be included by both: commands/copyfrom_internal.h and\nexecutor/nodeModifyTable.h. Maybe it should go in util or lib...\nI don't know how to do it without including executor.h, which seems\nto be undesirable.\n\n-- \nJustin", "msg_date": "Mon, 23 Nov 2020 20:00:20 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" 
}, { "msg_contents": "On Mon, Nov 23, 2020 at 08:00:20PM -0600, Justin Pryzby wrote:\n> On Mon, Nov 02, 2020 at 12:45:51PM -0600, Justin Pryzby wrote:\n> > On Mon, Nov 02, 2020 at 07:53:45AM +0100, Luc Vlaming wrote:\n> > > On 30.10.20 05:51, Justin Pryzby wrote:\n> > > > On Thu, Oct 22, 2020 at 01:29:53PM +0100, Simon Riggs wrote:\n> > > > > On Fri, 16 Oct 2020 at 22:05, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > \n> > > > > > > > I made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.\n> > > > > \n> > > > > I think it would be better if this was self-tuning. So that we don't\n> > > > > allocate a bulkinsert state until we've done say 100 (?) rows\n> > > > > inserted.\n> > > > \n> > > > I made it an optional, non-default behavior in response to the legitimate\n> > > > concern for performance regression for the cases where a loader needs to be as\n> > > > fast as possible - as compared with our case, where we want instead to optimize\n> > > > for our reports by making the loaders responsible for their own writes, rather\n> > > > than leaving behind many dirty pages, and clobbering the cache, too.\n> > > > \n> > > > Also, INSERT SELECT doesn't immediately help us (telsasoft), since we use\n> > > > INSERT .. VALUES () .. ON CONFLICT. This would handle that case, which is\n> > > > great, even though that wasn't a design goal. It could also be an integer GUC\n> > > > to allow configuring the size of the ring buffer.\n> > > > \n> > > > > You should also use table_multi_insert() since that will give further\n> > > > > performance gains by reducing block access overheads. 
Switching from\n> > > > > single row to multi-row should also only happen once we've loaded a\n> > > > > few rows, so we don't introduce overahads for smaller SQL statements.\n> > > > \n> > > > Good idea...multi_insert (which reduces the overhead of individual inserts) is\n> > > > mostly independent from BulkInsert state (which uses a ring-buffer to avoid\n> > > > dirtying the cache). I made this 0002.\n> > > > \n> > > > This makes INSERT SELECT several times faster, and not clobber the cache too.\n> \n> - Rebased on Heikki's copy.c split;\n> - Rename structures without \"Copy\" prefix;\n> - Move MultiInsert* from copyfrom.c to (tentatively) nodeModifyTable.h;\n> - Move cur_lineno and transition_capture into MultiInsertInfo;\n> \n> This switches to multi insert after a configurable number of tuples.\n> If set to -1, that provides the historic behavior that bulk inserts\n> can leave behind many dirty buffers. Perhaps that should be the default.\n> \n> I guess this shouldn't be in copy.h or in commands/* at all.\n> It'll be included by both: commands/copyfrom_internal.h and\n> executor/nodeModifyTable.h. Maybe it should go in util or lib...\n> I don't know how to do it without including executor.h, which seems\n> to be undesirable.\n\nAttached resolves issue with FDW contrib by including the MultiInsertInfo\nstructure rather than a pointer and makes the logic more closely match\ncopyfrom.c related to partition/triggers.\n\nI had made this a conditional based on the concern that bulk insert state would\ncause regression. But then it occurred to me that COPY uses a bulk insert\nunconditionally. Should COPY be conditional, too ? Or maybe that's ok, since\nCOPY is assumed to be a bulk operation.\n\n-- \nJustin", "msg_date": "Sun, 29 Nov 2020 19:11:46 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" 
}, { "msg_contents": "One loose end in this patch is how to check for volatile default expressions.\n\ncopyfrom.c is a utility statement, so it can look at the parser's column list:\nCOPY table(c1,c2)...\n\nHowever, for INSERT, in nodeModifyTable.c, it looks like parsing, rewriting,\nand planning are done, at which point I don't know if there's a good way to\nfind that. The default expressions will have been rewritten into the planned\nstatement.\n\nWe need the list of columns whose default is volatile, excluding columns for\nwhich a non-default value is specified.\n\nINSERT INTO table (c1,c2) VALUES (1,default);\n\nWe'd want the list of any column in the table with a volatile default,\nexcluding columns c1, but not-excluding explicit default columns c2 or any\nimplicit default columns (c3, etc).\n\nAny idea ?\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 2 Dec 2020 10:53:51 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Wed, Dec 2, 2020 at 10:24 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> One loose end in this patch is how to check for volatile default expressions.\n>\n> copyfrom.c is a utility statement, so it can look at the parser's column list:\n> COPY table(c1,c2)...\n>\n> However, for INSERT, in nodeModifyTable.c, it looks like parsing, rewriting,\n> and planning are done, at which point I don't know if there's a good way to\n> find that. 
The default expressions will have been rewritten into the planned\n> statement.\n>\n> We need the list of columns whose default is volatile, excluding columns for\n> which a non-default value is specified.\n>\n> INSERT INTO table (c1,c2) VALUES (1,default);\n>\n> We'd want the list of any column in the table with a volatile default,\n> excluding columns c1, but not-excluding explicit default columns c2 or any\n> implicit default columns (c3, etc).\n>\n> Any idea ?\n>\n\nI think we should be doing all the necessary checks in the planner and\nhave a flag in the planned stmt to indicate whether to go with multi\ninsert or not. For the required checks, we can have a look at how the\nexisting COPY decides to go with either CIM_MULTI or CIM_SINGLE.\n\nNow, the question of how we can get to know whether a given relation\nhas default expressions or volatile expressions, it is worth to look\nat build_column_default() and contain_volatile_functions().\n\nI prefer to have the multi insert deciding code in COPY and INSERT\nSELECT, in a single common function which can be reused. Though COPY\nhas somethings like default expressions and others ready unlike INSERT\nSELECT, we can try to keep them under a common function and say for\nCOPY we can skip some code and for INSERT SELECT we can do extra work\nto find default expressions.\n\nAlthough unrelated, for parallel inserts in INSERT SELECT[1], in the\nplanner there are some checks to see if the parallelism is safe or\nnot. Check max_parallel_hazard_for_modify() in\nv8-0001-Enable-parallel-SELECT-for-INSERT-INTO-.-SELECT.patch from\n[1]. 
On the similar lines, we can also have multi insert deciding\ncode.\n\n[1] https://www.postgresql.org/message-id/CAJcOf-fy3P%2BkDArvmbEtdQTxFMf7Rn2%3DV-sqCnMmKO3QKBsgPA%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Dec 2020 10:59:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Thu, Dec 03, 2020 at 10:59:34AM +0530, Bharath Rupireddy wrote:\n> On Wed, Dec 2, 2020 at 10:24 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > One loose end in this patch is how to check for volatile default expressions.\n> \n> I think we should be doing all the necessary checks in the planner and\n> have a flag in the planned stmt to indicate whether to go with multi\n> insert or not. For the required checks, we can have a look at how the\n> existing COPY decides to go with either CIM_MULTI or CIM_SINGLE.\n\nYes, you can see that I've copied the checks from copy.\nLike copy, some checks are done once, in ExecInitModifyTable, outside of the\nExecModifyTable \"loop\".\n\nThis squishes some commits together.\nAnd uses bistate for ON CONFLICT.\nAnd attempts to use memory context for tuple size.\n\nFor the bufferedBytes check, I'm not sure what's best. Copy flushes buffers\nafter 65k of input line length, but that's totally different from tuple slot\nmemory context size, which is what I used for insert. Maybe COPY should also\nuse slot size? Or maybe the threshold to flush needs to be set in miinfo,\nrather than a #define, and differ between COPY and INSERT.\n\n-- \nJustin", "msg_date": "Sat, 5 Dec 2020 13:59:41 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" 
}, { "msg_contents": "On Sat, Dec 05, 2020 at 01:59:41PM -0600, Justin Pryzby wrote:\n> On Thu, Dec 03, 2020 at 10:59:34AM +0530, Bharath Rupireddy wrote:\n> > On Wed, Dec 2, 2020 at 10:24 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > One loose end in this patch is how to check for volatile default expressions.\n> > \n> > I think we should be doing all the necessary checks in the planner and\n> > have a flag in the planned stmt to indicate whether to go with multi\n> > insert or not. For the required checks, we can have a look at how the\n> > existing COPY decides to go with either CIM_MULTI or CIM_SINGLE.\n> \n> Yes, you can see that I've copied the checks from copy.\n> Like copy, some checks are done once, in ExecInitModifyTable, outside of the\n> ExecModifyTable \"loop\".\n> \n> This squishes some commits together.\n> And uses bistate for ON CONFLICT.\n> And attempts to use memory context for tuple size.\n\nRebased on 9dc718bdf2b1a574481a45624d42b674332e2903\n\nI guess my patch should/may be subsumed by this other one - I'm fine with that.\nhttps://commitfest.postgresql.org/31/2871/\n\nNote that my interest here is just in bistate, to avoid leaving behind many\ndirty buffers, not improved performance of COPY.\n\n-- \nJustin", "msg_date": "Wed, 20 Jan 2021 03:11:13 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" 
}, { "msg_contents": "On Mon, Feb 22, 2021 at 02:25:22AM +0000, houzj.fnst@fujitsu.com wrote:\n> > > Yes, you can see that I've copied the checks from copy.\n> > > Like copy, some checks are done once, in ExecInitModifyTable, outside\n> > > of the ExecModifyTable \"loop\".\n> > >\n> > > This squishes some commits together.\n> > > And uses bistate for ON CONFLICT.\n> > > And attempts to use memory context for tuple size.\n> > \n> > Rebased on 9dc718bdf2b1a574481a45624d42b674332e2903\n> > \n> > I guess my patch should/may be subsumed by this other one - I'm fine with\n> > that.\n> > https://commitfest.postgresql.org/31/2871/\n> > \n> > Note that my interest here is just in bistate, to avoid leaving behind many dirty\n> > buffers, not improved performance of COPY.\n> \n> I am very interested in this patch, and I plan to do some experiments with the patch.\n> Can you please rebase the patch because it seems can not applied to the master now.\n\nThanks for your interest.\n\nI was sitting on a rebased version since the bulk FDW patch will cause\nconflicts, and since this should maybe be built on top of the table-am patch\n(2871). Have fun :)\n\n-- \nJustin", "msg_date": "Sun, 21 Feb 2021 21:01:58 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" 
}, { "msg_contents": "> > I am very interested in this patch, and I plan to do some experiments with the\n> patch.\n> > Can you please rebase the patch because it seems can not applied to the\n> master now.\n> \n> Thanks for your interest.\n> \n> I was sitting on a rebased version since the bulk FDW patch will cause conflicts,\n> and since this should maybe be built on top of the table-am patch (2871).\n> Have fun :)\nHi,\n\nWhen I testing with the patch, I found I can not use \"\\d tablename\".\nIt reports the following error, it this related to the patch?\n\n--------------------------------------------------------------------------\nERROR: did not find '}' at end of input node at character 141\nSTATEMENT: SELECT pol.polname, pol.polpermissive,\n CASE WHEN pol.polroles = '{0}' THEN NULL ELSE pg_catalog.array_to_string(array(select rolname from pg_catalog.pg_roles where oid = any (pol.polroles) order by 1),',') END,\n pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),\n pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),\n CASE pol.polcmd\n WHEN 'r' THEN 'SELECT'\n WHEN 'a' THEN 'INSERT'\n WHEN 'w' THEN 'UPDATE'\n WHEN 'd' THEN 'DELETE'\n END AS cmd\n FROM pg_catalog.pg_policy pol\n WHERE pol.polrelid = '58112' ORDER BY 1;\nERROR: did not find '}' at end of input node\nLINE 2: ...catalog.array_to_string(array(select rolname from pg_catalog...\n--------------------------------------------------------------------------\n\nBest regards,\nhouzj\n\n\n", "msg_date": "Mon, 8 Mar 2021 09:07:11 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: should INSERT SELECT use a BulkInsertState?" 
}, { "msg_contents": "> > > I am very interested in this patch, and I plan to do some\n> > > experiments with the\n> > patch.\n> > > Can you please rebase the patch because it seems can not applied to\n> > > the\n> > master now.\n> >\n> > Thanks for your interest.\n> >\n> > I was sitting on a rebased version since the bulk FDW patch will cause\n> > conflicts, and since this should maybe be built on top of the table-am patch\n> (2871).\n> > Have fun :)\n> Hi,\n> \n> When I testing with the patch, I found I can not use \"\\d tablename\".\n> It reports the following error, it this related to the patch?\n\nSorry, solved by re-initdb.\n\nBest regards,\nhouzj\n\n\n", "msg_date": "Mon, 8 Mar 2021 09:18:38 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Mon, Mar 8, 2021 at 2:18 PM houzj.fnst@fujitsu.com <\nhouzj.fnst@fujitsu.com> wrote:\n\n> > > > I am very interested in this patch, and I plan to do some\n> > > > experiments with the\n> > > patch.\n> > > > Can you please rebase the patch because it seems can not applied to\n> > > > the\n> > > master now.\n> > >\n> > > Thanks for your interest.\n> > >\n> > > I was sitting on a rebased version since the bulk FDW patch will cause\n> > > conflicts, and since this should maybe be built on top of the table-am\n> patch\n> > (2871).\n> > > Have fun :)\n> > Hi,\n> >\n> > When I testing with the patch, I found I can not use \"\\d tablename\".\n> > It reports the following error, it this related to the patch?\n>\n> Sorry, solved by re-initdb.\n>\n> Best regards,\n> houzj\n>\n>\n> One of the patch\n(v10-0001-INSERT-SELECT-to-use-BulkInsertState-and-multi_i.patch) from the\npatchset does not apply.\n\nhttp://cfbot.cputube.org/patch_32_2553.log\n\n1 out of 13 hunks FAILED -- saving rejects to file\nsrc/backend/commands/copyfrom.c.rej\n\nIt is a minor change, therefore I fixed that to make cfbot 
happy. Please\ntake a look if that works for you.\n\n\n\n\n-- \nIbrar Ahmed", "msg_date": "Thu, 18 Mar 2021 18:37:41 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "Hi,\n+ mtstate->ntuples > bulk_insert_ntuples &&\n+ bulk_insert_ntuples >= 0)\n\nI wonder why bulk_insert_ntuples == 0 is included in the above. It\nseems bulk_insert_ntuples having value of 0 should mean not enabling bulk\ninsertions.\n\n+ }\n+ else\n+ {\n\nnit: the else should be on the same line as the right brace.\n\nCheers\n\nOn Thu, Mar 18, 2021 at 6:38 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Mon, Mar 8, 2021 at 2:18 PM houzj.fnst@fujitsu.com <\n> houzj.fnst@fujitsu.com> wrote:\n>\n>> > > > I am very interested in this patch, and I plan to do some\n>> > > > experiments with the\n>> > > patch.\n>> > > > Can you please rebase the patch because it seems can not applied to\n>> > > > the\n>> > > master now.\n>> > >\n>> > > Thanks for your interest.\n>> > >\n>> > > I was sitting on a rebased version since the bulk FDW patch will cause\n>> > > conflicts, and since this should maybe be built on top of the\n>> table-am patch\n>> > (2871).\n>> > > Have fun :)\n>> > Hi,\n>> >\n>> > When I testing with the patch, I found I can not use \"\\d tablename\".\n>> > It reports the following error, it this related to the patch?\n>>\n>> Sorry, solved by re-initdb.\n>>\n>> Best regards,\n>> houzj\n>>\n>>\n>> One of the patch\n> (v10-0001-INSERT-SELECT-to-use-BulkInsertState-and-multi_i.patch) from the\n> patchset does not apply.\n>\n> http://cfbot.cputube.org/patch_32_2553.log\n>\n> 1 out of 13 hunks FAILED -- saving rejects to file\n> src/backend/commands/copyfrom.c.rej\n>\n> It is a minor change, therefore I fixed that to make cfbot happy. 
Please\n> take a look if that works for you.\n>\n>\n>\n>\n> --\n> Ibrar Ahmed\n>\n\nHi,+           mtstate->ntuples > bulk_insert_ntuples &&+           bulk_insert_ntuples >= 0)I wonder why bulk_insert_ntuples == 0 is included in the above. It seems bulk_insert_ntuples having value of 0 should mean not enabling bulk insertions.+   }+   else+   {nit: the else should be on the same line as the right brace.CheersOn Thu, Mar 18, 2021 at 6:38 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:On Mon, Mar 8, 2021 at 2:18 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:> > > I am very interested in this patch, and I plan to do some\n> > > experiments with the\n> > patch.\n> > > Can you please rebase the patch because it seems can not applied to\n> > > the\n> > master now.\n> >\n> > Thanks for your interest.\n> >\n> > I was sitting on a rebased version since the bulk FDW patch will cause\n> > conflicts, and since this should maybe be built on top of the table-am patch\n> (2871).\n> > Have fun :)\n> Hi,\n> \n> When I testing with the patch, I found I can not use \"\\d tablename\".\n> It reports the following error, it this related to the patch?\n\nSorry, solved by re-initdb.\n\nBest regards,\nhouzj\n\n\nOne of the patch (v10-0001-INSERT-SELECT-to-use-BulkInsertState-and-multi_i.patch) from the patchset does not apply. http://cfbot.cputube.org/patch_32_2553.log1 out of 13 hunks FAILED -- saving rejects to file src/backend/commands/copyfrom.c.rejIt is a minor change, therefore I fixed that to make cfbot happy. Please take a look if that works for you.-- Ibrar Ahmed", "msg_date": "Thu, 18 Mar 2021 08:29:50 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" 
}, { "msg_contents": "Hi,\r\n\r\nAbout the 0002-patch [Check for volatile defaults].\r\n\r\nI wonder if we can check the volatile default value by traversing \"query->targetList\" in planner.\r\n\r\nIMO, the column default expression was written into the targetList, and the current parallel-safety check\r\ntravere the \"query->targetList\" to determine whether it contains unsafe column default expression.\r\nLike: standard_planner-> query_tree_walker\r\n\tif (walker((Node *) query->targetList, context))\r\n\t\treturn true;\r\nMay be we can do the similar thing to check the volatile defaults, if so, we do not need to add a field to TargetEntry.\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\r\n\r\n", "msg_date": "Mon, 22 Mar 2021 09:44:49 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "On Mon, May 11, 2020 at 03:19:34PM +0900, Michael Paquier wrote:\n> On Fri, May 08, 2020 at 02:25:45AM -0500, Justin Pryzby wrote:\n> > Seems to me it should, at least conditionally. 
At least if there's a function\n> > scan or a relation or ..\n> >\n> > I mentioned a bit about our use-case here:\n> > https://www.postgresql.org/message-id/20200219173742.GA30939%40telsasoft.com\n> > => I'd prefer our loaders to write their own data rather than dirtying large\n> > fractions of buffer cache and leaving it around for other backends to clean up.\n>\n> Does it matter in terms of performance and for which cases does it\n> actually matter?\n\nEvery 15min we're inserting 10s of thousands of rows, which dirties a large\nnumber of buffers:\n\npostgres=# CREATE EXTENSION pg_buffercache; DROP TABLE tt; CREATE TABLE tt(i int); INSERT INTO tt SELECT generate_series(1,999999); SELECT usagecount,COUNT(1) FROM pg_buffercache WHERE isdirty GROUP BY 1 ORDER BY 1;\n usagecount | count\n------------+-------\n 1 | 1\n 2 | 1\n 3 | 2\n 4 | 2\n 5 | 4436\n\nWith this patch it dirties fewer pages and with lower usage count:\n\n 1 | 2052\n 2 | 1\n 3 | 3\n 4 | 2\n 5 | 10\n\nThe goal is to avoid cache churn by using a small ring buffer.\nNote that indexes on the target table will themselves use up buffers, and\nBulkInsert won't help so much.\n\nOn Thu, Mar 18, 2021 at 08:29:50AM -0700, Zhihong Yu wrote:\n> + mtstate->ntuples > bulk_insert_ntuples &&\n> + bulk_insert_ntuples >= 0)\n> \n> I wonder why bulk_insert_ntuples == 0 is included in the above. It\n> seems bulk_insert_ntuples having value of 0 should mean not enabling bulk\n> insertions.\n\nI think it ought to be possible to enable bulk insertions immediately, which is \nwhat 0 does. -1 is the value defined to mean \"do not use bulk insert\". \nI realize there's no documentation yet. \n\nThis patch started out with the goal of using a BulkInsertState for INSERT,\nsame as for COPY. We use INSERT ON CONFLICT VALUES(),()..., and it'd be nice\nif our data loaders could avoid leaving behind dirty buffers.\n\nSimon suggested to also use MultInsertInfo. 
However that patch is complicated:\nit cannot work with volatile default expressions, and probably many other\nthings that could go in SELECT but not supported by miistate. That may be\nbetter handled by another patch (or in any case by someone else) like this one:\n| New Table Access Methods for Multi and Single Inserts\nhttps://commitfest.postgresql.org/31/2871/\n\nI split off the simple part again. If there's no interest in the 0001 patch\nalone, then I guess it should be closed in the CF.\n\nHowever, the \"new table AM\" patch doesn't address our case, since neither\nVALUES nor INSERT SELECT are considered a bulk insert..\n\n-- \nJustin", "msg_date": "Sun, 26 Sep 2021 19:08:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" }, { "msg_contents": "> On 27 Sep 2021, at 02:08, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> I split off the simple part again. If there's no interest in the 0001 patch\n> alone, then I guess it should be closed in the CF.\n\nSince the thread has stalled, maybe that's the best course of action here. Any\nobjections from anyone on the thread?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 13:24:21 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: should INSERT SELECT use a BulkInsertState?" } ]
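Justin's pg_buffercache numbers in the thread above (thousands of high-usage dirty buffers without a BulkInsertState, a handful with one) come down to a trade-off that can be sketched with a toy model. The sketch below is hypothetical and is not PostgreSQL's BufferAccessStrategy code; it only assumes a loader either takes one shared buffer per page or cycles through a fixed ring of slots, writing a slot back before reuse.

```python
# Toy model of the BulkInsertState ring-buffer idea (illustrative only --
# not the server's implementation, just the caching principle discussed).

def bulk_load(num_pages, ring_size=None):
    """Return (buffers_dirtied, loader_writebacks) for loading num_pages.

    With no ring, every page occupies its own shared buffer and is left
    dirty for checkpoints/bgwriter to clean later.  With a small ring, the
    loader cycles through ring_size slots, writing each one back itself
    before reuse, so the rest of the buffer cache is untouched.
    """
    if ring_size is None:
        return num_pages, 0                 # pollutes num_pages buffers
    writebacks = max(0, num_pages - ring_size)
    return min(num_pages, ring_size), writebacks

no_ring = bulk_load(10000)
ring = bulk_load(10000, ring_size=16)
print(no_ring)  # (10000, 0): large cache footprint, deferred cleanup
print(ring)     # (16, 9984): tiny footprint, loader pays the writes
```

The model makes the thread's point visible: the ring bounds cache churn at the cost of the loading backend doing its own write-backs, which is why it helps other backends more than the loader itself.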
[ { "msg_contents": "\nHello devs,\n\nalthough having arrays is an anathema in a relational world, pg has them, \nand I find it useful for some queries, mostly in an aggregation to show \nin a compact way what items were grouped together.\n\nThere are a few functions available to deal with arrays. Among these \nfunctions, there is no \"array_sort\". It is easy enough to provide one that \nseems to work, such as:\n\n CREATE OR REPLACE FUNCTION array_sort(a ANYARRAY) RETURNS ANYARRAY\n IMMUTABLE STRICT AS $$\n SELECT ARRAY_AGG(i) FROM (SELECT i FROM UNNEST(a) AS i ORDER BY 1) AS i;\n $$ LANGUAGE sql;\n\nbut I'm afraid that is is not particularly efficient, and I'm not even \nsure that it is deterministic (ok, the subquery is sorted, but the outside\nquery could still decide to scan it out of order for some reason?).\n\nIs there a reason *not* to provide an \"array_sort\" function?\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 8 May 2020 10:02:52 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Why no \"array_sort\" function?" }, { "msg_contents": "Hello\n\n> mostly in an aggregation to show\n> in a compact way what items were grouped together.\n\nAggregate functions have syntax for ordering: just \"select array_agg(i order by i) from ....\"\nDescribed here: https://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES\n\nregards, Sergei\n\n\n", "msg_date": "Fri, 08 May 2020 11:26:31 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Why no \"array_sort\" function?" 
}, { "msg_contents": "\nHello Sergei,\n\n> Aggregate functions have syntax for ordering: just \"select array_agg(i order by i) from ....\"\n> Described here: https://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES\n\nGreat, that's indeed enough for my usage, thanks for the tip!\n\nThe questions remains, why not provide an \"array_sort\", which could be \nuseful in other contexts?\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 8 May 2020 10:46:35 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Why no \"array_sort\" function?" } ]
[ { "msg_contents": "Why Postgres default FILLFACTOR for table is 100 and for Index is 90.\n\nAlthough Oracle is having completely different MVCC architecture, it uses\ndefault 90 for table and 100 for Index (exact reverse of Postgres)\n\nPostgres blocks needed more spaces for row update compares to Oracle\n(because Oracle keeps buffer space only for row expansion, whereas Postgres\nneed to create new versioned row). As I see Postgres is more suitable for\nOLTP workload, keeping TABLE FILLFACTOR value to 90 is more suitable rather\nthan stressing to save storage space. Less FILLFACTOR value will be useful\nto make UPDATEs as HOT applicable as well and that is going to benefit new\nPostgres adopting users who are initially not aware of such setting and\nonly realize this later when VACUUM are really running long and Indexes\ngets bloated. .\n\nOther side Index FILLFACTOR makes sense only for existing populated tables\nand for any row (new INSERTs or INSERT coming through UPDATEs), it can fill\nthe block above FILLFACTOR value. I think 100 default make more sense here.", "msg_date": "Fri, 8 May 2020 13:50:30 +0530", "msg_from": "Virender Singla <virender.cse@gmail.com>", "msg_from_op": true, "msg_subject": "Postgres default FILLFACTOR value" }, { "msg_contents": "In Postgres, Index FILLFACTOR only works for monotonically increasing\ncolumn values and for random values it will do 50:50 block split. However\nit's really less likely that monotonically increasing columns gets updated\nthen why we need to waste that 10% space and also making Index range scan\non such tables less performant.\n\npostgres=> create table test(id bigint);\nCREATE TABLE\npostgres=> CREATE INDEX idx1_test ON test (id) with (fillfactor = 100);\nCREATE INDEX\npostgres=> CREATE INDEX idx2_test ON test (id); --default to 90.\nCREATE INDEX\n\npostgres=> insert into test SELECT ceil(random() * 10000000) from\ngenerate_series(1, 10000000) AS temp (id) ;\nINSERT 0 10000000\n\npostgres=> \\di+ idx1_test\n List of relations\n Schema | Name | Type | Owner | Table | Size | Description\n--------+-----------+-------+----------+-------+--------+-------------\n public | idx1_test | index | postgres | test | 278 MB |\n\npostgres=> \\di+ idx2_test\n List of relations\n Schema | Name | Type | Owner | Table | Size | Description\n--------+-----------+-------+----------+-------+--------+-------------\n public | idx2_test | index | postgres | test | 280 MB |\n\npostgres=> update test set id = id+1 where id%100=0;\nUPDATE 99671\npostgres=> \\di+ idx1_test\n List of relations\n Schema | Name | Type | Owner | Table | Size | Description\n--------+-----------+-------+----------+-------+--------+-------------\n public | idx1_test | index | postgres | test | 281 MB |\n\npostgres=> \\di+ idx2_test\n List of relations\n Schema | Name | Type | Owner | Table | Size 
|\n--------+-----------+-------+----------+-------+--------+-----------\n public | idx2_test | index | postgres | test  | 282 MB |\n\n\nOn Fri, May 8, 2020 at 1:50 PM Virender Singla <virender.cse@gmail.com>\nwrote:\n\n> Why Postgres default FILLFACTOR for table is 100 and for Index is 90.\n>\n> Although Oracle is having completely different MVCC architecture, it uses\n> default 90 for table and 100 for Index (exact reverse of Postgres)\n>\n> Postgres blocks needed more spaces for row update compares to Oracle\n> (because Oracle keeps buffer space only for row expansion, whereas Postgres\n> need to create new versioned row). As I see Postgres is more suitable for\n> OLTP workload, keeping TABLE FILLFACTOR value to 90 is more suitable rather\n> than stressing to save storage space. Less FILLFACTOR value will be useful\n> to make UPDATEs as HOT applicable as well and that is going to benefit new\n> Postgres adopting users who are initially not aware of such setting and\n> only realize this later when VACUUM are really running long and Indexes\n> gets bloated. .\n>\n> Other side Index FILLFACTOR makes sense only for existing populated tables\n> and for any row (new INSERTs or INSERT coming through UPDATEs), it can fill\n> the block above FILLFACTOR value. I think 100 default make more sense here.\n>\n>\n>\n", "msg_date": "Sun, 17 May 2020 11:18:45 +0530", "msg_from": "Virender Singla <virender.cse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres default FILLFACTOR value" } ]
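Virender's argument in the thread above — that index fillfactor only costs space when pages fill by rightmost splits — can be made concrete with a toy calculation. The model below is an assumption-laden sketch, not PostgreSQL's B-tree code: it assumes fixed-capacity leaf pages, uniform tuple size, no page-header overhead, and covers only the monotonic-insert case, where every finished page is left fillfactor percent full.

```python
import math

def leaf_pages(n_tuples, page_capacity=100, fillfactor=100):
    # Monotonically increasing keys always split the rightmost leaf page,
    # so in this toy model each finished page is left fillfactor% full.
    tuples_per_page = page_capacity * fillfactor // 100
    return math.ceil(n_tuples / tuples_per_page)

print(leaf_pages(10_000, fillfactor=100))  # 100 pages
print(leaf_pages(10_000, fillfactor=90))   # 112 pages: ~10% larger index
```

For random key insertions, by contrast, 50:50 page splits dominate and leaf pages converge to roughly two-thirds full regardless of fillfactor — consistent with the nearly identical 278 MB vs 280 MB index sizes shown in the thread.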
[ { "msg_contents": "I happened to notice that COPY TO releases the ACCESS SHARE lock\non the table right when the command ends rather than holding it\nuntil the end of the transaction:\n\n From backend/commands/copy.c:\n\n/*\n * Close the relation. If reading, we can release the AccessShareLock\n * we got; if writing, we should hold the lock until end of transaction\n * to ensure that updates will be committed before lock is released.\n */\nheap_close(rel, (from ? NoLock : AccessShareLock));\n\nThis violates the two-phase locking protocol and means that,\nfor example, the second COPY from the same table in a\nREPEATABLE READ transaction might fail or return nothing if\na concurrent transaction dropped or truncated the table in the\nmean time.\n\nIs this a bug or intentional (for example, to make pg_dump release\nits locks early)? In the latter case, it warrants documentation.\n\nI dug into the history:\n\nThe comment is from commit 4dded12faad, before which COPY TO also\nreleased the lock immediately.\n\nThe early lock release was added in commit bd272cace63, but that\nonly reflected how the indexes were locked before.\n\nSo this behavior seems to go back all the way.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 08 May 2020 10:58:24 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "COPY, lock release and MVCC" }, { "msg_contents": "On Fri, May 8, 2020 at 4:58 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> I happened to notice that COPY TO releases the ACCESS SHARE lock\n> on the table right when the command ends rather than holding it\n> until the end of the transaction:\n\nThat seems inconsistent with what an INSERT statement would do, and thus bad.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 15:43:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: COPY, lock 
release and MVCC" }, { "msg_contents": "On Mon, 2020-05-11 at 15:43 -0400, Robert Haas wrote:\n> On Fri, May 8, 2020 at 4:58 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > I happened to notice that COPY TO releases the ACCESS SHARE lock\n> > on the table right when the command ends rather than holding it\n> > until the end of the transaction:\n> \n> That seems inconsistent with what an INSERT statement would do, and thus bad.\n\nWell, should we fix the code or the documentation?\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 12 May 2020 17:37:46 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: COPY, lock release and MVCC" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Mon, 2020-05-11 at 15:43 -0400, Robert Haas wrote:\n>> On Fri, May 8, 2020 at 4:58 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>>> I happened to notice that COPY TO releases the ACCESS SHARE lock\n>>> on the table right when the command ends rather than holding it\n>>> until the end of the transaction:\n\n>> That seems inconsistent with what an INSERT statement would do, and thus bad.\n\n> Well, should we fix the code or the documentation?\n\nI'd agree with fixing the code. Early lock release is something we do on\nsystem catalog accesses, and while it hasn't bitten us yet, I've been\nkind of expecting that someday it will. We should not do it on SQL-driven\naccesses to user tables.\n\nHaving said that, I'd vote for just changing it in HEAD, not\nback-patching. 
It's not clear that there are consequences bad enough\nto merit a back-patched behavior change.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 11:50:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY, lock release and MVCC" }, { "msg_contents": "On Tue, 2020-05-12 at 11:50 -0400, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Mon, 2020-05-11 at 15:43 -0400, Robert Haas wrote:\n> > > On Fri, May 8, 2020 at 4:58 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > > > I happened to notice that COPY TO releases the ACCESS SHARE lock\n> > > > on the table right when the command ends rather than holding it\n> > > > until the end of the transaction:\n> > > That seems inconsistent with what an INSERT statement would do, and thus bad.\n> > Well, should we fix the code or the documentation?\n> \n> I'd agree with fixing the code. Early lock release is something we do on\n> system catalog accesses, and while it hasn't bitten us yet, I've been\n> kind of expecting that someday it will. We should not do it on SQL-driven\n> accesses to user tables.\n> \n> Having said that, I'd vote for just changing it in HEAD, not\n> back-patching. 
It's not clear that there are consequences bad enough\n> to merit a back-patched behavior change.\n\nAgreed.\n\nHere is a patch.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 12 May 2020 21:50:40 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: COPY, lock release and MVCC" }, { "msg_contents": "On Wed, May 13, 2020 at 1:20 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Tue, 2020-05-12 at 11:50 -0400, Tom Lane wrote:\n> > Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > > On Mon, 2020-05-11 at 15:43 -0400, Robert Haas wrote:\n> > > > On Fri, May 8, 2020 at 4:58 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > > > > I happened to notice that COPY TO releases the ACCESS SHARE lock\n> > > > > on the table right when the command ends rather than holding it\n> > > > > until the end of the transaction:\n> > > > That seems inconsistent with what an INSERT statement would do, and thus bad.\n> > > Well, should we fix the code or the documentation?\n> >\n> > I'd agree with fixing the code. Early lock release is something we do on\n> > system catalog accesses, and while it hasn't bitten us yet, I've been\n> > kind of expecting that someday it will. We should not do it on SQL-driven\n> > accesses to user tables.\n> >\n> > Having said that, I'd vote for just changing it in HEAD, not\n> > back-patching. It's not clear that there are consequences bad enough\n> > to merit a back-patched behavior change.\n>\n> Agreed.\n>\n> Here is a patch.\n>\n\n- /*\n- * Close the relation. If reading, we can release the AccessShareLock we\n- * got; if writing, we should hold the lock until end of transaction to\n- * ensure that updates will be committed before lock is released.\n- */\n- if (rel != NULL)\n- table_close(rel, (is_from ? NoLock : AccessShareLock));\n+ table_close(rel, NoLock);\n\nI wonder why you have removed (rel != NULL) check? It can be NULL\nwhen we use a query instead of a relation. 
Refer below code:\nDoCopy()\n{\n..\n..\n{\nAssert(stmt->query);\n\nquery = makeNode(RawStmt);\nquery->stmt = stmt->query;\nquery->stmt_location = stmt_location;\nquery->stmt_len = stmt_len;\n\nrelid = InvalidOid;\nrel = NULL;\n}\n..\n}\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 May 2020 19:29:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: COPY, lock release and MVCC" }, { "msg_contents": "On Wed, 2020-05-13 at 19:29 +0530, Amit Kapila wrote:\n> > > > > On Fri, May 8, 2020 at 4:58 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > > > > > I happened to notice that COPY TO releases the ACCESS SHARE lock\n> > > > > > on the table right when the command ends rather than holding it\n> > > > > > until the end of the transaction:\n> > \n> > Here is a patch.\n> > \n> \n> - /*\n> - * Close the relation. If reading, we can release the AccessShareLock we\n> - * got; if writing, we should hold the lock until end of transaction to\n> - * ensure that updates will be committed before lock is released.\n> - */\n> - if (rel != NULL)\n> - table_close(rel, (is_from ? NoLock : AccessShareLock));\n> + table_close(rel, NoLock);\n> \n> I wonder why you have removed (rel != NULL) check?\n\nThat was just unexcusable sloppiness, nothing more.\n\nHere is a fixed patch.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 13 May 2020 22:36:40 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: COPY, lock release and MVCC" }, { "msg_contents": "On Thu, May 14, 2020 at 2:06 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> > I wonder why you have removed (rel != NULL) check?\n>\n> That was just unexcusable sloppiness, nothing more.\n>\n\nLGTM. 
I have slightly modified the commit message, see if that looks\nfine to you.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 14 May 2020 15:11:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: COPY, lock release and MVCC" }, { "msg_contents": "On Thu, 2020-05-14 at 15:11 +0530, Amit Kapila wrote:\n> LGTM. I have slightly modified the commit message, see if that looks\n> fine to you.\n\nFine with me, thanks.\n\n> This breaks the cases where a REPEATABLE READ transaction could see an\n> empty table if it repeats a COPY statement and somebody truncated the\n> table in the meantime.\n\nI would use \"case\" rather than \"cases\" here.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 14 May 2020 15:40:44 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: COPY, lock release and MVCC" }, { "msg_contents": "On Thu, May 14, 2020 at 7:10 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Thu, 2020-05-14 at 15:11 +0530, Amit Kapila wrote:\n> > LGTM. 
I have slightly modified the commit message, see if that looks\n> > fine to you.\n>\n> Fine with me, thanks.\n>\n> > This breaks the cases where a REPEATABLE READ transaction could see an\n> > empty table if it repeats a COPY statement and somebody truncated the\n> > table in the meantime.\n>\n> I would use \"case\" rather than \"cases\" here.\n>\n\nOkay, changed, and pushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 15 May 2020 10:11:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: COPY, lock release and MVCC" }, { "msg_contents": "On Fri, 2020-05-15 at 10:11 +0530, Amit Kapila wrote:\n> Okay, changed, and pushed.\n\nThank you!\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 15 May 2020 07:04:55 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: COPY, lock release and MVCC" } ]
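The bug fixed in the thread above is a violation of two-phase locking: COPY TO gave back its ACCESS SHARE lock at command end instead of at transaction end. A minimal lock-table sketch (hypothetical code, not the server's lock manager) shows the resulting anomaly: once the reader's lock is released early, a conflicting ACCESS EXCLUSIVE request such as TRUNCATE can be granted in the middle of the reader's still-open transaction.

```python
# Hypothetical lock-table sketch (not PostgreSQL's lock manager) showing
# why COPY TO must hold its ACCESS SHARE lock until end of transaction.

class LockTable:
    """ACCESS SHARE conflicts only with ACCESS EXCLUSIVE."""

    def __init__(self):
        self.shared = set()    # xids holding ACCESS SHARE
        self.exclusive = None  # xid holding ACCESS EXCLUSIVE

    def acquire_shared(self, xid):
        if self.exclusive is not None:
            return False       # would block behind e.g. TRUNCATE
        self.shared.add(xid)
        return True

    def release_shared(self, xid):
        self.shared.discard(xid)

    def acquire_exclusive(self, xid):
        if self.shared or self.exclusive is not None:
            return False       # must wait for in-progress readers
        self.exclusive = xid
        return True

# Pre-fix behaviour: the lock is given back when the COPY command ends,
# so TRUNCATE is granted while the reader's transaction is still open.
table = LockTable()
assert table.acquire_shared("copy_xact")
table.release_shared("copy_xact")           # early release (the old bug)
print(table.acquire_exclusive("truncate"))  # True: anomaly possible
```

With the committed fix, the release happens only at commit, so in this model the `acquire_exclusive` attempt would return False until the reading transaction finishes.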
[ { "msg_contents": "PG12\nSteps to reproduce on 3 nodes cluster with quorum commit.\n1. Cut off network on master with everything.\n2. Pkill -9 PostgreSQL on each node.\n3. Start PostgreSQL on each node.\n\nWhat was strange?\nI check every second pg_last_wal_replay_lsn() and pg_last_wal_receive_lsn().\n\nAll time it was the same.\nFor example last output before killing PostgreSQL:\n2020-05-08 08:31:44,495 DEBUG: Last replayed LSN: 0/502E3D8\n2020-05-08 08:31:44,497 DEBUG: Last received LSN: 0/502E3D8\n\nAfter starting PostgreSQL I got this:\n2020-05-08 08:31:55,324 DEBUG: Last replayed LSN: 0/502E3D8\n2020-05-08 08:31:55,326 DEBUG: Last received LSN: 0/5000000\n\nWhy could it happen and is it expected behaviour?", "msg_date": "Fri, 8 May 2020 14:36:21 +0500", "msg_from": "=?utf-8?Q?godjan_=E2=80=A2?= <g0dj4n@gmail.com>", "msg_from_op": true, "msg_subject": "Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "Hello\n\nYes, this is expected. Walreceiver always start streaming from beginning of the wal segment.\n./src/backend/replication/walreceiverfuncs.c in RequestXLogStreaming:\n\n\t * We always start at the beginning of the segment. 
That prevents a broken\n\t * segment (i.e., with no records in the first half of a segment) from\n\t * being created by XLOG streaming, which might cause trouble later on if\n\t * the segment is e.g archived.\n\nregards, Sergei", "msg_date": "Fri, 08 May 2020 12:50:32 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "I got it, thank you.\nCan you recommend what to use to determine which quorum standby should be promoted in such case?\nWe planned to use pg_last_wal_receive_lsn() to determine which has fresh data but if it returns the beginning of the segment on both replicas we can’t determine which standby confirmed that write transaction to disk.\n\n> On 8 May 2020, at 14:50, Sergei Kornilov <sk@zsrv.org> wrote:\n> \n> Hello\n> \n> Yes, this is expected. Walreceiver always start streaming from beginning of the wal segment.\n> ./src/backend/replication/walreceiverfuncs.c in RequestXLogStreaming:\n> \n> \t * We always start at the beginning of the segment. That prevents a broken\n> \t * segment (i.e., with no records in the first half of a segment) from\n> \t * being created by XLOG streaming, which might cause trouble later on if\n> \t * the segment is e.g archived.\n> \n> regards, Sergei", "msg_date": "Fri, 8 May 2020 15:02:26 +0500", "msg_from": "=?utf-8?Q?godjan_=E2=80=A2?= <g0dj4n@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "On Fri, May 08, 2020 at 03:02:26PM +0500, godjan • wrote:\n> Can you recommend what to use to determine which quorum standby\n> should be promoted in such case? 
\n>> We planned to use pg_last_wal_receive_lsn() to determine which has\n>> fresh data but if it returns the beginning of the segment on both\n>> replicas we can’t determine which standby confirmed that write\n>> transaction to disk.\n> \n> If you want to preserve transaction-level consistency across those\n> notes, what is your configuration for synchronous_standby_names and\n> synchronous_commit on the primary? Cannot you rely on that?\n> --\n> Michael\n\n\n", "msg_date": "Sun, 10 May 2020 18:58:50 +0500", "msg_from": "=?utf-8?Q?godjan_=E2=80=A2?= <g0dj4n@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "On Sun, May 10, 2020 at 06:58:50PM +0500, godjan • wrote:\n> synchronous_standby_names=ANY 1(host1, host2)\n> synchronous_commit=on\n\nThanks for the details. I was not sure based on your previous\nmessages. \n\n> So to understand which standby wrote last data to disk I should know\n> receive_lsn or write_lsn.\n\nIf you have just an access to the standbys, using\npg_last_wal_replay_lsn() should be enough, no? 
One tricky point is to\nmake sure that each standby does not have more WAL to replay, though\nyou can do that by looking at the wait event called\nRecoveryRetrieveRetryInterval for the startup process.\nNote that when a standby starts and has primary_conninfo set, it would\nrequest streaming to start again at the beginning of the segment as\nmentioned, but it does not change the point up to which the startup\nprocess replays the WAL available locally, as that's what takes\npriority as WAL source (second choice is a WAL archive and third is\nstreaming if all options are set in the recovery configuration).\n\nThere are several HA solutions floating around in the community, and I\ngot to wonder as well if some of them don't just scan the local\npg_wal/ of each standby in this case, even if that's more simple to\nlet the nodes start and replay up to their latest point available.\n--\nMichael", "msg_date": "Mon, 11 May 2020 15:54:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "\n(too bad the history has been removed to keep context)\n\nOn Fri, 8 May 2020 15:02:26 +0500\ngodjan • <g0dj4n@gmail.com> wrote:\n\n> I got it, thank you.\n> Can you recommend what to use to determine which quorum standby should be\n> promoted in such case? We planned to use pg_last_wal_receive_lsn() to\n> determine which has fresh data but if it returns the beginning of the segment\n> on both replicas we can’t determine which standby confirmed that write\n> transaction to disk.\n\nWait, pg_last_wal_receive_lsn() only decrease because you killed your standby.\n\npg_last_wal_receive_lsn() returns the value of walrcv->flushedUpto. 
The later\nis set to the beginning of the segment requested only during the first\nwalreceiver startup or a timeline fork:\n\n\t/*\n\t * If this is the first startup of walreceiver (on this timeline),\n\t * initialize flushedUpto and latestChunkStart to the starting point.\n\t */\n\tif (walrcv->receiveStart == 0 || walrcv->receivedTLI != tli)\n\t{\n\t\twalrcv->flushedUpto = recptr;\n\t\twalrcv->receivedTLI = tli;\n\t\twalrcv->latestChunkStart = recptr;\n\t}\n\twalrcv->receiveStart = recptr;\n\twalrcv->receiveStartTLI = tli;\n\nAfter a primary loss, as far as the standby are up and running, it is fine\nto use pg_last_wal_receive_lsn().\n\nWhy do you kill -9 your standby? Whay am I missing? Could you explain the\nusecase you are working on to justify this?\n\nRegards,\n\n\n", "msg_date": "Wed, 13 May 2020 16:52:12 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "On Mon, 11 May 2020 15:54:02 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n[...]\n> There are several HA solutions floating around in the community, and I\n> got to wonder as well if some of them don't just scan the local\n> pg_wal/ of each standby in this case, even if that's more simple to\n> let the nodes start and replay up to their latest point available.\n\nPAF relies on pg_last_wal_receive_lsn(). Relying on pg_last_wal_replay_lsn\nmight be possible. As you explained, it would requires to compare current\nreplay LSN with the last received on disk thought. This might probably be done,\neg with pg_waldump maybe and a waiting loop.\n\nHowever, such a waiting loop might be dangerous. If standbys are lagging far\nbehind and/or have read only sessions and/or load slowing down the replay, the\nwaiting loop might be very long. Maybe longer than the required RTO. 
The HA\nautomatic operator might even takes curative action because of some\nrecovery timeout, making things worst.\n\nRegards,\n\n\n", "msg_date": "Wed, 13 May 2020 17:04:47 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "-> Why do you kill -9 your standby? \nHi, it’s Jepsen test for our HA solution. It checks that we don’t lose data in such situation.\n\nSo, now we update logic as Michael said. All ha alive standbys now waiting for replaying all WAL that they have and after we use pg_last_replay_lsn() to choose which standby will be promoted in failover.\n\nIt fixed out trouble, but there is one another. Now we should wait when all ha alive hosts finish replaying WAL to failover. It might take a while(for example WAL contains wal_record about splitting b-tree).\n\nWe are looking for options that will allow us to find a standby that contains all data and replay all WAL only for this standby before failover.\n\nMaybe you have ideas on how to keep the last actual value of pg_last_wal_receive_lsn()? As I understand WAL receiver doesn’t write to disk walrcv->flushedUpto.\n\n> On 13 May 2020, at 19:52, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> \n> (too bad the history has been removed to keep context)\n> \n> On Fri, 8 May 2020 15:02:26 +0500\n> godjan • <g0dj4n@gmail.com> wrote:\n> \n>> I got it, thank you.\n>> Can you recommend what to use to determine which quorum standby should be\n>> promoted in such case? We planned to use pg_last_wal_receive_lsn() to\n>> determine which has fresh data but if it returns the beginning of the segment\n>> on both replicas we can’t determine which standby confirmed that write\n>> transaction to disk.\n> \n> Wait, pg_last_wal_receive_lsn() only decrease because you killed your standby.\n> \n> pg_last_wal_receive_lsn() returns the value of walrcv->flushedUpto. 
The later\n> is set to the beginning of the segment requested only during the first\n> walreceiver startup or a timeline fork:\n> \n> \t/*\n> \t * If this is the first startup of walreceiver (on this timeline),\n> \t * initialize flushedUpto and latestChunkStart to the starting point.\n> \t */\n> \tif (walrcv->receiveStart == 0 || walrcv->receivedTLI != tli)\n> \t{\n> \t\twalrcv->flushedUpto = recptr;\n> \t\twalrcv->receivedTLI = tli;\n> \t\twalrcv->latestChunkStart = recptr;\n> \t}\n> \twalrcv->receiveStart = recptr;\n> \twalrcv->receiveStartTLI = tli;\n> \n> After a primary loss, as far as the standby are up and running, it is fine\n> to use pg_last_wal_receive_lsn().\n> \n> Why do you kill -9 your standby? Whay am I missing? Could you explain the\n> usecase you are working on to justify this?\n> \n> Regards,\n\n\n\n", "msg_date": "Thu, 14 May 2020 07:18:33 +0500", "msg_from": "=?utf-8?Q?godjan_=E2=80=A2?= <g0dj4n@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "(please, the list policy is bottom-posting to keep history clean, thanks).\n\nOn Thu, 14 May 2020 07:18:33 +0500\ngodjan • <g0dj4n@gmail.com> wrote:\n\n> -> Why do you kill -9 your standby? \n> Hi, it’s Jepsen test for our HA solution. It checks that we don’t lose data\n> in such situation.\n\nOK. This test is highly useful to stress data high availability and durability,\nof course. However, how useful is this test in a context of auto failover for\n**service** high availability? If all your nodes are killed in the same\ndisaster, how/why an automatic cluster manager should take care of starting all\nnodes again and pick the right node to promote?\n\n> So, now we update logic as Michael said. All ha alive standbys now waiting\n> for replaying all WAL that they have and after we use pg_last_replay_lsn() to\n> choose which standby will be promoted in failover.\n> \n> It fixed out trouble, but there is one another. 
Now we should wait when all\n> ha alive hosts finish replaying WAL to failover. It might take a while(for\n> example WAL contains wal_record about splitting b-tree).\n\nIndeed, this is the concern I wrote about yesterday in a second mail on this\nthread.\n\n> We are looking for options that will allow us to find a standby that contains\n> all data and replay all WAL only for this standby before failover.\n\nNote that when you promote a node, it first replays available WALs before\nacting as a primary. So you can safely signal the promotion to the node and\nwait for it to finish the replay and promote.\n\n> Maybe you have ideas on how to keep the last actual value of\n> pg_last_wal_receive_lsn()? \n\nNope, no clean and elegant idea. One your instances are killed, maybe you can\nforce flush the system cache (secure in-memory-only data) and read the latest\nreceived WAL using pg_waldump?\n\nBut, what if some more data are available from archives, but not received from\nstreaming rep because of a high lag?\n\n> As I understand WAL receiver doesn’t write to disk walrcv->flushedUpto.\n\nI'm not sure to understand what you mean here.\npg_last_wal_receive_lsn() reports the actual value of walrcv->flushedUpto.\nwalrcv->flushedUpto reports the latest LSN force-flushed to disk.\n\n\n> > On 13 May 2020, at 19:52, Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> > wrote:\n> > \n> > \n> > (too bad the history has been removed to keep context)\n> > \n> > On Fri, 8 May 2020 15:02:26 +0500\n> > godjan • <g0dj4n@gmail.com> wrote:\n> > \n> >> I got it, thank you.\n> >> Can you recommend what to use to determine which quorum standby should be\n> >> promoted in such case? We planned to use pg_last_wal_receive_lsn() to\n> >> determine which has fresh data but if it returns the beginning of the\n> >> segment on both replicas we can’t determine which standby confirmed that\n> >> write transaction to disk. 
\n> > \n> > Wait, pg_last_wal_receive_lsn() only decrease because you killed your\n> > standby.\n> > \n> > pg_last_wal_receive_lsn() returns the value of walrcv->flushedUpto. The\n> > later is set to the beginning of the segment requested only during the first\n> > walreceiver startup or a timeline fork:\n> > \n> > \t/*\n> > \t * If this is the first startup of walreceiver (on this timeline),\n> > \t * initialize flushedUpto and latestChunkStart to the starting\n> > point. */\n> > \tif (walrcv->receiveStart == 0 || walrcv->receivedTLI != tli)\n> > \t{\n> > \t\twalrcv->flushedUpto = recptr;\n> > \t\twalrcv->receivedTLI = tli;\n> > \t\twalrcv->latestChunkStart = recptr;\n> > \t}\n> > \twalrcv->receiveStart = recptr;\n> > \twalrcv->receiveStartTLI = tli;\n> > \n> > After a primary loss, as far as the standby are up and running, it is fine\n> > to use pg_last_wal_receive_lsn().\n> > \n> > Why do you kill -9 your standby? Whay am I missing? Could you explain the\n> > usecase you are working on to justify this?\n> > \n> > Regards, \n> \n> \n> \n\n\n\n-- \nJehan-Guillaume de Rorthais\nDalibo\n\n\n", "msg_date": "Thu, 14 May 2020 18:44:57 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "Hi, sorry for 2 weeks latency in answer :)\n\n>> It fixed out trouble, but there is one another. Now we should wait when all\n>> ha alive hosts finish replaying WAL to failover. It might take a while(for\n>> example WAL contains wal_record about splitting b-tree).\n> \n> Indeed, this is the concern I wrote about yesterday in a second mail on this\n> thread.\n\nActually, I found out that we use the wrong heuristic to understand that standby still replaying WAL.\nWe compare values of pg_last_wal_replay_lsn() after and before sleeping.\nIf standby replaying huge wal_record(e.g. 
splitting b-tree) it gave us the wrong result.\n\n\n> Note that when you promote a node, it first replays available WALs before\n> acting as a primary.\n\n\nDo you know how Postgres understand that standby still replays available WAL?\nI didn’t get it from the code of promotion.\n\n\n> However, how useful is this test in a context of auto failover for\n> **service** high availability?\n\nSuch a situation has a really low probability in our service. And we thought that it could be okay to resolve such a situation with on-call participation.\n\n> Nope, no clean and elegant idea. One your instances are killed, maybe you can\n> force flush the system cache (secure in-memory-only data)? \n\nDo \"force flush the system cache” means invoke this command https://linux.die.net/man/8/sync <https://linux.die.net/man/8/sync> on the standby?\n\n> and read the latest received WAL using pg_waldump?\n\n\nI did an experiment with pg_waldump without sync:\n- write data on primary \n- kill primary\n- read the latest received WAL using pg_waldump:\n0/1D019F38\n- pg_last_wal_replay_lsn():\n0/1D019F68\n\nSo it’s wrong to use pg_waldump to understand what was latest received LSN. At least without “forcing flush system cache”.\n\n> If all your nodes are killed in the same\n> disaster, how/why an automatic cluster manager should take care of starting all\n> nodes again and pick the right node to promote?\n\n1. How?\nThe automatic cluster manager will restart standbys in such a situation.\nIf the primary lock in ZK is released automatic cluster manager start process of election new primary.\nTo understand which node should be promoted automatic cluster manager should get LSN of the last wal_record wrote on disk by each potential new primary.\nWe used pg_last_wal_receive_lsn() for it but it was a mistake. Because after \"kill -9” on standby pg_last_wal_receive_lsn() reports first lsn of last segment.\n\n2. 
Why?\n- sleepy on-call in a night can make something bad in such situation)\n- pg_waldump didn’t give the last LSN wrote on disk(at least without forcing flush the system cache) so I don’t know how on-call can understand which standby should be promoted\n- automatic cluster manager successfully resolve such situations in clusters with one determined synchronous standby for years, and we hope it’s possible to do it in clusters with quorum replication\n\n\n\n\n", "msg_date": "Mon, 1 Jun 2020 12:44:26 +0500", "msg_from": "=?utf-8?Q?godjan_=E2=80=A2?= <g0dj4n@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "On Mon, 1 Jun 2020 12:44:26 +0500\ngodjan • <g0dj4n@gmail.com> wrote:\n\n> Hi, sorry for 2 weeks latency in answer :)\n> \n> >> It fixed out trouble, but there is one another. 
Now we should wait when all\n> >> ha alive hosts finish replaying WAL to failover. It might take a while(for\n> >> example WAL contains wal_record about splitting b-tree). \n> > \n> > Indeed, this is the concern I wrote about yesterday in a second mail on this\n> > thread. \n> \n> Actually, I found out that we use the wrong heuristic to understand that\n> standby still replaying WAL. We compare values of pg_last_wal_replay_lsn()\n> after and before sleeping. If standby replaying huge wal_record(e.g.\n> splitting b-tree) it gave us the wrong result.\n\nIt could, yes.\n\n> > Note that when you promote a node, it first replays available WALs before\n> > acting as a primary.\n> \n> Do you know how Postgres understand that standby still replays available WAL?\n> I didn’t get it from the code of promotion.\n\nSee chapter \"26.2.2. Standby Server Operation\" in official doc:\n\n«\n Standby mode is exited and the server switches to normal operation when\n pg_ctl promote is run or a trigger file is found (promote_trigger_file).\n Before failover, any WAL immediately available in the archive or in pg_wal\n will be restored, but no attempt is made to connect to the master. \n»\n\nIn the source code, dig around the following chain if interested: StartupXLOG ->\nReadRecord -> XLogReadRecord -> XLogPageRead -> WaitForWALToBecomeAvailable.\n\n[...]\n\n> > Nope, no clean and elegant idea. One your instances are killed, maybe you\n> > can force flush the system cache (secure in-memory-only data)? \n> \n> Do \"force flush the system cache” means invoke this command\n> https://linux.die.net/man/8/sync <https://linux.die.net/man/8/sync> on the\n> standby?\n\nYes, just for safety.\n\n> > and read the latest received WAL using pg_waldump? \n> \n> I did an experiment with pg_waldump without sync:\n> - write data on primary \n> - kill primary\n> - read the latest received WAL using pg_waldump:\n> 0/1D019F38\n> - pg_last_wal_replay_lsn():\n> 0/1D019F68\n\nNormal. 
pg_waldump gives you the starting LSN of the record.\npg_last_wal_replay_lsn() returns lastReplayedEndRecPtr, which is the end of the\nrecord:\n\n\t/*\n\t * lastReplayedEndRecPtr points to end+1 of the last record successfully\n\t * replayed.\n\nSo I suppose your last xlogrecord was 30 bytes long. If I remember correctly,\nminimal xlogrecord length is 24 bytes, so I bet there's only one xlogrecord\nthere, starting at 0/1D019F38 with last byte at 0/1D019F67.\n\n> So it’s wrong to use pg_waldump to understand what was latest received LSN.\n> At least without “forcing flush system cache”.\n\nNope, just sum the xlogrecord length.\n\n\n", "msg_date": "Tue, 2 Jun 2020 16:11:15 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" }, { "msg_contents": "I got it should be LSN + MAXALIGN(xlogrecord length) 👍\nThanks a lot.\n\n> On 2 Jun 2020, at 19:11, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> Nope, just sum the xlogrecord length.\n\n\n", "msg_date": "Wed, 3 Jun 2020 13:56:49 +0500", "msg_from": "=?utf-8?Q?godjan_=E2=80=A2?= <g0dj4n@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange decreasing value of pg_last_wal_receive_lsn()" } ]
[ { "msg_contents": "Hello hackers,\n\n\nI have read the community mail from 'postgrespro' which the link\n\nbelow ①, a summary for the patch, it generals a CSN by timestamp\n\nwhen a transaction is committed and assigns a special value as CSN\n\nfor abort transaction, and record them in CSN SLRU file. Now we can\n\njudge if a xid available in a snapshot with a CSN value instead of by\n\nxmin,xmax and xip array so that if we hold CSN as a snapshot which\n\ncan be export and import.\n\n\n\n\n\nCSN may be a correct direction and an important part to implement\n\ndistributed of PostgreSQL because it delivers few data among cross-nodes\n\nfor snapshot, so the patch is meant to do some research.\n\n\n\nWe want to implement Clock-SI base on the patch.However the patch\n\nis too old, and I rebase the infrastructure part of the patch to recently\n\ncommit(7dc37ccea85).\n\n\n\nThe origin patch does not support csn alive among database restart\n\nbecause it will clean csnlog at every time the database restart, it works\n\nwell until a prepared transaction occurs due to the csn of prepare\n\ntransaction cleaned by a database restart. So I add wal support for\n\ncsnlog then csn can alive all the time, and move the csnlog clean work\n\nto auto vacuum.\n\n\n\nIt comes to another issue, now it can't switch from a xid-base snapshot\n\nto csn-base snapshot if a prepare transaction exists because it can not\n\nfind csn for the prepare transaction produced during xid-base snapshot.\n\nTo solve it, if the database restart with snapshot change to csn-base I\n\nrecord an 'xmin_for_csn' where start to check with csn snapshot. \n\n\n\nSome issues known about the current patch:\n\n1. The CSN-snapshot support repeatable read isolation level only, we\n\nshould try to support other isolation levels.\n\n\n\n2. 
[ { "msg_contents": "Hello hackers,\n\n\nI have read the community mail from 'postgrespro' which the link\n\nbelow ①, a summary for the patch, it generals a CSN by timestamp\n\nwhen a transaction is committed and assigns a special value as CSN\n\nfor abort transaction, and record them in CSN SLRU file. Now we can\n\njudge if a xid available in a snapshot with a CSN value instead of by\n\nxmin,xmax and xip array so that if we hold CSN as a snapshot which\n\ncan be export and import.\n\n\n\n\n\nCSN may be a correct direction and an important part to implement\n\ndistributed of PostgreSQL because it delivers few data among cross-nodes\n\nfor snapshot, so the patch is meant to do some research.\n\n\n\nWe want to implement Clock-SI base on the patch.However the patch\n\nis too old, and I rebase the infrastructure part of the patch to recently\n\ncommit(7dc37ccea85).\n\n\n\nThe origin patch does not support csn alive among database restart\n\nbecause it will clean csnlog at every time the database restart, it works\n\nwell until a prepared transaction occurs due to the csn of prepare\n\ntransaction cleaned by a database restart. So I add wal support for\n\ncsnlog then csn can alive all the time, and move the csnlog clean work\n\nto auto vacuum.\n\n\n\nIt comes to another issue, now it can't switch from a xid-base snapshot\n\nto csn-base snapshot if a prepare transaction exists because it can not\n\nfind csn for the prepare transaction produced during xid-base snapshot.\n\nTo solve it, if the database restart with snapshot change to csn-base I\n\nrecord an 'xmin_for_csn' where start to check with csn snapshot. \n\n\n\nSome issues known about the current patch:\n\n1. The CSN-snapshot support repeatable read isolation level only, we\n\nshould try to support other isolation levels.\n\n\n\n2. 
We can not switch fluently from xid-base->csn-base, if there be prepared\n\ntransaction in database.\n\n \n\nWhat do you think about it, I want try to test and improve the patch step\nby step. \n\n\n\n①https://www.postgresql.org/message-id/21BC916B-80A1-43BF-8650-3363CCDAE09C%40postgrespro.ru \n\n\n\n-----------\nRegards,\n\nHighgo Software (Canada/China/Pakistan) \n\nURL : http://www.highgo.ca/ \n\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 08 May 2020 20:43:50 +0800", "msg_from": "Movead Li <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "POC and rebased patch for CSN based snapshots" }, { "msg_contents": "Hello hackers,\r\n\r\nCurrently, I do some changes based on the last version:\r\n1. Catch up to the current commit (c2bd1fec32ab54).\r\n2. Add regression and document.\r\n3. Add support to switch from xid-base snapshot to csn-base snapshot,\r\nand the same with standby side.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 12 Jun 2020 17:41:15 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "On Fri, Jun 12, 2020 at 3:11 PM movead.li@highgo.ca <movead.li@highgo.ca> wrote:\n>\n> Hello hackers,\n>\n> Currently, I do some changes based on the last version:\n> 1. Catch up to the current commit (c2bd1fec32ab54).\n> 2. Add regression and document.\n> 3. Add support to switch from xid-base snapshot to csn-base snapshot,\n> and the same with standby side.\n>\n\nAFAIU, this patch is to improve scalability and also will be helpful\nfor Global Snapshots stuff, is that right? 
If so, how much\nperformance/scalability benefit this patch will have after Andres's\nrecent work on scalability [1]?\n\n[1] - https://www.postgresql.org/message-id/20200301083601.ews6hz5dduc3w2se%40alap3.anarazel.de\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 13 Jun 2020 12:14:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "\n\nOn 2020/06/12 18:41, movead.li@highgo.ca wrote:\n> Hello hackers,\n> \n> Currently, I do some changes based on the last version:\n> 1. Catch up to the current  commit (c2bd1fec32ab54).\n> 2. Add regression and document.\n> 3. Add support to switch from xid-base snapshot to csn-base snapshot,\n> and the same with standby side.\n\nAndrey also seems to be proposing the similar patch [1] that introduces CSN\ninto core. Could you tell me what the difference between his patch and yours?\nIf they are almost the same, we should focus on one together rather than\nworking separately?\n\nRegards,\n\n[1]\nhttps://www.postgresql.org/message-id/9964cf46-9294-34b9-4858-971e9029f5c7@postgrespro.ru\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 15 Jun 2020 16:48:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "\n\nOn 2020/06/15 16:48, Fujii Masao wrote:\n> \n> \n> On 2020/06/12 18:41, movead.li@highgo.ca wrote:\n>> Hello hackers,\n>>\n>> Currently, I do some changes based on the last version:\n>> 1. Catch up to the current  commit (c2bd1fec32ab54).\n>> 2. Add regression and document.\n>> 3. 
Add support to switch from xid-base snapshot to csn-base snapshot,\n>> and the same with standby side.\n\nProbably it's not time to do the code review yet, but when I glanced the patch,\nI came up with one question.\n\n0002 patch changes GenerateCSN() so that it generates CSN-related WAL records\n(and inserts it into WAL buffers). Which means that new WAL record is generated\nwhenever CSN is assigned, e.g., in GetSnapshotData(). Is this WAL generation\nreally necessary for CSN?\n\nBTW, GenerateCSN() is called while holding ProcArrayLock. Also it inserts new\nWAL record in WriteXidCsnXlogRec() while holding spinlock. Firstly this is not\nacceptable because spinlocks are intended for *very* short-term locks.\nSecondly, I don't think that WAL generation during ProcArrayLock is good\ndesign because ProcArrayLock is likely to be bottleneck and its term should\nbe short for performance gain.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 18 Jun 2020 20:57:13 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "Thanks for reply.\r\n\r\n>Probably it's not time to do the code review yet, but when I glanced the patch,\r\n>I came up with one question.\r\n>0002 patch changes GenerateCSN() so that it generates CSN-related WAL records\r\n>(and inserts it into WAL buffers). Which means that new WAL record is generated\r\n>whenever CSN is assigned, e.g., in GetSnapshotData(). Is this WAL generation\r\n>really necessary for CSN?\r\nThis is designed for crash recovery, here we record our most new lsn in wal so it\r\nwill not use a history lsn after a restart. It will not write into wal every time, but with\r\na gap which you can see CSNAddByNanosec() function.\r\n\r\n>BTW, GenerateCSN() is called while holding ProcArrayLock. 
Also it inserts new\r\n>WAL record in WriteXidCsnXlogRec() while holding spinlock. Firstly this is not\r\n>acceptable because spinlocks are intended for *very* short-term locks.\r\n>Secondly, I don't think that WAL generation during ProcArrayLock is good\r\n>design because ProcArrayLock is likely to be bottleneck and its term should\r\n>be short for performance gain.\r\nThanks for point out which may help me deeply, I will reconsider that.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\nThanks for reply.>Probably it's not time to do the code review yet, but when I glanced the patch,>I came up with one question.>0002 patch changes GenerateCSN() so that it generates CSN-related WAL records>(and inserts it into WAL buffers). Which means that new WAL record is generated>whenever CSN is assigned, e.g., in GetSnapshotData(). Is this WAL generation>really necessary for CSN?This is designed for crash recovery, here we record our most new lsn in wal so itwill not use a history lsn after a restart. It will not write into wal every time, but witha gap which you can see CSNAddByNanosec() function.>BTW, GenerateCSN() is called while holding ProcArrayLock. Also it inserts new>WAL record in WriteXidCsnXlogRec() while holding spinlock. 
Firstly this is not>acceptable because spinlocks are intended for *very* short-term locks.>Secondly, I don't think that WAL generation during ProcArrayLock is good>design because ProcArrayLock is likely to be bottleneck and its term should>be short for performance gain.Thanks for point out which may help me deeply, I will reconsider that.\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 19 Jun 2020 11:12:12 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "Thanks for reply.\r\n\r\n>AFAIU, this patch is to improve scalability and also will be helpful\r\n>for Global Snapshots stuff, is that right? If so, how much\r\n>performance/scalability benefit this patch will have after Andres's\r\n>recent work on scalability [1]?\r\nThe patch focus on to be an infrastructure of sharding feature, according\r\nto my test almost has the same performance with and without the patch.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\nThanks for reply.\n>AFAIU, this patch is to improve scalability and also will be helpful>for Global Snapshots stuff, is that right?  
If so, how much>performance/scalability benefit this patch will have after Andres's>recent work on scalability [1]?The patch focus on to be an infrastructure of sharding feature, accordingto my test almost has the same performance with and without the patch.\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 19 Jun 2020 11:29:49 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "\n\nOn 2020/06/19 12:12, movead.li@highgo.ca wrote:\n> \n> Thanks for reply.\n> \n> >Probably it's not time to do the code review yet, but when I glanced the patch,\n>>I came up with one question.\n>>0002 patch changes GenerateCSN() so that it generates CSN-related WAL records\n>>(and inserts it into WAL buffers). Which means that new WAL record is generated\n>>whenever CSN is assigned, e.g., in GetSnapshotData(). Is this WAL generation\n>>really necessary for CSN?\n> This is designed for crash recovery, here we record our most new lsn in wal so it\n> will not use a history lsn after a restart. It will not write into wal every time, but with\n> a gap which you can see CSNAddByNanosec() function.\n\nYou mean that the last generated CSN needs to be WAL-logged because any smaller\nCSN than the last one should not be reused after crash recovery. Right?\n\nIf right, that WAL-logging seems not necessary because CSN mechanism assumes\nCSN is increased monotonically. IOW, even without that WAL-logging, CSN afer\ncrash recovery must be larger than that before. No?\n\n\n>>BTW, GenerateCSN() is called while holding ProcArrayLock. Also it inserts new\n>>WAL record in WriteXidCsnXlogRec() while holding spinlock. 
Firstly this is not\n>>acceptable because spinlocks are intended for *very* short-term locks.\n>>Secondly, I don't think that WAL generation during ProcArrayLock is good\n>>design because ProcArrayLock is likely to be bottleneck and its term should\n>>be short for performance gain.\n> Thanks for point out which may help me deeply, I will reconsider that.\n\nThanks for working on this!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 19 Jun 2020 12:56:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": ">You mean that the last generated CSN needs to be WAL-logged because any smaller\r\n>CSN than the last one should not be reused after crash recovery. Right?\r\nYes that's it. \r\n\r\n>If right, that WAL-logging seems not necessary because CSN mechanism assumes\r\n>CSN is increased monotonically. IOW, even without that WAL-logging, CSN afer\r\n>crash recovery must be larger than that before. No?\r\n\r\nCSN collected based on time of system in this patch, but time is not reliable all the\r\ntime. And it designed for Global CSN(for sharding) where it may rely on CSN from\r\nother node , which generated from other machine. \r\n\r\nSo monotonically is not reliable and it need to keep it's largest CSN in wal in case\r\nof crash.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>You mean that the last generated CSN needs to be WAL-logged because any smaller>CSN than the last one should not be reused after crash recovery. Right?Yes that's it. >If right, that WAL-logging seems not necessary because CSN mechanism assumes>CSN is increased monotonically. IOW, even without that WAL-logging, CSN afer>crash recovery must be larger than that before. 
No?\nCSN collected based on time of  system in this patch, but time is not reliable all thetime. And it designed for Global CSN(for sharding) where it may rely on CSN fromother node , which generated from other machine. So monotonically is not reliable and it need to keep it's largest CSN in wal in caseof crash.\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 19 Jun 2020 12:36:44 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "\n\nOn 2020/06/19 13:36, movead.li@highgo.ca wrote:\n> \n> >You mean that the last generated CSN needs to be WAL-logged because any smaller\n>>CSN than the last one should not be reused after crash recovery. Right?\n> Yes that's it.\n> \n>>If right, that WAL-logging seems not necessary because CSN mechanism assumes\n>>CSN is increased monotonically. IOW, even without that WAL-logging, CSN afer\n>>crash recovery must be larger than that before. No?\n> \n> CSN collected based on time of  system in this patch, but time is not reliable all the\n> time. And it designed for Global CSN(for sharding) where it may rely on CSN from\n> other node , which generated from other machine.\n> \n> So monotonically is not reliable and it need to keep it's largest CSN in wal in case\n> of crash.\n\nThanks for the explanaion! Understood.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 19 Jun 2020 14:54:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "On 6/12/20 2:41 PM, movead.li@highgo.ca wrote:\n> Hello hackers,\n> \n> Currently, I do some changes based on the last version:\n> 1. 
Catch up to the current  commit (c2bd1fec32ab54).\n> 2. Add regression and document.\n> 3. Add support to switch from xid-base snapshot to csn-base snapshot,\n> and the same with standby side.\n\nSome remarks on your patch:\n1. The variable last_max_csn can be an atomic variable.\n2. GenerateCSN() routine: in the case than csn < csnState->last_max_csn \nThis is the case when someone changed the value of the system clock. I \nthink it is needed to write a WARNING to the log file. (May be we can do \nsynchronization with a time server.\n3. That about global snapshot xmin? In the pgpro version of the patch we \nhad GlobalSnapshotMapXmin() routine to maintain circular buffer of \noldestXmins for several seconds in past. This buffer allows to shift \noldestXmin in the past when backend is importing global transaction. \nOtherwise old versions of tuples that were needed for this transaction \ncan be recycled by other processes (vacuum, HOT, etc).\nHow do you implement protection from local pruning? I saw \nSNAP_DESYNC_COMPLAIN, but it is not used anywhere.\n4. The current version of the patch is not applied clearly with current \nmaster.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 29 Jun 2020 12:47:38 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "Thanks for the remarks,\n\n\n\n>Some remarks on your patch: \n\n>1. The variable last_max_csn can be an atomic variable. \n\nYes will consider.\n\n\n\n>2. GenerateCSN() routine: in the case than csn < csnState->last_max_csn \n\n>This is the case when someone changed the value of the system clock. I \n\n>think it is needed to write a WARNING to the log file. (May be we can do \n\n>synchronization with a time server. 
\n\nYes good point, I will work out a way to report the warning, it should exist a \n\nreport gap rather than report every time it generates CSN.\n\nIf we really need a correct time? What's the inferiority if one node generate\n\ncsn by monotonically increasing?\n\n\n\n>3. That about global snapshot xmin? In the pgpro version of the patch we \n\n>had GlobalSnapshotMapXmin() routine to maintain circular buffer of \n\n>oldestXmins for several seconds in past. This buffer allows to shift \n\n>oldestXmin in the past when backend is importing global transaction. \n\n>Otherwise old versions of tuples that were needed for this transaction \n\n>can be recycled by other processes (vacuum, HOT, etc). \n\n>How do you implement protection from local pruning? I saw \n\n>SNAP_DESYNC_COMPLAIN, but it is not used anywhere.\n\nI have researched your patch which is so great, in the patch only data\n\nout of 'global_snapshot_defer_time' can be vacuum, and it keep dead\n\ntuple even if no snapshot import at all,right?\n\n\n\nI am thanking about a way if we can start remain dead tuple just before\n\nwe import a csn snapshot.\n\n\n\nBase on Clock-SI paper, we should get local CSN then send to shard nodes,\n\nbecause we do not known if the shard nodes' csn bigger or smaller then\n\nmaster node, so we should keep some dead tuple all the time to support\n\nsnapshot import anytime.\n\n\n\nThen if we can do a small change to CLock-SI model, we do not use the \n\nlocal csn when transaction start, instead we touch every shard node for\n\nrequire their csn, and shard nodes start keep dead tuple, and master node\n\nchoose the biggest csn to send to shard nodes.\n\n\n\nBy the new way, we do not need to keep dead tuple all the time and do\n\nnot need to manage a ring buf, we can give to ball to 'snapshot too old'\n\nfeature. But for trade off, almost all shard node need wait. \n\nI will send more detail explain in few days.\n\n\n\n\n\n>4. 
The current version of the patch is not applied clearly with current \n\n>master. \n\nMaybe it's because of the release of PG13, it cause some conflict, I will\n\nrebase it.\n\n\n\n---\nRegards,\n\nHighgo Software (Canada/China/Pakistan) \n\nURL : http://www.highgo.ca/ \n\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\nThanks for the remarks,>Some remarks on your patch: >1. The variable last_max_csn can be an atomic variable. Yes will consider.>2. GenerateCSN() routine: in the case than csn < csnState->last_max_csn >This is the case when someone changed the value of the system clock. I >think it is needed to write a WARNING to the log file. (May be we can do >synchronization with a time server. Yes good point, I will work out a way to report the warning, it should exist a report gap rather than report every time it generates CSN.If we really need a correct time? What's the inferiority if one node generatecsn by monotonically increasing?>3. That about global snapshot xmin? In the pgpro version of the patch we >had GlobalSnapshotMapXmin() routine to maintain circular buffer of >oldestXmins for several seconds in past. This buffer allows to shift >oldestXmin in the past when backend is importing global transaction. >Otherwise old versions of tuples that were needed for this transaction >can be recycled by other processes (vacuum, HOT, etc). >How do you implement protection from local pruning? 
I saw >SNAP_DESYNC_COMPLAIN, but it is not used anywhere.I have researched your patch which is so great, in the patch only dataout of 'global_snapshot_defer_time' can be vacuum, and it keep deadtuple even if no snapshot import at all,right?I am thanking about a way if we can start remain dead tuple just beforewe import a csn snapshot.Base on Clock-SI paper, we should get local CSN then send to shard nodes,because we do not known if the shard nodes' csn bigger or smaller thenmaster node, so we should keep some dead tuple all the time to supportsnapshot import anytime.Then if we can do a small change to CLock-SI model, we do not use the local csn when transaction start, instead we touch every shard node forrequire their csn, and shard nodes start keep dead tuple, and master nodechoose the biggest csn to send to shard nodes.By the new way, we do not need to keep dead tuple all the time and donot need to manage a ring buf, we can give to ball to 'snapshot too old'feature. But for trade off, almost all shard node need wait. I will send more detail explain in few days.>4. The current version of the patch is not applied clearly with current >master. Maybe it's because of the release of PG13, it cause some conflict, I willrebase it.---Regards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Thu, 02 Jul 2020 22:31:56 +0800", "msg_from": "Movead Li <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "On 7/2/20 7:31 PM, Movead Li wrote:\n> Thanks for the remarks,\n> \n> >Some remarks on your patch:\n> >1. The variable last_max_csn can be an atomic variable.\n> Yes will consider.\n> \n> >2. GenerateCSN() routine: in the case than csn < csnState->last_max_csn\n> >This is the case when someone changed the value of the system clock. I\n> >think it is needed to write a WARNING to the log file. 
(May be we can do\n> >synchronization with a time server.\n> Yes good point, I will work out a way to report the warning, it should \n> exist a\n> report gap rather than report every time it generates CSN.\n> If we really need a correct time? What's the inferiority if one node \n> generate\n> csn by monotonically increasing?\nChanges in time values can lead to poor effects, such as old snapshot. \nAdjusting the time can be a kind of defense.\n> \n> >3. That about global snapshot xmin? In the pgpro version of the patch we\n> >had GlobalSnapshotMapXmin() routine to maintain circular buffer of\n> >oldestXmins for several seconds in past. This buffer allows to shift\n> >oldestXmin in the past when backend is importing global transaction.\n> >Otherwise old versions of tuples that were needed for this transaction\n> >can be recycled by other processes (vacuum, HOT, etc).\n> >How do you implement protection from local pruning? I saw\n> >SNAP_DESYNC_COMPLAIN, but it is not used anywhere.\n> I have researched your patch which is so great, in the patch only data\n> out of 'global_snapshot_defer_time' can be vacuum, and it keep dead\n> tuple even if no snapshot import at all,right?\n> \n> I am thanking about a way if we can start remain dead tuple just before\n> we import a csn snapshot.\n> \n> Base on Clock-SI paper, we should get local CSN then send to shard nodes,\n> because we do not known if the shard nodes' csn bigger or smaller then\n> master node, so we should keep some dead tuple all the time to support\n> snapshot import anytime.\n> \n> Then if we can do a small change to CLock-SI model, we do not use the\n> local csn when transaction start, instead we touch every shard node for\n> require their csn, and shard nodes start keep dead tuple, and master node\n> choose the biggest csn to send to shard nodes.\n> \n> By the new way, we do not need to keep dead tuple all the time and do\n> not need to manage a ring buf, we can give to ball to 'snapshot too old'\n> feature. 
But for trade off, almost all shard node need wait.\n> I will send more detail explain in few days.\nI think, in the case of distributed system and many servers it can be \nbottleneck.\nMain idea of \"deferred time\" is to reduce interference between DML \nqueries in the case of intensive OLTP workload. This time can be reduced \nif the bloationg of a database prevails over the frequency of \ntransaction aborts.\n> \n> \n> >4. The current version of the patch is not applied clearly with current\n> >master.\n> Maybe it's because of the release of PG13, it cause some conflict, I will\n> rebase it.\nOk\n> \n> ---\n> Regards,\n> Highgo Software (Canada/China/Pakistan)\n> URL : www.highgo.ca <http://www.highgo.ca/>\n> EMAIL: mailto:movead(dot)li(at)highgo(dot)ca\n> \n> \n\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 3 Jul 2020 08:24:22 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "Hello Andrey\r\n\r\n>> I have researched your patch which is so great, in the patch only data\r\n>> out of 'global_snapshot_defer_time' can be vacuum, and it keep dead\r\n>> tuple even if no snapshot import at all,right?\r\n>> \r\n>> I am thanking about a way if we can start remain dead tuple just before\r\n>> we import a csn snapshot.\r\n>> \r\n>> Base on Clock-SI paper, we should get local CSN then send to shard nodes,\r\n>> because we do not known if the shard nodes' csn bigger or smaller then\r\n>> master node, so we should keep some dead tuple all the time to support\r\n>> snapshot import anytime.\r\n>> \r\n>> Then if we can do a small change to CLock-SI model, we do not use the\r\n>> local csn when transaction start, instead we touch every shard node for\r\n>> require their csn, and shard nodes start keep dead tuple, and master node\r\n>> choose the biggest csn to send to shard nodes.\r\n>> \r\n>> By the new way, 
we do not need to keep dead tuple all the time and do\r\n>> not need to manage a ring buf, we can give to ball to 'snapshot too old'\r\n>> feature. But for trade off, almost all shard node need wait.\r\n>> I will send more detail explain in few days.\r\n>I think, in the case of distributed system and many servers it can be \r\n>bottleneck.\r\n>Main idea of \"deferred time\" is to reduce interference between DML \r\n>queries in the case of intensive OLTP workload. This time can be reduced \r\n>if the bloationg of a database prevails over the frequency of \r\n>transaction aborts.\r\nOK there maybe a performance issue, and I have another question about Clock-SI.\r\n\r\nFor example we have three nodes, shard1(as master), shard2, shard3, which\r\n(time of node2) > (time of node2) > (time of node3), and you can see a picture:\r\nhttp://movead.gitee.io/picture/blog_img_bad/csn/clock_si_question.png \r\nAs far as I know about Clock-SI, left part of the blue line will setup as a snapshotif master require a snapshot at time t1. But in fact data A should in snapshot butnot and data B should out of snapshot but not.\r\nIf this scene may appear in your origin patch? 
Or something my understand aboutClock-SI is wrong?\r\n\r\n", "msg_date": "Sat, 4 Jul 2020 22:56:17 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "On 7/4/20 7:56 PM, movead.li@highgo.ca wrote:\n> \n> \n> As far as I know about Clock-SI, left part of the blue line will\n> setup as a snapshot\n> \n> if master require a snapshot at time t1. But in fact data A should\n> in snapshot but\n> \n> not and data B should out of snapshot but not.\n> \n> \n> If this scene may appear in your origin patch? Or something my\n> understand about\n> \n> Clock-SI is wrong?\n> \n> \n> \n\nSorry for late answer.\n\nI have doubts that I fully understood your question, but still.\nWhat real problems do you see here? Transaction t1 doesn't get state of \nshard2 until time at node with shard2 won't reach start time of t1.\nIf transaction, that inserted B wants to know about it position in time \nrelatively to t1 it will generate CSN, attach to node1 and will see, \nthat t1 is not started yet.\n\nMaybe you are saying about the case that someone who has a faster data \nchannel can use the knowledge from node1 to change the state at node2?\nIf so, i think it is not a problem, or you can explain your idea.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 13 Jul 2020 09:41:12 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": ">I have doubts that I fully understood your question, but still.\r\n>What real problems do you see here? 
Transaction t1 doesn't get state of\r\n>shard2 until time at node with shard2 won't reach start time of t1.\r\n>If transaction, that inserted B wants to know about it position in time\r\n>relatively to t1 it will generate CSN, attach to node1 and will see,\r\n>that t1 is not started yet.\r\n \r\n>Maybe you are saying about the case that someone who has a faster data\r\n>channel can use the knowledge from node1 to change the state at node2?\r\n>If so, i think it is not a problem, or you can explain your idea.\r\nSorry, I think this is my wrong understand about Clock-SI. At first I expect\r\nwe can get a absolutly snapshot, for example B should not include in the\r\nsnapshot because it happened after time t1. How ever Clock-SI can not guarantee\r\nthat and no design guarantee that at all.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n \r\n", "msg_date": "Mon, 13 Jul 2020 14:46:23 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "\n\nOn 2020/06/19 14:54, Fujii Masao wrote:\n> \n> \n> On 2020/06/19 13:36, movead.li@highgo.ca wrote:\n>>\n>>  >You mean that the last generated CSN needs to be WAL-logged because any smaller\n>>> CSN than the last one should not be reused after crash recovery. Right?\n>> Yes that's it.\n>>\n>>> If right, that WAL-logging seems not necessary because CSN mechanism assumes\n>>> CSN is increased monotonically. IOW, even without that WAL-logging, CSN afer\n>>> crash recovery must be larger than that before. No?\n>>\n>> CSN collected based on time of  system in this patch, but time is not reliable all the\n>> time. And it designed for Global CSN(for sharding) where it may rely on CSN from\n>> other node , which generated from other machine.\n>>\n>> So monotonically is not reliable and it need to keep it's largest CSN in wal in case\n>> of crash.\n> \n> Thanks for the explanaion! Understood.\n\nI have another question about this patch;\n\nWhen checking each tuple visibility, we always have to get the CSN\ncorresponding to XMIN or XMAX from CSN SLRU. In the past discussion,\nthere was the suggestion that CSN should be stored in the tuple header\nor somewhere (like hint bit) to avoid the overhead by very frequehntly\nlookup for CSN SLRU. I'm not sure the conclusion of this discussion.\nBut this patch doesn't seem to adopt that idea. So did you confirm that\nsuch performance overhead by lookup for CSN SLRU is negligible?\n\nOf course I know that idea has big issue, i.e., there is no enough space\nto store CSN in a tuple header if CSN is 64 bits. 
If CSN is 32 bits, we may\nbe able to replace XMIN or XMAX with CSN corresponding to them. But\nit means that we have to struggle with one more wraparound issue\n(CSN wraparound issue). So it's not easy to adopt that idea...\n\nSorry if this was already discussed and concluded...\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 13 Jul 2020 20:18:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": ">When checking each tuple visibility, we always have to get the CSN\r\n>corresponding to XMIN or XMAX from CSN SLRU. In the past discussion,\r\n>there was the suggestion that CSN should be stored in the tuple header\r\n>or somewhere (like hint bit) to avoid the overhead by very frequehntly\r\n>lookup for CSN SLRU. I'm not sure the conclusion of this discussion.\r\n>But this patch doesn't seem to adopt that idea. So did you confirm that\r\n>such performance overhead by lookup for CSN SLRU is negligible?\r\nThis patch came from postgrespro's patch which shows a good performance,\r\nI have simple test on current patch and result no performance decline. \r\nAnd not everytime we do a tuple visibility need lookup forCSN SLRU, only xid\r\nlarge than 'TransactionXmin' need that. Maybe we have not touch the case\r\nwhich cause bad performance, so it shows good performance temporary. \r\n\r\n>Of course I know that idea has big issue, i.e., there is no enough space\r\n>to store CSN in a tuple header if CSN is 64 bits. If CSN is 32 bits, we may\r\n>be able to replace XMIN or XMAX with CSN corresponding to them. But\r\n>it means that we have to struggle with one more wraparound issue\r\n>(CSN wraparound issue). 
So it's not easy to adopt that idea...\r\n\r\n>Sorry if this was already discussed and concluded...\r\nI think your point with CSN in tuple header is a exciting approach, but I have\r\nnot seen the discussion, can you show me the discussion address?\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n", "msg_date": "Tue, 14 Jul 2020 10:02:10 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "\n\nOn 2020/07/14 11:02, movead.li@highgo.ca wrote:\n> \n>>When checking each tuple visibility, we always have to get the CSN\n>>corresponding to XMIN or XMAX from CSN SLRU. In the past discussion,\n>>there was the suggestion that CSN should be stored in the tuple header\n>>or somewhere (like hint bit) to avoid the overhead by very frequehntly\n>>lookup for CSN SLRU. I'm not sure the conclusion of this discussion.\n>>But this patch doesn't seem to adopt that idea. So did you confirm that\n>>such performance overhead by lookup for CSN SLRU is negligible?\n> This patch came from postgrespro's patch which shows a good performance,\n> I have simple test on current patch and result no performance decline.\n\nThis is good news! When I read the past discussions about CSN, my impression\nwas that the performance overhead by CSN SLRU lookup might become one of\nshow-stopper for CSN. So I was worring about this issue...\n\n\n> And not everytime we do a tuple visibility need lookup forCSN SLRU, only xid\n> large than 'TransactionXmin' need that. Maybe we have not touch the case\n> which cause bad performance, so it shows good performance temporary.\n\nYes, we would need more tests in several cases.\n\n\n>>Of course I know that idea has big issue, i.e., there is no enough space\n>>to store CSN in a tuple header if CSN is 64 bits. 
If CSN is 32 bits, we may\n>>be able to replace XMIN or XMAX with CSN corresponding to them. But\n>>it means that we have to struggle with one more wraparound issue\n>>(CSN wraparound issue). So it's not easy to adopt that idea...\n> \n>>Sorry if this was already discussed and concluded...\n> I think your point with CSN in tuple header is a exciting approach, but I have\n> not seen the discussion, can you show me the discussion address?\n\nProbably you can find the discussion by searching with the keywords\n\"CSN\" and \"hint bit\". For example,\n\nhttps://www.postgresql.org/message-id/CAPpHfdv7BMwGv=OfUg3S-jGVFKqHi79pR_ZK1Wsk-13oZ+cy5g@mail.gmail.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 14 Jul 2020 11:42:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "On 7/13/20 11:46 AM, movead.li@highgo.ca wrote:\n\nI continue to see your patch. Some code improvements see at the attachment.\n\nQuestions:\n* csnSnapshotActive is the only member of the CSNshapshotShared struct.\n* The WriteAssignCSNXlogRec() routine. I din't understand why you add 20 \nnanosec to current CSN and write this into the WAL. For simplify our \ncommunication, I rewrote this routine in accordance with my opinion (see \npatch in attachment).\n\nAt general, maybe we will add your WAL writing CSN machinery + TAP tests \nto the patch from the thread [1] and work on it together?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/07b2c899-4ed0-4c87-1327-23c750311248%40postgrespro.ru\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Wed, 15 Jul 2020 14:06:14 +0500", "msg_from": "\"Andrey V. 
Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "Currently, we are developing and test global snapshot on branch[2] created by\r\nAndrey, I want to keep a latest patch set on this thread so that hackers can easily\r\ncatch every change on this area.\r\n\r\nThis time it change little point come up by Fujii Masao about WriteXidCsnXlogRec()\r\nshould out of spinlocks, and add comments for CSNAddByNanosec(), and other\r\nfine tunings.\r\n\r\n[1]\r\nhttps://github.com/danolivo/pgClockSI\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 29 Jul 2020 10:14:57 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" }, { "msg_contents": "I find an issue with snapshot switch part of last patch, the xmin_for_csn value is\r\nwrong in TransactionIdGetCSN() function. I try to hold xmin_for_csn in pg_control\r\nand add a UnclearCSN statue for transactionid. And new patches attached.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Mon, 10 Aug 2020 09:52:59 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: POC and rebased patch for CSN based snapshots" } ]
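The two mechanisms this thread keeps returning to — a clock-derived CSN that must never move backwards (the `last_max_csn` handling in `GenerateCSN()`) and the CSN-based tuple visibility check against the CSN SLRU — can be sketched as a toy model. This is illustrative Python only; names such as `CSNClock`, `CSNLog`, and `tuple_is_visible` are hypothetical stand-ins, not the patch's actual C symbols:

```python
import time

class CSNClock:
    """Toy clock-based CSN source. Like the last_max_csn handling discussed
    in the thread, it never returns a CSN <= the previous one, even if the
    system clock stalls or jumps backwards."""
    def __init__(self):
        self.last_max_csn = 0

    def generate_csn(self):
        csn = time.time_ns()
        if csn <= self.last_max_csn:  # clock went backwards / stood still
            csn = self.last_max_csn + 1
        self.last_max_csn = csn
        return csn

class CSNLog:
    """Toy stand-in for the CSN SLRU: maps xid -> commit CSN."""
    def __init__(self):
        self._commit = {}

    def record_commit(self, xid, csn):
        self._commit[xid] = csn

    def get_csn(self, xid):
        # None means the transaction never committed (in progress/aborted)
        return self._commit.get(xid)

def tuple_is_visible(xmin_xid, snapshot_csn, csnlog):
    """CSN visibility rule: a tuple is visible iff its inserting
    transaction committed with a CSN strictly below the snapshot's CSN."""
    commit_csn = csnlog.get_csn(xmin_xid)
    return commit_csn is not None and commit_csn < snapshot_csn
```

In the real patch, crash safety additionally requires persisting the largest generated CSN, which is why the thread discusses WAL-logging it with a gap (via `CSNAddByNanosec()`) rather than on every CSN assignment: after a restart, `last_max_csn` must not fall back to an older clock value.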
[ { "msg_contents": "On primary I can execute ’SELECT write_lsn FROM pg_stat_replication;’ and get write_lsn of the standby.\nI didn’t find a function like \"pg_last_write_lsn()” to get write_lsn on the standby. Is it possible?\n\n", "msg_date": "Fri, 8 May 2020 18:25:02 +0500", "msg_from": "=?utf-8?Q?godjan_=E2=80=A2?= <g0dj4n@gmail.com>", "msg_from_op": true, "msg_subject": "Is it possible to find out write_lsn on standby?" }, { "msg_contents": "Hello,\n\nOn Fri, 8 May 2020 18:25:02 +0500\ngodjan • <g0dj4n@gmail.com> wrote:\n\n> On primary I can execute ’SELECT write_lsn FROM pg_stat_replication;’ and get\n> write_lsn of the standby. I didn’t find a function like \"pg_last_write_lsn()” to\n> get write_lsn on the standby. Is it possible?\n\nNo, there's no admin function exposing it. The closest one is\npg_last_wal_receive_lsn(), which is related to pg_stat_replication.flush_lsn.\n\nThere were discussions about creating e.g. a pg_stat_wal_receiver view a few weeks/months\nago, but this is nowhere close to done; no patch is ready yet.\n\nRegards,\n\n\n", "msg_date": "Wed, 13 May 2020 17:29:18 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Is it possible to find out write_lsn on standby?" } ]
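The LSN positions discussed in the thread above (write_lsn, flush_lsn, the value returned by pg_last_wal_receive_lsn()) are 64-bit WAL byte positions printed in the textual 'high/low' hexadecimal form of the pg_lsn type. As a hedged illustration — this helper is not part of any PostgreSQL API, just client-side arithmetic — the sketch below parses that textual form and computes the byte lag between two positions:

```python
def parse_lsn(text):
    """Parse a pg_lsn string like '16/B374D848' into a 64-bit byte position.
    The 'X/Y' form encodes the high and low 32 bits in hexadecimal."""
    high, low = text.split("/")
    return (int(high, 16) << 32) | int(low, 16)

def lsn_lag_bytes(upstream, downstream):
    """Byte distance between two WAL positions, e.g. the primary's write_lsn
    (from pg_stat_replication) minus the standby's last received LSN."""
    return parse_lsn(upstream) - parse_lsn(downstream)

print(lsn_lag_bytes("16/B374D848", "16/B374D840"))  # 8
```

Comparing positions this way is how replication lag in bytes is usually derived on the client side when no dedicated server function exposes the number directly.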
[ { "msg_contents": "Here is a sketch for implementing the design that Tom described here:\nhttps://www.postgresql.org/message-id/flat/357.1550612935%40sss.pgh.pa.us\n\nIn short, we would like to have only one plan for ModifyTable to get\ntuples out of to update/delete, not N for N child result relations as\nis done currently.\n\nI suppose things are the way they are because creating a separate plan\nfor each result relation makes the job of ModifyTable node very\nsimple, which is currently this:\n\n1. Take the plan's output tuple, extract the tupleid of the tuple to\nupdate/delete in the currently active result relation,\n2. If delete, go to 3, else if update, filter out the junk columns\nfrom the above tuple\n3. Call ExecUpdate()/ExecDelete() on the result relation with the new\ntuple, if any\n\nIf we make ModifyTable do a little more work for the inheritance case,\nwe can create only one plan but without \"expanding\" the targetlist.\nThat is, it will contain entries only for attributes that are assigned\nvalues in the SET clause. This makes the plan reusable across result\nrelations, because all child relations must have those attributes,\neven though the attribute numbers might be different. Anyway, the\nwork that ModifyTable will now have to do is this:\n\n1. Take the plan's output tuple, extract tupleid of the tuple to\nupdate/delete and \"tableoid\"\n2. Select the result relation to operate on using the tableoid\n3. If delete, go to 4, else if update, fetch the tuple identified by\ntupleid from the result relation and fill in the unassigned columns\nusing that \"old\" tuple, also filtering out the junk columns\n4. 
Call ExecUpdate()/ExecDelete() on the result relation with the new\ntuple, if any\n\nI do think that doing this would be worthwhile even if we may be\nincreasing ModifyTable's per-row overhead slightly, because planning\noverhead of the current approach is very significant, especially for\npartition trees with beyond a couple of thousand partitions. As to\nhow bad the problem is, trying to create a generic plan for `update\nfoo set ... where key = $1`, where foo has over 2000 partitions,\ncauses OOM even on a machine with 6GB of memory.\n\nThe one plan shared by all result relations will be same as the one we\nwould get if the query were SELECT, except it will contain junk\nattributes such as ctid needed to identify tuples and a new \"tableoid\"\njunk attribute if multiple result relations will be present due to\ninheritance. One major way in which this targetlist differs from the\ncurrent per-result-relation plans is that it won't be passed through\nexpand_targetlist(), because the set of unassigned attributes may not\nbe unique among children. As mentioned above, those missing\nattributes will be filled by ModifyTable doing some extra work,\nwhereas previously they would have come with the plan's output tuple.\n\nFor child result relations that are foreign tables, their FDW adds\njunk attribute(s) to the query’s targetlist by updating it in-place\n(AddForeignUpdateTargets). However, as the child tables will no\nlonger get their own parsetree, we must use some hack around this\ninterface to obtain the foreign table specific junk attributes and add\nthem to the original/parent query’s targetlist. Assuming that all or\nmost of the children will belong to the same FDW, we will end up with\nonly a handful such junk columns in the final targetlist. 
I am not\nsure if it's worthwhile to change the API of AddForeignUpdateTargets\nto require FDWs to not scribble on the passed-in parsetree as part of\nthis patch.\n\nAs for how ModifyTable will create the new tuple for updates, I have\ndecided to use a ProjectionInfo for each result relation, which\nprojects a full, *clean* tuple ready to be put back into the relation.\nWhen projecting, plan’s output tuple serves as OUTER tuple and the old\ntuple fetched to fill unassigned attributes serves as SCAN tuple. By\nhaving this ProjectionInfo also serve as the “junk filter”, we don't\nneed JunkFilters. The targetlist that this projection computes is\nsame as that of the result-relation-specific plan. Initially, I\nthought to generate this \"expanded\" targetlist in\nExecInitModifyTable(). But as it can be somewhat expensive, doing it\nonly once in the planner seemed like a good idea. These\nper-result-relations targetlists are carried in the ModifyTable node.\n\nTo identify the result relation from the tuple produced by the plan,\n“tableoid” junk column will be used. As the tuples for different\nresult relations won’t necessarily come out in the order in which\nresult relations are laid out in the ModifyTable node, we need a way\nto map the tableoid value to result relation indexes. I have decided\nto use a hash table here.\n\nA couple of things that I didn't think very hard what to do about now,\nbut may revisit later.\n\n* We will no longer be able use DirectModify APIs to push updates to\nremote servers for foreign child result relations\n\n* Over in [1], I have said that we get run-time pruning for free for\nModifyTable because the plan we are using is same as that for SELECT,\nalthough now I think that I hadn't thought that through. 
With the PoC\npatch that I have:\n\nprepare q as update foo set a = 250001 where a = $1;\nset plan_cache_mode to 'force_generic_plan';\nexplain execute q(1);\n QUERY PLAN\n--------------------------------------------------------------------\n Update on foo (cost=0.00..142.20 rows=40 width=14)\n Update on foo_1\n Update on foo_2 foo\n Update on foo_3 foo\n Update on foo_4 foo\n -> Append (cost=0.00..142.20 rows=40 width=14)\n Subplans Removed: 3\n -> Seq Scan on foo_1 (cost=0.00..35.50 rows=10 width=14)\n Filter: (a = $1)\n(9 rows)\n\nWhile it's true that we will never have to actually update foo_2,\nfoo_3, and foo_4, ModifyTable still sets up its ResultRelInfos, which\nideally it shouldn't. Maybe we'll need to do something about that\nafter all.\n\nI will post the patch shortly.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqGXmP3-S9y%3DOQHyJyeWnZSOmcxBGdgAMWcLUOsnPTL88w%40mail.gmail.com\n\n\n", "msg_date": "Fri, 8 May 2020 22:32:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "making update/delete of inheritance trees scale better" }, { "msg_contents": "On Fri, May 8, 2020 at 7:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Here is a sketch for implementing the design that Tom described here:\n> https://www.postgresql.org/message-id/flat/357.1550612935%40sss.pgh.pa.us\n>\n> In short, we would like to have only one plan for ModifyTable to get\n> tuples out of to update/delete, not N for N child result relations as\n> is done currently.\n>\n> I suppose things are the way they are because creating a separate plan\n> for each result relation makes the job of ModifyTable node very\n> simple, which is currently this:\n>\n> 1. Take the plan's output tuple, extract the tupleid of the tuple to\n> update/delete in the currently active result relation,\n> 2. 
If delete, go to 3, else if update, filter out the junk columns\n> from the above tuple\n> 3. Call ExecUpdate()/ExecDelete() on the result relation with the new\n> tuple, if any\n>\n> If we make ModifyTable do a little more work for the inheritance case,\n> we can create only one plan but without \"expanding\" the targetlist.\n> That is, it will contain entries only for attributes that are assigned\n> values in the SET clause. This makes the plan reusable across result\n> relations, because all child relations must have those attributes,\n> even though the attribute numbers might be different. Anyway, the\n> work that ModifyTable will now have to do is this:\n>\n> 1. Take the plan's output tuple, extract tupleid of the tuple to\n> update/delete and \"tableoid\"\n> 2. Select the result relation to operate on using the tableoid\n> 3. If delete, go to 4, else if update, fetch the tuple identified by\n> tupleid from the result relation and fill in the unassigned columns\n> using that \"old\" tuple, also filtering out the junk columns\n> 4. Call ExecUpdate()/ExecDelete() on the result relation with the new\n> tuple, if any\n>\n> I do think that doing this would be worthwhile even if we may be\n> increasing ModifyTable's per-row overhead slightly, because planning\n> overhead of the current approach is very significant, especially for\n> partition trees with beyond a couple of thousand partitions. As to\n> how bad the problem is, trying to create a generic plan for `update\n> foo set ... where key = $1`, where foo has over 2000 partitions,\n> causes OOM even on a machine with 6GB of memory.\n\nPer row overhead would be incurred for every row whereas the plan time\noverhead is one-time or in case of a prepared statement almost free.\nSo we need to compare it esp. when there are 2000 partitions and all\nof them are being updated. But generally I agree that this would be a\nbetter approach. 
It might help using PWJ when the result relation\njoins with other partitioned table. I am not sure whether that\neffectively happens today by partition pruning. More on this later.\n\n>\n> The one plan shared by all result relations will be same as the one we\n> would get if the query were SELECT, except it will contain junk\n> attributes such as ctid needed to identify tuples and a new \"tableoid\"\n> junk attribute if multiple result relations will be present due to\n> inheritance. One major way in which this targetlist differs from the\n> current per-result-relation plans is that it won't be passed through\n> expand_targetlist(), because the set of unassigned attributes may not\n> be unique among children. As mentioned above, those missing\n> attributes will be filled by ModifyTable doing some extra work,\n> whereas previously they would have come with the plan's output tuple.\n>\n> For child result relations that are foreign tables, their FDW adds\n> junk attribute(s) to the query’s targetlist by updating it in-place\n> (AddForeignUpdateTargets). However, as the child tables will no\n> longer get their own parsetree, we must use some hack around this\n> interface to obtain the foreign table specific junk attributes and add\n> them to the original/parent query’s targetlist. Assuming that all or\n> most of the children will belong to the same FDW, we will end up with\n> only a handful such junk columns in the final targetlist. I am not\n> sure if it's worthwhile to change the API of AddForeignUpdateTargets\n> to require FDWs to not scribble on the passed-in parsetree as part of\n> this patch.\n\nWhat happens if there's a mixture of foreign and local partitions or\nmixture of FDWs? 
Injecting junk columns from all FDWs in the top level\ntarget list will cause error because those attributes won't be\navailable everywhere.\n\n>\n> As for how ModifyTable will create the new tuple for updates, I have\n> decided to use a ProjectionInfo for each result relation, which\n> projects a full, *clean* tuple ready to be put back into the relation.\n> When projecting, plan’s output tuple serves as OUTER tuple and the old\n> tuple fetched to fill unassigned attributes serves as SCAN tuple. By\n> having this ProjectionInfo also serve as the “junk filter”, we don't\n> need JunkFilters. The targetlist that this projection computes is\n> same as that of the result-relation-specific plan. Initially, I\n> thought to generate this \"expanded\" targetlist in\n> ExecInitModifyTable(). But as it can be somewhat expensive, doing it\n> only once in the planner seemed like a good idea. These\n> per-result-relations targetlists are carried in the ModifyTable node.\n>\n> To identify the result relation from the tuple produced by the plan,\n> “tableoid” junk column will be used. As the tuples for different\n> result relations won’t necessarily come out in the order in which\n> result relations are laid out in the ModifyTable node, we need a way\n> to map the tableoid value to result relation indexes. I have decided\n> to use a hash table here.\n\nCan we plan the scan query to add a sort node to order the rows by tableoid?\n\n>\n> A couple of things that I didn't think very hard what to do about now,\n> but may revisit later.\n>\n> * We will no longer be able use DirectModify APIs to push updates to\n> remote servers for foreign child result relations\n\nIf we convert a whole DML into partitionwise DML (just as it happens\ntoday unintentionally), we should be able to use DirectModify. PWJ\nwill help there. 
But even if we can detect that the scan underlying a\nparticular partition can be evaluated completely on the same node as\nwhere the partition resides, we should be able to use DirectModify.\nBut if we are not able to support this optimization, the queries which\nbenefit from it today won't perform well. I think we need to think\nabout this now instead of leaving it for later. 
But we shouldn't design this feature in such a way that\nit comes in the way to enable tuple re-routing in future :).\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 11 May 2020 18:28:15 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Hi Ashutosh,\n\nThanks for chiming in.\n\nOn Mon, May 11, 2020 at 9:58 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Fri, May 8, 2020 at 7:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I do think that doing this would be worthwhile even if we may be\n> > increasing ModifyTable's per-row overhead slightly, because planning\n> > overhead of the current approach is very significant, especially for\n> > partition trees with beyond a couple of thousand partitions. As to\n> > how bad the problem is, trying to create a generic plan for `update\n> > foo set ... where key = $1`, where foo has over 2000 partitions,\n> > causes OOM even on a machine with 6GB of memory.\n>\n> Per row overhead would be incurred for every row whereas the plan time\n> overhead is one-time or in case of a prepared statement almost free.\n> So we need to compare it esp. when there are 2000 partitions and all\n> of them are being updated.\n\nI assume that such UPDATEs would be uncommon.\n\n> But generally I agree that this would be a\n> better approach. 
It might help using PWJ when the result relation\n> joins with other partitioned table.\n\nIt does, because the plan below ModifyTable is same as if the query\nwere SELECT instead of UPDATE; with my PoC:\n\nexplain (costs off) update foo set a = foo2.a + 1 from foo foo2 where\nfoo.a = foo2.a;\n QUERY PLAN\n--------------------------------------------------\n Update on foo\n Update on foo_1\n Update on foo_2\n -> Append\n -> Merge Join\n Merge Cond: (foo_1.a = foo2_1.a)\n -> Sort\n Sort Key: foo_1.a\n -> Seq Scan on foo_1\n -> Sort\n Sort Key: foo2_1.a\n -> Seq Scan on foo_1 foo2_1\n -> Merge Join\n Merge Cond: (foo_2.a = foo2_2.a)\n -> Sort\n Sort Key: foo_2.a\n -> Seq Scan on foo_2\n -> Sort\n Sort Key: foo2_2.a\n -> Seq Scan on foo_2 foo2_2\n(20 rows)\n\nas opposed to what you get today:\n\nexplain (costs off) update foo set a = foo2.a + 1 from foo foo2 where\nfoo.a = foo2.a;\n QUERY PLAN\n--------------------------------------------------\n Update on foo\n Update on foo_1\n Update on foo_2\n -> Merge Join\n Merge Cond: (foo_1.a = foo2.a)\n -> Sort\n Sort Key: foo_1.a\n -> Seq Scan on foo_1\n -> Sort\n Sort Key: foo2.a\n -> Append\n -> Seq Scan on foo_1 foo2\n -> Seq Scan on foo_2 foo2_1\n -> Merge Join\n Merge Cond: (foo_2.a = foo2.a)\n -> Sort\n Sort Key: foo_2.a\n -> Seq Scan on foo_2\n -> Sort\n Sort Key: foo2.a\n -> Append\n -> Seq Scan on foo_1 foo2\n -> Seq Scan on foo_2 foo2_1\n(23 rows)\n\n> > For child result relations that are foreign tables, their FDW adds\n> > junk attribute(s) to the query’s targetlist by updating it in-place\n> > (AddForeignUpdateTargets). However, as the child tables will no\n> > longer get their own parsetree, we must use some hack around this\n> > interface to obtain the foreign table specific junk attributes and add\n> > them to the original/parent query’s targetlist. Assuming that all or\n> > most of the children will belong to the same FDW, we will end up with\n> > only a handful such junk columns in the final targetlist. 
I am not\n> > sure if it's worthwhile to change the API of AddForeignUpdateTargets\n> > to require FDWs to not scribble on the passed-in parsetree as part of\n> > this patch.\n>\n> What happens if there's a mixture of foreign and local partitions or\n> mixture of FDWs? Injecting junk columns from all FDWs in the top level\n> target list will cause error because those attributes won't be\n> available everywhere.\n\nThat is a good question and something I struggled with ever since I\nstarted started thinking about implementing this.\n\nFor the problem that FDWs may inject junk columns that could neither\nbe present in local tables (root parent and other local children) nor\nother FDWs, I couldn't think of any solution other than to restrict\nwhat those junk columns can be -- to require them to be either \"ctid\",\n\"wholerow\", or a set of only *inherited* user columns. I think that's\nwhat Tom was getting at when he said the following in the email I\ncited in my first email:\n\n\"...It gets a bit harder if the tree contains some foreign tables,\nbecause they might have different concepts of row identity, but I'd\nthink in most cases you could still combine those into a small number\nof output columns.\"\n\nMaybe I misunderstood what Tom said, but I can't imagine how to let\nthese junk columns be anything that *all* tables contained in an\ninheritance tree, especially the root parent, cannot emit, if they are\nto be emitted out of a single plan.\n\n> > As for how ModifyTable will create the new tuple for updates, I have\n> > decided to use a ProjectionInfo for each result relation, which\n> > projects a full, *clean* tuple ready to be put back into the relation.\n> > When projecting, plan’s output tuple serves as OUTER tuple and the old\n> > tuple fetched to fill unassigned attributes serves as SCAN tuple. By\n> > having this ProjectionInfo also serve as the “junk filter”, we don't\n> > need JunkFilters. 
The targetlist that this projection computes is\n> > same as that of the result-relation-specific plan. Initially, I\n> > thought to generate this \"expanded\" targetlist in\n> > ExecInitModifyTable(). But as it can be somewhat expensive, doing it\n> > only once in the planner seemed like a good idea. These\n> > per-result-relations targetlists are carried in the ModifyTable node.\n> >\n> > To identify the result relation from the tuple produced by the plan,\n> > “tableoid” junk column will be used. As the tuples for different\n> > result relations won’t necessarily come out in the order in which\n> > result relations are laid out in the ModifyTable node, we need a way\n> > to map the tableoid value to result relation indexes. I have decided\n> > to use a hash table here.\n>\n> Can we plan the scan query to add a sort node to order the rows by tableoid?\n\nHmm, I am afraid that some piece of partitioning code that assumes a\ncertain order of result relations, and that order is not based on\nsorting tableoids.\n\n> > A couple of things that I didn't think very hard what to do about now,\n> > but may revisit later.\n> >\n> > * We will no longer be able use DirectModify APIs to push updates to\n> > remote servers for foreign child result relations\n>\n> If we convert a whole DML into partitionwise DML (just as it happens\n> today unintentionally), we should be able to use DirectModify. PWJ\n> will help there. But even we can detect that the scan underlying a\n> particular partition can be evaluated completely on the node same as\n> where the partition resides, we should be able to use DirectModify.\n\nI remember Fujita-san mentioned something like this, but I haven't\nlooked into how feasible it would be given the current DirectModify\ninterface.\n\n> But if we are not able to support this optimization, the queries which\n> benefit from it for today won't perform well. I think we need to think\n> about this now instead of leave for later. 
Otherwise, make it so that\n> we use the old way when there are foreign partitions and new way\n> otherwise.\n\nI would very much like to find a solution for this, which hopefully isn't\nto fall back to using the old way.\n\n> * Tuple re-routing during UPDATE. For now it's disabled so your design\n> should work. But we shouldn't design this feature in such a way that\n> it comes in the way to enable tuple re-routing in future :).\n\nSorry, what is tuple re-routing and why does this new approach get in its way?\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 May 2020 23:41:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Mon, May 11, 2020 at 8:58 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> What happens if there's a mixture of foreign and local partitions or\n> mixture of FDWs? Injecting junk columns from all FDWs in the top level\n> target list will cause error because those attributes won't be\n> available everywhere.\n\nI think that we're talking about a plan like this:\n\nUpdate\n-> Append\n -> a bunch of children\n\nI believe that what you'd want to have happen here is for each child to\nemit the row identity columns that it knows about, and emit NULL for\nthe others. Then when you do the Append you end up with a row format\nthat includes all the individual identity columns, but for any\nparticular tuple, only one set of such columns is populated and the\nothers are all NULL. There doesn't seem to be any execution-time\nproblem with such a representation, but there might be a planning-time\nproblem with building it, because when you're writing a tlist for the\nAppend node, what varattno are you going to use for the columns that\nexist only in one particular child and not the others? 
The fact that\nsetrefs processing happens so late seems like an annoyance in this\ncase.\n\nMaybe it would be easier to have one Update node per kind of row\nidentity, i.e. if there's more than one such notion then...\n\nPlaceholder\n-> Update\n -> Append\n -> all children with one notion of row identity\n-> Update\n -> Append\n -> all children with another notion of row identity\n\n...and so forth.\n\nBut I'm not sure.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 14:34:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I believe that what you'd want to have happen here is for each child to\n> emit the row identity columns that it knows about, and emit NULL for\n> the others. Then when you do the Append you end up with a row format\n> that includes all the individual identity columns, but for any\n> particular tuple, only one set of such columns is populated and the\n> others are all NULL.\n\nYeah, that was what I'd imagined in my earlier thinking about this.\n\n> There doesn't seem to be any execution-time\n> problem with such a representation, but there might be a planning-time\n> problem with building it,\n\nPossibly. We manage to cope with not-all-alike children now, of course,\nbut I think it might be true that no one plan node has Vars from\ndissimilar children. 
Even so, the Vars are self-identifying, so it\nseems like this ought to be soluble.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 14:48:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Mon, May 11, 2020 at 2:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > There doesn't seem to be any execution-time\n> > problem with such a representation, but there might be a planning-time\n> > problem with building it,\n>\n> Possibly. We manage to cope with not-all-alike children now, of course,\n> but I think it might be true that no one plan node has Vars from\n> dissimilar children. Even so, the Vars are self-identifying, so it\n> seems like this ought to be soluble.\n\nIf the parent is RTI 1, and the children are RTIs 2..6, what\nvarno/varattno will we use in RTI 1's tlist to represent a column that\nexists in both RTI 2 and RTI 3 but not in RTI 1, 4, 5, or 6?\n\nI suppose the answer is 2 - or 3, but I guess we'd pick the first\nchild as the representative of the class. We surely can't use varno 1,\nbecause then there's no varattno that makes any sense. But if we use\n2, now we have the tlist for RTI 1 containing expressions with a\nchild's RTI as the varno. I could be wrong, but I think that's going\nto make setrefs.c throw up and die, and I wouldn't be very surprised\nif there were a bunch of other things that crashed and burned, too. I\nthink we have quite a bit of code that expects to be able to translate\nbetween parent-rel expressions and child-rel expressions, and that's\ngoing to be pretty problematic here.\n\nMaybe your answer is - let's just fix all that stuff. 
That could well\nbe right, but my first reaction is to think that it sounds hard.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 15:10:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> If the parent is RTI 1, and the children are RTIs 2..6, what\n> varno/varattno will we use in RTI 1's tlist to represent a column that\n> exists in both RTI 2 and RTI 3 but not in RTI 1, 4, 5, or 6?\n\nFair question. We don't have any problem representing the column\nas it exists in any one of those children, but we lack a notation\nfor the \"union\" or whatever you want to call it, except in the case\nwhere the parent relation has a corresponding column. Still, this\ndoesn't seem that hard to fix. My inclination would be to invent\ndummy parent-rel columns (possibly with negative attnums? not sure if\nthat'd be easier or harder than adding them in the positive direction)\nto represent such \"union\" columns. This concept would only need to\nexist within the planner I think, since after setrefs.c there'd be no\ntrace of those dummy columns.\n\n> I think we have quite a bit of code that expects to be able to translate\n> between parent-rel expressions and child-rel expressions, and that's\n> going to be pretty problematic here.\n\n... shrug. Sure, we'll need to be able to do that mapping. Why will\nit be any harder than any other parent <-> child mapping? The planner\nwould know darn well what the mapping is while it's inventing the\ndummy columns, so it just has to keep that info around for use later.\n\n> Maybe your answer is - let's just fix all that stuff. 
That could well\n> be right, but my first reaction is to think that it sounds hard.\n\nI have to think that it'll net out as less code, and certainly less\ncomplicated code, than trying to extend inheritance_planner in its\ncurrent form to do what we wish it'd do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 16:22:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Mon, May 11, 2020 at 4:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > If the parent is RTI 1, and the children are RTIs 2..6, what\n> > varno/varattno will we use in RTI 1's tlist to represent a column that\n> > exists in both RTI 2 and RTI 3 but not in RTI 1, 4, 5, or 6?\n>\n> Fair question. We don't have any problem representing the column\n> as it exists in any one of those children, but we lack a notation\n> for the \"union\" or whatever you want to call it, except in the case\n> where the parent relation has a corresponding column. Still, this\n> doesn't seem that hard to fix. My inclination would be to invent\n> dummy parent-rel columns (possibly with negative attnums? not sure if\n> that'd be easier or harder than adding them in the positive direction)\n> to represent such \"union\" columns.\n\nAh, that makes sense. 
If we can invent dummy columns on the parent\nrel, then most of what I was worrying about no longer seems very\nworrying.\n\nI'm not sure what's involved in inventing such dummy columns, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 16:25:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Mon, May 11, 2020 at 8:11 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Per row overhead would be incurred for every row whereas the plan time\n> > overhead is one-time or in case of a prepared statement almost free.\n> > So we need to compare it esp. when there are 2000 partitions and all\n> > of them are being updated.\n>\n> I assume that such UPDATEs would be uncommon.\n\nYes, 2000 partitions being updated would be rare. But many rows from\nthe same partition being updated may not be that common. We have to\nknow how much is that per row overhead and updating how many rows it\ntakes to beat the planning time overhead. If the number of rows is\nvery large, we are good.\n\n>\n> > But generally I agree that this would be a\n> > better approach. 
It might help using PWJ when the result relation\n> > joins with other partitioned table.\n>\n> It does, because the plan below ModifyTable is same as if the query\n> were SELECT instead of UPDATE; with my PoC:\n>\n> explain (costs off) update foo set a = foo2.a + 1 from foo foo2 where\n> foo.a = foo2.a;\n> QUERY PLAN\n> --------------------------------------------------\n> Update on foo\n> Update on foo_1\n> Update on foo_2\n> -> Append\n> -> Merge Join\n> Merge Cond: (foo_1.a = foo2_1.a)\n> -> Sort\n> Sort Key: foo_1.a\n> -> Seq Scan on foo_1\n> -> Sort\n> Sort Key: foo2_1.a\n> -> Seq Scan on foo_1 foo2_1\n> -> Merge Join\n> Merge Cond: (foo_2.a = foo2_2.a)\n> -> Sort\n> Sort Key: foo_2.a\n> -> Seq Scan on foo_2\n> -> Sort\n> Sort Key: foo2_2.a\n> -> Seq Scan on foo_2 foo2_2\n> (20 rows)\n\nWonderful. That looks good.\n\n\n> > Can we plan the scan query to add a sort node to order the rows by tableoid?\n>\n> Hmm, I am afraid that some piece of partitioning code that assumes a\n> certain order of result relations, and that order is not based on\n> sorting tableoids.\n\nI am suggesting that we override that order (if any) in\ncreate_modifytable_path() or create_modifytable_plan() by explicitly\nordering the incoming paths on tableoid. May be using MergeAppend.\n\n\n>\n> > * Tuple re-routing during UPDATE. For now it's disabled so your design\n> > should work. But we shouldn't design this feature in such a way that\n> > it comes in the way to enable tuple re-routing in future :).\n>\n> Sorry, what is tuple re-routing and why does this new approach get in its way?\n\nAn UPDATE causing a tuple to move to a different partition. It would\nget in its way since the tuple will be located based on tableoid,\nwhich will be the oid of the old partition. 
But I think this approach\nhas higher chance of being able to solve that problem eventually\nrather than the current approach.\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 12 May 2020 18:24:17 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Tue, May 12, 2020 at 5:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, May 11, 2020 at 4:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > If the parent is RTI 1, and the children are RTIs 2..6, what\n> > > varno/varattno will we use in RTI 1's tlist to represent a column that\n> > > exists in both RTI 2 and RTI 3 but not in RTI 1, 4, 5, or 6?\n> >\n> > Fair question. We don't have any problem representing the column\n> > as it exists in any one of those children, but we lack a notation\n> > for the \"union\" or whatever you want to call it, except in the case\n> > where the parent relation has a corresponding column. Still, this\n> > doesn't seem that hard to fix. My inclination would be to invent\n> > dummy parent-rel columns (possibly with negative attnums? not sure if\n> > that'd be easier or harder than adding them in the positive direction)\n> > to represent such \"union\" columns.\n>\n> Ah, that makes sense. If we can invent dummy columns on the parent\n> rel, then most of what I was worrying about no longer seems very\n> worrying.\n\nIIUC, the idea is to have \"dummy\" columns in the top parent's\nreltarget for every junk TLE added to the top-level targetlist by\nchild tables' FDWs that the top parent itself can't emit. But we allow\nthese FDW junk TLEs to contain any arbitrary expression, not just\nplain Vars [1], so what node type are these dummy parent columns? 
I\ncan see from add_vars_to_targetlist() that we allow only Vars and\nPlaceHolderVars to be present in a relation's reltarget->exprs, but\nneither of those seem suitable for the task.\n\nOnce we get something in the parent's reltarget->exprs representing\nthese child expressions, from there they go back into child\nreltargets, so it would appear that our appendrel transformation code\nmust somehow be taught to deal with these dummy columns.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/docs/current/fdw-callbacks.html#FDW-CALLBACKS-UPDATE\n\n\"...If the extra expressions are more complex than simple Vars, they\nmust be run through eval_const_expressions before adding them to the\ntargetlist.\"\n\n\n", "msg_date": "Tue, 12 May 2020 21:55:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Tue, May 12, 2020 at 5:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Ah, that makes sense. If we can invent dummy columns on the parent\n>> rel, then most of what I was worrying about no longer seems very\n>> worrying.\n\n> IIUC, the idea is to have \"dummy\" columns in the top parent's\n> reltarget for every junk TLE added to the top-level targetlist by\n> child tables' FDWs that the top parent itself can't emit. But we allow\n> these FDW junk TLEs to contain any arbitrary expression, not just\n> plain Vars [1], so what node type are these dummy parent columns?\n\nWe'd have to group the children into groups that share the same\nrow-identity column type. This is why I noted way-back-when that\nit'd be a good idea to discourage FDWs from being too wild about\nwhat they use for row identity.\n\n(Also, just to be totally clear: I am *not* envisioning this as a\nmechanism for FDWs to inject whatever computations they darn please\ninto query trees. 
It's for the row identity needed by UPDATE/DELETE,\nand nothing else. That being the case, it's hard to understand why\nthe bottom-level Vars wouldn't be just plain Vars --- maybe \"system\ncolumn\" Vars or something like that, but still just Vars, not\nexpressions.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 09:57:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, 13 May 2020 at 00:54, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Mon, May 11, 2020 at 8:11 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Per row overhead would be incurred for every row whereas the plan time\n> > > overhead is one-time or in case of a prepared statement almost free.\n> > > So we need to compare it esp. when there are 2000 partitions and all\n> > > of them are being updated.\n> >\n> > I assume that such UPDATEs would be uncommon.\n>\n> Yes, 2000 partitions being updated would be rare. But many rows from\n> the same partition being updated may not be that common. We have to\n> know how much is that per row overhead and updating how many rows it\n> takes to beat the planning time overhead. If the number of rows is\n> very large, we are good.\n\nRows from a non-parallel Append should arrive in order. If you were\nworried about the performance of finding the correct ResultRelInfo for\nthe tuple that we just got, then we could just cache the tableOid and\nResultRelInfo for the last row, and if that tableoid matches on this\nrow, just use the same ResultRelInfo as last time. That'll save\ndoing the hash table lookup in all cases, apart from when the Append\nchanges to the next child subplan. 
Not sure exactly how that'll fit\nin with the foreign table discussion that's going on here though.\nAnother option would be to not use tableoid and instead inject an INT4\nConst (0 to nsubplans) into each subplan's targetlist that serves as\nthe index into an array of ResultRelInfos.\n\nAs for which ResultRelInfos to initialize, couldn't we just have the\nplanner generate an OidList of all the ones that we could need.\nBasically, all the non-pruned partitions. Perhaps we could even be\npretty lazy about building those ResultRelInfos during execution too.\nWe'd need to grab the locks first, but, without staring at the code, I\ndoubt there's a reason we'd need to build them all upfront. That\nwould help in cases where pruning didn't prune much, but due to\nsomething else in the WHERE clause, the results only come from some\nsmall subset of partitions.\n\nDavid\n\n\n", "msg_date": "Wed, 13 May 2020 11:51:54 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Tue, May 12, 2020 at 9:54 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Mon, May 11, 2020 at 8:11 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Per row overhead would be incurred for every row whereas the plan time\n> > > overhead is one-time or in case of a prepared statement almost free.\n> > > So we need to compare it esp. when there are 2000 partitions and all\n> > > of them are being updated.\n> >\n> > I assume that such UPDATEs would be uncommon.\n>\n> Yes, 2000 partitions being updated would be rare. But many rows from\n> the same partition being updated may not be that common. We have to\n> know how much is that per row overhead and updating how many rows it\n> takes to beat the planning time overhead. 
If the number of rows is\n> very large, we are good.\n\nMaybe I am misunderstanding you, but the more the rows to update, the\nmore overhead we will be paying with the new approach.\n\n> > > Can we plan the scan query to add a sort node to order the rows by tableoid?\n> >\n> > Hmm, I am afraid that some piece of partitioning code that assumes a\n> > certain order of result relations, and that order is not based on\n> > sorting tableoids.\n>\n> I am suggesting that we override that order (if any) in\n> create_modifytable_path() or create_modifytable_plan() by explicitly\n> ordering the incoming paths on tableoid. May be using MergeAppend.\n\nSo, we will need to do 2 things:\n\n1. Implicitly apply an ORDER BY tableoid clause\n2. Add result relation RTIs to ModifyTable.resultRelations in the\norder of their RTE's relid.\n\nMaybe we can do that as a separate patch. Also, I am not sure if it\nwill get in the way of someone wanting to have ORDER BY LIMIT for\nupdates.\n\n> > > * Tuple re-routing during UPDATE. For now it's disabled so your design\n> > > should work. But we shouldn't design this feature in such a way that\n> > > it comes in the way to enable tuple re-routing in future :).\n> >\n> > Sorry, what is tuple re-routing and why does this new approach get in its way?\n>\n> An UPDATE causing a tuple to move to a different partition. It would\n> get in its way since the tuple will be located based on tableoid,\n> which will be the oid of the old partition. But I think this approach\n> has higher chance of being able to solve that problem eventually\n> rather than the current approach.\n\nAgain, I don't think I understand. We do currently (as of v11)\nre-route tuples when UPDATE causes them to move to a different\npartition, which, gladly, continues to work with my patch.\n\nSo how it works is like this: for a given \"new\" tuple, ExecUpdate()\nchecks if the tuple would violate the partition constraint of the\nresult relation that was passed along with the tuple. 
If it does, the\nnew tuple will be moved, by calling ExecDelete() to delete it from the\ncurrent relation, followed by ExecInsert() to find the new home for\nthe tuple. The only thing that changes with the new approach is how\nExecModifyTable() chooses a result relation to pass to ExecUpdate()\nfor a given \"new\" tuple it has fetched from the plan, which is quite\nindependent from the tuple re-routing mechanism proper.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 May 2020 12:50:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, May 13, 2020 at 8:52 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 13 May 2020 at 00:54, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Mon, May 11, 2020 at 8:11 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > Per row overhead would be incurred for every row whereas the plan time\n> > > > overhead is one-time or in case of a prepared statement almost free.\n> > > > So we need to compare it esp. when there are 2000 partitions and all\n> > > > of them are being updated.\n> > >\n> > > I assume that such UPDATEs would be uncommon.\n> >\n> > Yes, 2000 partitions being updated would be rare. But many rows from\n> > the same partition being updated may not be that common. We have to\n> > know how much is that per row overhead and updating how many rows it\n> > takes to beat the planning time overhead. If the number of rows is\n> > very large, we are good.\n>\n> Rows from a non-parallel Append should arrive in order. If you were\n> worried about the performance of finding the correct ResultRelInfo for\n> the tuple that we just got, then we could just cache the tableOid and\n> ResultRelInfo for the last row, and if that tableoid matches on this\n> row, just use the same ResultRelInfo as last time. 
That'll save\n> doing the hash table lookup in all cases, apart from when the Append\n> changes to the next child subplan.\n\nThat would be a more common case, yes. Not when a join is involved though.\n\n> Not sure exactly how that'll fit\n> in with the foreign table discussion that's going on here though.\n\nThe foreign table discussion is concerned with what the single\ntop-level targetlist should look like given that different result\nrelations may require different row-identifying junk columns, due to\npossibly belonging to different FDWs. Currently that's not a thing to\nworry about, because each result relation has its own plan and hence\nits own targetlist.\n\n> Another option would be to not use tableoid and instead inject an INT4\n> Const (0 to nsubplans) into each subplan's targetlist that serves as\n> the index into an array of ResultRelInfos.\n\nThat may be a bit fragile, considering how volatile that number\n(result relation index) can be if you figure in run-time pruning, but\nmaybe worth considering.\n\n> As for which ResultRelInfos to initialize, couldn't we just have the\n> planner generate an OidList of all the ones that we could need.\n> Basically, all the non-pruned partitions.\n\nWhy would replacing the list of RT indexes by OIDs be better?\n\n> Perhaps we could even be\n> pretty lazy about building those ResultRelInfos during execution too.\n> We'd need to grab the locks first, but, without staring at the code, I\n> doubt there's a reason we'd need to build them all upfront. That\n> would help in cases where pruning didn't prune much, but due to\n> something else in the WHERE clause, the results only come from some\n\nLate ResultRelInfo initialization is worth considering, given that\ndoing it for tuple-routing target relations works. I don't know why\nwe are still initializing them all in InitPlan(), because the only\njustification given for doing so that I know of is that it prevents\nlock-upgrade. 
I think we discussed somewhat recently that that is not\nreally a hazard.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 May 2020 16:02:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Tue, May 12, 2020 at 10:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Tue, May 12, 2020 at 5:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >> Ah, that makes sense. If we can invent dummy columns on the parent\n> >> rel, then most of what I was worrying about no longer seems very\n> >> worrying.\n>\n> > IIUC, the idea is to have \"dummy\" columns in the top parent's\n> > reltarget for every junk TLE added to the top-level targetlist by\n> > child tables' FDWs that the top parent itself can't emit. But we allow\n> > these FDW junk TLEs to contain any arbitrary expression, not just\n> > plain Vars [1], so what node type are these dummy parent columns?\n>\n> We'd have to group the children into groups that share the same\n> row-identity column type. This is why I noted way-back-when that\n> it'd be a good idea to discourage FDWs from being too wild about\n> what they use for row identity.\n\nI understood the part about having a dummy parent column for each\ngroup of children that use the same junk attribute. I think we must\ngroup them using resname + row-identity Var type though, not just the\nlatter, because during execution, the FDWs look up the junk columns by\nname. If two FDWs add junk Vars of the same type, say, 'tid', but use\ndifferent resname, say, \"ctid\" and \"rowid\", respectively, we must add\ntwo dummy parent columns.\n\n> (Also, just to be totally clear: I am *not* envisioning this as a\n> mechanism for FDWs to inject whatever computations they darn please\n> into query trees. 
It's for the row identity needed by UPDATE/DELETE,\n> and nothing else. That being the case, it's hard to understand why\n> the bottom-level Vars wouldn't be just plain Vars --- maybe \"system\n> column\" Vars or something like that, but still just Vars, not\n> expressions.)\n\nI suppose we would need to explicitly check that and cause an error if\nthe contained expression is not a plain Var. Neither the interface\nwe've got nor the documentation discourages them from putting just\nabout any expression into the junk TLE.\n\nBased on an off-list chat with Robert, I started looking into whether\nit would make sense to drop the middleman Append (or MergeAppend)\naltogether, if only to avoid having to invent a representation for\nparent targetlist that is never actually computed. However, it's not\nhard to imagine that any new book-keeping code to manage child plans,\neven though perhaps cheaper in terms of cycles spent than\ninheritance_planner(), would add complexity to the main planner. It\nwould also be a shame to lose useful functionality that we get by\nhaving an Append present, such as run-time pruning and partitionwise\njoins.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 May 2020 22:15:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, 13 May 2020 at 19:02, Amit Langote <amitlangote09@gmail.com> wrote:\n> > As for which ResultRelInfos to initialize, couldn't we just have the\n> > planner generate an OidList of all the ones that we could need.\n> > Basically, all the non-pruned partitions.\n>\n> Why would replacing list of RT indexes by OIDs be better?\n\nTBH, I didn't refresh my memory of the code before saying that.\nHowever, if we have a list of RT index for which rangetable entries we\nmust build ResultRelInfos for, then why is it a problem that plan-time\npruning is not 
allowing you to eliminate the excess ResultRelInfos,\nlike you mentioned in:\n\nOn Sat, 9 May 2020 at 01:33, Amit Langote <amitlangote09@gmail.com> wrote:\n> prepare q as update foo set a = 250001 where a = $1;\n> set plan_cache_mode to 'force_generic_plan';\n> explain execute q(1);\n> QUERY PLAN\n> --------------------------------------------------------------------\n> Update on foo (cost=0.00..142.20 rows=40 width=14)\n> Update on foo_1\n> Update on foo_2 foo\n> Update on foo_3 foo\n> Update on foo_4 foo\n> -> Append (cost=0.00..142.20 rows=40 width=14)\n> Subplans Removed: 3\n> -> Seq Scan on foo_1 (cost=0.00..35.50 rows=10 width=14)\n> Filter: (a = $1)\n> (9 rows)\n\nShouldn't you just be setting the ModifyTablePath.resultRelations to\nthe non-pruned RT indexes?\n\n> > Perhaps we could even be\n> > pretty lazy about building those ResultRelInfos during execution too.\n> > We'd need to grab the locks first, but, without staring at the code, I\n> > doubt there's a reason we'd need to build them all upfront. That\n> > would help in cases where pruning didn't prune much, but due to\n> > something else in the WHERE clause, the results only come from some\n>\n> Late ResultRelInfo initialization is worth considering, given that\n> doing it for tuple-routing target relations works. I don't know why\n> we are still Initializing them all in InitPlan(), because the only\n> justification given for doing so that I know of is that it prevents\n> lock-upgrade. I think we discussed somewhat recently that that is not\n> really a hazard.\n\nLooking more closely at ExecGetRangeTableRelation(), we'll already\nhave the lock by that time, there's an Assert to verify that too.\nIt'll have been acquired either during planning or during\nAcquireExecutorLocks(). 
So it seems doing anything for delaying the\nbuilding of ResultRelInfos wouldn't need to account for taking the\nlock at a different time.\n\nDavid\n\n\n", "msg_date": "Thu, 14 May 2020 10:55:00 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Thu, May 14, 2020 at 7:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 13 May 2020 at 19:02, Amit Langote <amitlangote09@gmail.com> wrote:\n> > > As for which ResultRelInfos to initialize, couldn't we just have the\n> > > planner generate an OidList of all the ones that we could need.\n> > > Basically, all the non-pruned partitions.\n> >\n> > Why would replacing list of RT indexes by OIDs be better?\n>\n> TBH, I didn't refresh my memory of the code before saying that.\n> However, if we have a list of RT index for which rangetable entries we\n> must build ResultRelInfos for, then why is it a problem that plan-time\n> pruning is not allowing you to eliminate the excess ResultRelInfos,\n> like you mentioned in:\n>\n> On Sat, 9 May 2020 at 01:33, Amit Langote <amitlangote09@gmail.com> wrote:\n> > prepare q as update foo set a = 250001 where a = $1;\n> > set plan_cache_mode to 'force_generic_plan';\n> > explain execute q(1);\n> > QUERY PLAN\n> > --------------------------------------------------------------------\n> > Update on foo (cost=0.00..142.20 rows=40 width=14)\n> > Update on foo_1\n> > Update on foo_2 foo\n> > Update on foo_3 foo\n> > Update on foo_4 foo\n> > -> Append (cost=0.00..142.20 rows=40 width=14)\n> > Subplans Removed: 3\n> > -> Seq Scan on foo_1 (cost=0.00..35.50 rows=10 width=14)\n> > Filter: (a = $1)\n> > (9 rows)\n>\n> Shouldn't you just be setting the ModifyTablePath.resultRelations to\n> the non-pruned RT indexes?\n\nOh, that example is showing run-time pruning for a generic plan. 
If\nplanner prunes partitions, of course, their result relation indexes\nare not present in ModifyTablePath.resultRelations.\n\n> > > Perhaps we could even be\n> > > pretty lazy about building those ResultRelInfos during execution too.\n> > > We'd need to grab the locks first, but, without staring at the code, I\n> > > doubt there's a reason we'd need to build them all upfront. That\n> > > would help in cases where pruning didn't prune much, but due to\n> > > something else in the WHERE clause, the results only come from some\n> >\n> > Late ResultRelInfo initialization is worth considering, given that\n> > doing it for tuple-routing target relations works. I don't know why\n> > we are still Initializing them all in InitPlan(), because the only\n> > justification given for doing so that I know of is that it prevents\n> > lock-upgrade. I think we discussed somewhat recently that that is not\n> > really a hazard.\n>\n> Looking more closely at ExecGetRangeTableRelation(), we'll already\n> have the lock by that time, there's an Assert to verify that too.\n> It'll have been acquired either during planning or during\n> AcquireExecutorLocks(). So it seems doing anything for delaying the\n> building of ResultRelInfos wouldn't need to account for taking the\n> lock at a different time.\n\nYep, I think it might be worthwhile to delay ResultRelInfo building\nfor UPDATE/DELETE too. I would like to leave that for another patch\nthough.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 May 2020 14:09:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, May 13, 2020 at 9:21 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Maybe I am misunderstanding you, but the more the rows to update, the\n> more overhead we will be paying with the new approach.\n\nYes, that's right. 
How much is that compared to the current planning\noverhead? How many rows does it take for that overhead to be\ncomparable to the current planning overhead?\n\nBut let's not sweat on that point much right now.\n\n>\n> So, we will need to do 2 things:\n>\n> 1. Implicitly apply an ORDER BY tableoid clause\n> 2. Add result relation RTIs to ModifyTable.resultRelations in the\n> order of their RTE's relid.\n>\n> Maybe we can do that as a separate patch. Also, I am not sure if it\n> will get in the way of someone wanting to have ORDER BY LIMIT for\n> updates.\n\nIt won't. But maybe David's idea is better.\n\n>\n> > > > * Tuple re-routing during UPDATE. For now it's disabled so your design\n> > > > should work. But we shouldn't design this feature in such a way that\n> > > > it comes in the way to enable tuple re-routing in future :).\n> > >\n> > > Sorry, what is tuple re-routing and why does this new approach get in its way?\n> >\n> > An UPDATE causing a tuple to move to a different partition. It would\n> > get in its way since the tuple will be located based on tableoid,\n> > which will be the oid of the old partition. But I think this approach\n> > has higher chance of being able to solve that problem eventually\n> > rather than the current approach.\n>\n> Again, I don't think I understand. We do currently (as of v11)\n> re-route tuples when UPDATE causes them to move to a different\n> partition, which, gladly, continues to work with my patch.\n\nAh! Ok. I missed that part then.\n\n>\n> So how it works is like this: for a given \"new\" tuple, ExecUpdate()\n> checks if the tuple would violate the partition constraint of the\n> result relation that was passed along with the tuple. 
The only thing that changes with the new approach is how\n> ExecModifyTable() chooses a result relation to pass to ExecUpdate()\n> for a given \"new\" tuple it has fetched from the plan, which is quite\n> independent from the tuple re-routing mechanism proper.\n>\n\nThanks for the explanation.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 14 May 2020 18:24:25 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "So, I think I have a patch that seems to work, but not all the way,\nmore on which below.\n\nHere is the commit message in the attached patch.\n\n===\nSubject: [PATCH] Overhaul UPDATE's targetlist processing\n\nInstead of emitting the full tuple matching the target table's tuple\ndescriptor, make the plan emit only the attributes that are assigned\nvalues in the SET clause, plus row-identity junk attributes as before.\nThis allows us to avoid making a separate plan for each target\nrelation in the inheritance case, because the only reason it is so\ncurrently is to account for the fact that each target relation may\nhave a set of attributes that is different from others. Having only\none plan suffices, because the set of assigned attributes must be the\nsame in all the result relations.\n\nWhile the plan will now produce only the assigned attributes and\nrow-identity junk attributes, other columns' values are filled by\nrefetching the old tuple. 
To that end, there will be a targetlist for\neach target relation to compute the full tuple, that is, by combining\nthe values from the plan tuple and the old tuple, but they are passed\nseparately in the ModifyTable node.\n\nImplementation notes:\n\n* In the inheritance case, as the same plan produces tuples to be\nupdated from multiple result relations, the tuples now need to also\nidentify which table they come from, so an additional junk attribute\n\"tableoid\" is present in that case.\n\n* Considering that the inheritance set may contain foreign tables that\nrequire a different (set of) row-identity junk attribute(s), the plan\nneeds to emit multiple distinct junk attributes. When transposed to a\nchild scan node, this targetlist emits a non-NULL value for the junk\nattribute that's valid for the child relation and NULL for others.\n\n* Executor and FDW execution APIs can no longer assume any specific\norder in which the result relations will be processed. For each\ntuple to be updated/deleted, the result relation is selected by\nlooking it up in a hash table using the \"tableoid\" value as the key.\n\n* Since the plan does not emit values for all the attributes, FDW APIs\nmay not assume that the individual column values in the TupleTableSlot\ncontaining the plan tuple are accessible by their attribute numbers.\n\nTODO:\n\n* Reconsider having only one plan!\n* Update FDW handler docs to reflect the API changes\n===\n\nRegarding the first TODO, it is to address the limitation that FDWs\nwill no longer be able to push the *whole* child UPDATE/DELETE query\ndown to the remote server, including any joins, which is allowed at\nthe moment via the PlanDirectModify API. The API seems to have been\ndesigned with an assumption that the child scan/join node is the\ntop-level plan, but that's no longer the case. If we consider\nbypassing the Append and allow ModifyTable to access the child\nscan/join nodes directly, maybe we can allow that. 
I haven't updated the expected\noutput of postgres_fdw regression tests for now pending this.\n\nA couple of things in the patch that I feel slightly uneasy about:\n\n* Result relations are now appendrel children in the planner.\nNormally, any wholerow Vars in the child relation's reltarget->exprs\nget a ConvertRowtypeExpr added on top to convert it back to the\nparent's reltype, because that's what the client expects in the SELECT\ncase. In the result relation case, the executor expects to see child\nwholerow Vars themselves, not their parent versions.\n\n* FDW's ExecForeignUpdate() API expects the NEW tuple passed to it to\nmatch the target foreign table reltype, so that it can access the\ntarget attributes in the tuple by attribute numbers. Considering that\nthe plan no longer builds the full tuple itself, I made the executor\nsatisfy that expectation by filling the missing attributes' values\nusing the target table's wholerow attribute. That is, we now *always*\nfetch the wholerow attributes for UPDATE, not just when there are\nrow-level triggers that need it. I think that's unfortunate. Maybe\nthe correct way is asking the FDWs to translate (setrefs.c style) the\ntarget attribute numbers appropriately to access the plan's output\ntuple.\n\nI will add the patch to the next CF. 
I haven't yet fully checked the\nperformance considerations of the new approach, but will do so in the\ncoming days.\n\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 2 Jun 2020 13:15:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Tue, Jun 2, 2020 at 1:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> So, I think I have a patch that seems to work, but not all the way,\n> more on which below.\n>\n> Here is the commit message in the attached patch.\n>\n> ===\n> Subject: [PATCH] Overhaul UPDATE's targetlist processing\n>\n> Instead of emitting the full tuple matching the target table's tuple\n> descriptor, make the plan emit only the attributes that are assigned\n> values in the SET clause, plus row-identity junk attributes as before.\n> This allows us to avoid making a separate plan for each target\n> relation in the inheritance case, because the only reason it is so\n> currently is to account for the fact that each target relations may\n> have a set of attributes that is different from others. Having only\n> one plan suffices, because the set of assigned attributes must be same\n> in all the result relations.\n>\n> While the plan will now produce only the assigned attributes and\n> row-identity junk attributes, other columns' values are filled by\n> refetching the old tuple. 
To that end, there will be a targetlist for\n> each target relation to compute the full tuple, that is, by combining\n> the values from the plan tuple and the old tuple, but they are passed\n> separately in the ModifyTable node.\n>\n> Implementation notes:\n>\n> * In the inheritance case, as the same plan produces tuples to be\n> updated from multiple result relations, the tuples now need to also\n> identify which table they come from, so an additional junk attribute\n> \"tableoid\" is present in that case.\n>\n> * Considering that the inheritance set may contain foreign tables that\n> require a different (set of) row-identity junk attribute(s), the plan\n> needs to emit multiple distinct junk attributes. When transposed to a\n> child scan node, this targetlist emits a non-NULL value for the junk\n> attribute that's valid for the child relation and NULL for others.\n>\n> * Executor and FDW execution APIs can no longer assume any specific\n> order in which the result relations will be processed. For each\n> tuple to be updated/deleted, result relation is selected by looking it\n> up in a hash table using the \"tableoid\" value as the key.\n>\n> * Since the plan does not emit values for all the attributes, FDW APIs\n> may not assume that the individual column values in the TupleTableSlot\n> containing the plan tuple are accessible by their attribute numbers.\n>\n> TODO:\n>\n> * Reconsider having only one plan!\n> * Update FDW handler docs to reflect the API changes\n> ===\n\nI divided that into two patches:\n\n1. Make the plan producing tuples to be updated emit only the columns\nthat are actually updated. postgres_fdw test fails unless you also\napply the patch I posted at [1], because there is an unrelated bug in\nUPDATE tuple routing code that manifests due to some changes of this\npatch.\n\n2. 
Due to 1, inheritance_planner() is no longer needed, that is,\ninherited update/delete can be handled by pulling the rows to\nupdate/delete from only one plan, not one per child result relation.\nThis one makes that so.\n\nThere are some unsolved problems having to do with foreign tables in\nboth 1 and 2:\n\nIn 1, FDW update APIs still assume that the plan produces \"full\" tuple\nfor update. That needs to be fixed so that FDWs deal with getting\nonly the updated columns in the plan's output targetlist.\n\nIn 2, still haven't figured out a way to call PlanDirectModify() on\nchild foreign tables. Lacking that, inherited updates on foreign\ntables are now slower, because they are not pushed down. I'd like to\nfigure something out to fix that situation.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqE_UK1jTSNrjb8mpTdivzd3dum6mK--xqKq0Y9VmfwWQA%40mail.gmail.com", "msg_date": "Fri, 12 Jun 2020 15:46:41 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Hello,\n\nI have been working away at this and have updated the patches for many\ncosmetic and some functional improvements.\n\nOn Fri, Jun 12, 2020 at 3:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I divided that into two patches:\n>\n> 1. Make the plan producing tuples to be updated emit only the columns\n> that are actually updated. postgres_fdw test fails unless you also\n> apply the patch I posted at [1], because there is an unrelated bug in\n> UPDATE tuple routing code that manifests due to some changes of this\n> patch.\n>\n> 2. 
Due to 1, inheritance_planner() is no longer needed, that is,\n> inherited update/delete can be handled by pulling the rows to\n> update/delete from only one plan, not one per child result relation.\n> This one makes that so.\n>\n> There are some unsolved problems having to do with foreign tables in\n> both 1 and 2:\n>\n> In 1, FDW update APIs still assume that the plan produces \"full\" tuple\n> for update. That needs to be fixed so that FDWs deal with getting\n> only the updated columns in the plan's output targetlist.\n>\n> In 2, still haven't figured out a way to call PlanDirectModify() on\n> child foreign tables. Lacking that, inherited updates on foreign\n> tables are now slower, because they are not pushed down. I'd like to\n> figure something out to fix that situation.\n\nIn the updated patch, I have implemented a partial solution to this,\nbut I think it should be enough in most practically useful situations.\nWith the updated patch, PlanDirectModify is now called for child\nresult relations, but the FDWs will need to be revised to do useful\nwork in that call (as the patch does for postgres_fdw), because a\npotentially pushable ForeignScan involving a given child result\nrelation will now be at the bottom of the source plan tree, whereas\nbefore it would be the top-level plan. Another disadvantage of this\nnew situation is that inherited update/delete involving joins that\nwere previously pushable cannot be pushed anymore. 
If update/delete\nwould have been able to use partition-wise join, a child join\ninvolving a given child result relation could in principle be pushed,\nbut some semi-related issues prevent the use of partition-wise joins\nfor update/delete, especially when there are foreign table partitions.\n\nAnother major change is that instead of \"tableoid\" junk attribute to\nidentify the target result relation for a given tuple to be\nupdated/deleted, the patch now makes the tuples to be updated/deleted\ncontain a junk attribute that gives the index of the result relation\nin the query's list of result relations which can be used to look up\nthe target result relation directly. With \"tableoid\", we would need\nto build a hash table to map the result relation OIDs to result\nrelation indexes, a step that could be seen to become a bottleneck\nwith large partition counts (I am talking about executing generic\nplans here and have mentioned this problem on the thread to make\ngeneric plan execution for update/delete faster [1]).\n\nHere are the commit messages of the attached patches:\n\n[PATCH v3 1/3] Overhaul how updates compute a new tuple\n\nCurrently, the planner rewrites the top-level targetlist of an update\nstatement's parsetree so that it contains entries for all attributes\nof the target relation, including for those columns that have not\nbeen changed. 
This arrangement means that the executor can take a\ntuple that the plan produces, remove any junk attributes in it and\npass it down to the table AM or FDW update API as the new tuple.\nIt also means that in an inherited update, where there are multiple\ntarget relations, the planner must produce that many plans, because\nthe targetlists for different target relations may not all look the\nsame considering that child relations may have different sets of\ncolumns with varying attribute numbers.\n\nThis commit revises things so that the planner no longer expands\nthe parsetree targetlist to include unchanged columns so that the\nplan only produces values of the changed columns. To make the new\ntuple to pass to table AM and FDW update API, executor now evaluates\nanother targetlist matching the target table's TupleDesc which refers\nto the plan's output tuple to get values of the changed columns and\nto the old tuple that is refetched for values of unchanged columns.\n\nTo get values for unchanged columns to use when forming the new tuple\nto pass to ExecForeignUpdate(), we now require foreign scans to\nalways include the wholerow Var corresponding to the old tuple being\nupdated, because the unchanged columns are not present in the\nplan's targetlist.\n\nAs a note to FDW authors, any FDW update planning APIs that look at\nthe plan's targetlist for checking if it is pushable to remote side\n(e.g. PlanDirectModify) should now instead look at \"update targetlist\"\nthat is set by the planner in PlannerInfo.update_tlist, because resnos\nin the plan's targetlist are no longer indexable by target column's\nattribute numbers.\n\nNote that even though the main goal of doing this is to avoid having\nto make multiple plans in the inherited update case, this commit does\nnot touch that subject. 
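To picture the merging step this commit message describes, here is a rough Python model (all names invented; this is not PostgreSQL code): the plan emits only the SET columns, and unchanged column values are taken from the refetched old tuple.

```python
# Toy model of new-tuple formation: the plan tuple carries only the
# assigned (SET) columns; everything else comes from the old tuple.
# Column positions here are 0-based, unlike PostgreSQL attnos.

def form_new_tuple(old_tuple, plan_tuple, changed):
    # "changed" maps a target-table column position to the position of
    # its new value within the plan's trimmed output tuple.
    return tuple(
        plan_tuple[changed[col]] if col in changed else old_tuple[col]
        for col in range(len(old_tuple))
    )

# UPDATE ... SET b = 42 on a 5-column row: the plan emits just (42,).
old = (1, None, 0, "ddd", "eee")
print(form_new_tuple(old, (42,), {1: 0}))  # (1, 42, 0, 'ddd', 'eee')
```

In the real executor this merge is itself expressed as a per-result-relation targetlist over the plan tuple and the old tuple, as described above.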
A subsequent commit will change things that\nare necessary to make inherited updates work with a single plan.\n\n[PATCH v3 2/3] Include result relation index if any in ForeignScan\n\nFDWs that can perform an UPDATE/DELETE remotely using the \"direct\nmodify\" set of APIs need in some cases to access the result relation\nproperties for which they can currently look at\nEState.es_result_relation_info. However that means the executor must\nensure that es_result_relation_info points to the correct result\nrelation at all times, especially during inherited updates. This\nrequirement gets in the way of a number of projects related to changing\nhow ModifyTable operates. For example, an upcoming patch will change\nthings such that there will be one source plan for all result\nrelations whereas currently there is one per result relation, an\narrangement which makes it convenient to switch the result relation\nwhen the source plan changes.\n\nThis commit installs a new field 'resultRelIndex' in ForeignScan node\nwhich must be set by an FDW if the node will be used to carry out an\nUPDATE/DELETE operation on a given foreign table, which is the case\nif the FDW manages to push those operations to the remote side. This\ncommit also modifies postgres_fdw to implement that.\n\n[PATCH v3 3/3] Revise how inherited update/delete are handled\n\nNow that we have the ability to maintain and evaluate the targetlist\nneeded to generate an update's new tuples independently of the plan\nwhich fetches the tuples to be updated, there is no need to make\nseparate plans for child result relations as inheritance_planner()\ncurrently does. 
We generated separate plans before such capability\nwas present, because that was the only way to generate new tuples of\nchild relations where each may have its own unique set of columns\n(albeit all sharing the set of columns present in the root parent).\n\nWith this commit, an inherited update/delete query will now be planned\njust as a non-inherited one, generating a single plan that goes under\nModifyTable. The plan for the inherited case is essentially the one\nthat we get for a select query, although the targetlist additionally\ncontains junk attributes needed by update/delete.\n\nBy going from one plan per result relation to only one shared across\nall result relations, the executor now needs a new way to identify the\nresult relation to direct a given tuple's update/delete to, whereas\nbefore, it could tell that from the plan it is executing. To that\nend, the planner now adds a new junk attribute to the query's\ntargetlist that for each tuple gives the index of the result relation\nin the query's list of result relations. That is in addition to the\njunk attribute that the planner already adds to identify the tuple's\nposition in a given relation (such as \"ctid\").\n\nGiven the way query planning with inherited tables works, where child\nrelations are not part of the query's jointree and only the root\nparent is, there are some challenges that arise in the update/delete\ncase:\n\n* The junk attributes needed by child result relations need to be\nrepresented as root parent Vars, which is a non-issue for a given\nchild if what the child needs and what is added for the root parent\nare one and the same column. 
But considering that that may not\nalways be the case, more parent Vars might get added to the top-level\ntargetlist as children are added to the query as result relations.\nIn some cases, a child relation may use a column that is not present\nin the parent (allowed by traditional inheritance) or a non-column\nexpression, which must be represented using what this patch calls\n\"fake\" parent vars. These fake parent vars are really only\nplaceholders for the underlying child relation's column or expression\nand don't reach the executor's expression evaluation machinery.\n\n* FDWs that are able to push update/delete fully to the remote side\nusing the DirectModify set of APIs now have to go through hoops to\nidentify the subplan and the UPDATE targetlist to push for child\nresult relations, because the subplans for individual result\nrelations are no longer top-level plans. In fact, if the result\nrelation is joined to another relation, update/delete cannot be\npushed down at all anymore, whereas before since the child relations\nwould be present in the main jointree, they could be in the case\nwhere the relation being joined to was present on the same server as\nthe child result relation.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqG7ZruBmmih3wPsBZ4s0H2EhywrnXEduckY5Hr3fWzPWA%40mail.gmail.com", "msg_date": "Fri, 11 Sep 2020 19:20:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Fri, Sep 11, 2020 at 07:20:56PM +0900, Amit Langote wrote:\n> I have been working away at this and have updated the patches for many\n> cosmetic and some functional improvements.\n\nPlease note that this patch set fails to apply. 
Could you provide a\nrebase please?\n--\nMichael", "msg_date": "Thu, 1 Oct 2020 13:32:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Hi,\n\nOn Thu, Oct 1, 2020 at 1:32 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 11, 2020 at 07:20:56PM +0900, Amit Langote wrote:\n> > I have been working away at this and have updated the patches for many\n> > cosmetic and some functional improvements.\n>\n> Please note that this patch set fails to apply. Could you provide a\n> rebase please?\n\nYeah, I'm working on posting an updated patch.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Oct 2020 15:24:03 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Fri, Sep 11, 2020 at 7:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Here are the commit messages of the attached patches:\n>\n> [PATCH v3 1/3] Overhaul how updates compute a new tuple\n>\n> Currently, the planner rewrites the top-level targetlist of an update\n> statement's parsetree so that it contains entries for all attributes\n> of the target relation, including for those columns that have not\n> been changed. 
This arrangement means that the executor can take a\n> tuple that the plan produces, remove any junk attributes in it and\n> pass it down to the table AM or FDW update API as the new tuple.\n> It also means that in an inherited update, where there are multiple\n> target relations, the planner must produce that many plans, because\n> the targetlists for different target relations may not all look the\n> same considering that child relations may have different sets of\n> columns with varying attribute numbers.\n>\n> This commit revises things so that the planner no longer expands\n> the parsetree targetlist to include unchanged columns so that the\n> plan only produces values of the changed columns. To make the new\n> tuple to pass to table AM and FDW update API, executor now evaluates\n> another targetlist matching the target table's TupleDesc which refers\n> to the plan's output tuple to gets values of the changed columns and\n> to the old tuple that is refetched for values of unchanged columns.\n>\n> To get values for unchanged columns to use when forming the new tuple\n> to pass to ExecForeignUpdate(), we now require foreign scans to\n> always include the wholerow Var corresponding to the old tuple being\n> updated, because the unchanged columns are not present in the\n> plan's targetlist.\n>\n> As a note to FDW authors, any FDW update planning APIs that look at\n> the plan's targetlist for checking if it is pushable to remote side\n> (e.g. PlanDirectModify) should now instead look at \"update targetlist\"\n> that is set by the planner in PlannerInfo.update_tlist, because resnos\n> in the plan's targetlist is no longer indexable by target column's\n> attribute numbers.\n>\n> Note that even though the main goal of doing this is to avoid having\n> to make multiple plans in the inherited update case, this commit does\n> not touch that subject. 
A subsequent commit will change things that\n> are necessary to make inherited updates work with a single plan.\n\nI tried to assess the performance impact of this rejiggering of how\nupdates are performed. As to why one may think there may be a\nnegative impact, consider that ExecModifyTable() now has to perform an\nextra fetch of the tuple being updated for filling in the unchanged\nvalues of the update's NEW tuple, because the plan itself will only\nproduce the values of changed columns.\n\n* Setup: a 10 column target table with a million rows\n\ncreate table test_update_10 (\n a int,\n b int default NULL,\n c int default 0,\n d text default 'ddd',\n e text default 'eee',\n f text default 'fff',\n g text default 'ggg',\n h text default 'hhh',\n i text default 'iii',\n j text default 'jjj'\n);\ninsert into test_update_10 (a) select generate_series(1, 1000000);\n\n* pgbench test script (test_update_10.sql):\n\n\\set a random(1, 1000000)\nupdate test_update_10 set b = :a where a = :a;\n\n* TPS of `pgbench -n -T 120 -f test_update_10.sql`\n\nHEAD:\n\ntps = 10964.391120 (excluding connections establishing)\ntps = 12142.456638 (excluding connections establishing)\ntps = 11746.345270 (excluding connections establishing)\ntps = 11959.602001 (excluding connections establishing)\ntps = 12267.249378 (excluding connections establishing)\n\nmedian: 11959.60\n\nPatched:\n\ntps = 11565.916170 (excluding connections establishing)\ntps = 11952.491663 (excluding connections establishing)\ntps = 11959.789308 (excluding connections establishing)\ntps = 11699.611281 (excluding connections establishing)\ntps = 11799.220930 (excluding connections establishing)\n\nmedian: 11799.22\n\nThere is a slight impact but the difference seems within margin of error.\n\nOn the more optimistic side, I imagined that the trimming down of the\nplan's targetlist to include only changed columns would boost\nperformance, especially with tables containing more columns, which is\nnot uncommon. 
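The shape of that expectation can be sketched with a toy Python contrast (invented structures; not PostgreSQL code): for "update test_update_10 set b = :a", the pre-patch plan must materialize every table column per row, while the trimmed targetlist carries only the assigned column plus the row-identity junk.

```python
# Per-row targetlist width for "update test_update_10 set b = :a":
# pre-patch the plan emits all table columns plus junk; patched, only
# the SET column plus junk.

table_cols = list("abcdefghij")   # the 10 columns of test_update_10
set_cols = ["b"]                  # columns assigned in the SET clause
junk = ["ctid"]                   # row-identity junk attribute

full_tlist = table_cols + junk    # pre-patch: 11 entries per row
trimmed_tlist = set_cols + junk   # patched:    2 entries per row

print(len(full_tlist), len(trimmed_tlist))  # 11 2
```

The wider the table, the more entries the pre-patch plan has to compute per row, which is what the wider-table benchmarks below probe.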
With 20 columns (additional columns are all filler ones\nas shown in the 10-column example), the same benchmark gives the\nfollowing numbers:\n\nHEAD:\n\ntps = 11401.691219 (excluding connections establishing)\ntps = 11620.855088 (excluding connections establishing)\ntps = 11285.469430 (excluding connections establishing)\ntps = 10991.890904 (excluding connections establishing)\ntps = 10847.433093 (excluding connections establishing)\n\nmedian: 11285.46\n\nPatched:\n\ntps = 10958.443325 (excluding connections establishing)\ntps = 11613.783817 (excluding connections establishing)\ntps = 10940.129336 (excluding connections establishing)\ntps = 10717.405272 (excluding connections establishing)\ntps = 11691.330537 (excluding connections establishing)\n\nmedian: 10958.44\n\nHmm, not so much.\n\nWith 40 columns:\n\nHEAD:\n\ntps = 9778.362149 (excluding connections establishing)\ntps = 10004.792176 (excluding connections establishing)\ntps = 9473.849373 (excluding connections establishing)\ntps = 9776.931393 (excluding connections establishing)\ntps = 9737.891870 (excluding connections establishing)\n\nmedian: 9776.93\n\nPatched:\n\ntps = 10709.949043 (excluding connections establishing)\ntps = 10754.160718 (excluding connections establishing)\ntps = 10175.841480 (excluding connections establishing)\ntps = 9973.729774 (excluding connections establishing)\ntps = 10467.109679 (excluding connections establishing)\n\nmedian: 10467.10\n\nThere you go.\n\nPerhaps the plan's bigger target list with HEAD does not cause a\nsignificant overhead in the *simple* update like above, because most\nof the work during execution is of fetching the tuple to update and of\nactually updating it. 
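For reference, the "median" figures quoted in these runs are just the middle value of the five pgbench results; e.g. for the patched 20-column case:

```python
# Reproduce the reported median from the five patched 20-column runs.
from statistics import median

runs = [10958.443325, 11613.783817, 10940.129336, 10717.405272, 11691.330537]
print(f"median: {median(runs):.2f}")  # median: 10958.44
```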
So, I also checked with a slightly more\ncomplicated query containing a join:\n\n\\set a random(1, 1000000)\nupdate test_update_10 t set b = foo.b from foo where t.a = foo.a and foo.b = :a;\n\nwhere `foo` is defined as:\n\ncreate table foo (a int, b int);\ninsert into foo select g, g from generate_series(1, 1000000) g;\ncreate index on foo (b);\n\nLooking at the EXPLAIN output of the query, one can see that the\ntarget list is smaller after patching which can save some work:\n\nHEAD:\n\nexplain (costs off, verbose) update test_update_10 t set b = foo.b\nfrom foo where t.a = foo.a and foo.b = 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Update on public.test_update_10 t\n -> Nested Loop\n Output: t.a, foo.b, t.c, t.d, t.e, t.f, t.g, t.h, t.i, t.j,\nt.ctid, foo.ctid\n -> Index Scan using foo_b_idx on public.foo\n Output: foo.b, foo.ctid, foo.a\n Index Cond: (foo.b = 1)\n -> Index Scan using test_update_10_a_idx on public.test_update_10 t\n Output: t.a, t.c, t.d, t.e, t.f, t.g, t.h, t.i, t.j, t.ctid\n Index Cond: (t.a = foo.a)\n(9 rows)\n\nPatched:\n\nexplain (costs off, verbose) update test_update_10 t set b = foo.b\nfrom foo where t.a = foo.a and foo.b = 1;\n QUERY PLAN\n------------------------------------------------------------------------------\n Update on public.test_update_10 t\n -> Nested Loop\n Output: foo.b, t.ctid, foo.ctid\n -> Index Scan using foo_b_idx on public.foo\n Output: foo.b, foo.ctid, foo.a\n Index Cond: (foo.b = 1)\n -> Index Scan using test_update_10_a_idx on public.test_update_10 t\n Output: t.ctid, t.a\n Index Cond: (t.a = foo.a)\n(9 rows)\n\nAnd here are the TPS numbers for that query with 10, 20, 40 columns\ntable cases. 
Note that the more columns the target table has, the\nbigger the target list to compute is with HEAD.\n\n10 columns:\n\nHEAD:\n\ntps = 7594.881268 (excluding connections establishing)\ntps = 7660.451217 (excluding connections establishing)\ntps = 7598.899951 (excluding connections establishing)\ntps = 7413.397046 (excluding connections establishing)\ntps = 7484.978635 (excluding connections establishing)\n\nmedian: 7594.88\n\nPatched:\n\ntps = 7402.409104 (excluding connections establishing)\ntps = 7532.776214 (excluding connections establishing)\ntps = 7549.397016 (excluding connections establishing)\ntps = 7512.321466 (excluding connections establishing)\ntps = 7448.255418 (excluding connections establishing)\n\nmedian: 7512.32\n\n20 columns:\n\nHEAD:\n\ntps = 6842.674366 (excluding connections establishing)\ntps = 7151.724481 (excluding connections establishing)\ntps = 7093.727976 (excluding connections establishing)\ntps = 7072.273547 (excluding connections establishing)\ntps = 7040.350004 (excluding connections establishing)\n\nmedian: 7093.72\n\nPatched:\n\ntps = 7362.941398 (excluding connections establishing)\ntps = 7106.826433 (excluding connections establishing)\ntps = 7353.507317 (excluding connections establishing)\ntps = 7361.944770 (excluding connections establishing)\ntps = 7072.027684 (excluding connections establishing)\n\nmedian: 7353.50\n\n40 columns:\n\nHEAD:\n\ntps = 6396.845818 (excluding connections establishing)\ntps = 6383.105593 (excluding connections establishing)\ntps = 6370.143763 (excluding connections establishing)\ntps = 6370.455213 (excluding connections establishing)\ntps = 6380.993666 (excluding connections establishing)\n\nmedian: 6380.99\n\nPatched:\n\ntps = 7091.581813 (excluding connections establishing)\ntps = 7036.805326 (excluding connections establishing)\ntps = 7019.120007 (excluding connections establishing)\ntps = 7025.704379 (excluding connections establishing)\ntps = 6848.846667 (excluding connections 
establishing)\n\nmedian: 7025.70\n\nIt seems clear that the saving on the target list computation overhead\nthat we get from the patch is hard to ignore in this case.\n\nI've attached updated patches, because as Michael pointed out, the\nprevious version no longer applies.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 4 Oct 2020 11:44:03 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Sun, Oct 4, 2020 at 11:44 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Sep 11, 2020 at 7:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Here are the commit messages of the attached patches:\n> >\n> > [PATCH v3 1/3] Overhaul how updates compute a new tuple\n>\n> I tried to assess the performance impact of this rejiggering of how\n> updates are performed. As to why one may think there may be a\n> negative impact, consider that ExecModifyTable() now has to perform an\n> extra fetch of the tuple being updated for filling in the unchanged\n> values of the update's NEW tuple, because the plan itself will only\n> produce the values of changed columns.\n>\n...\n> It seems clear that the saving on the target list computation overhead\n> that we get from the patch is hard to ignore in this case.\n>\n> I've attached updated patches, because as Michael pointed out, the\n> previous version no longer applies.\n\nRebased over the recent executor result relation related commits.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 29 Oct 2020 22:03:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On 29/10/2020 15:03, Amit Langote wrote:\n> On Sun, Oct 4, 2020 at 11:44 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Fri, Sep 11, 2020 at 7:20 PM 
Amit Langote <amitlangote09@gmail.com> wrote:\n>>> Here are the commit messages of the attached patches:\n>>>\n>>> [PATCH v3 1/3] Overhaul how updates compute a new tuple\n>>\n>> I tried to assess the performance impact of this rejiggering of how\n>> updates are performed. As to why one may think there may be a\n>> negative impact, consider that ExecModifyTable() now has to perform an\n>> extra fetch of the tuple being updated for filling in the unchanged\n>> values of the update's NEW tuple, because the plan itself will only\n>> produce the values of changed columns.\n>>\n> ...\n>> It seems clear that the saving on the target list computation overhead\n>> that we get from the patch is hard to ignore in this case.\n>>\n>> I've attached updated patches, because as Michael pointed out, the\n>> previous version no longer applies.\n> \n> Rebased over the recent executor result relation related commits.\n\nI also did some quick performance testing with a simple update designed \nas a worst-case scenario:\n\ncreate unlogged table tab (a int4, b int4);\ninsert into tab select g, g from generate_series(1, 10000000) g;\n\n\\timing on\nvacuum tab; update tab set b = b, a = a;\n\nWithout the patch, the update takes about 7.3 s on my laptop, and about \n8.3 s with the patch.\n\nIn this case, the patch fetches the old tuple, but it wouldn't really \nneed to, because all the columns are updated. Could we optimize that \nspecial case?\n\nIn principle, it would sometimes also make sense to add the old columns \nto the targetlist like we used to, to avoid the fetch. But estimating \nwhen that's cheaper would be complicated.\n\nDespite that, I like this new approach a lot. 
It's certainly much nicer \nthan inheritance_planner().\n\n- Heikki\n\n\n", "msg_date": "Fri, 30 Oct 2020 22:35:48 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> I also did some quick performance testing with a simple update designed \n> as a worst-case scenario:\n\n> vacuum tab; update tab set b = b, a = a;\n\n> In this case, the patch fetches the old tuple, but it wouldn't really \n> need to, because all the columns are updated. Could we optimize that \n> special case?\n\nI'm not following. We need to read the old values of a and b for\nthe update source expressions, no?\n\n(One could imagine realizing that this is a no-op update, but that\nseems quite distinct from the problem at hand, and probably not\nworth the cycles.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Oct 2020 17:10:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On 30/10/2020 23:10, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> I also did some quick performance testing with a simple update designed\n>> as a worst-case scenario:\n> \n>> vacuum tab; update tab set b = b, a = a;\n> \n>> In this case, the patch fetches the old tuple, but it wouldn't really\n>> need to, because all the columns are updated. Could we optimize that\n>> special case?\n> \n> I'm not following. We need to read the old values of a and b for\n> the update source expressions, no?\n> \n> (One could imagine realizing that this is a no-op update, but that\n> seems quite distinct from the problem at hand, and probably not\n> worth the cycles.)\n\nAh, no, that's not what I meant. 
You do need to read the old values to \ncalculate the new ones, but if you update all the columns or if you \nhappened to read all the old values as part of the scan, then you don't \nneed to fetch the old tuple in the ModifyTable node.\n\nLet's try better example. Currently with the patch:\n\npostgres=# explain verbose update tab set a = 1;\n QUERY PLAN \n\n---------------------------------------------------------------------------------\n Update on public.tab (cost=0.00..269603.27 rows=0 width=0)\n -> Seq Scan on public.tab (cost=0.00..269603.27 rows=10028327 \nwidth=10)\n Output: 1, ctid\n\nThe Modify Table node will fetch the old tuple to get the value for 'b', \nwhich is unchanged. But if you do:\n\npostgres=# explain verbose update tab set a = 1, b = 2;\n QUERY PLAN \n\n---------------------------------------------------------------------------------\n Update on public.tab (cost=0.00..269603.27 rows=0 width=0)\n -> Seq Scan on public.tab (cost=0.00..269603.27 rows=10028327 \nwidth=14)\n Output: 1, 2, ctid\n\nThe Modify Table will still fetch the old tuple, but in this case, it's \nnot really necessary, because both columns are overwritten.\n\n- Heikki\n\n\n", "msg_date": "Fri, 30 Oct 2020 23:55:53 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> .... But if you do:\n\n> postgres=# explain verbose update tab set a = 1, b = 2;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------\n> Update on public.tab (cost=0.00..269603.27 rows=0 width=0)\n> -> Seq Scan on public.tab (cost=0.00..269603.27 rows=10028327 \n> width=14)\n> Output: 1, 2, ctid\n\n> The Modify Table will still fetch the old tuple, but in this case, it's \n> not really necessary, because both columns are overwritten.\n\nAh, that I believe. 
Not sure it's a common enough case to spend cycles\nlooking for, though.\n\nIn any case, we still have to access the old tuple, don't we?\nTo lock it and update its t_ctid, whether or not we have use for\nits user columns. Maybe there's some gain from not having to\ndeconstruct the tuple, but it doesn't seem like it'd be much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Oct 2020 18:12:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On 31/10/2020 00:12, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> .... But if you do:\n> \n>> postgres=# explain verbose update tab set a = 1, b = 2;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------\n>> Update on public.tab (cost=0.00..269603.27 rows=0 width=0)\n>> -> Seq Scan on public.tab (cost=0.00..269603.27 rows=10028327\n>> width=14)\n>> Output: 1, 2, ctid\n> \n>> The Modify Table will still fetch the old tuple, but in this case, it's\n>> not really necessary, because both columns are overwritten.\n> \n> Ah, that I believe. Not sure it's a common enough case to spend cycles\n> looking for, though.\n> \n> In any case, we still have to access the old tuple, don't we?\n> To lock it and update its t_ctid, whether or not we have use for\n> its user columns. Maybe there's some gain from not having to\n> deconstruct the tuple, but it doesn't seem like it'd be much.\n\nYeah, you need to access the old tuple to update its t_ctid, but \naccessing it twice is still more expensive than accessing it once. Maybe \nyou could optimize it somewhat by keeping the buffer pinned or \nsomething. 
Or push the responsibility down to the table AM, passing the \nAM only the modified columns, and let the AM figure out how to deal with \nthe columns that were not modified, hoping that it can do something smart.\n\nIt's indeed not a big deal in usual cases. The test case I constructed \nwas deliberately bad, and the slowdown was only about 10%. I'm OK with \nthat, but if there's an easy way to avoid it, we should. (Seems like \nthere isn't.)\n\n- Heikki\n\n\n", "msg_date": "Sat, 31 Oct 2020 00:26:32 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Sat, Oct 31, 2020 at 7:26 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 31/10/2020 00:12, Tom Lane wrote:\n> > Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> >> .... But if you do:\n> >\n> >> postgres=# explain verbose update tab set a = 1, b = 2;\n> >> QUERY PLAN\n> >> ---------------------------------------------------------------------------------\n> >> Update on public.tab (cost=0.00..269603.27 rows=0 width=0)\n> >> -> Seq Scan on public.tab (cost=0.00..269603.27 rows=10028327\n> >> width=14)\n> >> Output: 1, 2, ctid\n> >\n> >> The Modify Table will still fetch the old tuple, but in this case, it's\n> >> not really necessary, because both columns are overwritten.\n> >\n> > Ah, that I believe. Not sure it's a common enough case to spend cycles\n> > looking for, though.\n> >\n> > In any case, we still have to access the old tuple, don't we?\n> > To lock it and update its t_ctid, whether or not we have use for\n> > its user columns. 
Maybe there's some gain from not having to\n> > deconstruct the tuple, but it doesn't seem like it'd be much.\n\nWith the patch, the old tuple fetched by the ModifyTable node will not\nbe deconstructed in this case, because all the values needed to form\nthe new tuple will be obtained from the plan's output tuple, so there\nis no need to read the user columns from the old tuple. Given that,\nit indeed sounds a bit wasteful to have read the tuple as Heikki\npoints out, but again, that's a rare case.\n\n> Yeah, you need to access the old tuple to update its t_ctid, but\n> accessing it twice is still more expensive than accessing it once. Maybe\n> you could optimize it somewhat by keeping the buffer pinned or\n> something.\n\nThe buffer containing the old tuple is already pinned first when\nExecModifyTable() fetches the tuple to form the new tuple, and then\nwhen, in this example, heap_update() fetches it to update the old\ntuple contents.\n\n> Or push the responsibility down to the table AM, passing the\n> AM only the modified columns, and let the AM figure out how to deal with\n> the columns that were not modified, hoping that it can do something smart.\n\nThat sounds interesting, but maybe a sizable project on its own?\n\nThanks a lot for taking a look at this, BTW.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Nov 2020 16:40:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On 29/10/2020 15:03, Amit Langote wrote:\n> Rebased over the recent executor result relation related commits.\n\nModifyTablePath didn't get the memo that a ModifyTable can only have one \nsubpath after these patches. 
Attached patch, on top of your v5 patches, \ncleans that up.\n\n- Heikki", "msg_date": "Wed, 11 Nov 2020 14:10:56 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, Nov 11, 2020 at 9:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 29/10/2020 15:03, Amit Langote wrote:\n> > Rebased over the recent executor result relation related commits.\n>\n> ModifyTablePath didn't get the memo that a ModifyTable can only have one\n> subpath after these patches. Attached patch, on top of your v5 patches,\n> cleans that up.\n\nAh, thought I'd taken care of that, thanks. Attached v6.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 13 Nov 2020 18:52:34 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Fri, Nov 13, 2020 at 6:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Nov 11, 2020 at 9:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > On 29/10/2020 15:03, Amit Langote wrote:\n> > > Rebased over the recent executor result relation related commits.\n> >\n> > ModifyTablePath didn't get the memo that a ModifyTable can only have one\n> > subpath after these patches. Attached patch, on top of your v5 patches,\n> > cleans that up.\n>\n> Ah, thought I'd taken care of that, thanks. Attached v6.\n\nThis got slightly broken due to the recent batch insert related\nchanges, so here is the rebased version. 
I also made a few cosmetic\nchanges.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 26 Jan 2021 20:54:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Fri, Oct 30, 2020 at 6:26 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Yeah, you need to access the old tuple to update its t_ctid, but\n> accessing it twice is still more expensive than accessing it once. Maybe\n> you could optimize it somewhat by keeping the buffer pinned or\n> something. Or push the responsibility down to the table AM, passing the\n> AM only the modified columns, and let the AM figure out how to deal with\n> the columns that were not modified, hoping that it can do something smart.\n\nJust as a point of possible interest, back when I was working on\nzheap, I sort of wanted to take this in the opposite direction. In\neffect, a zheap tuple has system columns that don't exist for a heap\ntuple, and you can't do an update or delete without knowing what the\nvalues for those columns are, so zheap had to just refetch the tuple,\nbut that sucked in comparisons with the existing heap, which didn't\nhave to do the refetch. At the time, I thought maybe the right idea\nwould be to extend things so that a table AM could specify an\narbitrary set of system columns that needed to be bubbled up to the\npoint where the update or delete happens, but that seemed really\ncomplicated to implement and I never tried. Here it seems like we're\nthinking of going the other way, and just always doing the refetch.\nThat is of course fine for zheap comparative benchmarks: instead of\nmaking zheap faster, we just make the heap slower!\n\nWell, sort of. I didn't think about the benefits of the refetch\napproach when the tuples are wide. That does cast a somewhat different\nlight on things. 
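To put rough numbers on that intuition, here is a back-of-the-envelope model (Python, with completely made-up unit costs — an illustration, not a measurement):

```python
# Toy cost model, invented numbers: per-row "datum copy" cost of carrying the
# whole old tuple up through the plan versus re-fetching it by TID at the top.

def cost_bubble_up(n_columns, plan_depth):
    # every column is copied into each intermediate slot on the way up
    return n_columns * plan_depth

def cost_refetch(n_columns, refetch_overhead=10):
    # one extra tuple access at the top, then one copy per column
    return refetch_overhead + n_columns

for n_columns in (2, 50):
    bubble = cost_bubble_up(n_columns, plan_depth=4)
    refetch = cost_refetch(n_columns)
    winner = "refetch" if refetch < bubble else "bubbling up"
    print(f"{n_columns} columns: {winner} wins")
```

With these (made-up) constants, bubbling the old row up wins for narrow tuples and the refetch wins once the tuple is wide — the trade-off being described here.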
I suppose we could have both methods and choose the\none that seems likely to be faster in particular cases, but that seems\nlike way too much machinery. Maybe there's some way to further\noptimize accessing the same tuple multiple times in rapid succession\nto claw back some of the lost performance in the slow cases, but I\ndon't have a specific idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jan 2021 14:41:58 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Tue, Jan 26, 2021 at 8:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Nov 13, 2020 at 6:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Nov 11, 2020 at 9:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > > On 29/10/2020 15:03, Amit Langote wrote:\n> > > > Rebased over the recent executor result relation related commits.\n> > >\n> > > ModifyTablePath didn't get the memo that a ModifyTable can only have one\n> > > subpath after these patches. Attached patch, on top of your v5 patches,\n> > > cleans that up.\n> >\n> > Ah, thought I'd taken care of that, thanks. Attached v6.\n>\n> This got slightly broken due to the recent batch insert related\n> changes, so here is the rebased version. 
I also made a few cosmetic\n> changes.\n\nBroken again, so rebased.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Feb 2021 15:22:29 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, Jan 27, 2021 at 4:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Oct 30, 2020 at 6:26 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > Yeah, you need to access the old tuple to update its t_ctid, but\n> > accessing it twice is still more expensive than accessing it once. Maybe\n> > you could optimize it somewhat by keeping the buffer pinned or\n> > something. Or push the responsibility down to the table AM, passing the\n> > AM only the modified columns, and let the AM figure out how to deal with\n> > the columns that were not modified, hoping that it can do something smart.\n>\n> Just as a point of possible interest, back when I was working on\n> zheap, I sort of wanted to take this in the opposite direction. 
In\n> effect, a zheap tuple has system columns that don't exist for a heap\n> tuple, and you can't do an update or delete without knowing what the\n> values for those columns are, so zheap had to just refetch the tuple,\n> but that sucked in comparisons with the existing heap, which didn't\n> have to do the refetch.\n\nSo would zheap refetch a tuple using the \"ctid\" column in the plan's\noutput tuple and then use some other columns from the fetched tuple to\nactually do the update?\n\n> At the time, I thought maybe the right idea\n> would be to extend things so that a table AM could specify an\n> arbitrary set of system columns that needed to be bubbled up to the\n> point where the update or delete happens, but that seemed really\n> complicated to implement and I never tried.\n\nCurrently, FDWs can specify tuple-identifying system columns, which\nare added to the query's targetlist when rewriteTargetListUD() calls\nthe AddForeignUpdateTargets() API.\n\nIn rewriteTargetListUD(), one can see that the planner assumes that\nall local tables, irrespective of their AM, use a \"ctid\" column to\nidentify their tuples:\n\n if (target_relation->rd_rel->relkind == RELKIND_RELATION ||\n target_relation->rd_rel->relkind == RELKIND_MATVIEW ||\n target_relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n {\n /*\n * Emit CTID so that executor can find the row to update or delete.\n */\n var = makeVar(parsetree->resultRelation,\n SelfItemPointerAttributeNumber,\n TIDOID,\n -1,\n InvalidOid,\n 0);\n\n attrname = \"ctid\";\n }\n else if (target_relation->rd_rel->relkind == RELKIND_FOREIGN_TABLE)\n {\n /*\n * Let the foreign table's FDW add whatever junk TLEs it wants.\n */\n FdwRoutine *fdwroutine;\n\n fdwroutine = GetFdwRoutineForRelation(target_relation, false);\n\n if (fdwroutine->AddForeignUpdateTargets != NULL)\n fdwroutine->AddForeignUpdateTargets(parsetree, target_rte,\n target_relation);\n\nMaybe the first block could likewise ask the table AM if it prefers to\nadd a 
custom set of system columns or just add \"ctid\" otherwise?\n\n> Here it seems like we're\n> thinking of going the other way, and just always doing the refetch.\n\nTo be clear, the new refetch in ExecModifyTable() is to fill in the\nunchanged columns in the new tuple. If we rejigger the\ntable_tuple_update() API to receive a partial tuple (essentially\nwhat's in 'planSlot' passed to ExecUpdate) as opposed to the full\ntuple, we wouldn't need the refetch.\n\nWe'd need to teach a few other executor routines, such as\nExecWithCheckOptions(), ExecConstraints(), etc. to live with a partial\ntuple, but maybe that's doable with some effort. We could even\noptimize away evaluating any constraints if none of the constrained\ncolumns are changed.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 18:32:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Thu, Feb 4, 2021 at 4:33 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> So would zheap refetch a tuple using the \"ctid\" column in the plan's\n> output tuple and then use some other columns from the fetched tuple to\n> actually do the update?\n\nYes.\n\n> To be clear, the new refetch in ExecModifyTable() is to fill in the\n> unchanged columns in the new tuple. If we rejigger the\n> table_tuple_update() API to receive a partial tuple (essentially\n> what's in 'planSlot' passed to ExecUpdate) as opposed to the full\n> tuple, we wouldn't need the refetch.\n\nI don't think we should assume that every AM needs the unmodified\ncolumns. Imagine a table AM that's a columnar store. Imagine that each\ncolumn is stored completely separately, so you have to look up the TID\nonce per column and then stick in the new values. Well, clearly you\nwant to skip this completely for columns that don't need to be\nmodified. 
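A toy model of such a per-column store (illustrative Python, nothing like a real table AM) makes the lookup asymmetry concrete:

```python
# Toy column store, illustration only: each column is a separate TID-keyed
# map, so an update handed just the modified columns touches only those maps;
# handing it the full tuple would force one lookup per column.

class ToyColumnStore:
    def __init__(self, column_names):
        self.columns = {name: {} for name in column_names}
        self.lookups = 0

    def insert(self, tid, row):
        for name, value in row.items():
            self.columns[name][tid] = value

    def update(self, tid, changed):
        for name, value in changed.items():  # only the modified columns
            self.lookups += 1
            self.columns[name][tid] = value

store = ToyColumnStore(["a", "b", "c", "d", "e"])
store.insert(1, {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5})
store.update(1, {"b": 20})
print(store.lookups)  # 1 -- one column map touched, not five
```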
If someone gives you all the columns it actually sucks,\nbecause now you have to look them all up again just to figure out\nwhich ones you need to change, whereas if they gave you only the\nunmodified columns you could just do nothing for those and save a\nbunch of work.\n\nzheap, though, is always going to need to take another look at the\ntuple to do the update, unless you can pass up some values through\nhidden columns. I'm not exactly sure how expensive that really is,\nthough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 08:40:52 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Thu, Feb 4, 2021 at 10:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Feb 4, 2021 at 4:33 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > To be clear, the new refetch in ExecModifyTable() is to fill in the\n> > unchanged columns in the new tuple. If we rejigger the\n> > table_tuple_update() API to receive a partial tuple (essentially\n> > what's in 'planSlot' passed to ExecUpdate) as opposed to the full\n> > tuple, we wouldn't need the refetch.\n>\n> I don't think we should assume that every AM needs the unmodified\n> columns. Imagine a table AM that's a columnar store. Imagine that each\n> column is stored completely separately, so you have to look up the TID\n> once per column and then stick in the new values. Well, clearly you\n> want to skip this completely for columns that don't need to be\n> modified. If someone gives you all the columns it actually sucks,\n> because now you have to look them all up again just to figure out\n> which ones you need to change, whereas if they gave you only the\n> unmodified columns you could just do nothing for those and save a\n> bunch of work.\n\nRight, that's the idea in case I wasn't clear. 
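In toy form (hypothetical names, with dicts standing in for slots and the heap; the real API is C), that rejiggered contract might look like this, with the AM completing the row during the access it performs anyway:

```python
# Hypothetical sketch of a table_tuple_update()-style call that receives only
# the changed columns; the AM fills in the rest from the row version it must
# visit anyway to lock and outdate it.

toy_heap = {
    (0, 1): {"a": 1, "b": 2, "c": 3},  # TID -> current row version
}

def toy_tuple_update(tid, changed_columns):
    old_row = toy_heap[tid]                        # access needed anyway
    toy_heap[tid] = {**old_row, **changed_columns}

toy_tuple_update((0, 1), {"a": 10})
print(toy_heap[(0, 1)])  # {'a': 10, 'b': 2, 'c': 3}
```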
Currently, a slot\ncontaining the full tuple is passed to the table AM, with or without\nthe patch. The API:\n\nstatic inline TM_Result\ntable_tuple_update(Relation rel, ItemPointer otid, TupleTableSlot *slot, ...\n\n describes its 'slot' parameter as:\n\n * slot - newly constructed tuple data to store\n\nWe could, possibly in a follow-on patch, adjust the\ntable_tuple_update() API to only accept the changed values through the\nslot.\n\n> zheap, though, is always going to need to take another look at the\n> tuple to do the update, unless you can pass up some values through\n> hidden columns. I'm not exactly sure how expensive that really is,\n> though.\n\nI guess it would depend on how many of those hidden columns there need\nto be (in addition to the existing \"ctid\" hidden column) and how many\nlevels of the plan tree they would need to climb through when bubbling\nup.\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 12:14:27 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Thu, Feb 4, 2021 at 10:14 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I guess it would depend on how many of those hidden columns there need\n> to be (in addition to the existing \"ctid\" hidden column) and how many\n> levels of the plan tree they would need to climb through when bubbling\n> up.\n\nMy memory is a bit fuzzy because I haven't looked at this in a while,\nbut I feel like there were basically two: a 64-bit UndoRecPtr and an\ninteger slot number. I could be misremembering, though.\n\nIt's a bit annoying that we percolate things up the tree the way we do\nat all. I realize this is far afield from the topic of this thread.\nBut suppose that I join 5 tables and select a subset of the table\ncolumns in the output. Suppose WLOG the join order is A-B-C-D-E. 
Well,\nwe're going to pull the columns that are needed from A and B and put\nthem into the slot representing the result of the A-B join. Then we're\ngoing to take some columns from that slot and some columns from C and\nput them into the slot representing the result of the A-B-C join. And\nso on until we get to the top. But the slots for the A-B, A-B-C, and\nA-B-C-D joins don't seem to really be needed. At any point in time,\nthe value for some column A.x should be the same in all of those\nintermediate slots as it is in the current tuple for the baserel scan\nof A. I *think* the only time it's different is when we've advanced\nthe scan for A but haven't gotten around to advancing the joins yet.\nBut that just underscores the point: if we didn't have all of these\nintermediate slots around we wouldn't have to keep advancing them all\nseparately. Instead we could have the slot at the top, representing\nthe final join, pull directly from the baserel slots and skip all the\njunk in the middle.\n\nMaybe there's a real reason that won't work, but the only reason I\nknow about why it wouldn't work is that we don't have the bookkeeping\nto make it work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 11:35:36 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It's a bit annoying that we percolate things up the tree the way we do\n> at all. I realize this is far afield from the topic of this thread.\n> But suppose that I join 5 tables and select a subset of the table\n> columns in the output. Suppose WLOG the join order is A-B-C-D-E. Well,\n> we're going to pull the columns that are needed from A and B and put\n> them into the slot representing the result of the A-B join. 
Then we're\n> going to take some columns from that slot and some columns from C and\n> put them into the slot representing the result of the A-B-C join. And\n> so on until we get to the top. But the slots for the A-B, A-B-C, and\n> A-B-C-D joins don't seem to really be needed.\n\nYou do realize that we're just copying Datums from one level to the\nnext? For pass-by-ref data, the Datums generally all point at the\nsame physical data in some disk buffer ... or if they don't, it's\nbecause the join method had a good reason to want to copy data.\n\nIf we didn't have the intermediate tuple slots, we'd have to have\nsome other scheme for identifying which data to examine in intermediate\njoin levels' quals. Maybe you can devise a scheme that has less overhead,\nbut it's not immediately obvious that any huge win would be available.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Feb 2021 12:06:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Fri, Feb 5, 2021 at 12:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> You do realize that we're just copying Datums from one level to the\n> next? For pass-by-ref data, the Datums generally all point at the\n> same physical data in some disk buffer ... or if they don't, it's\n> because the join method had a good reason to want to copy data.\n\nI am older and dumber than I used to be, but I'm amused at the idea\nthat I might be old enough and dumb enough not to understand this. To\nbe honest, given that we are just copying the datums, I find it kind\nof surprising that it causes us pain, but it clearly does. 
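A toy count of the per-row datum copies (invented column counts, Python only for the arithmetic) shows how the intermediate slots add up for the A-B-C-D-E example:

```python
# Illustration only: datum copies made while filling each join slot in a
# left-deep join, assuming 4 columns are used from each of 5 relations.

def join_slot_copies(cols_per_relation):
    copies = []
    carried = cols_per_relation[0]
    for cols in cols_per_relation[1:]:  # one slot per join level
        carried += cols
        copies.append(carried)
    return copies

levels = join_slot_copies([4, 4, 4, 4, 4])  # A-B, A-B-C, A-B-C-D, A-B-C-D-E
print(levels)       # [8, 12, 16, 20]
print(sum(levels))  # 56 copies, versus 20 if only the top slot were filled
```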
If you\nthink it's not an issue, then what of the email from Amit Langote to\nwhich I was responding, or his earlier message at\nhttp://postgr.es/m/CA+HiwqHUkwcy84uFfUA3qVsyU2pgTwxVkJx1uwPQFSHfPz4rsA@mail.gmail.com\nwhich contains benchmark results?\n\nAs to why it causes us pain, I don't have a full picture of that.\nTarget list construction is one problem: we build all these target\nlists for intermediate nodes during planning and they're long enough\n-- if the user has a bunch of columns -- and planning is cheap enough\nfor some queries that the sheer time to construct the list shows up\nnoticeably in profiles. I've seen that be a problem even for query\nplanning problems that involve just one table: a test that takes the\n\"physical tlist\" path can be slower just because the time to construct\nthe longer tlist is significant and the savings from postponing tuple\ndeforming isn't. It seems impossible to believe that it can't also\nhurt us on join queries that actually make use of a lot of columns, so\nthat they've all got to be included in tlists at every level of the\njoin tree. I believe that the execution-time overhead isn't entirely\ntrivial either. Sure, copying an 8-byte quantity is pretty cheap, but\nif you have a lot of columns and you copy them a lot of times for each\nof a lot of tuples, it adds up. Queries that do enough \"real work\"\ne.g. calling expensive functions, forcing disk I/O, etc. will make the\neffect of a bunch of x[i] = y[j] stuff unnoticeable, but there are\nplenty of queries that don't really do anything expensive -- they're\ndoing simple joins of data that's already in memory. Even there,\naccessing buffers figures to be more expensive because it's shared\nmemory with locking and cache line contention; but I don't think that\nmeans we can completely ignore the performance impact of backend-local\ncomputation. 
b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755 is a good\nexample of getting a significant gain by refactoring to reduce\nseemingly trivial overheads -- in that case, AIUI, the benefits are\naround fewer function calls and better CPU branch prediction.\n\n> If we didn't have the intermediate tuple slots, we'd have to have\n> some other scheme for identifying which data to examine in intermediate\n> join levels' quals. Maybe you can devise a scheme that has less overhead,\n> but it's not immediately obvious that any huge win would be available.\n\nI agree. I'm inclined to suspect that some benefit is possible, but\nthat might be wrong and it sure doesn't look easy.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 16:05:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> As to why it causes us pain, I don't have a full picture of that.\n> Target list construction is one problem: we build all these target\n> lists for intermediate nodes during planning and they're long enough\n> -- if the user has a bunch of columns -- and planning is cheap enough\n> for some queries that the sheer time to construct the list shows up\n> noticeably in profiles.\n\nWell, the tlist data structure is just about completely separate from\nthe TupleTableSlot mechanisms. 
I'm more prepared to believe that\nwe could improve on the former, though I don't have any immediate\nideas about how.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Feb 2021 16:53:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Thu, Feb 4, 2021 at 3:22 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Jan 26, 2021 at 8:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, Nov 13, 2020 at 6:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Wed, Nov 11, 2020 at 9:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > > > On 29/10/2020 15:03, Amit Langote wrote:\n> > > > > Rebased over the recent executor result relation related commits.\n> > > >\n> > > > ModifyTablePath didn't get the memo that a ModifyTable can only have one\n> > > > subpath after these patches. Attached patch, on top of your v5 patches,\n> > > > cleans that up.\n> > >\n> > > Ah, thought I'd taken care of that, thanks. Attached v6.\n> >\n> > This got slightly broken due to the recent batch insert related\n> > changes, so here is the rebased version. I also made a few cosmetic\n> > changes.\n>\n> Broken again, so rebased.\n\nRebased.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 16 Feb 2021 22:42:57 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Just noticed that a test added by the recent 927f453a941 fails due to\n0002. We no longer allow moving a row into a postgres_fdw partition\nif it is among the UPDATE's result relations, whereas previously we\nwould if the UPDATE on that partition is already finished.\n\nTo fix, I've adjusted the test case. 
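The behavior change being tested can be put as a rough sketch (hypothetical function, not the actual executor check):

```python
# Rough sketch of the new restriction (hypothetical names): a row can no
# longer be moved into a postgres_fdw partition that is itself one of the
# UPDATE's result relations.

def can_move_row_into(target_partition, update_result_relations,
                      is_foreign_partition):
    if is_foreign_partition and target_partition in update_result_relations:
        return False
    return True

print(can_move_row_into("remote_p2", {"remote_p1", "remote_p2"}, True))  # False
print(can_move_row_into("remote_p2", {"remote_p1"}, True))               # True
```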
Attached updated version.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 3 Mar 2021 23:39:01 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, Mar 3, 2021 at 9:39 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Just noticed that a test added by the recent 927f453a941 fails due to\n> 0002. We no longer allow moving a row into a postgres_fdw partition\n> if it is among the UPDATE's result relations, whereas previously we\n> would if the UPDATE on that partition is already finished.\n>\n> To fix, I've adjusted the test case. Attached updated version.\n\nI spent some time studying this patch this morning. As far as I can\nsee, 0001 is a relatively faithful implementation of the design Tom\nproposed back in early 2019. I think it would be nice to either get\nthis committed or else decide that we don't want it and what we're\ngoing to try to do instead, because we can't make UPDATEs and DELETEs\nstop sucking with partitioned tables until we settle on some solution\nto the problem that is inheritance_planner(), and that strikes me as\nan *extremely* important problem. Lots of people are trying to use\npartitioning in PostgreSQL, and they don't like finding out that, in\nmany cases, it makes things slower rather than faster. Neither this\nnor any other patch is going to solve that problem in general, because\nthere are *lots* of things that haven't been well-optimized for\npartitioning yet. But, this is a pretty important case that we should\nreally try to do something about.\n\nSo, that said, here are some random comments:\n\n- I think it would be interesting to repeat your earlier benchmarks\nusing -Mprepared. 
One question I have is whether we're saving overhead\nhere during planning at the price of execution-time overhead, or\nwhether we're saving during both planning and execution.\n\n- Until I went back and found your earlier remarks on this thread, I\nwas confused as to why you were replacing a JunkFilter with a\nProjectionInfo. I think it would be good to try to add some more\nexplicit comments about that design choice, perhaps as header comments\nfor ExecGetUpdateNewTuple, or maybe there's a better place. I'm still\nnot sure why we need to do the same thing for the insert case, which\nseems to just be about removing junk columns. At least in the non-JIT\ncase, it seems to me that ExecJunkFilter() should be cheaper than\nExecProject(). Is it different enough to matter? Does the fact that we\ncan JIT the ExecProject() work make it actually faster? These are\nthings I don't know.\n\n- There's a comment which you didn't write but just moved which I find\nto be quite confusing. It says \"For UPDATE/DELETE, find the\nappropriate junk attr now. Typically, this will be a 'ctid' or\n'wholerow' attribute, but in the case of a foreign data wrapper it\nmight be a set of junk attributes sufficient to identify the remote\nrow.\" But, the block of code which follows caters only to the 'ctid'\nand 'wholerow' cases, not anything else. Perhaps that's explained by\nthe comment a bit further down which says \"When there is a row-level\ntrigger, there should be a wholerow attribute,\" but if the point is\nthat this code is only reached when there's a row-level trigger,\nthat's far from obvious. It *looks* like something we'd reach for ANY\ninsert or UPDATE case. 
Maybe it's not your job to do anything about\nthis confusion, but I thought it was worth highlighting.\n\n- The comment for filter_junk_tlist_entries(), needless to say, is of\nthe highest quality, but would it make any sense to think about having\nan option for expand_tlist() to omit the junk entries itself, to avoid\nextra work? I'm unclear whether we want that behavior in all UPDATE\ncases or only some of them, because preprocess_targetlist() has a call\nto expand_tlist() to set parse->onConflict->onConflictSet that does\nnot call filter_junk_tlist_entries() on the result. Does this patch\nneed to make any changes to the handling of ON CONFLICT .. UPDATE? It\nlooks to me like right now it doesn't, but I don't know whether that's\nan oversight or intentional.\n\n- The output changes in the main regression test suite are pretty easy\nto understand: we're just seeing columns that no longer need to get\nfed through the execution get dropped. The changes in the postgres_fdw\nresults are harder to understand. In general, it appears that what's\nhappening is that we're no longer outputting the non-updated columns\nindividually -- which makes sense -- but now we're outputting a\nwhole-row var that wasn't there before, e.g.:\n\n- Output: foreign_tbl.a, (foreign_tbl.b + 15), foreign_tbl.ctid\n+ Output: (foreign_tbl.b + 15), foreign_tbl.ctid, foreign_tbl.*\n\nSince this is postgres_fdw, we can re-fetch the row using CTID, so\nit's not clear to me why we need foreign_tbl.* when we didn't before.\nMaybe the comments need improvement.\n\n- Specifically, I think the comments in preptlist.c need some work.\nYou've edited the top-of-file comment, but I don't think it's really\naccurate. The first sentence says \"For INSERT and UPDATE, the\ntargetlist must contain an entry for each attribute of the target\nrelation in the correct order,\" but I don't think that's really true\nany more. It's certainly not what you see in the EXPLAIN output. 
The\nparagraph goes on to explain that UPDATE has a second target list, but\n(a) that seems to contradict the first sentence and (b) that second\ntarget list isn't what you see when you run EXPLAIN. Also, there's no\nmention of what happens for FDWs here, but it's evidently different,\nas per the previous review comment.\n\n- The comments atop fix_join_expr() should be updated. Maybe you can\njust adjust the wording for case #2.\n\nOK, that's all I've got based on a first read-through. Thanks for your\nwork on this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Mar 2021 12:46:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I spent some time studying this patch this morning. As far as I can\n> see, 0001 is a relatively faithful implementation of the design Tom\n> proposed back in early 2019. 
I think it would be nice to either get\n> this committed or else decide that we don't want it and what we're\n> going to try to do instead,\n\nYeah, it's on my to-do list for this CF, but I expect it's going to\ntake some concentrated study and other things keep intruding :-(.\n\nAll of your comments/questions seem reasonable; thanks for taking\na look.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Mar 2021 13:19:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "I wrote:\n> Yeah, it's on my to-do list for this CF, but I expect it's going to\n> take some concentrated study and other things keep intruding :-(.\n\nI've started to look at this seriously, and I wanted to make a note\nabout something that I think we should try to do, but there seems\nlittle hope of shoehorning it in for v14.\n\nThat \"something\" is that ideally, the ModifyTable node should pass\nonly the updated column values to the table AM or FDW, and let that\nlower-level code worry about reconstructing a full tuple by\nre-fetching the unmodified columns. When I first proposed this\nconcept, I'd imagined that maybe we could avoid the additional tuple\nread that this implementation requires by combining it with the\ntuple access that a heap UPDATE must do anyway to lock and outdate\nthe target tuple. Another example of a possible win is Andres'\ncomment upthread that a columnar-storage AM would really rather\ndeal only with the modified columns. Also, the patch as it stands\nasks FDWs to supply all columns in a whole-row junk var, which is\nsomething that might become unnecessary.\n\nHowever, there are big stumbling blocks in the way. ModifyTable\nis responsible for applying CHECK constraints, which may require\nlooking at the values of not-modified columns. 
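\n\nTo make the CHECK-constraint point concrete with a toy model (a\nhypothetical table and a Python sketch, not PostgreSQL code): with\nCHECK (a < b), an UPDATE that sets only \"a\" can be validated only after\nthe changed column has been merged back into the full old row, because\nthe constraint expression reads the untouched column \"b\":\n\n```python\n# Toy model (not PostgreSQL code): validating CHECK (a < b) for an UPDATE\n# that changes only column \"a\" requires the unmodified column \"b\" too.\n\ndef passes_check(row):\n    # The constraint expression references a column the UPDATE never set.\n    return row[\"a\"] < row[\"b\"]\n\nold_row = {\"a\": 1, \"b\": 10}       # hypothetical existing tuple\nchanged = {\"a\": 5}                # only the SET columns come from the plan\n\nnew_row = {**old_row, **changed}  # must merge before evaluating the check\nassert passes_check(new_row)\n```\n\nThat merge is exactly the \"reconstruct the full tuple\" step that keeps\nModifyTable from working on the changed columns alone.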
An even bigger\nproblem is that per-row triggers currently expect to be given\nthe whole proposed row (and to be able to change columns not\nalready marked for update). We could imagine redefining the\ntrigger APIs to reduce the overhead here, but that's certainly\nnot going to happen in time for v14.\n\nSo for now I think we have to stick with Amit's design of\nreconstructing the full updated tuple at the outset in\nModifyTable, and then proceeding pretty much as updates\nhave done in the past. But there's more we can do later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Mar 2021 15:22:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Thu, Mar 25, 2021 at 4:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Yeah, it's on my to-do list for this CF, but I expect it's going to\n> > take some concentrated study and other things keep intruding :-(.\n>\n> I've started to look at this seriously,\n\nThanks a lot.\n\n> and I wanted to make a note\n> about something that I think we should try to do, but there seems\n> little hope of shoehorning it in for v14.\n>\n> That \"something\" is that ideally, the ModifyTable node should pass\n> only the updated column values to the table AM or FDW, and let that\n> lower-level code worry about reconstructing a full tuple by\n> re-fetching the unmodified columns. When I first proposed this\n> concept, I'd imagined that maybe we could avoid the additional tuple\n> read that this implementation requires by combining it with the\n> tuple access that a heap UPDATE must do anyway to lock and outdate\n> the target tuple. Another example of a possible win is Andres'\n> comment upthread\n\n(Ah, I think you mean Heikki's.)\n\n> that a columnar-storage AM would really rather\n> deal only with the modified columns. 
Also, the patch as it stands\n> asks FDWs to supply all columns in a whole-row junk var, which is\n> something that might become unnecessary.\n\nThat would indeed be nice. I had considered taking on the project to\nrevise FDW local (non-direct) update APIs to deal with being passed\nonly the values of changed columns, but hit some problems when\nimplementing that in postgres_fdw that I don't quite remember the\ndetails of. As you say below, we can pick that up later.\n\n> However, there are big stumbling blocks in the way. ModifyTable\n> is responsible for applying CHECK constraints, which may require\n> looking at the values of not-modified columns. An even bigger\n> problem is that per-row triggers currently expect to be given\n> the whole proposed row (and to be able to change columns not\n> already marked for update). We could imagine redefining the\n> trigger APIs to reduce the overhead here, but that's certainly\n> not going to happen in time for v14.\n\nYeah, at least the trigger concerns look like they will take work we\nbetter not do in the 2 weeks left in the v14 cycle.\n\n> So for now I think we have to stick with Amit's design of\n> reconstructing the full updated tuple at the outset in\n> ModifyTable, and then proceeding pretty much as updates\n> have done in the past. But there's more we can do later.\n\nAgreed.\n\nI'm addressing Robert's comments and will post an updated patch by tomorrow.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Mar 2021 10:52:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> - Until I went back and found your earlier remarks on this thread, I\n> was confused as to why you were replacing a JunkFilter with a\n> ProjectionInfo. 
I think it would be good to try to add some more\n> explicit comments about that design choice, perhaps as header comments\n> for ExecGetUpdateNewTuple, or maybe there's a better place. I'm still\n> not sure why we need to do the same thing for the insert case, which\n> seems to just be about removing junk columns.\n\nI wondered about that too; this patch allegedly isn't touching anything\ninteresting about INSERT cases, so why should we modify that? However,\nwhen I tried to poke at that, I discovered that it seems to be dead code\nanyway. A look at coverage.postgresql.org will show you that no\nregression test reaches \"junk_filter_needed = true\" in\nExecInitModifyTable's inspection of INSERT tlists, and I've been unable to\ncreate such a case manually either. I think the reason is that the parser\ninitially builds all INSERT ... SELECT cases with the SELECT as an\nexplicit subquery, giving rise to a SubqueryScan node just below the\nModifyTable, which will project exactly the desired columns and no more.\nWe'll optimize away the SubqueryScan if it's a no-op projection, but not\nif it is getting rid of junk columns. There is room for more optimization\nhere: dropping the SubqueryScan in favor of making ModifyTable do the same\nprojection would win by removing one plan node's worth of overhead. But\nI don't think we need to panic about whether the projection is done with\nExecProject or a junk filter --- we'd be strictly ahead of the current\nsituation either way.\n\nGiven that background, I agree with Amit's choice to change this,\njust to reduce the difference between how INSERT and UPDATE cases work.\nFor now, there's no performance difference anyway, since neither the\nProjectionInfo nor the JunkFilter code can be reached. 
(Maybe a comment\nabout that would be useful.)\n\nBTW, in the version of the patch that I'm working on (not ready to\npost yet), I've thrown away everything that Amit did in setrefs.c\nand tlist.c, so I don't recommend he spend time improving the comments\nthere ;-). I did not much like the idea of building a full TargetList\nfor each partition; that's per-partition cycles and storage space that\nwe won't be able to reclaim with the 0002 patch. And we don't really\nneed it, because there are only going to be three cases at runtime:\npull a column from the subplan result tuple, pull a column from the\nold tuple, or insert a NULL for a dropped column. So I've replaced\nthe per-target-table tlists with integer lists of the attnums of the\nUPDATE's target columns in that table. These are compact and they\ndon't require any further processing in setrefs.c, and the executor\ncan easily build a projection expression from that data.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Mar 2021 14:07:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Fri, Mar 26, 2021 at 3:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > - Until I went back and found your earlier remarks on this thread, I\n> > was confused as to why you were replacing a JunkFilter with a\n> > ProjectionInfo. I think it would be good to try to add some more\n> > explicit comments about that design choice, perhaps as header comments\n> > for ExecGetUpdateNewTuple, or maybe there's a better place. I'm still\n> > not sure why we need to do the same thing for the insert case, which\n> > seems to just be about removing junk columns.\n>\n> I wondered about that too; this patch allegedly isn't touching anything\n> interesting about INSERT cases, so why should we modify that? 
However,\n> when I tried to poke at that, I discovered that it seems to be dead code\n> anyway. A look at coverage.postgresql.org will show you that no\n> regression test reaches \"junk_filter_needed = true\" in\n> ExecInitModifyTable's inspection of INSERT tlists, and I've been unable to\n> create such a case manually either.\n\nI noticed this too.\n\n> I think the reason is that the parser\n> initially builds all INSERT ... SELECT cases with the SELECT as an\n> explicit subquery, giving rise to a SubqueryScan node just below the\n> ModifyTable, which will project exactly the desired columns and no more.\n> We'll optimize away the SubqueryScan if it's a no-op projection, but not\n> if it is getting rid of junk columns. There is room for more optimization\n> here: dropping the SubqueryScan in favor of making ModifyTable do the same\n> projection would win by removing one plan node's worth of overhead.\n\nOh, so there could possibly be a case where ModifyTable would have to\ndo junk filtering for INSERTs, but IIUC only if the planner optimized\naway junk-filtering-SubqueryScan nodes too? I was thinking that\nperhaps INSERTs used to need junk filtering in the past but no longer\nand now it's just dead code.\n\n> But\n> I don't think we need to panic about whether the projection is done with\n> ExecProject or a junk filter --- we'd be strictly ahead of the current\n> situation either way.\n\nI would've liked to confirm that with a performance comparison, but no\ntest case exists to do so. 
:(\n\n> Given that background, I agree with Amit's choice to change this,\n> just to reduce the difference between how INSERT and UPDATE cases work.\n>\n> For now, there's no performance difference anyway, since neither the\n> ProjectionInfo nor the JunkFilter code can be reached.\n> (Maybe a comment about that would be useful.)\n\nI've added a comment in ExecInitModifyTable() around the block that\ninitializes new-tuple ProjectionInfo to say that INSERTs don't\nactually need to use it today.\n\n> BTW, in the version of the patch that I'm working on (not ready to\n> post yet), I've thrown away everything that Amit did in setrefs.c\n> and tlist.c, so I don't recommend he spend time improving the comments\n> there ;-).\n\nOh, I removed filter_junk_tlist_entries() in my updated version too\nprompted by Robert's comment, but haven't touched\nset_update_tlist_references().\n\n> I did not much like the idea of building a full TargetList\n> for each partition; that's per-partition cycles and storage space that\n> we won't be able to reclaim with the 0002 patch. And we don't really\n> need it, because there are only going to be three cases at runtime:\n> pull a column from the subplan result tuple, pull a column from the\n> old tuple, or insert a NULL for a dropped column. So I've replaced\n> the per-target-table tlists with integer lists of the attnums of the\n> UPDATE's target columns in that table. These are compact and they\n> don't require any further processing in setrefs.c, and the executor\n> can easily build a projection expression from that data.\n\nI remember that in the earliest unposted versions, I had made\nExecInitModifyTable() take up the burden of creating the targetlist\nthat the projection will compute, which is what your approach sounds\nlike. 
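\n\nAs a rough conceptual sketch of that three-way projection (plain Python\nwith invented names, purely to illustrate the idea, not the executor's\nactual implementation): given the old tuple, the compact list of\ntarget-column attnums, and the subplan's values for just those columns,\nthe full new tuple falls out of a single pass:\n\n```python\n# Conceptual sketch only (invented names, not PostgreSQL's executor code).\n# Build the full \"new\" tuple for an UPDATE from:\n#   - old_values:     the existing tuple's values, indexed by attnum - 1\n#   - dropped:        attnums of dropped columns (these become NULL)\n#   - target_attnums: the UPDATE's target columns, as an integer list\n#   - new_values:     the subplan's output values, parallel to target_attnums\n\ndef build_new_tuple(old_values, dropped, target_attnums, new_values):\n    out = []\n    for attno, old in enumerate(old_values, start=1):  # attnums are 1-based\n        if attno in dropped:\n            out.append(None)                           # dropped column -> NULL\n        elif attno in target_attnums:\n            out.append(new_values[target_attnums.index(attno)])  # from subplan\n        else:\n            out.append(old)                            # from old tuple\n    return out\n\n# Five-column table; column 3 is dropped; the UPDATE sets columns 2 and 5.\nassert build_new_tuple([10, 20, 30, 40, 50], {3}, [2, 5], [99, 77]) \\\n       == [10, 99, None, 40, 77]\n```\n\nThe attnum list is all the per-table state this needs, which is why it\nis so much cheaper to carry around than a full targetlist per partition.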
However, I had abandoned it due to the concern that it possibly\nhurt the prepared statements because we would build the targetlist on\nevery execution, whereas only once if the planner does it.\n\nI'm just about done addressing Robert's comments, so will post an\nupdate shortly.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Mar 2021 10:51:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Mar 26, 2021 at 3:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think the reason is that the parser\n>> initially builds all INSERT ... SELECT cases with the SELECT as an\n>> explicit subquery, giving rise to a SubqueryScan node just below the\n>> ModifyTable, which will project exactly the desired columns and no more.\n>> We'll optimize away the SubqueryScan if it's a no-op projection, but not\n>> if it is getting rid of junk columns. There is room for more optimization\n>> here: dropping the SubqueryScan in favor of making ModifyTable do the same\n>> projection would win by removing one plan node's worth of overhead.\n\n> Oh, so there could possibly be a case where ModifyTable would have to\n> do junk filtering for INSERTs, but IIUC only if the planner optimized\n> away junk-filtering-SubqueryScan nodes too? I was thinking that\n> perhaps INSERTs used to need junk filtering in the past but no longer\n> and now it's just dead code.\n\nI'm honestly not very sure about that. It's possible that there was\nsome state of the code in which we supported INSERT/SELECT but didn't\nend up putting a SubqueryScan node in there, but if so it was a long\nlong time ago. It looks like the SELECT-is-a-subquery parser logic\ndates to 05e3d0ee8 of 2000-10-05, which was a long time before\nModifyTable existed as such. 
I'm not interested enough to dig\nfurther than that.\n\nHowever, it's definitely true that we can now generate INSERT plans\nwhere there's a SubqueryScan node that's not doing anything but\nstripping junk columns, for instance\n\n\tINSERT INTO t SELECT x FROM t2 ORDER BY y;\n\nwhere the ORDER BY has to be done with an explicit sort. The\nsort will be on a resjunk \"y\" column.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Mar 2021 22:39:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, Mar 24, 2021 at 1:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Mar 3, 2021 at 9:39 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Just noticed that a test added by the recent 927f453a941 fails due to\n> > 0002. We no longer allow moving a row into a postgres_fdw partition\n> > if it is among the UPDATE's result relations, whereas previously we\n> > would if the UPDATE on that partition is already finished.\n> >\n> > To fix, I've adjusted the test case. Attached updated version.\n>\n> I spent some time studying this patch this morning.\n\nThanks a lot for your time on this.\n\n> As far as I can\n> see, 0001 is a relatively faithful implementation of the design Tom\n> proposed back in early 2019. I think it would be nice to either get\n> this committed or else decide that we don't want it and what we're\n> going to try to do instead, because we can't make UPDATEs and DELETEs\n> stop sucking with partitioned tables until we settle on some solution\n> to the problem that is inheritance_planner(), and that strikes me as\n> an *extremely* important problem. Lots of people are trying to use\n> partitioning in PostgreSQL, and they don't like finding out that, in\n> many cases, it makes things slower rather than faster. 
Neither this\n> nor any other patch is going to solve that problem in general, because\n> there are *lots* of things that haven't been well-optimized for\n> partitioning yet. But, this is a pretty important case that we should\n> really try to do something about.\n>\n> So, that said, here are some random comments:\n>\n> - I think it would be interesting to repeat your earlier benchmarks\n> using -Mprepared. One question I have is whether we're saving overhead\n> here during planning at the price of execution-time overhead, or\n> whether we're saving during both planning and execution.\n\nPlease see at the bottom of this reply.\n\n> - Until I went back and found your earlier remarks on this thread, I\n> was confused as to why you were replacing a JunkFilter with a\n> ProjectionInfo. I think it would be good to try to add some more\n> explicit comments about that design choice, perhaps as header comments\n> for ExecGetUpdateNewTuple, or maybe there's a better place.\n\nI think the comments around ri_projectNew that holds the\nProjectionInfo node explains this to some degree, especially the\ncomment in ExecInitModifyTable() that sets it. I don't particularly\nsee a need to go into detail why JunkFilter is not suitable for the\ntask if we're no longer using it at all in nodeModifyTable.c.\n\n> I'm still\n> not sure why we need to do the same thing for the insert case, which\n> seems to just be about removing junk columns.\n\nI think I was hesitant to have both a ri_junkFilter and ri_projectNew\ncatering for inserts and update/delete respectively.\n\n> At least in the non-JIT\n> case, it seems to me that ExecJunkFilter() should be cheaper than\n> ExecProject(). Is it different enough to matter? Does the fact that we\n> can JIT the ExecProject() work make it actually faster? 
These are\n> things I don't know.\n\nExecJunkFilter() indeed looks cheaper on a first look for simple junk\nfiltering, but as Tom also found out, there's actually no test case\ninvolving INSERT to do the actual performance comparison with.\n\n> - There's a comment which you didn't write but just moved which I find\n> to be quite confusing. It says \"For UPDATE/DELETE, find the\n> appropriate junk attr now. Typically, this will be a 'ctid' or\n> 'wholerow' attribute, but in the case of a foreign data wrapper it\n> might be a set of junk attributes sufficient to identify the remote\n> row.\" But, the block of code which follows caters only to the 'ctid'\n> and 'wholerow' cases, not anything else. Perhaps that's explained by\n> the comment a bit further down which says \"When there is a row-level\n> trigger, there should be a wholerow attribute,\" but if the point is\n> that this code is only reached when there's a row-level trigger,\n> that's far from obvious. It *looks* like something we'd reach for ANY\n> insert or UPDATE case. Maybe it's not your job to do anything about\n> this confusion, but I thought it was worth highlighting.\n\nI do remember being confused by that note regarding the junk\nattributes required by FDWs for their result relations when I first\nsaw it, but eventually found out that it's talking about the\ninformation about junk attributes that FDWs track in their *private*\ndata structures. For example, postgres_fdw uses\nPgFdwModifyState.ctidAttno to record the index of the \"ctid\" TLE in\nthe source plan's targetlist. It is used, for example, by\npostgresExecForeignUpdate() to extract the ctid from the plan tuple\npassed to it and pass the value as parameter for the remote query:\nupdate remote_tab set ... 
where ctid = $1.\n\nI've clarified the comment to make that a bit clear.\n\n> - The comment for filter_junk_tlist_entries(), needless to say, is of\n> the highest quality,\n\nSorry, it was a copy-paste job.\n\n> but would it make any sense to think about having\n> an option for expand_tlist() to omit the junk entries itself, to avoid\n> extra work? I'm unclear whether we want that behavior in all UPDATE\n> cases or only some of them, because preproces_targetlist() has a call\n> to expand_tlist() to set parse->onConflict->onConflictSet that does\n> not call filter_junk_tlist_entries() on the result.\n\nI added an exclude_junk parameter to expand_targetlist() and passed\nfalse for it in all sites except make_update_tlist(), including where\nit's called on parse->onConflict->onConflictSet.\n\nAlthough, make_update_tlist() and related code may have been\nsuperseded by Tom's WIP patch.\n\n> Does this patch\n> need to make any changes to the handling of ON CONFLICT .. UPDATE? It\n> looks to me like right now it doesn't, but I don't know whether that's\n> an oversight or intentional.\n\nI intentionally didn't bother with changing any part of the ON\nCONFLICT UPDATE case, mainly because INSERTs don't have a\ninheritance_planner() problem. We may want to revisit that in the\nfuture if we decide to revise the ExecUpdate() API to not pass the\nfully-reconstructed new tuple, which this patch doesn't do.\n\n> - The output changes in the main regression test suite are pretty easy\n> to understand: we're just seeing columns that no longer need to get\n> fed through the execution get dropped. The changes in the postgres_fdw\n> results are harder to understand. 
In general, it appears that what's\n> happening is that we're no longer outputting the non-updated columns\n> individually -- which makes sense -- but now we're outputting a\n> whole-row var that wasn't there before, e.g.:\n>\n> - Output: foreign_tbl.a, (foreign_tbl.b + 15), foreign_tbl.ctid\n> + Output: (foreign_tbl.b + 15), foreign_tbl.ctid, foreign_tbl.*\n>\n> Since this is postgres_fdw, we can re-fetch the row using CTID, so\n> it's not clear to me why we need foreign_tbl.* when we didn't before.\n> Maybe the comments need improvement.\n\nExecForeignUpdate FDW API expects being passed a fully-formed new\ntuple, even though it will typically only access the changed columns\nfrom that tuple to pass in the remote update query. There is a\ncomment in rewriteTargetListUD() to explain this, which I have updated\nsomewhat to read as follows:\n\n /*\n * ExecUpdate() needs to pass a full new tuple to be assigned to the\n * result relation to ExecForeignUpdate(), although the plan will have\n * produced values for only the changed columns. Here we ask the FDW\n * to fetch wholerow to serve as the side channel for getting the\n * values of the unchanged columns when constructing the full tuple to\n * be passed to ExecForeignUpdate(). Actually, we only really need\n * this for UPDATEs that are not pushed to the remote side, but whether\n * or not the pushdown will occur is not clear when this function is\n * called, so we ask for wholerow anyway.\n *\n * We will also need the \"old\" tuple if there are any row triggers.\n */\n\n> - Specifically, I think the comments in preptlist.c need some work.\n> You've edited the top-of-file comment, but I don't think it's really\n> accurate. The first sentence says \"For INSERT and UPDATE, the\n> targetlist must contain an entry for each attribute of the target\n> relation in the correct order,\" but I don't think that's really true\n> any more. It's certainly not what you see in the EXPLAIN output. 
The\n> paragraph goes on to explain that UPDATE has a second target list, but\n> (a) that seems to contradict the first sentence and (b) that second\n> target list isn't what you see when you run EXPLAIN. Also, there's no\n> mention of what happens for FDWs here, but it's evidently different,\n> as per the previous review comment.\n\nIt seems Tom has other things in mind for what I've implemented as\nupdate_tlist, so I will leave this alone.\n\n> - The comments atop fix_join_expr() should be updated. Maybe you can\n> just adjust the wording for case #2.\n\nApparently the changes in setrefs.c are being thrown out as well in\nTom's patch, so likewise I will leave this alone.\n\n\nAttached updated version of the patch. I have forgotten to mention in\nmy recent posts on this thread one thing about 0001 that I had\nmentioned upthread back in June. That it currently fails a test in\npostgres_fdw's suite due to a bug of cross-partition updates that I\ndecided at the time to pursue in another thread:\nhttps://www.postgresql.org/message-id/CA%2BHiwqE_UK1jTSNrjb8mpTdivzd3dum6mK--xqKq0Y9VmfwWQA%40mail.gmail.com\n\nThat bug is revealed due to some changes that 0001 makes. However, it\ndoes not matter after applying 0002, because the current way of having\none plan per result relation is a precondition for that bug to\nmanifest. So, if we are to apply only 0001 first, then I'm afraid we\nwould have take care of that bug before applying 0001.\n\nFinally, here are the detailed results of the benchmarks I redid to\ncheck the performance implications of doing UPDATEs the new way,\ncomparing master and 0001.\n\nRepeated 2 custom pgbench tests against the UPDATE target tables\ncontaining 10, 20, 40, and 80 columns. 
The 2 custom tests are as\nfollows:\n\nnojoin:\n\n\\set a random(1, 1000000)\nupdate test_table t set b = :a where a = :a;\n\njoin:\n\n\\set a random(1, 1000000)\nupdate test_table t set b = foo.b from foo where t.a = foo.a and foo.b = :a;\n\nfoo has just 2 integer columns a, b, with an index on b.\n\nChecked using both -Msimple and -Mprepared this time, whereas I had\nonly checked the former the last time.\n\nI'd summarize the results I see as follows:\n\nIn -Msimple mode, the patched version wins by a tiny margin for both\nthe nojoin and join cases at 10, 20 columns, and by a slightly larger\nmargin at 40, 80 columns, with the join case showing a bigger margin\nthan nojoin.\n\nIn -Mprepared mode, where the numbers are a bit noisy, I can only tell\nclearly that the patched version wins by a very wide margin for the\njoin case at 40, 80 columns, without a clear winner in the other\ncases.\n\nTo answer Robert's questions in this regard:\n\n> One question I have is whether we're saving overhead\n> here during planning at the price of execution-time overhead, or\n> whether we're saving during both planning and execution.\n\nSmaller targetlists due to the patch at least help the patched version\nend up on the better side of the tps comparison. Maybe this aspect\nhelps reduce both the planning and execution time. As for whether the\nresults reflect negatively on the fact that we now fetch the tuple one\nmore time to construct the new tuple, I don't quite see that to be the\ncase.\n\nRaw tps figures (each case repeated 3 times) follow.
I'm also\nattaching (a hopefully self-contained) shell script file\n(test_update.sh) that you can run to reproduce the numbers for the\nvarious cases.\n\n10 columns\n\nnojoin simple master\ntps = 12278.749205 (without initial connection time)\ntps = 11537.051718 (without initial connection time)\ntps = 12312.717990 (without initial connection time)\nnojoin simple patched\ntps = 12160.125784 (without initial connection time)\ntps = 12170.271905 (without initial connection time)\ntps = 12212.037774 (without initial connection time)\n\nnojoin prepared master\ntps = 12228.149183 (without initial connection time)\ntps = 12509.135100 (without initial connection time)\ntps = 11698.161145 (without initial connection time)\nnojoin prepared patched\ntps = 13033.005860 (without initial connection time)\ntps = 14690.203013 (without initial connection time)\ntps = 15083.096511 (without initial connection time)\n\njoin simple master\ntps = 9112.059568 (without initial connection time)\ntps = 10730.739559 (without initial connection time)\ntps = 10663.677821 (without initial connection time)\njoin simple patched\ntps = 10980.139631 (without initial connection time)\ntps = 10887.743691 (without initial connection time)\ntps = 10929.663379 (without initial connection time)\n\njoin prepared master\ntps = 21333.421825 (without initial connection time)\ntps = 23895.538826 (without initial connection time)\ntps = 24761.384786 (without initial connection time)\njoin prepared patched\ntps = 25665.062858 (without initial connection time)\ntps = 25037.391119 (without initial connection time)\ntps = 25421.839842 (without initial connection time)\n\n20 columns\n\nnojoin simple master\ntps = 11215.161620 (without initial connection time)\ntps = 11306.536537 (without initial connection time)\ntps = 11310.776393 (without initial connection time)\nnojoin simple patched\ntps = 11791.107767 (without initial connection time)\ntps = 11757.933141 (without initial connection time)\ntps = 11743.983647 
(without initial connection time)\n\nnojoin prepared master\ntps = 17144.510719 (without initial connection time)\ntps = 14032.133587 (without initial connection time)\ntps = 15678.801224 (without initial connection time)\nnojoin prepared patched\ntps = 16603.131255 (without initial connection time)\ntps = 14703.564675 (without initial connection time)\ntps = 13652.827905 (without initial connection time)\n\njoin simple master\ntps = 9637.904229 (without initial connection time)\ntps = 9869.163480 (without initial connection time)\ntps = 9865.673335 (without initial connection time)\njoin simple patched\ntps = 10779.705826 (without initial connection time)\ntps = 10790.961520 (without initial connection time)\ntps = 10917.759963 (without initial connection time)\n\njoin prepared master\ntps = 23030.120609 (without initial connection time)\ntps = 22347.620338 (without initial connection time)\ntps = 24227.376933 (without initial connection time)\njoin prepared patched\ntps = 22303.689184 (without initial connection time)\ntps = 24507.395745 (without initial connection time)\ntps = 25219.535413 (without initial connection time)\n\n40 columns\n\nnojoin simple master\ntps = 10348.352638 (without initial connection time)\ntps = 9978.449528 (without initial connection time)\ntps = 10024.132430 (without initial connection time)\nnojoin simple patched\ntps = 10169.485989 (without initial connection time)\ntps = 10239.297780 (without initial connection time)\ntps = 10643.076675 (without initial connection time)\n\nnojoin prepared master\ntps = 13606.361325 (without initial connection time)\ntps = 15815.149553 (without initial connection time)\ntps = 15940.675165 (without initial connection time)\nnojoin prepared patched\ntps = 13889.450942 (without initial connection time)\ntps = 13406.879350 (without initial connection time)\ntps = 15640.326344 (without initial connection time)\n\njoin simple master\ntps = 9235.503480 (without initial connection time)\ntps = 9244.756832 
(without initial connection time)\ntps = 8785.542317 (without initial connection time)\njoin simple patched\ntps = 10106.285796 (without initial connection time)\ntps = 10375.248536 (without initial connection time)\ntps = 10357.087162 (without initial connection time)\n\njoin prepared master\ntps = 18795.665779 (without initial connection time)\ntps = 17650.815736 (without initial connection time)\ntps = 20903.206602 (without initial connection time)\njoin prepared patched\ntps = 24706.505207 (without initial connection time)\ntps = 22867.751793 (without initial connection time)\ntps = 23589.244380 (without initial connection time)\n\n80 columns\n\nnojoin simple master\ntps = 8281.679334 (without initial connection time)\ntps = 7517.657106 (without initial connection time)\ntps = 8509.366647 (without initial connection time)\nnojoin simple patched\ntps = 9200.437258 (without initial connection time)\ntps = 9349.939671 (without initial connection time)\ntps = 9128.197101 (without initial connection time)\n\nnojoin prepared master\ntps = 12975.410783 (without initial connection time)\ntps = 13486.858443 (without initial connection time)\ntps = 10994.355244 (without initial connection time)\nnojoin prepared patched\ntps = 14266.725696 (without initial connection time)\ntps = 15250.258418 (without initial connection time)\ntps = 13356.236075 (without initial connection time)\n\njoin simple master\ntps = 7678.440018 (without initial connection time)\ntps = 7699.796166 (without initial connection time)\ntps = 7880.407359 (without initial connection time)\njoin simple patched\ntps = 9552.413096 (without initial connection time)\ntps = 9469.579290 (without initial connection time)\ntps = 9584.026033 (without initial connection time)\n\njoin prepared master\ntps = 18390.262404 (without initial connection time)\ntps = 18754.121500 (without initial connection time)\ntps = 20355.875827 (without initial connection time)\njoin prepared patched\ntps = 24041.648927 (without 
initial connection time)\ntps = 22510.192030 (without initial connection time)\ntps = 21825.870402 (without initial connection time)\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 27 Mar 2021 00:21:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Attached updated version of the patch. I have forgotten to mention in\n> my recent posts on this thread one thing about 0001 that I had\n> mentioned upthread back in June. That it currently fails a test in\n> postgres_fdw's suite due to a bug of cross-partition updates that I\n> decided at the time to pursue in another thread:\n> https://www.postgresql.org/message-id/CA%2BHiwqE_UK1jTSNrjb8mpTdivzd3dum6mK--xqKq0Y9VmfwWQA%40mail.gmail.com\n\nYeah, I ran into that too. I think we need not try to fix it in HEAD;\nwe aren't likely to commit 0001 and 0002 separately. We need some fix\nfor the back branches, but that would better be discussed in the other\nthread. (Note that the version of 0001 I attach below shows the actual\noutput of the postgres_fdw test, including a failure from said bug.)\n\nI wanted to give a data dump of where I am. I've reviewed and\nnontrivially modified 0001 and the executor parts of 0002, and\nI'm fairly happy with the state of that much of the code now.\n(Note that 0002 below contains some cosmetic fixes, such as comments,\nthat logically belong in 0001, but I didn't bother to tidy that up\nsince I'm not seeing these as separate commits anyway.)\n\nThe planner, however, still needs a lot of work. There's a serious\nfunctional problem, in that UPDATEs across partition trees having\nmore than one foreign table fail with\n\nERROR: junk column \"wholerow\" of child relation 5 conflicts with parent junk column with same name\n\n(cf. multiupdate.sql test case attached). 
I think we could get around\nthat by requiring \"wholerow\" junk attrs to have vartype RECORDOID instead\nof the particular table's rowtype, which might also remove the need for\nsome of the vartype translation hacking in 0002. But I haven't tried yet.\n\nMore abstractly, I really dislike the \"fake variable\" design, primarily\nthe aspect that you made the fake variables look like real columns of\nthe parent table with attnums just beyond the last real one. I think\nthis is just a recipe for obscuring bugs, since it means you have to\nlobotomize a lot of bad-attnum error checks. The alternative I'm\nconsidering is to invent a separate RTE that holds all the junk columns.\nHaven't tried that yet either.\n\nThe situation in postgres_fdw is not great either, specifically this\nchange:\n\n@@ -2054,8 +2055,7 @@ postgresBeginForeignInsert(ModifyTableState *mtstate,\n \t */\n \tif (plan && plan->operation == CMD_UPDATE &&\n \t\t(resultRelInfo->ri_usesFdwDirectModify ||\n-\t\t resultRelInfo->ri_FdwState) &&\n-\t\tresultRelInfo > mtstate->resultRelInfo + mtstate->mt_whichplan)\n+\t\t resultRelInfo->ri_FdwState))\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n \t\t\t\t errmsg(\"cannot route tuples into foreign table to be updated \\\"%s\\\"\",\n\nwhich is what forced you to remove or lobotomize several regression\ntest cases. Now admittedly, that just moves the state of play for\ncross-partition updates into postgres_fdw partitions from \"works\nsometimes\" to \"works never\". But I don't like the idea that we'll\nbe taking away actual functionality.\n\nI have a blue-sky idea for fixing that properly, which is to disable FDW\ndirect updates when there is a possibility of a cross-partition update,\ninstead doing it the old way with a remote cursor reading the source rows\nfor later UPDATEs. (If anyone complains that this is too slow, my answer\nis \"it can be arbitrarily fast when it doesn't have to give the right\nanswer\". 
Failing on cross-partition updates isn't acceptable.) The point\nis that once we have issued DECLARE CURSOR, the cursor's view of the\nsource data is static so it doesn't matter if we insert new rows into the\nremote table. The hard part of that is to make sure that the DECLARE\nCURSOR gets issued before any updates from other partitions can arrive,\nwhich I think means we'd need to issue it during plan tree startup not at\nfirst fetch from the ForeignScan node. Maybe that happens already, or\nmaybe we'd need a new/repurposed FDW API call. I've not researched it.\n\nAnyway, I'd really like to get this done for v14, so I'm going to buckle\ndown and try to fix the core-planner issues I mentioned. It'd be nice\nif somebody could look at fixing the postgres_fdw problem in parallel\nwith that.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 27 Mar 2021 12:30:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "I wrote:\n> ... which is what forced you to remove or lobotomize several regression\n> test cases. Now admittedly, that just moves the state of play for\n> cross-partition updates into postgres_fdw partitions from \"works\n> sometimes\" to \"works never\". But I don't like the idea that we'll\n> be taking away actual functionality.\n> I have a blue-sky idea for fixing that properly, which is to disable FDW\n> direct updates when there is a possibility of a cross-partition update,\n> instead doing it the old way with a remote cursor reading the source rows\n> for later UPDATEs.\n\nAfter further poking at this, I realize that there is an independent reason\nwhy a direct FDW update is unsuitable in a partitioned UPDATE: it fails to\ncope with cases where a row needs to be moved *out* of a remote table.\n(If you were smart and put a CHECK constraint equivalent to the partition\nconstraint on the remote table, you'll get a CHECK failure. 
If you did\nnot do that, you just silently get the wrong behavior, with the row\nupdated where it is and thus no longer accessible via the partitioned\ntable.) Again, backing off trying to use a direct update seems like\nthe right route to a fix.\n\nSo the short answer here is that postgres_fdw is about 75% broken for\ncross-partition updates anyway, so making it 100% broken isn't giving\nup as much as I thought. Accordingly, I'm not going to consider that\nissue to be a blocker for this patch. Still, if anybody wants to\nwork on un-breaking it, that'd be great.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Mar 2021 14:11:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Sun, Mar 28, 2021 at 3:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > ... which is what forced you to remove or lobotomize several regression\n> > test cases. Now admittedly, that just moves the state of play for\n> > cross-partition updates into postgres_fdw partitions from \"works\n> > sometimes\" to \"works never\". But I don't like the idea that we'll\n> > be taking away actual functionality.\n> > I have a blue-sky idea for fixing that properly, which is to disable FDW\n> > direct updates when there is a possibility of a cross-partition update,\n> > instead doing it the old way with a remote cursor reading the source rows\n> > for later UPDATEs.\n>\n> After further poking at this, I realize that there is an independent reason\n> why a direct FDW update is unsuitable in a partitioned UPDATE: it fails to\n> cope with cases where a row needs to be moved *out* of a remote table.\n> (If you were smart and put a CHECK constraint equivalent to the partition\n> constraint on the remote table, you'll get a CHECK failure. 
If you did\n> not do that, you just silently get the wrong behavior, with the row\n> updated where it is and thus no longer accessible via the partitioned\n> table.) Again, backing off trying to use a direct update seems like\n> the right route to a fix.\n\nAgreed.\n\n> So the short answer here is that postgres_fdw is about 75% broken for\n> cross-partition updates anyway, so making it 100% broken isn't giving\n> up as much as I thought. Accordingly, I'm not going to consider that\n> issue to be a blocker for this patch. Still, if anybody wants to\n> work on un-breaking it, that'd be great.\n\nOkay, I will give that a try once we're done with the main patch.\n\nBTW, I had forgotten to update the description in postgres-fdw.sgml of\nthe current limitation, which is as follows:\n\n===\nNote also that postgres_fdw supports row movement invoked by UPDATE\nstatements executed on partitioned tables, but it currently does not\nhandle the case where a remote partition chosen to insert a moved row\ninto is also an UPDATE target partition that will be updated later.\n===\n\nI think we will need to take out the \"...table will be updated later\"\npart at the end of the sentence.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Mar 2021 20:49:28 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Sun, Mar 28, 2021 at 1:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Attached updated version of the patch. I have forgotten to mention in\n> > my recent posts on this thread one thing about 0001 that I had\n> > mentioned upthread back in June. 
That it currently fails a test in\n> > postgres_fdw's suite due to a bug of cross-partition updates that I\n> > decided at the time to pursue in another thread:\n> > https://www.postgresql.org/message-id/CA%2BHiwqE_UK1jTSNrjb8mpTdivzd3dum6mK--xqKq0Y9VmfwWQA%40mail.gmail.com\n>\n> Yeah, I ran into that too. I think we need not try to fix it in HEAD;\n> we aren't likely to commit 0001 and 0002 separately. We need some fix\n> for the back branches, but that would better be discussed in the other\n> thread. (Note that the version of 0001 I attach below shows the actual\n> output of the postgres_fdw test, including a failure from said bug.)\n\nOkay, makes sense.\n\n> I wanted to give a data dump of where I am. I've reviewed and\n> nontrivially modified 0001 and the executor parts of 0002, and\n> I'm fairly happy with the state of that much of the code now.\n\nThanks a lot for that work. I have looked at the changes and I agree\nthat updateColnosLists + ExecBuildUpdateProjection() looks much better\nthan updateTargetLists in the original patch. Looking at\nExecBuildUpdateProjection(), I take back my comment upthread regarding\nthe performance characteristics of your approach, that the prepared\nstatements would suffer from having to build the update-new-tuple\nprojection(s) from scratch on every execution.\n\n> (Note that 0002 below contains some cosmetic fixes, such as comments,\n> that logically belong in 0001, but I didn't bother to tidy that up\n> since I'm not seeing these as separate commits anyway.)\n>\n> The planner, however, still needs a lot of work. There's a serious\n> functional problem, in that UPDATEs across partition trees having\n> more than one foreign table fail with\n>\n> ERROR: junk column \"wholerow\" of child relation 5 conflicts with parent junk column with same name\n>\n> (cf. 
multiupdate.sql test case attached).\n\nOops, thanks for noticing that.\n\n> I think we could get around\n> that by requiring \"wholerow\" junk attrs to have vartype RECORDOID instead\n> of the particular table's rowtype, which might also remove the need for\n> some of the vartype translation hacking in 0002. But I haven't tried yet.\n\nSounds like that might work.\n\n> More abstractly, I really dislike the \"fake variable\" design, primarily\n> the aspect that you made the fake variables look like real columns of\n> the parent table with attnums just beyond the last real one. I think\n> this is just a recipe for obscuring bugs, since it means you have to\n> lobotomize a lot of bad-attnum error checks. The alternative I'm\n> considering is to invent a separate RTE that holds all the junk columns.\n> Haven't tried that yet either.\n\nHmm, I did expect to hear a strong critique of that piece of code. I\nlook forward to reviewing your alternative implementation.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Mar 2021 21:41:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Sun, Mar 28, 2021 at 1:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wanted to give a data dump of where I am. I've reviewed and\n>> nontrivially modified 0001 and the executor parts of 0002, and\n>> I'm fairly happy with the state of that much of the code now.\n\n> Thanks a lot for that work. I have looked at the changes and I agree\n> that updateColnosLists + ExecBuildUpdateProjection() looks much better\n> than updateTargetLists in the original patch. 
Looking at\n> ExecBuildUpdateProjection(), I take back my comment upthread regarding\n> the performance characteristics of your approach, that the prepared\n> statements would suffer from having to build the update-new-tuple\n> projection(s) from scratch on every execution.\n\nYeah, I don't see any reason why the custom projection-build code\nwould be any slower than the regular path. Related to this, though,\nI was wondering whether we could get a useful win by having\nnodeModifyTable.c be lazier about doing the per-target-table\ninitialization steps. I think we have to open and lock all the\ntables at start for semantic reasons, so maybe that swamps everything\nelse. But we could avoid purely-internal setup steps, such as\nbuilding the slots and projection expressions, until the first time\na particular target is actually updated into. This'd help if we've\nfailed to prune a lot of partitions that the update/delete won't\nactually affect.\n\n>> More abstractly, I really dislike the \"fake variable\" design, primarily\n>> the aspect that you made the fake variables look like real columns of\n>> the parent table with attnums just beyond the last real one. I think\n>> this is just a recipe for obscuring bugs, since it means you have to\n>> lobotomize a lot of bad-attnum error checks. The alternative I'm\n>> considering is to invent a separate RTE that holds all the junk columns.\n>> Haven't tried that yet either.\n\n> Hmm, I did expect to hear a strong critique of that piece of code. I\n> look forward to reviewing your alternative implementation.\n\nI got one version working over the weekend, but I didn't like the amount\nof churn it forced in postgres_fdw (and, presumably, other FDWs). 
Gimme\na day or so to try something else.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Mar 2021 10:41:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Mon, Mar 29, 2021 at 11:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Sun, Mar 28, 2021 at 1:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I wanted to give a data dump of where I am. I've reviewed and\n> >> nontrivially modified 0001 and the executor parts of 0002, and\n> >> I'm fairly happy with the state of that much of the code now.\n>\n> > Thanks a lot for that work. I have looked at the changes and I agree\n> > that updateColnosLists + ExecBuildUpdateProjection() looks much better\n> > than updateTargetLists in the original patch. Looking at\n> > ExecBuildUpdateProjection(), I take back my comment upthread regarding\n> > the performance characteristics of your approach, that the prepared\n> > statements would suffer from having to build the update-new-tuple\n> > projection(s) from scratch on every execution.\n>\n> Yeah, I don't see any reason why the custom projection-build code\n> would be any slower than the regular path. Related to this, though,\n> I was wondering whether we could get a useful win by having\n> nodeModifyTable.c be lazier about doing the per-target-table\n> initialization steps.\n>\n> I think we have to open and lock all the\n> tables at start for semantic reasons, so maybe that swamps everything\n> else. But we could avoid purely-internal setup steps, such as\n> building the slots and projection expressions, until the first time\n> a particular target is actually updated into. 
This'd help if we've\n> failed to prune a lot of partitions that the update/delete won't\n> actually affect.\n\nOh, that is exactly what I have proposed in:\n\nhttps://commitfest.postgresql.org/32/2621/\n\n> >> More abstractly, I really dislike the \"fake variable\" design, primarily\n> >> the aspect that you made the fake variables look like real columns of\n> >> the parent table with attnums just beyond the last real one. I think\n> >> this is just a recipe for obscuring bugs, since it means you have to\n> >> lobotomize a lot of bad-attnum error checks. The alternative I'm\n> >> considering is to invent a separate RTE that holds all the junk columns.\n> >> Haven't tried that yet either.\n>\n> > Hmm, I did expect to hear a strong critique of that piece of code. I\n> > look forward to reviewing your alternative implementation.\n>\n> I got one version working over the weekend, but I didn't like the amount\n> of churn it forced in postgres_fdw (and, presumably, other FDWs). Gimme\n> a day or so to try something else.\n\nSure, thanks again.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Mar 2021 08:02:19 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Here's a v13 patchset that I feel pretty good about.\n\nMy original thought for replacing the \"fake variable\" design was to\nadd another RTE holding the extra variables, and then have setrefs.c\ntranslate the placeholder variables to the real thing at the last\npossible moment. I soon realized that instead of an actual RTE,\nit'd be better to invent a special varno value akin to INDEX_VAR\n(I called it ROWID_VAR, though I'm not wedded to that name). 
Info\nabout the associated variables is kept in a list of RowIdentityVarInfo\nstructs, which are more suitable than a regular RTE would be.\n\nI got that and the translate-in-setrefs approach more or less working,\nbut it was fairly messy, because the need to know about these special\nvariables spilled into FDWs and a lot of other places; for example\nindxpath.c needed a special check for them when deciding if an\nindex-only scan is possible. What turns out to be a lot cleaner is\nto handle the translation in adjust_appendrel_attrs_mutator(), so that\nwe have converted to real variables by the time we reach any\nrelation-scan-level logic.\n\nI did end up having to break the API for FDW AddForeignUpdateTargets\nfunctions: they need to do things differently when adding junk columns,\nand they need different parameters. This seems all to the good though,\nbecause the old API has been a backwards-compatibility hack for some\ntime (e.g., in not passing the \"root\" pointer).\n\nSome other random notes:\n\n* I was unimpressed with the idea of distinguishing different target\nrelations by embedding integer constants in the plan. In the first\nplace, the implementation was extremely fragile --- there was\nabsolutely NOTHING tying the counter you used to the subplans' eventual\nindexes in the ModifyTable lists. Plus I don't have a lot of faith\nthat setrefs.c will reliably do what you want in terms of bubbling the\nthings up. Maybe that could be made more robust, but the other problem\nis that the EXPLAIN output is just about unreadable; nobody will\nunderstand what \"(0)\" means. So I went back to the idea of emitting\ntableoid, and installed a hashtable plus a one-entry lookup cache\nto make the run-time mapping as fast as I could. 
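In miniature, the "hashtable plus a one-entry lookup cache" idea described here can be sketched as follows (a Python sketch for illustration only; the real mapping is C code in nodeModifyTable.c keyed by tableoid, and the class and method names below are invented for the sketch):

```python
class ResultRelLookup:
    """Toy model of mapping a tuple's source-table OID to the index of the
    corresponding result relation: a hash table handles the general case,
    and a one-entry cache in front of it wins whenever consecutive tuples
    come from the same relation, which is the common case."""

    def __init__(self, table_oids):
        # Hash table: table OID -> position in the result-relation list.
        self._index = {oid: i for i, oid in enumerate(table_oids)}
        # One-entry cache: the last (oid, index) pair resolved.
        self._last = None

    def lookup(self, oid):
        if self._last is not None and self._last[0] == oid:
            return self._last[1]   # cache hit: no hash probe needed
        i = self._index[oid]       # cache miss: consult the hash table
        self._last = (oid, i)
        return i
```

A run of tuples that all land in the same result relation then costs one hash probe for the first tuple and a simple comparison for each of the rest.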
I'm not necessarily\nsaying that this is how it has to be indefinitely, but I think we\nneed more work on planner and EXPLAIN infrastructure before we can\nget the idea of directly providing a list index to work nicely.\n\n* I didn't agree with your decision to remove the now-failing test\ncases from postgres_fdw.sql. I think it's better to leave them there,\nespecially in the cases where we were checking the plan as well as\nthe execution. Hopefully we'll be able to un-break those soon.\n\n* I updated a lot of hereby-obsoleted comments, which makes the patch\na bit longer than v12; but actually the code is a good bit smaller.\nThere's a noticeable net code savings in src/backend/optimizer/,\nwhich there was not before.\n\nI've not made any attempt to do performance testing on this,\nbut I think that's about the only thing standing between us\nand committing this thing.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 30 Mar 2021 00:51:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "I wrote:\n> I've not made any attempt to do performance testing on this,\n> but I think that's about the only thing standing between us\n> and committing this thing.\n\nI think the main gating condition for committing this is \"does it\nmake things worse for simple non-partitioned updates?\". The need\nfor an extra tuple fetch per row certainly makes it seem like there\ncould be a slowdown there. However, in some tests here with current\nHEAD (54bb91c30), I concur with Amit's findings that there's barely\nany slowdown in that case. I re-did Heikki's worst-case example [1]\nof updating both columns of a 2-column table. I also tried variants\nof that involving updating two columns of a 6-column table and of a\n10-column table, figuring that those cases might be a bit more\nrepresentative of typical usage (see attached scripts). 
What I got\nwas\n\nTimes in ms, for the median of 3 runs:\n\nTable width\tHEAD\tpatch\tHEAD\tpatch\n\t\t-- jit on ---\t-- jit off --\n\n2 columns\t12528\t13329\t12574\t12922\n6 columns\t15861\t15862\t14775\t15249\n10 columns\t18399\t16985\t16994\t16907\n\nSo even with the worst case, it's not too bad, just a few percent\nworse, and once you get to a reasonable number of columns the v13\npatch is starting to win.\n\nHowever, I then tried a partitioned equivalent of the 6-column case\n(script also attached), and it looks like\n\n6 columns\t16551\t19097\t15637\t18201\n\nwhich is really noticeably worse, 16% or so. I poked at it with\n\"perf\" to see if there were any clear bottlenecks, and didn't find\na smoking gun. As best I can tell, the extra overhead must come\nfrom the fact that all the tuples are now passing through an Append\nnode that's not there in the old-style plan tree. I find this\nsurprising, because a (non-parallelized) Append doesn't really *do*\nmuch; it certainly doesn't add any data copying or the like.\nMaybe it's not so much the Append as that the rules for what kind of\ntuple slot can be used have changed somehow? Andres would have a\nclearer idea about that than I do.\n\nAnyway, I'm satisfied that this patch isn't likely to seriously\nhurt non-partitioned cases. There may be some micro-optimization\nthat could help simple partitioned cases, though.\n\nThis leaves us with a question whether to commit this patch now or\ndelay it till we have a better grip on why cases like this one are\nslower. I'm inclined to think that since there are a lot of clear\nwins for users of partitioning, we shouldn't let the fact that there\nare also some losses block committing. 
But it's a judgment call.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2e50d782-36f9-e723-0c4b-d133e63c6127%40iki.fi", "msg_date": "Tue, 30 Mar 2021 14:39:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "I wrote:\n> ... I also tried variants\n> of that involving updating two columns of a 6-column table and of a\n> 10-column table, figuring that those cases might be a bit more\n> representative of typical usage (see attached scripts).\n\nArgh, I managed to attach the wrong file for the 10-column test\ncase. For the archives' sake, here's the right one.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 30 Mar 2021 14:50:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "I wrote:\n> However, I then tried a partitioned equivalent of the 6-column case\n> (script also attached), and it looks like\n> 6 columns\t16551\t19097\t15637\t18201\n> which is really noticeably worse, 16% or so.\n\n... and on the third hand, that might just be some weird compiler-\nand platform-specific artifact.\n\nUsing the exact same compiler (RHEL8's gcc 8.3.1) on a different\nx86_64 machine, I measure the same case as about 7% slowdown not\n16%. That's still not great, but it calls the original measurement\ninto question, for sure.\n\nUsing Apple's clang 12.0.0 on an M1 mini, the patch actually clocks\nin a couple percent *faster* than HEAD, for both the partitioned and\nunpartitioned 6-column test cases.\n\nSo I'm not sure what to make of these results, but my level of concern\nis less than it was earlier today. 
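For reference, the slowdown percentages quoted in this exchange follow directly from the raw median timings; a short illustrative script (not part of the patch set, using only the numbers reported above for the partitioned 6-column case) makes the arithmetic explicit:

```python
def pct_slower(base_ms, test_ms):
    """Percent slowdown of test relative to base (positive = slower)."""
    return (test_ms - base_ms) / base_ms * 100.0

# Median-of-3 timings (ms) reported above for the partitioned 6-column case.
runs = {
    "jit on":  (16551, 19097),   # HEAD, patch
    "jit off": (15637, 18201),
}
for label, (head, patch) in runs.items():
    print(f"{label}: {pct_slower(head, patch):.1f}% slower")
# prints:
#   jit on: 15.4% slower
#   jit off: 16.4% slower
```

which is the "16% or so" figure quoted above for the original machine.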
I might've just gotten trapped by\nthe usual bugaboo of micro-benchmarking, ie putting too much stock in\nonly one test case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Mar 2021 18:13:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, Mar 31, 2021 at 7:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > However, I then tried a partitioned equivalent of the 6-column case\n> > (script also attached), and it looks like\n> > 6 columns 16551 19097 15637 18201\n> > which is really noticeably worse, 16% or so.\n>\n> ... and on the third hand, that might just be some weird compiler-\n> and platform-specific artifact.\n>\n> Using the exact same compiler (RHEL8's gcc 8.3.1) on a different\n> x86_64 machine, I measure the same case as about 7% slowdown not\n> 16%. That's still not great, but it calls the original measurement\n> into question, for sure.\n>\n> Using Apple's clang 12.0.0 on an M1 mini, the patch actually clocks\n> in a couple percent *faster* than HEAD, for both the partitioned and\n> unpartitioned 6-column test cases.\n>\n> So I'm not sure what to make of these results, but my level of concern\n> is less than it was earlier today. 
I might've just gotten trapped by\n> the usual bugaboo of micro-benchmarking, ie putting too much stock in\n> only one test case.\n\nFWIW, I ran the scripts you shared and see the following (median of 3\nruns) times in ms for the UPDATE in each script:\n\nheikki6.sql\n\nmaster: 19139 (jit=off) 18404 (jit=on)\npatched: 20202 (jit=off) 19290 (jit=on)\n\nheikki10.sql\n\nmaster: 21686 (jit=off) 21435 (jit=on)\npatched: 20953 (jit=off) 20161 (jit=on)\n\nPatch shows a win for 10 columns here.\n\npart6.sql\n\nmaster: 20321 (jit=off) 19580 (jit=on)\npatched: 22661 (jit=off) 21636 (jit=on)\n\nI wrote part10.sql and ran that too, with these results:\n\nmaster: 22280 (jit=off) 21876 (jit=on)\npatched: 23466 (jit=off) 22905 (jit=on)\n\nThe partitioned case slowdown is roughly 10% with 6 columns, 5% with\n10. I would agree that that's not too bad for a worst-case test case,\nnor something we couldn't optimize. I have yet to look closely at\nwhere the problem lies though.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Mar 2021 11:37:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "I noticed something else interesting. 
If you try an actually-useful\nUPDATE, ie one that has to do some computation in the target list,\nyou can get a plan like this if it's a partitioned table:\n\nEXPLAIN (verbose, costs off) UPDATE parent SET f2 = f2 + 1;\n QUERY PLAN \n---------------------------------------------------------------------------\n Update on public.parent\n Update on public.child1 parent_1\n Update on public.child2 parent_2\n Update on public.child3 parent_3\n -> Append\n -> Seq Scan on public.child1 parent_1\n Output: (parent_1.f2 + 1), parent_1.tableoid, parent_1.ctid\n -> Seq Scan on public.child2 parent_2\n Output: (parent_2.f2 + 1), parent_2.tableoid, parent_2.ctid\n -> Seq Scan on public.child3 parent_3\n Output: (parent_3.f2 + 1), parent_3.tableoid, parent_3.ctid\n\nBut when using traditional inheritance, it looks more like:\n\nEXPLAIN (verbose, costs off) UPDATE parent SET f2 = f2 + 1;\n QUERY PLAN \n---------------------------------------------------------------------------\n Update on public.parent\n Update on public.parent parent_1\n Update on public.child1 parent_2\n Update on public.child2 parent_3\n Update on public.child3 parent_4\n -> Result\n Output: (parent.f2 + 1), parent.tableoid, parent.ctid\n -> Append\n -> Seq Scan on public.parent parent_1\n Output: parent_1.f2, parent_1.tableoid, parent_1.ctid\n -> Seq Scan on public.child1 parent_2\n Output: parent_2.f2, parent_2.tableoid, parent_2.ctid\n -> Seq Scan on public.child2 parent_3\n Output: parent_3.f2, parent_3.tableoid, parent_3.ctid\n -> Seq Scan on public.child3 parent_4\n Output: parent_4.f2, parent_4.tableoid, parent_4.ctid\n\nThat is, instead of shoving the \"f2 + 1\" computation down to the table\nscans, it gets done in a separate Result node, implying yet another\nextra node in the plan with resultant slowdown. 
The reason for this\nseems to be that apply_scanjoin_target_to_paths has special logic\nto push the target down to members of a partitioned table, but it\ndoesn't do that for other sorts of appendrels. That isn't new\nwith this patch, you can see the same behavior in SELECT.\n\nGiven the distinct whiff of second-class citizenship that traditional\ninheritance has today, I'm not sure how excited people will be about\nfixing this. I've complained before that apply_scanjoin_target_to_paths\nis brute-force and needs to be rewritten, but I don't really want to\nundertake that task right now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Mar 2021 22:56:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, Mar 31, 2021 at 11:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I noticed something else interesting. If you try an actually-useful\n> UPDATE, ie one that has to do some computation in the target list,\n> you can get a plan like this if it's a partitioned table:\n>\n> EXPLAIN (verbose, costs off) UPDATE parent SET f2 = f2 + 1;\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n> Update on public.parent\n> Update on public.child1 parent_1\n> Update on public.child2 parent_2\n> Update on public.child3 parent_3\n> -> Append\n> -> Seq Scan on public.child1 parent_1\n> Output: (parent_1.f2 + 1), parent_1.tableoid, parent_1.ctid\n> -> Seq Scan on public.child2 parent_2\n> Output: (parent_2.f2 + 1), parent_2.tableoid, parent_2.ctid\n> -> Seq Scan on public.child3 parent_3\n> Output: (parent_3.f2 + 1), parent_3.tableoid, parent_3.ctid\n>\n> But when using traditional inheritance, it looks more like:\n>\n> EXPLAIN (verbose, costs off) UPDATE parent SET f2 = f2 + 1;\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n> Update on public.parent\n> Update on 
public.parent parent_1\n> Update on public.child1 parent_2\n> Update on public.child2 parent_3\n> Update on public.child3 parent_4\n> -> Result\n> Output: (parent.f2 + 1), parent.tableoid, parent.ctid\n> -> Append\n> -> Seq Scan on public.parent parent_1\n> Output: parent_1.f2, parent_1.tableoid, parent_1.ctid\n> -> Seq Scan on public.child1 parent_2\n> Output: parent_2.f2, parent_2.tableoid, parent_2.ctid\n> -> Seq Scan on public.child2 parent_3\n> Output: parent_3.f2, parent_3.tableoid, parent_3.ctid\n> -> Seq Scan on public.child3 parent_4\n> Output: parent_4.f2, parent_4.tableoid, parent_4.ctid\n>\n> That is, instead of shoving the \"f2 + 1\" computation down to the table\n> scans, it gets done in a separate Result node, implying yet another\n> extra node in the plan with resultant slowdown. The reason for this\n> seems to be that apply_scanjoin_target_to_paths has special logic\n> to push the target down to members of a partitioned table, but it\n> doesn't do that for other sorts of appendrels. That isn't new\n> with this patch, you can see the same behavior in SELECT.\n\nI've noticed this too when investigating why\nfind_modifytable_subplan() needed to deal with a Result node in some\ncases.\n\n> Given the distinct whiff of second-class citizenship that traditional\n> inheritance has today, I'm not sure how excited people will be about\n> fixing this. I've complained before that apply_scanjoin_target_to_paths\n> is brute-force and needs to be rewritten, but I don't really want to\n> undertake that task right now.\n\nI remember having *unsuccessfully* tried to make\napply_scanjoin_target_to_paths() do the targetlist pushdown for the\ntraditional inheritance cases as well. I agree that rethinking the\nwhole apply_scanjoin_target_to_paths() approach might be a better use\nof our time. 
It has a looping-over-the-whole-partition-array\nbottleneck for simple lookup queries that I have long wanted to\npropose doing something about.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Mar 2021 12:28:43 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Wed, Mar 31, 2021 at 11:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... I've complained before that apply_scanjoin_target_to_paths\n>> is brute-force and needs to be rewritten, but I don't really want to\n>> undertake that task right now.\n\n> I remember having *unsuccessfully* tried to make\n> apply_scanjoin_target_to_paths() do the targetlist pushdown for the\n> traditional inheritance cases as well. I agree that rethinking the\n> whole apply_scanjoin_target_to_paths() approach might be a better use\n> of our time. It has a looping-over-the-whole-partition-array\n> bottleneck for simple lookup queries that I have long wanted to\n> propose doing something about.\n\nI was wondering if we could get anywhere by pushing more smarts\ndown to the level of create_projection_path itself, ie if we see\nwe're trying to apply a projection to an AppendPath then push it\nunderneath that automatically. Then maybe some of the hackery\nin apply_scanjoin_target_to_paths could go away.\n\nThere's already an attempt at that in apply_projection_to_path,\nbut it's not completely clean so there are callers that can't\nuse it. 
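As a toy illustration of that idea (plain Python dicts standing in for path nodes — not the planner's actual structures or API), pushing a projection below an Append instead of stacking a Result on top might look like:

```python
# Toy model (made-up structures, not PostgreSQL code) of pushing a
# projection below an Append rather than adding a Result node above it.

def apply_projection(path, exprs):
    if path['type'] == 'Append':
        # Recurse: let every child scan compute the target list itself.
        return {'type': 'Append',
                'children': [apply_projection(c, exprs)
                             for c in path['children']]}
    # Base case: wrap a scan in a projection.
    return {'type': 'Projection', 'exprs': exprs, 'child': path}

children = [{'type': 'SeqScan', 'rel': r}
            for r in ('child1', 'child2', 'child3')]
plan = apply_projection({'type': 'Append', 'children': children}, ['f2 + 1'])
print(plan['type'])                           # Append
print([c['type'] for c in plan['children']])  # three Projection nodes
```

The point is only the shape of the recursion: the Append survives at the top and each child scan computes the target list itself, which is what the partitioned-table plans shown earlier already achieve.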
Maybe a little more thought about how to do that\nin a way that violates no invariants would yield dividends.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Mar 2021 02:14:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Tue, Mar 30, 2021 at 1:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a v13 patchset that I feel pretty good about.\n\nThanks. After staring at this for a day now, I do too.\n\n> My original thought for replacing the \"fake variable\" design was to\n> add another RTE holding the extra variables, and then have setrefs.c\n> translate the placeholder variables to the real thing at the last\n> possible moment. I soon realized that instead of an actual RTE,\n> it'd be better to invent a special varno value akin to INDEX_VAR\n> (I called it ROWID_VAR, though I'm not wedded to that name). Info\n> about the associated variables is kept in a list of RowIdentityVarInfo\n> structs, which are more suitable than a regular RTE would be.\n>\n> I got that and the translate-in-setrefs approach more or less working,\n> but it was fairly messy, because the need to know about these special\n> variables spilled into FDWs and a lot of other places; for example\n> indxpath.c needed a special check for them when deciding if an\n> index-only scan is possible. What turns out to be a lot cleaner is\n> to handle the translation in adjust_appendrel_attrs_mutator(), so that\n> we have converted to real variables by the time we reach any\n> relation-scan-level logic.\n>\n> I did end up having to break the API for FDW AddForeignUpdateTargets\n> functions: they need to do things differently when adding junk columns,\n> and they need different parameters. 
This seems all to the good though,\n> because the old API has been a backwards-compatibility hack for some\n> time (e.g., in not passing the \"root\" pointer).\n\nThis all looks really neat.\n\nI couldn't help but think that the RowIdentityVarInfo management code\nlooks a bit like SpecialJunkVarInfo stuff in my earliest patches, but\nof course without all the fragility of assigning \"fake\" attribute\nnumbers to a \"real\" base relation(s).\n\n> Some other random notes:\n>\n> * I was unimpressed with the idea of distinguishing different target\n> relations by embedding integer constants in the plan. In the first\n> place, the implementation was extremely fragile --- there was\n> absolutely NOTHING tying the counter you used to the subplans' eventual\n> indexes in the ModifyTable lists. Plus I don't have a lot of faith\n> that setrefs.c will reliably do what you want in terms of bubbling the\n> things up. Maybe that could be made more robust, but the other problem\n> is that the EXPLAIN output is just about unreadable; nobody will\n> understand what \"(0)\" means. So I went back to the idea of emitting\n> tableoid, and installed a hashtable plus a one-entry lookup cache\n> to make the run-time mapping as fast as I could. I'm not necessarily\n> saying that this is how it has to be indefinitely, but I think we\n> need more work on planner and EXPLAIN infrastructure before we can\n> get the idea of directly providing a list index to work nicely.\n\nOkay.\n\n> * I didn't agree with your decision to remove the now-failing test\n> cases from postgres_fdw.sql. I think it's better to leave them there,\n> especially in the cases where we were checking the plan as well as\n> the execution. 
Hopefully we'll be able to un-break those soon.\n\nOkay.\n\n> * I updated a lot of hereby-obsoleted comments, which makes the patch\n> a bit longer than v12; but actually the code is a good bit smaller.\n> There's a noticeable net code savings in src/backend/optimizer/,\n> which there was not before.\n\nAgreed. (I had evidently missed a bunch of comments referring to the\nold ways of how inherited updates are performed.)\n\n> I've not made any attempt to do performance testing on this,\n> but I think that's about the only thing standing between us\n> and committing this thing.\n\nI reran some of the performance tests I did earlier (I've attached the\nmodified test running script for reference):\n\npgbench -n -T60 -M{simple|prepared} -f nojoin.sql\n\nnojoin.sql:\n\n\\set a random(1, 1000000)\nupdate test_table t set b = :a where a = :a;\n\n...and here are the tps figures:\n\n-Msimple\n\nnparts 10cols 20cols 40cols\n\nmaster:\n64 10112 9878 10920\n128 9662 10691 10604\n256 9642 9691 10626\n1024 8589 9675 9521\n\npatched:\n64 13493 13463 13313\n128 13305 13447 12705\n256 13190 13161 12954\n1024 11791 11408 11786\n\nNo variation across various column counts, but the patched improves\nthe tps for each case by quite a bit.\n\n-Mprepared (plan_cache_mode=force_generic_plan)\n\nmaster:\n64 2286 2285 2266\n128 1163 1127 1091\n256 531 519 544\n1024 77 71 69\n\npatched:\n64 6522 6612 6275\n128 3568 3625 3372\n256 1847 1710 1823\n1024 433 427 386\n\nAgain, no variation across columns counts. tps drops as partition\ncount increases both before and after applying the patches, although\npatched performs way better, which is mainly attributable to the\nability of UPDATE to now utilize runtime pruning (actually of the\nAppend under ModifyTable). 
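To illustrate why runtime pruning of the Append under ModifyTable helps a generic plan (a hypothetical toy model in Python, not executor code — the modulus stands in for hash partitioning):

```python
# Toy model of a generic 'UPDATE ... WHERE a = $1' over hash partitions,
# with and without runtime pruning.  Illustration only.

NPARTS = 64
partitions = [dict() for _ in range(NPARTS)]
for a in range(1000):
    partitions[a % NPARTS][a] = 0        # toy modulus 'partitioning'

def update_no_pruning(key, val):
    scans = 0
    for part in partitions:              # every child subplan is run
        scans += 1
        if key in part:
            part[key] = val
    return scans

def update_with_pruning(key, val):
    part = partitions[key % NPARTS]      # prune to the one matching child
    part[key] = val
    return 1                             # a single subplan is touched

print(update_no_pruning(123, 9))         # 64
print(update_with_pruning(456, 9))       # 1
```

Per execution, pruning turns per-tuple work from one scan per partition into a single scan, which is where most of the tps improvement with parameterized updates comes from.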
The drop as partition count increases can\nbe attributed to the fact that with a generic plan, there are a bunch\nof steps that must be done across all partitions, such as\nAcquireExecutorLocks(), ExecCheckRTPerms(), per-result-rel\ninitializations in ExecInitModifyTable(), etc., even with the patch applied.\nAs mentioned upthread, [1] can help with the last bit.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/32/2621/", "msg_date": "Wed, 31 Mar 2021 21:54:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Tue, Mar 30, 2021 at 1:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's a v13 patchset that I feel pretty good about.\n\n> Thanks. After staring at this for a day now, I do too.\n\nThanks for looking! Pushed after some more docs-fiddling and a final\nread-through. I think the only code change from v13 is that I decided\nto convert ExecGetJunkAttribute into a \"static inline\", since it's\njust a thin wrapper around slot_getattr(). Doesn't really help\nperformance much, but it shouldn't hurt.\n\n> ... The drop as partition count increases can\n> be attributed to the fact that with a generic plan, there are a bunch\n> of steps that must be done across all partitions, such as\n> AcquireExecutorLocks(), ExecCheckRTPerms(), per-result-rel\n> initializations in ExecInitModifyTable(), etc., even with the patch applied.\n> As mentioned upthread, [1] can help with the last bit.\n\nI'll try to find some time to look at that one.\n\nI'd previously been thinking that we couldn't be lazy about applying\nmost of those steps at executor startup, but on second thought,\nExecCheckRTPerms should be a no-op anyway for child tables. 
So\nmaybe it would be okay to not take a lock, much less do the other\nstuff, until the particular child table is stored into.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Mar 2021 11:58:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Tue, Mar 30, 2021 at 12:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Maybe that could be made more robust, but the other problem\n> is that the EXPLAIN output is just about unreadable; nobody will\n> understand what \"(0)\" means.\n\nI think this was an idea that originally came from me, prompted by\nwhat we already do for:\n\nrhaas=# explain verbose select 1 except select 2;\n QUERY PLAN\n-----------------------------------------------------------------------------\n HashSetOp Except (cost=0.00..0.06 rows=1 width=8)\n Output: (1), (0)\n -> Append (cost=0.00..0.05 rows=2 width=8)\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..0.02 rows=1 width=8)\n Output: 1, 0\n -> Result (cost=0.00..0.01 rows=1 width=4)\n Output: 1\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.02 rows=1 width=8)\n Output: 2, 1\n -> Result (cost=0.00..0.01 rows=1 width=4)\n Output: 2\n(11 rows)\n\nThat is admittedly pretty magical, but it's a precedent. If you think\nthe relation OID to subplan index lookup is fast enough that it\ndoesn't matter, then I guess it's OK, but I guess my opinion is that\nthe subplan index feels like the thing we really want, and if we're\npassing anything else up the plan tree, that seems to be a decision\nmade out of embarrassment rather than conviction. I think the real\nproblem here is that the deparsing code isn't in on the secret. If in\nthe above example, or in this patch, it deparsed as (Subplan Index) at\nthe parent level, and 0, 1, 2, ... 
in the children, it wouldn't\nconfuse anyone, or at least not much more than EXPLAIN output does in\ngeneral.\n\nOr even if we just output (Constant-Value) it wouldn't be that bad.\nThe whole convention of deparsing target lists by recursing into the\nchildren, or one of them, in some ways obscures what's really going\non. I did a talk a few years ago in which I made those target lists\ndeparse as $OUTER.0, $OUTER.1, $INNER.0, etc. and I think people found\nthat pretty enlightening, because it's sort of non-obvious in what way\ntable foo is present when a target list 8 levels up in the join tree\nclaims to have a value for foo.x. Now, such notation can't really be\nrecommended in general, because it'd be too hard to understand what\nwas happening in a lot of cases, but the recursive stuff is clearly\nnot without its own attendant confusions.\n\nThanks to both of you for working on this! As I said before, this\nseems like really important work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Mar 2021 13:01:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Mar 30, 2021 at 12:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Maybe that could be made more robust, but the other problem\n>> is that the EXPLAIN output is just about unreadable; nobody will\n>> understand what \"(0)\" means.\n\n> I think this was an idea that originally came from me, prompted by\n> what we already do for:\n\nI agree that we have some existing behavior that's related to this, but\nit's still messy, and I couldn't find any evidence that suggested that the\nruntime lookup costs anything. 
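The relation-OID-to-subplan-index lookup under discussion — a hash table fronted by a one-entry cache, cheap because each subplan delivers long runs of tuples from the same target relation — can be modeled like this (illustrative Python only; the names are made up):

```python
# Sketch (not PostgreSQL code) of the run-time mapping from a tuple's
# tableoid to its ModifyTable subplan index: a hash table for the
# general case, fronted by a one-entry cache for the common case.

class SubplanLookup:
    def __init__(self, oid_to_index):
        self.table = dict(oid_to_index)   # the hash table
        self.last_oid = None              # one-entry cache
        self.last_index = None
        self.hash_lookups = 0             # instrumentation for this demo

    def lookup(self, oid):
        if oid == self.last_oid:          # usually one comparison per tuple
            return self.last_index
        self.hash_lookups += 1            # cache miss: probe the hash table
        self.last_oid = oid
        self.last_index = self.table[oid]
        return self.last_index

m = SubplanLookup({1001: 0, 1002: 1, 1003: 2})
oids = [1001] * 5 + [1002] * 5 + [1003] * 5   # long runs per relation
indexes = [m.lookup(o) for o in oids]
print(m.hash_lookups)   # 3 -- one real probe per run of tuples
```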
Typical subplans are going to deliver\nlong runs of tuples from the same target relation, so as long as we\nmaintain a one-element cache of the last lookup result, it's only about\none comparison per tuple most of the time.\n\n> I think the real\n> problem here is that the deparsing code isn't in on the secret.\n\nAgreed; if we spent some more effort on that end of it, maybe we\ncould do something different here. I'm not very sure what good\noutput would look like though. A key advantage of tableoid is\nthat that's already a thing people know about.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Mar 2021 13:24:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, Mar 31, 2021 at 1:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I agree that we have some existing behavior that's related to this, but\n> it's still messy, and I couldn't find any evidence that suggested that the\n> runtime lookup costs anything. Typical subplans are going to deliver\n> long runs of tuples from the same target relation, so as long as we\n> maintain a one-element cache of the last lookup result, it's only about\n> one comparison per tuple most of the time.\n\nOK, that's pretty fair.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Mar 2021 13:37:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Thu, Apr 1, 2021 at 12:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Tue, Mar 30, 2021 at 1:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Here's a v13 patchset that I feel pretty good about.\n>\n> > Thanks. After staring at this for a day now, I do too.\n>\n> Thanks for looking! Pushed after some more docs-fiddling and a final\n> read-through. 
I think the only code change from v13 is that I decided\n> to convert ExecGetJunkAttribute into a \"static inline\", since it's\n> just a thin wrapper around slot_getattr(). Doesn't really help\n> performance much, but it shouldn't hurt.\n\nThanks a lot.\n\n> > ... The drop as partition count increases can\n> > be attributed to the fact that with a generic plan, there are a bunch\n> > of steps that must be done across all partitions, such as\n> > AcquireExecutorLocks(), ExecCheckRTPerms(), per-result-rel\n> > initializations in ExecInitModifyTable(), etc., even with the patch applied.\n> > As mentioned upthread, [1] can help with the last bit.\n>\n> I'll try to find some time to look at that one.\n>\n> I'd previously been thinking that we couldn't be lazy about applying\n> most of those steps at executor startup, but on second thought,\n> ExecCheckRTPerms should be a no-op anyway for child tables.\n\nYeah, David did say that in that thread:\n\nhttps://www.postgresql.org/message-id/CAApHDvqPzsMcKLRpmNpUW97PmaQDTmD7b2BayEPS5AN4LY-0bA%40mail.gmail.com\n\n> So\n> maybe it would be okay to not take a lock, much less do the other\n> stuff, until the particular child table is stored into.\n\nNote that the patch over there doesn't do anything about the\nAcquireExecutorLocks() bottleneck, as there are some yet-unsolved race\nconditions that were previously discussed here:\n\nhttps://www.postgresql.org/message-id/flat/CAKJS1f_kfRQ3ZpjQyHC7=PK9vrhxiHBQFZ+hc0JCwwnRKkF3hg@mail.gmail.com\n\nAnyway, I'll post the rebased version of the patch that we do have.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Apr 2021 11:09:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Thu, 1 Apr 2021 at 15:09, Amit Langote <amitlangote09@gmail.com> wrote:\n> Note that the patch over there doesn't do anything about\n> 
AcquireExecutorLocks() bottleneck, as there are some yet-unsolved race\n> conditions that were previously discussed here:\n>\n> https://www.postgresql.org/message-id/flat/CAKJS1f_kfRQ3ZpjQyHC7=PK9vrhxiHBQFZ+hc0JCwwnRKkF3hg@mail.gmail.com\n\nThe only way I can think of so far to get around having to lock all\nchild partitions is pretty drastic and likely it's too late to change\nanyway. The idea is that when you attach an existing table as a\npartition, you can no longer access it directly. We'd likely have\nto invent a new relkind for partitions for that to work. This would\nmean that we shouldn't ever need to lock individual partitions as all\nthings which access them must do so via the parent. I imagined that we\nmight still be able to truncate partitions with an ALTER TABLE ...\nTRUNCATE PARTITION ...; or something. It feels a bit late for all\nthat now though, especially so with all the CONCURRENTLY work Alvaro\nhas done to make ATTACH/DETACH not take an AEL.\n\nAdditionally, I imagine doing this would upset a lot of people who do\ndirect accesses to partitions.\n\nRobert also mentioned some ideas in [1]. However, it seems that might\nhave a performance impact on locking in general.\n\nI think some other DBMSes might not allow direct access to partitions.\nPerhaps the locking issue is the reason why.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoYbtm1uuDne3rRp_uNA2RFiBwXX1ngj3RSLxOfc3oS7cQ%40mail.gmail.com", "msg_date": "Thu, 1 Apr 2021 16:06:29 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "On Wed, Mar 31, 2021 at 9:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I reran some of the performance tests I did earlier (I've attached the\n> modified test running script for reference):\n\nFor the archives' sake: having noticed a mistake in my benchmarking script, I\nrepeated the tests. 
Apparently, all pgbench runs were performed with\n40 column tables, not 10, 20, and 40 as shown in the results.\n\n> pgbench -n -T60 -M{simple|prepared} -f nojoin.sql\n>\n> nojoin.sql:\n>\n> \\set a random(1, 1000000)\n> update test_table t set b = :a where a = :a;\n>\n> ...and here are the tps figures:\n>\n> -Msimple\n>\n> nparts 10cols 20cols 40cols\n>\n> master:\n> 64 10112 9878 10920\n> 128 9662 10691 10604\n> 256 9642 9691 10626\n> 1024 8589 9675 9521\n>\n> patched:\n> 64 13493 13463 13313\n> 128 13305 13447 12705\n> 256 13190 13161 12954\n> 1024 11791 11408 11786\n>\n> No variation across various column counts, but the patched improves\n> the tps for each case by quite a bit.\n\n-Msimple\n\npre-86dc90056:\nnparts 10cols 20cols 40cols\n\n64 11345 10650 10327\n128 11014 11005 10069\n256 10759 10827 10133\n1024 9518 10314 8418\n\npost-86dc90056:\n 10cols 20cols 40cols\n\n64 13829 13677 13207\n128 13521 12843 12418\n256 13071 13006 12926\n1024 12351 12036 11739\n\nMy previous assertion that the tps does vary across different column\ncounts seems to hold in this case, that is, -Msimple mode.\n\n> -Mprepared (plan_cache_mode=force_generic_plan)\n>\n> master:\n> 64 2286 2285 2266\n> 128 1163 1127 1091\n> 256 531 519 544\n> 1024 77 71 69\n>\n> patched:\n> 64 6522 6612 6275\n> 128 3568 3625 3372\n> 256 1847 1710 1823\n> 1024 433 427 386\n>\n> Again, no variation across columns counts.\n\n-Mprepared\n\npre-86dc90056:\n 10cols 20cols 40cols\n\n64 3059 2851 2154\n128 1675 1366 1100\n256 685 658 544\n1024 126 85 76\n\npost-86dc90056:\n 10cols 20cols 40cols\n\n64 7665 6966 6444\n128 4211 3968 3389\n256 2205 2020 1783\n1024 545 499 389\n\nIn the -Mprepared case however, it does vary, both before and after\n86dc90056. For the post-86dc90056 case, I suspect it's because\nExecBuildUpdateProjection(), whose complexity is O(number-of-columns),\nbeing performed for *all* partitions in ExecInitModifyTable(). 
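A back-of-the-envelope model of that suspicion (hypothetical cost units, for illustration only — not measured data):

```python
# Toy cost model for why -Mprepared throughput falls with both
# partition count and column count: with a generic plan,
# ExecInitModifyTable() does O(ncols) projection-building work per
# result relation, while a custom plan touches only one partition.

def init_cost(nparts, ncols, generic):
    per_projection = ncols          # ExecBuildUpdateProjection is O(ncols)
    relations = nparts if generic else 1
    return relations * per_projection

print(init_cost(1024, 40, generic=True))    # 40960
print(init_cost(1024, 40, generic=False))   # 40
```

Under this model, column count multiplies the per-partition startup cost only for generic plans, which matches the -Mprepared spread above.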
In the\n-Msimple case, it would always be for only one partition, so it\ndoesn't make that much of a difference to ExecInitModifyTable() time.\n\n> tps drops as partition\n> count increases both before and after applying the patches, although\n> patched performs way better, which is mainly attributable to the\n> ability of UPDATE to now utilize runtime pruning (actually of the\n> Append under ModifyTable). The drop as partition count increases can\n> be attributed to the fact that with a generic plan, there are a bunch\n> of steps that must be done across all partitions, such as\n> AcquireExecutorLocks(), ExecCheckRTPerms(), per-result-rel\n> initializations in ExecInitModifyTable(), etc., even with the patch applied.\n> As mentioned upthread, [1] can help with the last bit.\n\nHere are the numbers after applying that patch:\n\n 10cols 20cols 40cols\n\n64 17185 17064 16625\n128 12261 11648 11968\n256 7662 7564 7439\n1024 2252 2185 2101\n\nWith the patch, ExecBuildUpdateProjection() will be called only once\nirrespective of the number of partitions, almost like the -Msimple\ncase, so the tps across column counts does not vary by much.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Apr 2021 16:41:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "Hi\r\n\r\nAfter 86dc900, in \" src/include/nodes/pathnodes.h \",\r\nI noticed that it uses the word \" partitioned UPDATE \" in the comment above struct RowIdentityVarInfo.\r\n\r\nBut, it seems \" inherited UPDATE \" is used in the rest of the places.\r\nIs it better to keep them consistent by using \" inherited UPDATE \" ?\r\n\r\nBest regards,\r\nhouzj\r\n\r\n", "msg_date": "Mon, 17 May 2021 06:07:39 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: making update/delete of inheritance trees scale 
better" }, { "msg_contents": "Hi,\n\nOn Mon, May 17, 2021 at 3:07 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Hi\n>\n> After 86dc900, In \" src/include/nodes/pathnodes.h \",\n> I noticed that it uses the word \" partitioned UPDATE \" in the comment above struct RowIdentityVarInfo.\n>\n> But, it seems \" inherited UPDATE \" is used in the rest of places.\n> Is it better to keep them consistent by using \" inherited UPDATE \" ?\n\nYeah, I would not be opposed to fixing that. Like this maybe (patch attached)?\n\n- * In partitioned UPDATE/DELETE it's important for child partitions to share\n+ * In an inherited UPDATE/DELETE it's important for child tables to share\n\nWhile at it, I also noticed that the comment refers to the\nrow_identity_vars, but it can be unclear which variable it is\nreferring to. So fixed that too.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 17 May 2021 15:32:51 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: making update/delete of inheritance trees scale better" }, { "msg_contents": "> On Mon, May 17, 2021 at 3:07 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Hi\r\n> >\r\n> > After 86dc900, In \" src/include/nodes/pathnodes.h \", I noticed that it\r\n> > uses the word \" partitioned UPDATE \" in the comment above struct\r\n> RowIdentityVarInfo.\r\n> >\r\n> > But, it seems \" inherited UPDATE \" is used in the rest of places.\r\n> > Is it better to keep them consistent by using \" inherited UPDATE \" ?\r\n> \r\n> Yeah, I would not be opposed to fixing that. 
Like this maybe (patch attached)?\r\n\r\n> - * In partitioned UPDATE/DELETE it's important for child partitions to share\r\n> + * In an inherited UPDATE/DELETE it's important for child tables to \r\n> + share\r\n\r\nThanks for the change, it looks good to me.\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Mon, 17 May 2021 09:18:26 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: making update/delete of inheritance trees scale better" } ]
[ { "msg_contents": "Hi, hackers!\n\n*** The problem ***\nI'm investigating some cases of reduced database performance due to MultiXactOffsetLock contention (80% MultiXactOffsetLock, 20% IO DataFileRead).\nThe problem manifested itself during index repack and constraint validation, both being effectively full table scans.\nThe database workload contains a lot of select for share\\select for update queries. I've tried to construct a synthetic workload generator and could not achieve a similar lock configuration: I see a lot of different locks in wait events, particularly a lot more MultiXactMemberLocks. But from my experiments with the synthetic workload, contention on MultiXactOffsetLock can be reduced by increasing NUM_MXACTOFFSET_BUFFERS=8 to bigger numbers.\n\n*** Question 1 ***\nIs it safe to increase the number of buffers of MultiXact\\All SLRUs, recompile and run the database as usual?\nI cannot experiment much with production. But I'm mostly sure that bigger buffers will solve the problem.\n\n*** Question 2 ***\nProbably, we could add GUCs for the SLRU sizes? Are there any reasons not to make them configurable? I think multis, clog, subtransactions and others will benefit from bigger buffers. But, probably, too many knobs can be confusing.\n\n*** Question 3 ***\nThe MultiXact offset lock is always taken as an exclusive lock, which effectively makes the MultiXact offset subsystem single-threaded. If someone has a good idea how to make it more concurrency-friendly, I'm willing to put some effort into this.\nProbably, I could just add LWLocks for each offset buffer page. Is it something worth doing? Or are there any hidden caveats and difficulties?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 8 May 2020 21:36:40 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> On 8 May 2020, at 21:36, Andrey M. 
Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> *** The problem ***\n> I'm investigating some cases of reduced database performance due to MultiXactOffsetLock contention (80% MultiXactOffsetLock, 20% IO DataFileRead).\n> The problem manifested itself during index repack and constraint validation, both being effectively full table scans.\n> The database workload contains a lot of select for share\\select for update queries. I've tried to construct a synthetic workload generator and could not achieve a similar lock configuration: I see a lot of different locks in wait events, particularly a lot more MultiXactMemberLocks. But from my experiments with the synthetic workload, contention on MultiXactOffsetLock can be reduced by increasing NUM_MXACTOFFSET_BUFFERS=8 to bigger numbers.\n> \n> *** Question 1 ***\n> Is it safe to increase the number of buffers of MultiXact\\All SLRUs, recompile and run the database as usual?\n> I cannot experiment much with production. But I'm mostly sure that bigger buffers will solve the problem.\n> \n> *** Question 2 ***\n> Probably, we could add GUCs for the SLRU sizes? Are there any reasons not to make them configurable? I think multis, clog, subtransactions and others will benefit from bigger buffers. But, probably, too many knobs can be confusing.\n> \n> *** Question 3 ***\n> The MultiXact offset lock is always taken as an exclusive lock, which effectively makes the MultiXact offset subsystem single-threaded. If someone has a good idea how to make it more concurrency-friendly, I'm willing to put some effort into this.\n> Probably, I could just add LWLocks for each offset buffer page. Is it something worth doing? Or are there any hidden caveats and difficulties?\n\nI've created a benchmark[0] imitating MultiXact pressure on my laptop: 7 clients are concurrently running the query \"select * from table where primary_key = ANY ($1) for share\", where $1 is an array of identifiers, so that each tuple in the table is locked by a different set of XIDs. During this benchmark I observe contention on MultiXactOffsetControlLock in pg_stat_activity\n\n Friday, 8 May 2020 15:08:37 (every 1s)\n\n  pid  |         wait_event         | wait_event_type | state  |                       query                        \n-------+----------------------------+-----------------+--------+----------------------------------------------------\n 41344 | ClientRead                 | Client          | idle   | insert into t1 select generate_series(1,1000000,1)\n 41375 | MultiXactOffsetControlLock | LWLock          | active | select * from t1 where i = ANY ($1) for share\n 41377 | MultiXactOffsetControlLock | LWLock          | active | select * from t1 where i = ANY ($1) for share\n 41378 |                            |                 | active | select * from t1 where i = ANY ($1) for share\n 41379 | MultiXactOffsetControlLock | LWLock          | active | select * from t1 where i = ANY ($1) for share\n 41381 |                            |                 | active | select * from t1 where i = ANY ($1) for share\n 41383 | MultiXactOffsetControlLock | LWLock          | active | select * from t1 where i = ANY ($1) for share\n 41385 | MultiXactOffsetControlLock | LWLock          | active | select * from t1 where i = ANY ($1) for share\n(8 rows)\n\nFinally, the benchmark measures the time to execute the select for update 42 times.\n\nI've gone ahead and created 3 patches:\n1. Configurable SLRU buffer sizes for MultiXactOffsets and MultiXactMembers\n2. Reduce locking level to shared on read of MultiXactId members\n3. Configurable cache size\n\nI've found out that:\n1. When the MultiXact working set does not fit into the buffers, benchmark times grow very high. Yet, very big buffers slow down the benchmark too. For this benchmark the optimal SLRU size is 32 pages for offsets and 64 pages for members (defaults are 8 and 16 respectively).\n2. The lock optimisation increases performance by 5% on default SLRU sizes. Actually, the benchmark does not explicitly read MultiXactId members, but when it replaces one with another it has to read the previous set. I understand that we can construct a benchmark to demonstrate dominance of any algorithm, and 5% on a synthetic workload is not a very big number. But it just makes sense to try to take a shared lock for reading.\n3. Manipulations with the cache size do not affect the benchmark at all. It's somewhat expected: the benchmark is designed to defeat the cache; either way OffsetControlLock would not be stressed.\n\nFor our workload, I think we will just increase the SLRU sizes. But the patchset may be useful for tuning and as a performance optimisation of MultiXact.\n\nAlso, MultiXacts seem not to be a very good fit for the SLRU design. I think it would be better to use a B-tree as a container. Or at least make MultiXact members extendable in-place (reserve some size when the multixact is created).\nWhen we want to extend the number of locks for a tuple, currently we will:\n1. Iterate through all SLRU buffers for offsets to read the current offset (with exclusive lock for offsets)\n2. Iterate through all buffers for members to find the current members (with exclusive lock for members)\n3. Create a new members array with +1 xid\n4. Iterate through all cache members to check whether a cache item identical to the one we are going to create already exists\n5. Iterate over 1 again for write\n6. Iterate over 2 again for write\n\nObviously this does not scale well - we cannot increase SLRU sizes indefinitely.\n\nThanks! I'd be happy to hear any feedback.\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/x4m/multixact_stress", "msg_date": "Mon, 11 May 2020 16:17:58 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 11 May 2020, at 16:17, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> I've gone ahead and created 3 patches:\n> 1. Configurable SLRU buffer sizes for MultiXactOffsets and MultiXactMembers\n> 2. Reduce locking level to shared on read of MultiXactId members\n> 3. Configurable cache size\n\nI'm looking more at MultiXact and it seems to me that we have a race condition there.\n\nWhen we create a new MultiXact we do:\n1. 
During this benchmark I observe contention of MultiXactControlLock in pg_stat_activity\n\n пятница, 8 мая 2020 г. 15:08:37 (every 1s)\n\n pid | wait_event | wait_event_type | state | query \n-------+----------------------------+-----------------+--------+----------------------------------------------------\n 41344 | ClientRead | Client | idle | insert into t1 select generate_series(1,1000000,1)\n 41375 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n 41377 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n 41378 | | | active | select * from t1 where i = ANY ($1) for share\n 41379 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n 41381 | | | active | select * from t1 where i = ANY ($1) for share\n 41383 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n 41385 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n(8 rows)\n\nFinally, the benchmark is measuring time to execute select for update 42 times.\n\nI've went ahead and created 3 patches:\n1. Configurable SLRU buffer sizes for MultiXacOffsets and MultiXactMembers\n2. Reduce locking level to shared on read of MultiXactId members\n3. Configurable cache size\n\nI've found out that:\n1. When MultiXact working set does not fit into buffers - benchmark results grow very high. Yet, very big buffers slow down benchmark too. For this benchmark optimal SLRU size id 32 pages for offsets and 64 pages for members (defaults are 8 and 16 respectively).\n2. Lock optimisation increases performance by 5% on default SLRU sizes. Actually, benchmark does not explicitly read MultiXactId members, but when it replaces one with another - it have to read previous set. I understand that we can construct benchmark to demonstrate dominance of any algorithm and 5% of synthetic workload is not a very big number. 
But it just makes sense to try to take a shared lock for reading.\n3. Manipulations with the cache size do not affect the benchmark at all. It's somewhat expected: the benchmark is designed to defeat the cache, and either way OffsetControlLock would not be stressed.\n\nFor our workload, I think we will just increase the SLRU sizes. But the patchset may be useful for tuning and as a performance optimisation of MultiXact.\n\nAlso, MultiXacts seem to be not a very good fit for the SLRU design. I think it would be better to use a B-tree as a container. Or at least to make MultiXact members extendable in place (reserve some size when a multixact is created).\nCurrently, when we want to extend the number of locks for a tuple, we will:\n1. Iterate through all SLRU buffers for offsets to read the current offset (with an exclusive lock for offsets)\n2. Iterate through all buffers for members to find the current members (with an exclusive lock for members)\n3. Create a new members array with +1 xid\n4. Iterate through all cache members to check whether there is already a cache item like the one we are going to create\n5. Iterate over step 1 again for the write\n6. Iterate over step 2 again for the write\n\nObviously this does not scale well - we cannot keep increasing SLRU sizes forever.\n\nThanks! I'd be happy to hear any feedback.\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/x4m/multixact_stress", "msg_date": "Mon, 11 May 2020 16:17:58 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 11 May 2020, at 16:17, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> I've gone ahead and created 3 patches:\n> 1. Configurable SLRU buffer sizes for MultiXactOffsets and MultiXactMembers\n> 2. Reduce locking level to shared on read of MultiXactId members\n> 3. Configurable cache size\n\nI'm looking more at MultiXact and it seems to me that we have a race condition there.\n\nWhen we create a new MultiXact we do:\n1. 
Generate new MultiXactId under MultiXactGenLock\n2. Record new mxid with members and offset to WAL\n3. Write offset to SLRU under MultiXactOffsetControlLock\n4. Write members to SLRU under MultiXactMemberControlLock\n\nWhen we read MultiXact we do:\n1. Retrieve offset by mxid from SLRU under MultiXactOffsetControlLock\n2. If offset is 0 - it's not filled in at step 4 of previous algorithm, we sleep and goto 1\n3. Retrieve members from SLRU under MultiXactMemberControlLock\n4. ..... what we do if there are just zeroes because step 4 is not executed yet? Nothing, return empty members list.\n\nWhat am I missing?\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 13 May 2020 23:08:37 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "At Wed, 13 May 2020 23:08:37 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n> \n> \n> > 11 мая 2020 г., в 16:17, Andrey M. Borodin <x4mmm@yandex-team.ru> написал(а):\n> > \n> > I've went ahead and created 3 patches:\n> > 1. Configurable SLRU buffer sizes for MultiXacOffsets and MultiXactMembers\n> > 2. Reduce locking level to shared on read of MultiXactId members\n> > 3. Configurable cache size\n> \n> I'm looking more at MultiXact and it seems to me that we have a race condition there.\n> \n> When we create a new MultiXact we do:\n> 1. Generate new MultiXactId under MultiXactGenLock\n> 2. Record new mxid with members and offset to WAL\n> 3. Write offset to SLRU under MultiXactOffsetControlLock\n> 4. Write members to SLRU under MultiXactMemberControlLock\n\nBut, don't we hold exclusive lock on the buffer through all the steps\nabove?\n\n> When we read MultiXact we do:\n> 1. Retrieve offset by mxid from SLRU under MultiXactOffsetControlLock\n> 2. If offset is 0 - it's not filled in at step 4 of previous algorithm, we sleep and goto 1\n> 3. 
Retrieve members from SLRU under MultiXactMemberControlLock\n> 4. ..... what we do if there are just zeroes because step 4 is not executed yet? Nothing, return empty members list.\n\nSo transactions never see such incomplete mxids, I believe.\n\n> What am I missing?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 May 2020 10:25:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 14 May 2020, at 06:25, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> At Wed, 13 May 2020 23:08:37 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n>> \n>> \n>>> 11 May 2020, at 16:17, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>> \n>>> I've gone ahead and created 3 patches:\n>>> 1. Configurable SLRU buffer sizes for MultiXactOffsets and MultiXactMembers\n>>> 2. Reduce locking level to shared on read of MultiXactId members\n>>> 3. Configurable cache size\n>> \n>> I'm looking more at MultiXact and it seems to me that we have a race condition there.\n>> \n>> When we create a new MultiXact we do:\n>> 1. Generate new MultiXactId under MultiXactGenLock\n>> 2. Record new mxid with members and offset to WAL\n>> 3. Write offset to SLRU under MultiXactOffsetControlLock\n>> 4. Write members to SLRU under MultiXactMemberControlLock\n> \n> But, don't we hold exclusive lock on the buffer through all the steps\n> above?\nYes... unless the MultiXact is observed on a standby. This could lead to observing an inconsistent snapshot: one of the lockers committed a tuple delete, but the standby still sees the tuple as alive.\n\n>> When we read MultiXact we do:\n>> 1. Retrieve offset by mxid from SLRU under MultiXactOffsetControlLock\n>> 2. If offset is 0 - it's not filled in at step 4 of previous algorithm, we sleep and goto 1\n>> 3. Retrieve members from SLRU under MultiXactMemberControlLock\n>> 4. ..... 
what we do if there are just zeroes because step 4 is not executed yet? Nothing, return empty members list.\n> \n> So transactions never see such incomplete mxids, I believe.\nI've observed the sleep in step 2. I believe it's possible to observe the special effects of step 4 too.\nMaybe we could add a lock on the standby to avoid this 1000us wait? Sometimes it hits standbys hard: if someone locks a whole table on the primary, all seq scans on the standbys pile up behind it with MultiXactOffsetControlLock contention.\n\nIt looks like this:\n#0 0x00007fcd56896ff7 in __GI___select (nfds=nfds@entry=0, readfds=readfds@entry=0x0, writefds=writefds@entry=0x0, exceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7ffd83376fe0) at ../sysdeps/unix/sysv/linux/select.c:41\n#1 0x000056186e0d54bd in pg_usleep (microsec=microsec@entry=1000) at ./build/../src/port/pgsleep.c:56\n#2 0x000056186dd5edf2 in GetMultiXactIdMembers (from_pgupgrade=0 '\\000', onlyLock=<optimized out>, members=0x7ffd83377080, multi=3106214809) at ./build/../src/backend/access/transam/multixact.c:1370\n#3 GetMultiXactIdMembers () at ./build/../src/backend/access/transam/multixact.c:1202\n#4 0x000056186dd2d2d9 in MultiXactIdGetUpdateXid (xmax=<optimized out>, t_infomask=<optimized out>) at ./build/../src/backend/access/heap/heapam.c:7039\n#5 0x000056186dd35098 in HeapTupleGetUpdateXid (tuple=tuple@entry=0x7fcba3b63d58) at ./build/../src/backend/access/heap/heapam.c:7080\n#6 0x000056186e0cd0f8 in HeapTupleSatisfiesMVCC (htup=<optimized out>, snapshot=0x56186f44a058, buffer=230684) at ./build/../src/backend/utils/time/tqual.c:1091\n#7 0x000056186dd2d922 in heapgetpage (scan=scan@entry=0x56186f4c8e78, page=page@entry=3620) at ./build/../src/backend/access/heap/heapam.c:439\n#8 0x000056186dd2ea7c in 
heapgettup_pagemode (key=0x0, nkeys=0, dir=ForwardScanDirection, scan=0x56186f4c8e78) at ./build/../src/backend/access/heap/heapam.c:1034\n#9 heap_getnext (scan=scan@entry=0x56186f4c8e78, direction=direction@entry=ForwardScanDirection) at ./build/../src/backend/access/heap/heapam.c:1801\n#10 0x000056186de84f51 in SeqNext (node=node@entry=0x56186f4a4f78) at ./build/../src/backend/executor/nodeSeqscan.c:81\n#11 0x000056186de6a3f1 in ExecScanFetch (recheckMtd=0x56186de84ef0 <SeqRecheck>, accessMtd=0x56186de84f20 <SeqNext>, node=0x56186f4a4f78) at ./build/../src/backend/executor/execScan.c:97\n#12 ExecScan (node=0x56186f4a4f78, accessMtd=0x56186de84f20 <SeqNext>, recheckMtd=0x56186de84ef0 <SeqRecheck>) at ./build/../src/backend/executor/execScan.c:164\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 14 May 2020 10:19:42 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "At Thu, 14 May 2020 10:19:42 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n> >> I'm looking more at MultiXact and it seems to me that we have a race condition there.\n> >> \n> >> When we create a new MultiXact we do:\n> >> 1. Generate new MultiXactId under MultiXactGenLock\n> >> 2. Record new mxid with members and offset to WAL\n> >> 3. Write offset to SLRU under MultiXactOffsetControlLock\n> >> 4. Write members to SLRU under MultiXactMemberControlLock\n> > \n> > But, don't we hold exclusive lock on the buffer through all the steps\n> > above?\n> Yes...Unless MultiXact is observed on StandBy. This could lead to observing inconsistent snapshot: one of lockers committed tuple delete, but standby sees it as alive.\n\nAh, right. I looked from GetNewMultiXactId. Actually\nXLOG_MULTIXACT_CREATE_ID is not protected from concurrent reference to\nthe creating mxact id. And GetMultiXactIdMembers is considering that\ncase.\n\n> >> When we read MultiXact we do:\n> >> 1. 
Retrieve offset by mxid from SLRU under MultiXactOffsetControlLock\n> >> 2. If offset is 0 - it's not filled in at step 4 of previous algorithm, we sleep and goto 1\n> >> 3. Retrieve members from SLRU under MultiXactMemberControlLock\n> >> 4. ..... what we do if there are just zeroes because step 4 is not executed yet? Nothing, return empty members list.\n> > \n> > So transactions never see such incomplete mxids, I believe.\n> I've observed the sleep in step 2. I believe it's possible to observe the special effects of step 4 too.\n> Maybe we could add a lock on the standby to avoid this 1000us wait? Sometimes it hits standbys hard: if someone locks a whole table on the primary, all seq scans on the standbys pile up behind it with MultiXactOffsetControlLock contention.\n\nGetMultiXactIdMembers believes that 4 is successfully done if 2\nreturned a valid offset, but actually that is not obvious.\n\nIf we add a single giant lock just to isolate, say,\nGetMultiXactIdMember and RecordNewMultiXact, it reduces concurrency\nunnecessarily. Perhaps we need a finer-grained locking key for the standby\nthat works similarly to the buffer lock on the primary and doesn't cause\nconflicts between irrelevant mxids.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 May 2020 15:16:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 14 May 2020, at 11:16, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> At Thu, 14 May 2020 10:19:42 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n>>>> I'm looking more at MultiXact and it seems to me that we have a race condition there.\n>>>> \n>>>> When we create a new MultiXact we do:\n>>>> 1. Generate new MultiXactId under MultiXactGenLock\n>>>> 2. Record new mxid with members and offset to WAL\n>>>> 3. Write offset to SLRU under MultiXactOffsetControlLock\n>>>> 4. 
Write members to SLRU under MultiXactMemberControlLock\n>>> \n>>> But, don't we hold exclusive lock on the buffer through all the steps\n>>> above?\n>> Yes...Unless MultiXact is observed on StandBy. This could lead to observing inconsistent snapshot: one of lockers committed tuple delete, but standby sees it as alive.\n> \n> Ah, right. I looked from GetNewMultiXactId. Actually\n> XLOG_MULTIXACT_CREATE_ID is not protected from concurrent reference to\n> the creating mxact id. And GetMultiXactIdMembers is considering that\n> case.\n> \n>>>> When we read MultiXact we do:\n>>>> 1. Retrieve offset by mxid from SLRU under MultiXactOffsetControlLock\n>>>> 2. If offset is 0 - it's not filled in at step 4 of previous algorithm, we sleep and goto 1\n>>>> 3. Retrieve members from SLRU under MultiXactMemberControlLock\n>>>> 4. ..... what we do if there are just zeroes because step 4 is not executed yet? Nothing, return empty members list.\n>>> \n>>> So transactions never see such incomplete mxids, I believe.\n>> I've observed sleep in step 2. I believe it's possible to observe special effects of step 4 too.\n>> Maybe we could add lock on standby to dismiss this 1000us wait? Sometimes it hits hard on Standbys: if someone is locking whole table on primary - all seq scans on standbys follow him with MultiXactOffsetControlLock contention.\n> \n> GetMultiXactIdMembers believes that 4 is successfully done if 2\n> returned valid offset, but actually that is not obvious.\n> \n> If we add a single giant lock just to isolate ,say,\n> GetMultiXactIdMember and RecordNewMultiXact, it reduces concurrency\n> unnecessarily. Perhaps we need finer-grained locking-key for standby\n> that works similary to buffer lock on primary, that doesn't cause\n> confilicts between irrelevant mxids.\n> \nWe can just replay members before offsets. 
If offset is already there - members are there too.\nBut I'd be happy if we could mitigate those 1000us too - with a hint about the last mxid state in a shared MX state, for example.\n\nActually, if we read an empty mxid array instead of something that is not replayed just yet, it's not a consistency problem, because a transaction in this mxid could not have committed before we started. ISTM.\nSo instead of a fix, we probably can just add a comment, if this reasoning is correct.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 14 May 2020 11:44:01 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "At Thu, 14 May 2020 11:44:01 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n> > GetMultiXactIdMembers believes that 4 is successfully done if 2\n> > returned a valid offset, but actually that is not obvious.\n> > \n> > If we add a single giant lock just to isolate, say,\n> > GetMultiXactIdMember and RecordNewMultiXact, it reduces concurrency\n> > unnecessarily. Perhaps we need a finer-grained locking key for the standby\n> > that works similarly to the buffer lock on the primary and doesn't cause\n> > conflicts between irrelevant mxids.\n> > \n> We can just replay members before offsets. If offset is already there - members are there too.\n> But I'd be happy if we could mitigate those 1000us too - with a hint about the last mxid state in a shared MX state, for example.\n\nGenerally in such cases, condition variables would work. In the\nattached PoC, the reader side gets no penalty in the \"likely\" code\npath. The writer side always calls ConditionVariableBroadcast but the\nwaiter list is empty in almost all cases. 
But I couldn't cause the\nsituation where the sleep 1000u is reached.\n\n> Actually, if we read empty mxid array instead of something that is replayed just yet - it's not a problem of inconsistency, because transaction in this mxid could not commit before we started. ISTM.\n> So instead of fix, we, probably, can just add a comment. If this reasoning is correct.\n\nThe step 4 of the reader side reads the members of the target mxid. It\nis already written if the offset of the *next* mxid is filled-in.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 15 May 2020 09:03:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 15 мая 2020 г., в 05:03, Kyotaro Horiguchi <horikyota.ntt@gmail.com> написал(а):\n> \n> At Thu, 14 May 2020 11:44:01 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n>>> GetMultiXactIdMembers believes that 4 is successfully done if 2\n>>> returned valid offset, but actually that is not obvious.\n>>> \n>>> If we add a single giant lock just to isolate ,say,\n>>> GetMultiXactIdMember and RecordNewMultiXact, it reduces concurrency\n>>> unnecessarily. Perhaps we need finer-grained locking-key for standby\n>>> that works similary to buffer lock on primary, that doesn't cause\n>>> confilicts between irrelevant mxids.\n>>> \n>> We can just replay members before offsets. If offset is already there - members are there too.\n>> But I'd be happy if we could mitigate those 1000us too - with a hint about last maixd state in a shared MX state, for example.\n> \n> Generally in such cases, condition variables would work. In the\n> attached PoC, the reader side gets no penalty in the \"likely\" code\n> path. The writer side always calls ConditionVariableBroadcast but the\n> waiter list is empty in almost all cases. 
But I couldn't cause the\n> situation where the sleep 1000u is reached.\nThanks! That really looks like a good solution without magic timeouts. Beautiful!\nI think I can create a temporary extension which calls the MultiXact API and tests edge cases like this 1000us wait.\nThis extension will also be useful for me to assess the impact of bigger buffers, reduced read locking (as in my 2nd patch) and other tweaks.\n\n>> Actually, if we read an empty mxid array instead of something that is not replayed just yet, it's not a consistency problem, because a transaction in this mxid could not have committed before we started. ISTM.\n>> So instead of a fix, we probably can just add a comment, if this reasoning is correct.\n> \n> The step 4 of the reader side reads the members of the target mxid. It\n> is already written if the offset of the *next* mxid is filled-in.\nMost often - yes, but members are not guaranteed to be filled in order. Those who win MXMemberControlLock will write first.\nBut nobody can read members of an MXID before it is returned. And its members will be written before returning the MXID.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 15 May 2020 14:01:46 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "At Fri, 15 May 2020 14:01:46 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n> \n> \n> > 15 May 2020, at 05:03, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > \n> > At Thu, 14 May 2020 11:44:01 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n> >>> GetMultiXactIdMembers believes that 4 is successfully done if 2\n> >>> returned a valid offset, but actually that is not obvious.\n> >>> \n> >>> If we add a single giant lock just to isolate, say,\n> >>> GetMultiXactIdMember and RecordNewMultiXact, it reduces concurrency\n> >>> unnecessarily. 
Perhaps we need a finer-grained locking key for the standby\n> >>> that works similarly to the buffer lock on the primary and doesn't cause\n> >>> conflicts between irrelevant mxids.\n> >>> \n> >> We can just replay members before offsets. If offset is already there - members are there too.\n> >> But I'd be happy if we could mitigate those 1000us too - with a hint about the last mxid state in a shared MX state, for example.\n> > \n> > Generally in such cases, condition variables would work. In the\n> > attached PoC, the reader side gets no penalty in the \"likely\" code\n> > path. The writer side always calls ConditionVariableBroadcast but the\n> > waiter list is empty in almost all cases. But I couldn't cause the\n> > situation where the sleep 1000u is reached.\n> Thanks! That really looks like a good solution without magic timeouts. Beautiful!\n> I think I can create a temporary extension which calls the MultiXact API and tests edge cases like this 1000us wait.\n> This extension will also be useful for me to assess the impact of bigger buffers, reduced read locking (as in my 2nd patch) and other tweaks.\n\nHappy to hear that. It would need to use a timeout just in case, though.\n\n> >> Actually, if we read an empty mxid array instead of something that is not replayed just yet, it's not a consistency problem, because a transaction in this mxid could not have committed before we started. ISTM.\n> >> So instead of a fix, we probably can just add a comment, if this reasoning is correct.\n> > \n> > The step 4 of the reader side reads the members of the target mxid. It\n> > is already written if the offset of the *next* mxid is filled-in.\n> Most often - yes, but members are not guaranteed to be filled in order. Those who win MXMemberControlLock will write first.\n> But nobody can read members of an MXID before it is returned. And its members will be written before returning the MXID.\n\nYeah, right. 
Otherwise assertion failure happens.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 20 May 2020 13:54:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> On 15 May 2020, at 02:03, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> Generally in such cases, condition variables would work. In the\n> attached PoC, the reader side gets no penalty in the \"likely\" code\n> path. The writer side always calls ConditionVariableBroadcast but the\n> waiter list is empty in almost all cases. But I couldn't cause the\n> situation where the sleep 1000u is reached.\n\nThe submitted patch no longer applies, can you please submit an updated\nversion? I'm marking the patch Waiting on Author in the meantime.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 2 Jul 2020 14:02:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> 2 июля 2020 г., в 17:02, Daniel Gustafsson <daniel@yesql.se> написал(а):\n> \n>> On 15 May 2020, at 02:03, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n>> Generally in such cases, condition variables would work. In the\n>> attached PoC, the reader side gets no penalty in the \"likely\" code\n>> path. The writer side always calls ConditionVariableBroadcast but the\n>> waiter list is empty in almost all cases. But I couldn't cause the\n>> situation where the sleep 1000u is reached.\n> \n> The submitted patch no longer applies, can you please submit an updated\n> version? I'm marking the patch Waiting on Author in the meantime.\nThanks, Daniel! PFA V2\n\nBest regards, Andrey Borodin.", "msg_date": "Wed, 8 Jul 2020 12:03:54 +0500", "msg_from": "\"Andrey M. 
Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 8 Jul 2020, at 09:03, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> \n>> 2 июля 2020 г., в 17:02, Daniel Gustafsson <daniel@yesql.se> написал(а):\n>> \n>>> On 15 May 2020, at 02:03, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> \n>>> Generally in such cases, condition variables would work. In the\n>>> attached PoC, the reader side gets no penalty in the \"likely\" code\n>>> path. The writer side always calls ConditionVariableBroadcast but the\n>>> waiter list is empty in almost all cases. But I couldn't cause the\n>>> situation where the sleep 1000u is reached.\n>> \n>> The submitted patch no longer applies, can you please submit an updated\n>> version? I'm marking the patch Waiting on Author in the meantime.\n> Thanks, Daniel! PFA V2\n\nThis version too has stopped applying according to the CFbot. I've moved it to\nthe next commitfest as we're out of time in this one and it was only pointed\nout now, but kept it Waiting on Author.\n\ncheers ./daniel\n\n", "msg_date": "Sun, 2 Aug 2020 23:30:21 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 08.07.2020 10:03, Andrey M. Borodin wrote:\n>\n>> 2 июля 2020 г., в 17:02, Daniel Gustafsson <daniel@yesql.se> написал(а):\n>>\n>>> On 15 May 2020, at 02:03, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>> Generally in such cases, condition variables would work. In the\n>>> attached PoC, the reader side gets no penalty in the \"likely\" code\n>>> path. The writer side always calls ConditionVariableBroadcast but the\n>>> waiter list is empty in almost all cases. But I couldn't cause the\n>>> situation where the sleep 1000u is reached.\n>> The submitted patch no longer applies, can you please submit an updated\n>> version? 
I'm marking the patch Waiting on Author in the meantime.\n> Thanks, Daniel! PFA V2\n>\n> Best regards, Andrey Borodin.\n\n1) The first patch is sensible and harmless, so I think it is ready for \ncommitter. I haven't tested the performance impact, though.\n\n2) I like the initial proposal to make various SLRU buffers configurable; however, I doubt whether it really solves the problem, or just moves it to another place.\n\nThe previous patch you sent was based on some version that contained changes for other SLRU buffer numbers: 'multixact_offsets_slru_buffers' \nand 'multixact_members_slru_buffers'. Have you just forgotten to attach \nthem? (The patch message \"[PATCH v2 2/4]\" hints that you had 4 patches.)\nMeanwhile, I attach the rebased patch to calm down the CFbot. The \nchanges are trivial.\n\n2.1) I think that both min and max values for this parameter are too \nextreme. Have you tested them?\n\n+               &multixact_local_cache_entries,\n+               256, 2, INT_MAX / 2,\n\n2.2) MAX_CACHE_ENTRIES is not used anymore, so it can be deleted.\n\n3) No changes for the third patch. I just renamed it for consistency.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 28 Aug 2020 21:08:34 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi, Anastasia!\n\n> 28 Aug 2020, at 23:08, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:\n> \n> 1) The first patch is sensible and harmless, so I think it is ready for committer. 
I haven't tested the performance impact, though.\n> \n> 2) I like the initial proposal to make various SLRU buffers configurable; however, I doubt whether it really solves the problem, or just moves it to another place.\n> \n> The previous patch you sent was based on some version that contained changes for other SLRU buffer numbers: 'multixact_offsets_slru_buffers' and 'multixact_members_slru_buffers'. Have you just forgotten to attach them? (The patch message \"[PATCH v2 2/4]\" hints that you had 4 patches.)\n> Meanwhile, I attach the rebased patch to calm down the CFbot. The changes are trivial.\n> \n> 2.1) I think that both min and max values for this parameter are too extreme. Have you tested them?\n> \n> + &multixact_local_cache_entries,\n> + 256, 2, INT_MAX / 2,\n> \n> 2.2) MAX_CACHE_ENTRIES is not used anymore, so it can be deleted.\n> \n> 3) No changes for the third patch. I just renamed it for consistency.\n\nThank you for your review. \n\nIndeed, I had a 4th patch with tests, but these tests didn't work well: I still did not manage to stress the SLRUs enough to reproduce the problem from production...\n\nYou are absolutely correct in point 2: I did only tests with sane values. And observed extreme performance degradation with values ~ 64 megabytes. I do not know which highest value we should pick. 1Gb? Or the highest possible functioning value?\n\nI greatly appreciate your review, sorry for such a long delay. Thanks!\n\nBest regards, Andrey Borodin.", "msg_date": "Mon, 28 Sep 2020 19:41:41 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 28.09.2020 17:41, Andrey M. Borodin wrote:\n> Hi, Anastasia!\n>\n>> 28 Aug 2020, at 23:08, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:\n>>\n>> 1) The first patch is sensible and harmless, so I think it is ready for committer. 
I haven't tested the performance impact, though.\n>>\n>> 2) I like the initial proposal to make various SLRU buffers configurable, however, I doubt if it really solves the problem, or just moves it to another place?\n>>\n>> The previous patch you sent was based on some version that contained changes for other slru buffers numbers: 'multixact_offsets_slru_buffers' and 'multixact_members_slru_buffers'. Have you just forgot to attach them? The patch message \"[PATCH v2 2/4]\" hints that you had 4 patches)\n>> Meanwhile, I attach the rebased patch to calm down the CFbot. The changes are trivial.\n>>\n>> 2.1) I think that both min and max values for this parameter are too extreme. Have you tested them?\n>>\n>> + &multixact_local_cache_entries,\n>> + 256, 2, INT_MAX / 2,\n>>\n>> 2.2) MAX_CACHE_ENTRIES is not used anymore, so it can be deleted.\n>>\n>> 3) No changes for third patch. I just renamed it for consistency.\n> Thank you for your review.\n>\n> Indeed, I had 4th patch with tests, but these tests didn't work well: I still did not manage to stress SLRUs to reproduce problem from production...\n>\n> You are absolutely correct in point 2: I did only tests with sane values. And observed extreme performance degradation with values ~ 64 megabytes. I do not know which highest values should we pick? 1Gb? Or highest possible functioning value?\n\nI would go with the values that we consider adequate for this setting. \nAs I see, there is no strict rule about it in guc.c and many variables \nhave large border values. Anyway, we need to explain it at least in the \ndocumentation and code comments.\n\nIt seems that the default was conservative enough, so it can be also a \nminimal value too. As for maximum, can you provide any benchmark \nresults? 
If we have a peak and a noticeable performance degradation \nafter that, we can use it to calculate the preferable maxvalue.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Wed, 7 Oct 2020 17:23:28 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi!\n\nOn Mon, Sep 28, 2020 at 5:41 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > 28 авг. 2020 г., в 23:08, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> написал(а):\n> >\n> > 1) The first patch is sensible and harmless, so I think it is ready for committer. I haven't tested the performance impact, though.\n> >\n> > 2) I like the initial proposal to make various SLRU buffers configurable, however, I doubt if it really solves the problem, or just moves it to another place?\n> >\n> > The previous patch you sent was based on some version that contained changes for other slru buffers numbers: 'multixact_offsets_slru_buffers' and 'multixact_members_slru_buffers'. Have you just forgot to attach them? The patch message \"[PATCH v2 2/4]\" hints that you had 4 patches)\n> > Meanwhile, I attach the rebased patch to calm down the CFbot. The changes are trivial.\n> >\n> > 2.1) I think that both min and max values for this parameter are too extreme. Have you tested them?\n> >\n> > + &multixact_local_cache_entries,\n> > + 256, 2, INT_MAX / 2,\n> >\n> > 2.2) MAX_CACHE_ENTRIES is not used anymore, so it can be deleted.\n> >\n> > 3) No changes for third patch. I just renamed it for consistency.\n>\n> Thank you for your review.\n>\n> Indeed, I had 4th patch with tests, but these tests didn't work well: I still did not manage to stress SLRUs to reproduce problem from production...\n>\n> You are absolutely correct in point 2: I did only tests with sane values. 
And observed extreme performance degradation with values ~ 64 megabytes. I do not know which highest values should we pick? 1Gb? Or highest possible functioning value?\n>\n> I greatly appreciate your review, sorry for so long delay. Thanks!\n\nI took a look at this patchset.\n\nThe 1st and 3rd patches look good to me. I made just minor improvements.\n1) There is still a case when SimpleLruReadPage_ReadOnly() relocks the\nSLRU lock, which is already taken in exclusive mode. I've evaded this\nby passing the lock mode as a parameter to\nSimpleLruReadPage_ReadOnly().\n3) CHECK_FOR_INTERRUPTS() is not needed anymore, because it's called\ninside ConditionVariableSleep() if needed. Also, no current wait\nevents use slashes, and I don't think we should introduce slashes\nhere. Even if we should, then we should also rename existing wait\nevents to be consistent with a new one. So, I've renamed the new wait\nevent to remove the slash.\n\nRegarding the patch 2. I see the current documentation in the patch\ndoesn't explain to the user how to set the new parameter. I think we\nshould give users an idea what workloads need high values of\nmultixact_local_cache_entries parameter and what doesn't. Also, we\nshould explain the negative aspects of high values\nmultixact_local_cache_entries. Ideally, we should get the advantage\nwithout overloading users with new nontrivial parameters, but I don't\nhave a particular idea here.\n\nI'd like to propose committing 1 and 3, but leave 2 for further review.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 26 Oct 2020 04:05:26 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 26 окт. 2020 г., в 06:05, Alexander Korotkov <aekorotkov@gmail.com> написал(а):\n> \n> Hi!\n> \n> On Mon, Sep 28, 2020 at 5:41 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>> 28 авг. 
2020 г., в 23:08, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> написал(а):\n>>> \n>>> 1) The first patch is sensible and harmless, so I think it is ready for committer. I haven't tested the performance impact, though.\n>>> \n>>> 2) I like the initial proposal to make various SLRU buffers configurable, however, I doubt if it really solves the problem, or just moves it to another place?\n>>> \n>>> The previous patch you sent was based on some version that contained changes for other slru buffers numbers: 'multixact_offsets_slru_buffers' and 'multixact_members_slru_buffers'. Have you just forgot to attach them? The patch message \"[PATCH v2 2/4]\" hints that you had 4 patches)\n>>> Meanwhile, I attach the rebased patch to calm down the CFbot. The changes are trivial.\n>>> \n>>> 2.1) I think that both min and max values for this parameter are too extreme. Have you tested them?\n>>> \n>>> + &multixact_local_cache_entries,\n>>> + 256, 2, INT_MAX / 2,\n>>> \n>>> 2.2) MAX_CACHE_ENTRIES is not used anymore, so it can be deleted.\n>>> \n>>> 3) No changes for third patch. I just renamed it for consistency.\n>> \n>> Thank you for your review.\n>> \n>> Indeed, I had 4th patch with tests, but these tests didn't work well: I still did not manage to stress SLRUs to reproduce problem from production...\n>> \n>> You are absolutely correct in point 2: I did only tests with sane values. And observed extreme performance degradation with values ~ 64 megabytes. I do not know which highest values should we pick? 1Gb? Or highest possible functioning value?\n>> \n>> I greatly appreciate your review, sorry for so long delay. Thanks!\n> \n> I took a look at this patchset.\n> \n> The 1st and 3rd patches look good to me. I made just minor improvements.\n> 1) There is still a case when SimpleLruReadPage_ReadOnly() relocks the\n> SLRU lock, which is already taken in exclusive mode. 
I've evaded this\n> by passing the lock mode as a parameter to\n> SimpleLruReadPage_ReadOnly().\n> 3) CHECK_FOR_INTERRUPTS() is not needed anymore, because it's called\n> inside ConditionVariableSleep() if needed. Also, no current wait\n> events use slashes, and I don't think we should introduce slashes\n> here. Even if we should, then we should also rename existing wait\n> events to be consistent with a new one. So, I've renamed the new wait\n> event to remove the slash.\n> \n> Regarding the patch 2. I see the current documentation in the patch\n> doesn't explain to the user how to set the new parameter. I think we\n> should give users an idea what workloads need high values of\n> multixact_local_cache_entries parameter and what doesn't. Also, we\n> should explain the negative aspects of high values\n> multixact_local_cache_entries. Ideally, we should get the advantage\n> without overloading users with new nontrivial parameters, but I don't\n> have a particular idea here.\n> \n> I'd like to propose committing 1 and 3, but leave 2 for further review.\n\nThanks for your review, Alexander! \n+1 for avoiding double locking in SimpleLruReadPage_ReadOnly().\nOther changes seem correct to me too.\n\n\nI've tried to find optimal value for cache size and it seems to me that it affects multixact scalability much less than sizes of offsets\\members buffers. 
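To make the cache under discussion concrete: it is essentially a small per-backend lookup table mapping a member set to an already-created MultiXactId, so that writing the same member set twice reuses the first id instead of allocating a new multixact. The following is a minimal sketch with illustrative names and sizes only; it is not the actual mXactCache code in multixact.c:

```c
#include <string.h>
#include <assert.h>

/*
 * Toy model of a per-backend multixact cache.  A real implementation
 * keys on a sorted member array; a hit means no new multixact (and no
 * SLRU write) is needed.  Names and sizes here are made up.
 */
#define CACHE_SLOTS 4          /* stands in for the GUC being discussed */
#define MAX_MEMBERS 8

typedef struct {
    int nmembers;
    int members[MAX_MEMBERS];  /* assumed sorted by the caller */
    int multi;                 /* cached multixact id */
    int valid;
} CacheEntry;

static CacheEntry cache[CACHE_SLOTS];
static int next_multi = 100;   /* fake multixact id generator */

/* Return a cached id for this member set, or "create" a new one. */
int get_multi(const int *members, int nmembers)
{
    static int victim = 0;
    CacheEntry *e;
    int i;

    for (i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].valid && cache[i].nmembers == nmembers &&
            memcmp(cache[i].members, members, nmembers * sizeof(int)) == 0)
            return cache[i].multi;      /* cache hit: reuse the id */

    e = &cache[victim];                 /* miss: evict round-robin */
    victim = (victim + 1) % CACHE_SLOTS;
    e->nmembers = nmembers;
    memcpy(e->members, members, nmembers * sizeof(int));
    e->multi = next_multi++;
    e->valid = 1;
    return e->multi;
}
```

With only a handful of slots, the hit rate depends entirely on how repetitive a workload's member sets are, which fits the observation above that the cache size matters less here than the SLRU buffer sizes.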
I concur that patch 2 of the patchset does not seem documented enough.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 26 Oct 2020 20:45:29 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Mon, Oct 26, 2020 at 6:45 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Thanks for your review, Alexander!\n> +1 for avoiding double locking in SimpleLruReadPage_ReadOnly().\n> Other changes seem correct to me too.\n>\n>\n> I've tried to find optimal value for cache size and it seems to me that it affects multixact scalability much less than sizes of offsets\\members buffers. I concur that patch 2 of the patchset does not seem documented enough.\n\nThank you. I've made a few more minor adjustments to the patchset.\nI'm going to push 0001 and 0003 if there are no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 27 Oct 2020 20:02:20 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Tue, Oct 27, 2020 at 8:02 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Mon, Oct 26, 2020 at 6:45 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > Thanks for your review, Alexander!\n> > +1 for avoiding double locking in SimpleLruReadPage_ReadOnly().\n> > Other changes seem correct to me too.\n> >\n> >\n> > I've tried to find optimal value for cache size and it seems to me that it affects multixact scalability much less than sizes of offsets\\members buffers. I concur that patch 2 of the patchset does not seem documented enough.\n>\n> Thank you. 
I've made a few more minor adjustments to the patchset.\n> I'm going to push 0001 and 0003 if there are no objections.\n\nI get that patchset v5 doesn't pass the tests due to a typo in an assert.\nThe fixed version is attached.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 27 Oct 2020 20:23:26 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Tue, Oct 27, 2020 at 08:23:26PM +0300, Alexander Korotkov wrote:\n>On Tue, Oct 27, 2020 at 8:02 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> On Mon, Oct 26, 2020 at 6:45 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> > Thanks for your review, Alexander!\n>> > +1 for avoiding double locking in SimpleLruReadPage_ReadOnly().\n>> > Other changes seem correct to me too.\n>> >\n>> >\n>> > I've tried to find optimal value for cache size and it seems to me that it affects multixact scalability much less than sizes of offsets\\members buffers. I concur that patch 2 of the patchset does not seem documented enough.\n>>\n>> Thank you. I've made a few more minor adjustments to the patchset.\n>> I'm going to push 0001 and 0003 if there are no objections.\n>\n>I get that patchset v5 doesn't pass the tests due to a typo in an assert.\n>The fixed version is attached.\n>\n\nI did a quick review on this patch series. A couple comments:\n\n\n0001\n----\n\nThis looks quite suspicious to me - SimpleLruReadPage_ReadOnly is\nchanged to return information about what lock was used, merely to allow\nthe callers to do an Assert() that the value is not LW_NONE.\n\nIMO we could achieve exactly the same thing by passing a simple flag\nthat would say 'make sure we got a lock' or something like that. In\nfact, aren't all callers doing the assert? That'd mean we can just do\nthe check always, without the flag. 
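For readers of the archive, the two API shapes being weighed here can be sketched side by side: a call that merely guarantees some lock is held, versus one that reports the mode it actually holds, which is what lets a retrying caller skip a release and re-acquire. These are hypothetical names, not the patch's code:

```c
#include <assert.h>

typedef enum { LK_NONE, LK_SHARED, LK_EXCLUSIVE } LockMode;

static LockMode held = LK_NONE;  /* toy stand-in for one SLRU control lock */
static int lock_traffic = 0;     /* counts acquire/release operations */

static void lk_acquire(LockMode m) { held = m; lock_traffic++; }
static void lk_release(void)       { held = LK_NONE; lock_traffic++; }

/*
 * Mode-returning shape, in spirit: the caller passes the mode it
 * already holds; the function only takes the lock if none is held,
 * and reports the mode held on return.  A caller that upgraded to
 * exclusive can call again with no lock traffic at all.
 */
LockMode read_page_reporting(LockMode mode_held)
{
    if (mode_held == LK_NONE) {
        lk_acquire(LK_SHARED);
        return LK_SHARED;
    }
    return mode_held;   /* reuse the caller's lock: no relock */
}
```

The flag-style variant cannot express "I already hold this lock exclusively", so a retry path would pay the relock either way; whether that saved relock is measurable is the open question being raised here.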
(I see GetMultiXactIdMembers does\ntwo calls and only checks the second result, but I wonder if that's\nintended or omission.)\n\nIn any case, it'd make the lwlock.c changes unnecessary, I think.\n\n\n0002\n----\n\nSpecifies the number cached MultiXact by backend. Any SLRU lookup ...\n\nshould be 'number of cached ...'\n\n\n0003\n----\n\n * Conditional variable for waiting till the filling of the next multixact\n * will be finished. See GetMultiXactIdMembers() and RecordNewMultiXact()\n * for details.\n\nPerhaps 'till the next multixact is filled' or 'gets full' would be\nbetter. Not sure.\n\n\nThis thread started with a discussion about making the SLRU sizes\nconfigurable, but this patch version only adds a local cache. Does this\nachieve the same goal, or would we still gain something by having GUCs\nfor the SLRUs?\n\nIf we're claiming this improves performance, it'd be good to have some\nworkload demonstrating that and measurements. I don't see anything like\nthat in this thread, so it's a bit hand-wavy. Can someone share details\nof such workload (even synthetic one) and some basic measurements?\n\n\nregards\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 02:36:51 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Tomas, thanks for looking into this!\n\n> 28 окт. 2020 г., в 06:36, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> \n> \n> This thread started with a discussion about making the SLRU sizes\n> configurable, but this patch version only adds a local cache. Does this\n> achieve the same goal, or would we still gain something by having GUCs\n> for the SLRUs?\n> \n> If we're claiming this improves performance, it'd be good to have some\n> workload demonstrating that and measurements. 
I don't see anything like\n> that in this thread, so it's a bit hand-wavy. Can someone share details\n> of such workload (even synthetic one) and some basic measurements?\n\nAll patches in this thread aim at the same goal: improving performance in the presence of MultiXact lock contention.\nI could not build a synthetic reproduction of the problem; however, I did some MultiXact stressing here [0]. It's a clumsy test program, because it is still not clear to me which workload parameters trigger MultiXact lock contention. In the generic case I was encountering other locks like *GenLock: XidGenLock, MultixactGenLock etc. Yet our production system has encountered this problem approximately once a month throughout this year.\n\nThe test program locks different sets of tuples for share in the presence of concurrent full scans.\nTo produce a set of locks we choose one of 14 bits. If a row number has this bit set to 0, we lock it.\nI've been measuring the time to lock all rows 3 times for each of the 14 bits, observing the total time to set all locks.\nDuring the test I was observing locks in pg_stat_activity; if they did not contain enough MultiXact locks, I tuned parameters further (number of concurrent clients, number of bits, select queries etc).\n\nWhy is it so complicated? It seems that other reproductions of the problem were encountering other locks.\n\nLet's describe the patches in this thread from the POV of these tests.\n\n*** Configurable SLRU buffers for MultiXact members and offsets.\nFrom the tests it is clear that high and low values for these buffers affect the test time. Here are times for one test run with different offsets and members sizes [1]\nOur production currently runs with (numbers are pages of buffers)\n+#define NUM_MXACTOFFSET_BUFFERS 32\n+#define NUM_MXACTMEMBER_BUFFERS 64\nAnd, looking back at incidents in summer and fall 2020, it seems like this mostly helped.\n\nBut it's hard to give tuning advice based on these test results. 
Values (32,64) produce a 10% better result than the current hardcoded values (8,16). In the generic case this is not what someone should tune first.\n\n*** Configurable caches of MultiXacts.\nThe tests were specifically designed to beat caches. So, according to the tests, the bigger the cache is, the more time it takes to accomplish the test [2].\nAnyway, the cache is local to a backend and its purpose is deduplication of written MultiXacts, not enhancing reads.\n\n*** Using the advantage of SimpleLruReadPage_ReadOnly() in MultiXacts.\nThis simply aligns MultiXact with Subtransactions and CLOG. Other SLRUs already take advantage of reading SLRU pages with a shared lock.\nOn synthetic tests without background selects this patch adds another ~4.7% of performance [3] against [4]. This improvement seems consistent between different parameter values, yet within measurement deviation (see the difference between warmup run [5] and closing run [6]).\nAll in all, these attempts to measure impact are hand-wavy too. But it makes sense to use a consistent approach among similar subsystems (MultiXacts, Subtrans, CLOG etc).\n\n*** Reduce sleep in GetMultiXactIdMembers() on standby.\nThe problem with pg_usleep(1000L) within GetMultiXactIdMembers() manifests on standbys during contention on MultiXactOffsetControlLock. It's even harder to reproduce.\nYet it seems obvious that reducing the sleep to a shorter time frame will make the count of sleeping backends smaller.\n\nFor consistency I've returned the patch with SLRU buffer configs to the patchset (other patches are intact). 
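As a sanity check on the page counts quoted above: SLRU buffers are sized in pages of BLCKSZ (8 kB by default), and each multixact offset is a 4-byte entry, so the buffer settings translate into cached entries as follows. This is back-of-the-envelope arithmetic only; members are packed together with per-xid flag bits, so their per-page figure is somewhat lower and is omitted here:

```c
#include <assert.h>

#define BLCKSZ 8192   /* default PostgreSQL block size */

/* One multixact offset is a 4-byte entry per multixact id. */
static int offsets_per_page(void) { return BLCKSZ / 4; }

/* Total offsets held by an SLRU of the given size in pages. */
static int cached_offsets(int buffer_pages)
{
    return buffer_pages * offsets_per_page();
}
```

So the stock 8 offset pages cover 16384 recent multixacts, while the 32-page production setting above covers 65536; contention shows up when the working set of hot multixacts outruns these windows.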
But I'm mostly concerned about patches 1 and 3.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/x4m/multixact_stress\n[1] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L22-L39\n[2] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L83-L99\n[3] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L9\n[4] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L29\n[5] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L3\n[6] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L19", "msg_date": "Wed, 28 Oct 2020 12:34:58 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi, Tomas!\n\nThank you for your review.\n\nOn Wed, Oct 28, 2020 at 4:36 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I did a quick review on this patch series. A couple comments:\n>\n>\n> 0001\n> ----\n>\n> This looks quite suspicious to me - SimpleLruReadPage_ReadOnly is\n> changed to return information about what lock was used, merely to allow\n> the callers to do an Assert() that the value is not LW_NONE.\n\nYes, but this is not merely to allow callers to do an Assert().\nSometimes in multixacts it could save us some relocks. So, we can\nskip relocking lock to exclusive mode if it's in exclusive already.\nAdding Assert() to every caller is probably overkill.\n\n> IMO we could achieve exactly the same thing by passing a simple flag\n> that would say 'make sure we got a lock' or something like that. In\n> fact, aren't all callers doing the assert? That'd mean we can just do\n> the check always, without the flag. (I see GetMultiXactIdMembers does\n> two calls and only checks the second result, but I wonder if that's\n> intended or omission.)\n\nHaving just the flag is exactly what the original version by Andrey\ndid. 
But if we have to read two multixact offsets pages or multiple\nmembers pages in one GetMultiXactIdMembers(), then it relocks\nfrom exclusive mode to exclusive mode. I decided that once we set out\nto optimize these locks, this situation was worth avoiding.\n\n> In any case, it'd make the lwlock.c changes unnecessary, I think.\n\nI agree that it would be better to not touch lwlock.c. But I didn't\nfind a way to avoid relocking from exclusive mode to exclusive mode without\ntouching lwlock.c or making the code cumbersome in other places.\n\n> 0002\n> ----\n>\n> Specifies the number cached MultiXact by backend. Any SLRU lookup ...\n>\n> should be 'number of cached ...'\n\nSounds reasonable.\n\n> 0003\n> ----\n>\n> * Conditional variable for waiting till the filling of the next multixact\n> * will be finished. See GetMultiXactIdMembers() and RecordNewMultiXact()\n> * for details.\n>\n> Perhaps 'till the next multixact is filled' or 'gets full' would be\n> better. Not sure.\n\nSounds reasonable as well.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 28 Oct 2020 22:36:39 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Wed, Oct 28, 2020 at 10:36:39PM +0300, Alexander Korotkov wrote:\n>Hi, Tomas!\n>\n>Thank you for your review.\n>\n>On Wed, Oct 28, 2020 at 4:36 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> I did a quick review on this patch series. A couple comments:\n>>\n>>\n>> 0001\n>> ----\n>>\n>> This looks quite suspicious to me - SimpleLruReadPage_ReadOnly is\n>> changed to return information about what lock was used, merely to allow\n>> the callers to do an Assert() that the value is not LW_NONE.\n>\n>Yes, but this is not merely to allow callers to do an Assert().\n>Sometimes in multixacts it could save us some relocks. 
So, we can\n>skip relocking lock to exclusive mode if it's in exclusive already.\n>Adding Assert() to every caller is probably overkill.\n>\n\nHmm, OK. That can only happen in GetMultiXactIdMembers, which is the\nonly place where we do retry, right? Do we actually know this makes any\nmeasurable difference? It seems we're mostly imagining that it might\nhelp, but we don't have any actual proof of that (e.g. a workload which\nwe might benchmark). Or am I missing something?\n\nFor me, the extra conditions make it way harder to reason about the\nbehavior of the code, and I can't convince myself it's worth it.\n\n\n>> IMO we could achieve exactly the same thing by passing a simple flag\n>> that would say 'make sure we got a lock' or something like that. In\n>> fact, aren't all callers doing the assert? That'd mean we can just do\n>> the check always, without the flag. (I see GetMultiXactIdMembers does\n>> two calls and only checks the second result, but I wonder if that's\n>> intended or omission.)\n>\n>Having just the flag is exactly what the original version by Andrey\n>did. But if we have to read two multixact offsets pages or multiple\n>members page in one GetMultiXactIdMembers()), then it does relocks\n>from exclusive mode to exclusive mode. I decide that once we decide\n>to optimize this locks, this situation is nice to evade.\n>\n>> In any case, it'd make the lwlock.c changes unnecessary, I think.\n>\n>I agree that it would be better to not touch lwlock.c. But I didn't\n>find a way to evade relocking exclusive mode to exclusive mode without\n>touching lwlock.c or making code cumbersome in other places.\n>\n\nHmm. 
OK.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 29 Oct 2020 00:21:01 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi,\n\nOn Wed, Oct 28, 2020 at 12:34:58PM +0500, Andrey Borodin wrote:\n>Tomas, thanks for looking into this!\n>\n>> 28 окт. 2020 г., в 06:36, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>\n>>\n>> This thread started with a discussion about making the SLRU sizes\n>> configurable, but this patch version only adds a local cache. Does this\n>> achieve the same goal, or would we still gain something by having GUCs\n>> for the SLRUs?\n>>\n>> If we're claiming this improves performance, it'd be good to have some\n>> workload demonstrating that and measurements. I don't see anything like\n>> that in this thread, so it's a bit hand-wavy. Can someone share details\n>> of such workload (even synthetic one) and some basic measurements?\n>\n>All patches in this thread aim at the same goal: improve performance in presence of MultiXact locks contention.\n>I could not build synthetical reproduction of the problem, however I did some MultiXact stressing here [0]. It's a clumsy test program, because it still is not clear to me which parameters of workload trigger MultiXact locks contention. In generic case I was encountering other locks like *GenLock: XidGenLock, MultixactGenLock etc. Yet our production system encounters this problem approximately once in a month through this year.\n>\n>Test program locks for share different set of tuples in presence of concurrent full scans.\n>To produce a set of locks we choose one of 14 bits. 
If a row number has this bit set to 0 we add lock it.\n>I've been measuring time to lock all rows 3 time for each of 14 bits, observing total time to set all locks.\n>During the test I was observing locks in pg_stat_activity, if they did not contain enough MultiXact locks I was tuning parameters further (number of concurrent clients, number of bits, select queries etc).\n>\n>Why is it so complicated? It seems that other reproductions of a problem were encountering other locks.\n>\n\nIt's not my intention to be mean or anything like that, but to me this\nmeans we don't really understand the problem we're trying to solve. Had\nwe understood it, we should be able to construct a workload reproducing\nthe issue ...\n\nI understand what the individual patches are doing, and maybe those\nchanges are desirable in general. But without any benchmarks from a\nplausible workload I find it hard to convince myself that:\n\n(a) it actually will help with the issue you're observing on production\n\nand \n\n(b) it's actually worth the extra complexity (e.g. the lwlock changes)\n\n\nI'm willing to invest some of my time into reviewing/testing this, but I\nthink we badly need better insight into the issue, so that we can build\na workload reproducing it. Perhaps collecting some perf profiles and a\nsample of the queries might help, but I assume you already tried that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 29 Oct 2020 00:32:43 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> 29 окт. 2020 г., в 04:32, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> \n> It's not my intention to be mean or anything like that, but to me this\n> means we don't really understand the problem we're trying to solve. 
Had\n> we understood it, we should be able to construct a workload reproducing\n> the issue ...\n> \n> I understand what the individual patches are doing, and maybe those\n> changes are desirable in general. But without any benchmarks from a\n> plausible workload I find it hard to convince myself that:\n> \n> (a) it actually will help with the issue you're observing on production\n> \n> and \n> (b) it's actually worth the extra complexity (e.g. the lwlock changes)\n> \n> \n> I'm willing to invest some of my time into reviewing/testing this, but I\n> think we badly need better insight into the issue, so that we can build\n> a workload reproducing it. Perhaps collecting some perf profiles and a\n> sample of the queries might help, but I assume you already tried that.\n\nThanks, Tomas! This totally makes sense.\n\nIndeed, collecting queries did not help yet. We have loadtest environment equivalent to production (but with 10x less shards), copy of production workload queries. But the problem does not manifest there.\nWhy do I think the problem is in MultiXacts?\nHere is a chart with number of wait events on each host\n\n\nDuring the problem MultiXactOffsetControlLock and SLRURead dominate all other lock types. After primary switchover to another node SLRURead continued for a bit there, then disappeared.\nBacktraces on standbys during the problem show that most of backends are sleeping in pg_sleep(1000L) and are not included into wait stats on these charts.\n\nCurrently I'm considering writing test that directly calls MultiXactIdExpand(), MultiXactIdCreate(), and GetMultiXactIdMembers() from an extension. 
How do you think, would benchmarks in such tests be meaningful?\n\n\nThanks!\n\nBest regards, Andrey Borodin.", "msg_date": "Thu, 29 Oct 2020 12:08:21 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Thu, Oct 29, 2020 at 12:08:21PM +0500, Andrey Borodin wrote:\n>\n>\n>> 29 окт. 2020 г., в 04:32, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>\n>> It's not my intention to be mean or anything like that, but to me this\n>> means we don't really understand the problem we're trying to solve. Had\n>> we understood it, we should be able to construct a workload reproducing\n>> the issue ...\n>>\n>> I understand what the individual patches are doing, and maybe those\n>> changes are desirable in general. But without any benchmarks from a\n>> plausible workload I find it hard to convince myself that:\n>>\n>> (a) it actually will help with the issue you're observing on production\n>>\n>> and\n>> (b) it's actually worth the extra complexity (e.g. the lwlock changes)\n>>\n>>\n>> I'm willing to invest some of my time into reviewing/testing this, but I\n>> think we badly need better insight into the issue, so that we can build\n>> a workload reproducing it. Perhaps collecting some perf profiles and a\n>> sample of the queries might help, but I assume you already tried that.\n>\n>Thanks, Tomas! This totally makes sense.\n>\n>Indeed, collecting queries did not help yet. We have loadtest environment equivalent to production (but with 10x less shards), copy of production workload queries. But the problem does not manifest there.\n>Why do I think the problem is in MultiXacts?\n>Here is a chart with number of wait events on each host\n>\n>\n>During the problem MultiXactOffsetControlLock and SLRURead dominate all other lock types. 
After primary switchover to another node SLRURead continued for a bit there, then disappeared.\n\nOK, so most of this seems to be due to SLRURead and\nMultiXactOffsetControlLock. Could it be that there were too many\nmultixact members, triggering autovacuum to prevent multixact\nwraparound? That might generate a lot of IO on the SLRU. Are you\nmonitoring the size of the pg_multixact directory?\n\n>Backtraces on standbys during the problem show that most of backends are sleeping in pg_sleep(1000L) and are not included into wait stats on these charts.\n>\n>Currently I'm considering writing test that directly calls MultiXactIdExpand(), MultiXactIdCreate(), and GetMultiXactIdMembers() from an extension. How do you think, would benchmarks in such tests be meaningful?\n>\n\nI don't know. I'd much rather have a SQL-level benchmark than an\nextension doing this kind of stuff.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 29 Oct 2020 14:49:33 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 29 окт. 2020 г., в 18:49, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> \n> On Thu, Oct 29, 2020 at 12:08:21PM +0500, Andrey Borodin wrote:\n>> \n>> \n>>> 29 окт. 2020 г., в 04:32, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>> \n>>> It's not my intention to be mean or anything like that, but to me this\n>>> means we don't really understand the problem we're trying to solve. Had\n>>> we understood it, we should be able to construct a workload reproducing\n>>> the issue ...\n>>> \n>>> I understand what the individual patches are doing, and maybe those\n>>> changes are desirable in general. 
But without any benchmarks from a\n>>> plausible workload I find it hard to convince myself that:\n>>> \n>>> (a) it actually will help with the issue you're observing on production\n>>> \n>>> and\n>>> (b) it's actually worth the extra complexity (e.g. the lwlock changes)\n>>> \n>>> \n>>> I'm willing to invest some of my time into reviewing/testing this, but I\n>>> think we badly need better insight into the issue, so that we can build\n>>> a workload reproducing it. Perhaps collecting some perf profiles and a\n>>> sample of the queries might help, but I assume you already tried that.\n>> \n>> Thanks, Tomas! This totally makes sense.\n>> \n>> Indeed, collecting queries did not help yet. We have loadtest environment equivalent to production (but with 10x less shards), copy of production workload queries. But the problem does not manifest there.\n>> Why do I think the problem is in MultiXacts?\n>> Here is a chart with number of wait events on each host\n>> \n>> \n>> During the problem MultiXactOffsetControlLock and SLRURead dominate all other lock types. After primary switchover to another node SLRURead continued for a bit there, then disappeared.\n> \n> OK, so most of this seems to be due to SLRURead and\n> MultiXactOffsetControlLock. Could it be that there were too many\n> multixact members, triggering autovacuum to prevent multixact\n> wraparound? That might generate a lot of IO on the SLRU. Are you\n> monitoring the size of the pg_multixact directory?\n\nYes, we had some problems with 'multixact \"members\" limit exceeded' long time ago.\nWe tuned autovacuum_multixact_freeze_max_age = 200000000 and vacuum_multixact_freeze_table_age = 75000000 (half of defaults) and since then did not ever encounter this problem (~5 months).\nBut the MultiXactOffsetControlLock problem persists. Partially the problem was solved by adding more shards. 
But when one of shards encounters a problem it's either MultiXacts or vacuum causing relation truncation (unrelated to this thread).\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 2 Nov 2020 17:45:33 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi,\n\nAfter the issue reported in [1] got fixed, I've restarted the multi-xact\nstress test, hoping to reproduce the issue. But so far no luck :-(\n\nI've started slightly different tests on two machines - on one machine\nI've done this:\n\n a) init.sql\n\n create table t (a int);\n insert into t select i from generate_series(1,100000000) s(i);\n alter table t add primary key (a);\n\n b) select.sql\n\n SELECT * FROM t\n WHERE a = (1+mod(abs(hashint4(extract(epoch from now())::int)),\n 100000000)) FOR KEY SHARE;\n\n c) pgbench -n -c 32 -j 8 -f select.sql -T $((24*3600)) test\n\nThe idea is to have large table and many clients hitting a small random\nsubset of the rows. 
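One detail of the select.sql predicate above worth spelling out: hashint4() is applied to the current epoch second, so within any given second every client computes the same target row, and 32 backends all taking FOR KEY SHARE on one tuple is exactly the situation that turns a plain row lock into a multixact. A sketch of the row-picking logic, using a stand-in integer mixer rather than PostgreSQL's actual hashint4():

```c
#include <assert.h>
#include <stdint.h>

#define TABLE_ROWS 100000000L

/* Stand-in 32-bit mixer; any hash with decent dispersion plays the
 * same role that hashint4() plays in the query above. */
static int32_t mix(int32_t x)
{
    uint32_t h = (uint32_t) x;
    h ^= h >> 16;
    h *= 0x45d9f3bU;
    h ^= h >> 16;
    return (int32_t) h;
}

/* Mirrors: 1 + mod(abs(hashint4(epoch)), 100000000) */
static long pick_row(int32_t epoch_seconds)
{
    int64_t h = (int64_t) mix(epoch_seconds);
    if (h < 0)
        h = -h;
    return (long) (1 + (h % TABLE_ROWS));
}
```

Every call with the same epoch second yields the same row, so the "small random subset" is really one hot row per second, drifting across the table over the 24-hour run.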
A sample of wait events from ~24h run looks like this:\n\n e_type | e_name | sum\n ----------+----------------------+----------\n LWLock | BufferContent | 13913863\n | | 7194679\n LWLock | WALWrite | 1710507\n Activity | LogicalLauncherMain | 726599\n Activity | AutoVacuumMain | 726127\n Activity | WalWriterMain | 725183\n Activity | CheckpointerMain | 604694\n Client | ClientRead | 599900\n IO | WALSync | 502904\n Activity | BgWriterMain | 378110\n Activity | BgWriterHibernate | 348464\n IO | WALWrite | 129412\n LWLock | ProcArray | 6633\n LWLock | WALInsert | 5714\n IO | SLRUWrite | 2580\n IPC | ProcArrayGroupUpdate | 2216\n LWLock | XactSLRU | 2196\n Timeout | VacuumDelay | 1078\n IPC | XactGroupUpdate | 737\n LWLock | LockManager | 503\n LWLock | WALBufMapping | 295\n LWLock | MultiXactMemberSLRU | 267\n IO | DataFileWrite | 68\n LWLock | BufferIO | 59\n IO | DataFileRead | 27\n IO | DataFileFlush | 14\n LWLock | MultiXactGen | 7\n LWLock | BufferMapping | 1\n\nSo, nothing particularly interesting - there certainly are not many wait\nevents related to SLRU.\n\nOn the other machine I did this:\n\n a) init.sql\n create table t (a int primary key);\n insert into t select i from generate_series(1,1000) s(i);\n\n b) select.sql\n select * from t for key share;\n\n c) pgbench -n -c 32 -j 8 -f select.sql -T $((24*3600)) test\n\nand the wait events (24h run too) look like this:\n\n e_type | e_name | sum\n -----------+-----------------------+----------\n LWLock | BufferContent | 20804925\n | | 2575369\n Activity | LogicalLauncherMain | 745780\n Activity | AutoVacuumMain | 745292\n Activity | WalWriterMain | 740507\n Activity | CheckpointerMain | 737691\n Activity | BgWriterHibernate | 731123\n LWLock | WALWrite | 570107\n IO | WALSync | 452603\n Client | ClientRead | 151438\n BufferPin | BufferPin | 23466\n LWLock | WALInsert | 21631\n IO | WALWrite | 19050\n LWLock | ProcArray | 15082\n Activity | BgWriterMain | 14655\n IPC | ProcArrayGroupUpdate | 7772\n LWLock | 
WALBufMapping | 3555\n IO | SLRUWrite | 1951\n LWLock | MultiXactGen | 1661\n LWLock | MultiXactMemberSLRU | 359\n LWLock | MultiXactOffsetSLRU | 242\n LWLock | XactSLRU | 141\n IPC | XactGroupUpdate | 104\n LWLock | LockManager | 28\n IO | DataFileRead | 4\n IO | ControlFileSyncUpdate | 1\n Timeout | VacuumDelay | 1\n IO | WALInitWrite | 1\n\nAlso nothing particularly interesting - few SLRU wait events.\n\nSo unfortunately this does not really reproduce the SLRU locking issues\nyou're observing - clearly, there has to be something else triggering\nit. Perhaps this workload is too simplistic, or maybe we need to run\ndifferent queries. Or maybe the hw needs to be somewhat different (more\nCPUs? different storage?)\n\n\n[1]\nhttps://www.postgresql.org/message-id/20201104013205.icogbi773przyny5@development\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 10 Nov 2020 01:13:22 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 10 нояб. 2020 г., в 05:13, Tomas Vondra <tomas.vondra@enterprisedb.com> написал(а):\n> After the issue reported in [1] got fixed, I've restarted the multi-xact\n> stress test, hoping to reproduce the issue. But so far no luck :-(\n\n\nTomas, many thanks for looking into this. I figured out that to make multixact sets bigger transactions must hang for a while and lock large set of tuples. But not continuous range to avoid locking on buffer_content.\nI did not manage to implement this via pgbench, that's why I was trying to hack on separate go program. But, essentially, no luck either.\nI was observing something resemblant though\n\n пятница, 8 мая 2020 г. 
15:08:37 (every 1s)\n\n pid | wait_event | wait_event_type | state | query \n-------+----------------------------+-----------------+--------+----------------------------------------------------\n 41344 | ClientRead | Client | idle | insert into t1 select generate_series(1,1000000,1)\n 41375 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n 41377 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n 41378 | | | active | select * from t1 where i = ANY ($1) for share\n 41379 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n 41381 | | | active | select * from t1 where i = ANY ($1) for share\n 41383 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n 41385 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n(8 rows)\n\nbut this picture was not stable.\n\nHow do you collect wait events for aggregation? just insert into some table with cron?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 10 Nov 2020 11:16:49 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 11/10/20 7:16 AM, Andrey Borodin wrote:\n> \n> \n>> 10 нояб. 2020 г., в 05:13, Tomas Vondra <tomas.vondra@enterprisedb.com> написал(а):\n>> After the issue reported in [1] got fixed, I've restarted the multi-xact\n>> stress test, hoping to reproduce the issue. But so far no luck :-(\n> \n> \n> Tomas, many thanks for looking into this. I figured out that to make multixact sets bigger transactions must hang for a while and lock large set of tuples. But not continuous range to avoid locking on buffer_content.\n> I did not manage to implement this via pgbench, that's why I was trying to hack on separate go program. 
But, essentially, no luck either.\n> I was observing something resemblant though\n> \n> пятница, 8 мая 2020 г. 15:08:37 (every 1s)\n> \n> pid | wait_event | wait_event_type | state | query \n> -------+----------------------------+-----------------+--------+----------------------------------------------------\n> 41344 | ClientRead | Client | idle | insert into t1 select generate_series(1,1000000,1)\n> 41375 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n> 41377 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n> 41378 | | | active | select * from t1 where i = ANY ($1) for share\n> 41379 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n> 41381 | | | active | select * from t1 where i = ANY ($1) for share\n> 41383 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n> 41385 | MultiXactOffsetControlLock | LWLock | active | select * from t1 where i = ANY ($1) for share\n> (8 rows)\n> \n> but this picture was not stable.\n> \n\nSeems we haven't made much progress in reproducing the issue :-( I guess\nwe'll need to know more about the machine where this happens. Is there\nanything special about the hardware/config? Are you monitoring size of\nthe pg_multixact directory?\n\n> How do you collect wait events for aggregation? just insert into some table with cron?\n> \n\nNo, I have a simple shell script (attached) sampling data from\npg_stat_activity regularly. 
Then I load it into a table and aggregate to\nget a summary.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 10 Nov 2020 19:07:07 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Wed, Nov 11, 2020 at 7:07 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Seems we haven't made much progress in reproducing the issue :-( I guess\n> we'll need to know more about the machine where this happens. Is there\n> anything special about the hardware/config? Are you monitoring size of\n> the pg_multixact directory?\n\nWhich release was the original problem seen on?\n\n\n", "msg_date": "Wed, 11 Nov 2020 07:41:39 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 10 нояб. 2020 г., в 23:07, Tomas Vondra <tomas.vondra@enterprisedb.com> написал(а):\n> \n> On 11/10/20 7:16 AM, Andrey Borodin wrote:\n>> \n>> \n>> but this picture was not stable.\n>> \n> \n> Seems we haven't made much progress in reproducing the issue :-( I guess\n> we'll need to know more about the machine where this happens. Is there\n> anything special about the hardware/config? Are you monitoring size of\n> the pg_multixact directory?\n\nIt's Ubuntu 18.04.4 LTS, Intel Xeon E5-2660 v4, 56 CPU cores with 256Gb of RAM.\nPostgreSQL 10.14, compiled by gcc 7.5.0, 64-bit\n\nNo, unfortunately we do not have signals for SLRU sizes.\n3.5Tb mdadm raid10 over 28 SSD drives, 82% full.\n\nFirst incident triggering investigation was on 2020-04-19, at that time cluster was running on PG 10.11. But I think it was happening before.\n\nI'd say nothing special...\n\n> \n>> How do you collect wait events for aggregation? 
just insert into some table with cron?\n>> \n> \n> No, I have a simple shell script (attached) sampling data from\n> pg_stat_activity regularly. Then I load it into a table and aggregate to\n> get a summary.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n", "msg_date": "Fri, 13 Nov 2020 16:49:36 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Le 13/11/2020 à 12:49, Andrey Borodin a écrit :\n>\n>> 10 нояб. 2020 г., в 23:07, Tomas Vondra <tomas.vondra@enterprisedb.com> написал(а):\n>>\n>> On 11/10/20 7:16 AM, Andrey Borodin wrote:\n>>>\n>>> but this picture was not stable.\n>>>\n>> Seems we haven't made much progress in reproducing the issue :-( I guess\n>> we'll need to know more about the machine where this happens. Is there\n>> anything special about the hardware/config? Are you monitoring size of\n>> the pg_multixact directory?\n> It's Ubuntu 18.04.4 LTS, Intel Xeon E5-2660 v4, 56 CPU cores with 256Gb of RAM.\n> PostgreSQL 10.14, compiled by gcc 7.5.0, 64-bit\n>\n> No, unfortunately we do not have signals for SLRU sizes.\n> 3.5Tb mdadm raid10 over 28 SSD drives, 82% full.\n>\n> First incident triggering investigation was on 2020-04-19, at that time cluster was running on PG 10.11. But I think it was happening before.\n>\n> I'd say nothing special...\n>\n>>> How do you collect wait events for aggregation? just insert into some table with cron?\n>>>\n>> No, I have a simple shell script (attached) sampling data from\n>> pg_stat_activity regularly. Then I load it into a table and aggregate to\n>> get a summary.\n> Thanks!\n>\n> Best regards, Andrey Borodin.\n\n\nHi,\n\n\nSome time ago I have encountered a contention on \nMultiXactOffsetControlLock with a performances benchmark. 
Here are the \nwait event monitoring results with polling every 10 seconds and a 30 \nminutes run for the benchmark:\n\n\n  event_type |           event            |   sum\n------------+----------------------------+----------\n  Client     | ClientRead                 | 44722952\n  LWLock     | MultiXactOffsetControlLock | 30343060\n  LWLock     | multixact_offset           | 16735250\n  LWLock     | MultiXactMemberControlLock |  1601470\n  LWLock     | buffer_content             |   991344\n  LWLock     | multixact_member           |   805624\n  Lock       | transactionid              |   204997\n  Activity   | LogicalLauncherMain        |   198834\n  Activity   | CheckpointerMain           |   198834\n  Activity   | AutoVacuumMain             |   198469\n  Activity   | BgWriterMain               |   184066\n  Activity   | WalWriterMain              |   171571\n  LWLock     | WALWriteLock               |    72428\n  IO         | DataFileRead               |    35708\n  Activity   | BgWriterHibernate          |    12741\n  IO         | SLRURead                   |     9121\n  Lock       | relation                   |     8858\n  LWLock     | ProcArrayLock              |     7309\n  LWLock     | lock_manager               |     6677\n  LWLock     | pg_stat_statements         |     4194\n  LWLock     | buffer_mapping             |     3222\n\n\nAfter reading this thread I changed the value of the buffer size to 32 \nand 64 and obtained the following results:\n\n\n  event_type |           event            |    sum\n------------+----------------------------+-----------\n  Client     | ClientRead                 | 268297572\n  LWLock     | MultiXactMemberControlLock |  65162906\n  LWLock     | multixact_member           |  33397714\n  LWLock     | buffer_content             |   4737065\n  Lock       | transactionid              |   2143750\n  LWLock     | SubtransControlLock        |   1318230\n  LWLock     | WALWriteLock               |   1038999\n  Activity   | 
LogicalLauncherMain        |    940598\n  Activity   | AutoVacuumMain             |    938566\n  Activity   | CheckpointerMain           |    799052\n  Activity   | WalWriterMain              |    749069\n  LWLock     | subtrans                   |    710163\n  Activity   | BgWriterHibernate          |    536763\n  Lock       | object                     |    514225\n  Activity   | BgWriterMain               |    394206\n  LWLock     | lock_manager               |    295616\n  IO         | DataFileRead               |    274236\n  LWLock     | ProcArrayLock              |     77099\n  Lock       | tuple                      |     59043\n  IO         | CopyFileWrite              |     45611\n  Lock       | relation                   |     42714\n\nThere was still contention on multixact but less than the first run. I \nhave increased the buffers to 128 and 512 and obtain the best results \nfor this bench:\n\n  event_type |           event            |    sum\n------------+----------------------------+-----------\n  Client     | ClientRead                 | 160463037\n  LWLock     | MultiXactMemberControlLock |   5334188\n  LWLock     | buffer_content             |   5228256\n  LWLock     | buffer_mapping             |   2368505\n  LWLock     | SubtransControlLock        |   2289977\n  IPC        | ProcArrayGroupUpdate       |   1560875\n  LWLock     | ProcArrayLock              |   1437750\n  Lock       | transactionid              |    825561\n  LWLock     | subtrans                   |    772701\n  LWLock     | WALWriteLock               |    666138\n  Activity   | LogicalLauncherMain        |    492585\n  Activity   | CheckpointerMain           |    492458\n  Activity   | AutoVacuumMain             |    491548\n  LWLock     | lock_manager               |    426531\n  Lock       | object                     |    403581\n  Activity   | WalWriterMain              |    394668\n  Activity   | BgWriterHibernate          |    293112\n  Activity   | BgWriterMain          
     |    195312\n  LWLock     | MultiXactGenLock           |    177820\n  LWLock     | pg_stat_statements         |    173864\n  IO         | DataFileRead               |    173009\n\n\nI hope these metrics are of some interest to show the utility of this \npatch, but unfortunately I cannot be more precise and provide reports \nfor the entire patch. The problem is that this benchmark is run on an \napplication that uses PostgreSQL 11 and I cannot back-port the full \npatch; there have been too many changes since PG11. I have just increased the \nsize of NUM_MXACTOFFSET_BUFFERS and NUM_MXACTMEMBER_BUFFERS. This allowed \nus to triple the number of simultaneous connections between the first \nand the last test.\n\n\nI know that this report is not really helpful but at least I can give \nmore information on the benchmark that was used. This is the proprietary \nzRef benchmark which compares the same Cobol programs (transactional and \nbatch) executed both on mainframes and on x86 servers. Instead of a DB2 \nz/os database we use PostgreSQL v11. This test has extensive use of \ncursors (each select, even read only, is executed through a cursor) and \nthe contention was observed with updates on tables with some foreign \nkeys. There is no explicit FOR SHARE on the queries, only some FOR \nUPDATE clauses. I guess that the multixact contention is the result of \nthe for share locks produced for FK.\n\n\nSo in our case being able to tune the multixact buffers could help a lot \nto improve performance.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Tue, 8 Dec 2020 17:05:34 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi Gilles!\n\nMany thanks for your message!\n\n> 8 дек. 
2020 г., в 21:05, Gilles Darold <gilles@darold.net> написал(а):\n> \n> I know that this report is not really helpful \n\nQuite contrary - this benchmarks prove that controllable reproduction exists. I've rebased patches for PG11. Can you please benchmark them (without extending SLRU)?\n\nBest regards, Andrey Borodin.", "msg_date": "Tue, 8 Dec 2020 22:52:52 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi Andrey,\n\nThanks for the backport. I have issue with the first patch \"Use shared \nlock in GetMultiXactIdMembers for offsets and members\" \n(v1106-0001-Use-shared-lock-in-GetMultiXactIdMembers-for-o.patch) the \napplications are not working anymore when I'm applying it. Also PG \nregression tests are failing too on several part.\n\n\n test insert_conflict              ... ok\n test create_function_1            ... FAILED\n test create_type                  ... FAILED\n test create_table                 ... FAILED\n test create_function_2            ... FAILED\n test copy                         ... FAILED\n test copyselect                   ... ok\n test copydml                      ... ok\n test create_misc                  ... FAILED\n test create_operator              ... FAILED\n test create_procedure             ... ok\n test create_index                 ... FAILED\n test index_including              ... ok\n test create_view                  ... FAILED\n test create_aggregate             ... ok\n test create_function_3            ... ok\n test create_cast                  ... ok\n test constraints                  ... FAILED\n test triggers                     ... 
FAILED\n test inherit                      ...\n ^C\n\n\n\nThis is also where I left my last try \nto back port for PG11, I will try to fix it again but it could take time \nto have it working.\n\nBest regards,\n\n-- \nGilles Darold\n\n\nLe 08/12/2020 à 18:52, Andrey Borodin a écrit :\n> Hi Gilles!\n>\n> Many thanks for your message!\n>\n>> 8 дек. 2020 г., в 21:05, Gilles Darold <gilles@darold.net> написал(а):\n>>\n>> I know that this report is not really helpful\n> Quite contrary - this benchmarks prove that controllable reproduction exists. I've rebased patches for PG11. Can you please benchmark them (without extending SLRU)?\n>\n> Best regards, Andrey Borodin.\n>\n", "msg_date": "Wed, 9 Dec 2020 11:51:36 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Le 09/12/2020 à 11:51, Gilles Darold a écrit :\n> Also PG regression tests are failing too on several part.\n\nForget this, I have not run the regression tests in the right repository:\n\n...\n\n=======================\n  All 189 tests passed.\n=======================\n\n\nI'm looking why the application is failing.\n\n\n-- \nGilles Darold\nhttp://www.darold.net/\n\n\n\n", "msg_date": "Wed, 9 Dec 2020 12:06:10 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Le 08/12/2020 à 18:52, Andrey Borodin a écrit :\n> Hi Gilles!\n>\n> Many thanks for your message!\n>\n>> 8 дек. 2020 г., в 21:05, Gilles Darold <gilles@darold.net> написал(а):\n>>\n>> I know that this report is not really helpful\n> Quite contrary - this benchmarks prove that controllable reproduction exists. I've rebased patches for PG11. 
Can you please benchmark them (without extending SLRU)?\n>\n> Best regards, Andrey Borodin.\n>\nHi,\n\n\nRunning tests yesterday with the patches has reported a lot of failures \nwith errors on INSERT and UPDATE statements:\n\n\n ERROR:  lock MultiXactOffsetControlLock is not held\n\n\nAfter a patch review this morning I think I have found what's going \nwrong. In patch \nv6-0001-Use-shared-lock-in-GetMultiXactIdMembers-for-offs.patch I think \nthere is a missing reinitialisation of the lockmode variable to LW_NONE \ninside the retry loop after the call to LWLockRelease() in \nsrc/backend/access/transam/multixact.c:1392:GetMultiXactIdMembers(). \nI've attached a new version of the patch for master that includes the fix \nI'm using now with PG11 and with which everything works very well now.\n\n\nI'm running more tests to see the impact on the performances to play \nwith multixact_offsets_slru_buffers, multixact_members_slru_buffers and \nmultixact_local_cache_entries. I will report the results later today.\n\n-- \nGilles Darold\nhttp://www.darold.net/", "msg_date": "Thu, 10 Dec 2020 15:45:56 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Le 10/12/2020 à 15:45, Gilles Darold a écrit :\n> Le 08/12/2020 à 18:52, Andrey Borodin a écrit :\n>> Hi Gilles!\n>>\n>> Many thanks for your message!\n>>\n>>> 8 дек. 2020 г., в 21:05, Gilles Darold<gilles@darold.net> написал(а):\n>>>\n>>> I know that this report is not really helpful\n>> Quite contrary - this benchmarks prove that controllable reproduction exists. I've rebased patches for PG11. 
Can you please benchmark them (without extending SLRU)?\n>>\n>> Best regards, Andrey Borodin.\n>>\n> Hi,\n>\n>\n> Running tests yesterday with the patches has reported log of failures\n> with error on INSERT and UPDATE statements:\n>\n>\n> ERROR:  lock MultiXactOffsetControlLock is not held\n>\n>\n> After a patch review this morning I think I have found what's going\n> wrong. In patch\n> v6-0001-Use-shared-lock-in-GetMultiXactIdMembers-for-offs.patch I\n> think there is a missing reinitialisation of the lockmode variable to\n> LW_NONE inside the retry loop after the call to LWLockRelease() in\n> src/backend/access/transam/multixact.c:1392:GetMultiXactIdMembers().\n> I've attached a new version of the patch for master that include the\n> fix I'm using now with PG11 and with which everything works very well\n> now.\n>\n>\n> I'm running more tests to see the impact on the performances to play\n> with multixact_offsets_slru_buffers, multixact_members_slru_buffers\n> and multixact_local_cache_entries. I will reports the results later\n> today.\n>\n\nHi,\n\nSorry for the delay, I have done some further tests to try to reach the \nlimit without bottlenecks on multixact or shared buffers. The tests were \ndone on a Microsoft Azure machine with 2TB of RAM and 4 sockets Intel \nXeon Platinum 8280M (128 cpu). 
PG configuration:\n\n     max_connections = 4096\n     shared_buffers = 64GB\n     max_prepared_transactions = 2048\n     work_mem = 256MB\n     maintenance_work_mem = 2GB\n     wal_level = minimal\n     synchronous_commit = off\n     commit_delay = 1000\n     commit_siblings = 10\n     checkpoint_timeout = 1h\n     max_wal_size = 32GB\n     checkpoint_completion_target = 0.9\n\nI have tested with several values for the different buffer's variables \nstarting from:\n\n     multixact_offsets_slru_buffers = 64\n     multixact_members_slru_buffers = 128\n     multixact_local_cache_entries = 256\n\nto the values with the best performances we achieve with this test to \navoid MultiXactOffsetControlLock or MultiXactMemberControlLock:\n\n     multixact_offsets_slru_buffers = 128\n     multixact_members_slru_buffers = 512\n     multixact_local_cache_entries = 1024\n\nAlso shared_buffers have been increased up to 256GB to avoid \nbuffer_mapping contention.\n\nOur last best test reports the following wait events:\n\n      event_type |           event            |    sum\n     ------------+----------------------------+-----------\n      Client     | ClientRead                 | 321690211\n      LWLock     | buffer_content             |   2970016\n      IPC        | ProcArrayGroupUpdate       |   2317388\n      LWLock     | ProcArrayLock              |   1445828\n      LWLock     | WALWriteLock               |   1187606\n      LWLock     | SubtransControlLock        |    972889\n      Lock       | transactionid              |    840560\n      Lock       | relation                   |    587600\n      Activity   | LogicalLauncherMain        |    529599\n      Activity   | AutoVacuumMain             |    528097\n\nAt this stage I don't think we can have better performances by tuning \nthese buffers at least with PG11.\n\nAbout performances gain related to the patch for shared lock in \nGetMultiXactIdMembers unfortunately I can not see a difference with or \nwithout this patch, 
it could be related to our particular benchmark. But \nclearly the patch on multixact buffers should be committed as this is \nreally helpfull to be able to tuned PG when multixact bottlenecks are found.\n\n\nBest regards,\n\n-- \nGilles Darold\nLzLabs GmbH\nhttps://www.lzlabs.com/\n", "msg_date": "Fri, 11 Dec 2020 18:50:25 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Le 11/12/2020 à 18:50, Gilles Darold a écrit :\n> Le 10/12/2020 à 15:45, Gilles Darold a écrit :\n>> Le 08/12/2020 à 18:52, Andrey Borodin a écrit :\n>>> Hi Gilles!\n>>>\n>>> Many thanks for your message!\n>>>\n>>>> 8 дек. 2020 г., в 21:05, Gilles Darold <gilles@darold.net> написал(а):\n>>>>\n>>>> I know that this report is not really helpful \n>>> Quite contrary - this benchmarks prove that controllable reproduction exists. I've rebased patches for PG11. Can you please benchmark them (without extending SLRU)?\n>>>\n>>> Best regards, Andrey Borodin.\n>>>\n>> Hi,\n>>\n>>\n>> Running tests yesterday with the patches has reported log of failures\n>> with error on INSERT and UPDATE statements:\n>>\n>>\n>> ERROR:  lock MultiXactOffsetControlLock is not held\n>>\n>>\n>> After a patch review this morning I think I have found what's going\n>> wrong. In patch\n>> v6-0001-Use-shared-lock-in-GetMultiXactIdMembers-for-offs.patch I\n>> think there is a missing reinitialisation of the lockmode variable to\n>> LW_NONE inside the retry loop after the call to LWLockRelease() in\n>> src/backend/access/transam/multixact.c:1392:GetMultiXactIdMembers().\n>> I've attached a new version of the patch for master that include the\n>> fix I'm using now with PG11 and with which everything works very well\n>> now.\n>>\n>>\n>> I'm running more tests to see the impact on the performances to play\n>> with multixact_offsets_slru_buffers, multixact_members_slru_buffers\n>> and multixact_local_cache_entries. 
I will reports the results later\n>> today.\n>>\n>\n> Hi,\n>\n> Sorry for the delay, I have done some further tests to try to reach\n> the limit without bottlenecks on multixact or shared buffers. The\n> tests was done on a Microsoft Asure machine with 2TB of RAM and 4\n> sockets Intel Xeon Platinum 8280M (128 cpu). PG configuration:\n>\n>     max_connections = 4096\n>     shared_buffers = 64GB\n>     max_prepared_transactions = 2048\n>     work_mem = 256MB\n>     maintenance_work_mem = 2GB\n>     wal_level = minimal\n>     synchronous_commit = off\n>     commit_delay = 1000\n>     commit_siblings = 10\n>     checkpoint_timeout = 1h\n>     max_wal_size = 32GB\n>     checkpoint_completion_target = 0.9\n>\n> I have tested with several values for the different buffer's variables\n> starting from:\n>\n>     multixact_offsets_slru_buffers = 64\n>     multixact_members_slru_buffers = 128\n>     multixact_local_cache_entries = 256\n>\n> to the values with the best performances we achieve with this test to\n> avoid MultiXactOffsetControlLock or MultiXactMemberControlLock:\n>\n>     multixact_offsets_slru_buffers = 128\n>     multixact_members_slru_buffers = 512\n>     multixact_local_cache_entries = 1024\n>\n> Also shared_buffers have been increased up to 256GB to avoid\n> buffer_mapping contention.\n>\n> Our last best test reports the following wait events:\n>\n>      event_type |           event            |    sum\n>     ------------+----------------------------+-----------\n>      Client     | ClientRead                 | 321690211\n>      LWLock     | buffer_content             |   2970016\n>      IPC        | ProcArrayGroupUpdate       |   2317388\n>      LWLock     | ProcArrayLock              |   1445828\n>      LWLock     | WALWriteLock               |   1187606\n>      LWLock     | SubtransControlLock        |    972889\n>      Lock       | transactionid              |    840560\n>      Lock       | relation                   |    587600\n>      Activity   | 
LogicalLauncherMain        |    529599\n>      Activity   | AutoVacuumMain             |    528097\n>\n> At this stage I don't think we can have better performances by tuning\n> these buffers at least with PG11.\n>\n> About performances gain related to the patch for shared lock in\n> GetMultiXactIdMembers unfortunately I can not see a difference with or\n> without this patch, it could be related to our particular benchmark.\n> But clearly the patch on multixact buffers should be committed as this\n> is really helpfull to be able to tuned PG when multixact bottlenecks\n> are found.\n\n\nI've done more review on these patches.\n\n\n1) As reported in my previous message, patch 0001 looks useless as it\ndoes not provide a measurable performance gain.\n\n\n2) In patch 0004 there are two typos: s/informaion/information/ will fix them\n\n\n3) the GUCs are missing in the postgresql.conf.sample file, see the patch in\nattachment for a proposal.\n\n\nBest regards,\n\n-- \nGilles Darold\nLzLabs GmbH\nhttps://www.lzlabs.com/", "msg_date": "Sun, 13 Dec 2020 10:17:51 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 13 дек. 2020 г., в 14:17, Gilles Darold <gilles@darold.net> написал(а):\n> \n> I've done more review on these patches.\n\nThanks, Gilles! I'll incorporate all your fixes to patchset.\nCan you also benchmark conditional variable sleep? The patch \"Add conditional variable to wait for next MultXact offset in corner case\"?\nThe problem manifests on Standby when Primary is heavily loaded with MultiXactOffsetControlLock.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sun, 13 Dec 2020 22:24:49 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> 13 дек. 2020 г., в 22:24, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> \n> \n> \n>> 13 дек. 
2020 г., в 14:17, Gilles Darold <gilles@darold.net> написал(а):\n>> \n>> I've done more review on these patches.\n> \n> Thanks, Gilles! I'll incorporate all your fixes to patchset.\n\nPFA patches.\nAlso, I've noted that patch \"Add conditional variable to wait for next MultXact offset in corner case\" removes CHECK_FOR_INTERRUPTS();, I'm not sure it's correct.\n\nThanks!\n\nBest regards, Andrey Borodin.", "msg_date": "Mon, 14 Dec 2020 11:31:32 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Le 13/12/2020 à 18:24, Andrey Borodin a écrit :\n>\n>> 13 дек. 2020 г., в 14:17, Gilles Darold <gilles@darold.net> написал(а):\n>>\n>> I've done more review on these patches.\n> Thanks, Gilles! I'll incorporate all your fixes to patchset.\n> Can you also benchmark conditional variable sleep? The patch \"Add conditional variable to wait for next MultXact offset in corner case\"?\n> The problem manifests on Standby when Primary is heavily loaded with MultiXactOffsetControlLock.\n>\n> Thanks!\n>\n> Best regards, Andrey Borodin.\n\nHi Andrey,\n\n\nSorry for the response delay, we have run several other tests trying to \nfigure out the performance gain per patch but unfortunately we have \nvery erratic results. With the same parameters and patches the test \ndoesn't return the same results following the day or the hour of the \nday. This is very frustrating and I suppose that this is related to the \nAzure architecture. The only thing that I am sure is that we had the \nbest performances results with all patches and\n\n multixact_offsets_slru_buffers = 256\n multixact_members_slru_buffers = 512\n multixact_local_cache_entries = 4096\n\nbut I can not say if all or part of the patches are improving the \nperformances. 
My feeling is that performances gain related to patches 1 \n(shared lock) and 3 (conditional variable) do not have much to do with \nthe performances gain compared to just tuning the multixact buffers. \nThis is when the multixact contention is observed but perhaps they are \ndelaying the contention. It's all the more frustrating that we had a \ntest case to reproduce the contention but not the architecture apparently.\n\n\nCan't do much more at this point.\n\n\nBest regards,\n\n-- \nGilles Darold\nLzLabs GmbH\nhttp://www.lzlabs.com/", "msg_date": "Wed, 23 Dec 2020 17:31:17 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> 23 дек. 2020 г., в 21:31, Gilles Darold <gilles@darold.net> написал(а):\n> \n> Sorry for the response delay, we have run several others tests trying to figure out the performances gain per patch but unfortunately we have very heratic results. With the same parameters and patches the test doesn't returns the same results following the day or the hour of the day. This is very frustrating and I suppose that this is related to the Azure architecture. The only thing that I am sure is that we had the best performances results with all patches and\n> \n> multixact_offsets_slru_buffers = 256\n> multixact_members_slru_buffers = 512\n> multixact_local_cache_entries = 4096\n> \n> \n> but I can not say if all or part of the patches are improving the performances. My feeling is that performances gain related to patches 1 (shared lock) and 3 (conditional variable) do not have much to do with the performances gain compared to just tuning the multixact buffers. This is when the multixact contention is observed but perhaps they are delaying the contention. It's all the more frustrating that we had a test case to reproduce the contention but not the architecture apparently.\n\nHi! 
Thanks for the input.\nI think we have a consensus here that configuring SLRU size is beneficial for MultiXacts.\nThere is proposal in nearby thread [0] on changing default value of commit_ts SLRU buffers.\nIn my experience from time to time there can be problems with subtransactions cured by extending subtrans SLRU.\n\nLet's make all SLRUs configurable?\nPFA patch with draft of these changes.\n\nBest regards, Andrey Borodin.\n\n\n[0] https://www.postgresql.org/message-id/flat/20210115220744.GA24457%40alvherre.pgsql", "msg_date": "Mon, 15 Feb 2021 22:17:40 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Le 15/02/2021 à 18:17, Andrey Borodin a écrit :\n>\n>> 23 дек. 2020 г., в 21:31, Gilles Darold <gilles@darold.net> написал(а):\n>>\n>> Sorry for the response delay, we have run several others tests trying to figure out the performances gain per patch but unfortunately we have very heratic results. With the same parameters and patches the test doesn't returns the same results following the day or the hour of the day. This is very frustrating and I suppose that this is related to the Azure architecture. The only thing that I am sure is that we had the best performances results with all patches and\n>>\n>> multixact_offsets_slru_buffers = 256\n>> multixact_members_slru_buffers = 512\n>> multixact_local_cache_entries = 4096\n>>\n>>\n>> but I can not say if all or part of the patches are improving the performances. My feeling is that performances gain related to patches 1 (shared lock) and 3 (conditional variable) do not have much to do with the performances gain compared to just tuning the multixact buffers. This is when the multixact contention is observed but perhaps they are delaying the contention. It's all the more frustrating that we had a test case to reproduce the contention but not the architecture apparently.\n> Hi! 
Thanks for the input.\n> I think we have a consensus here that configuring SLRU size is beneficial for MultiXacts.\n> There is proposal in nearby thread [0] on changing default value of commit_ts SLRU buffers.\n> In my experience from time to time there can be problems with subtransactions cured by extending subtrans SLRU.\n>\n> Let's make all SLRUs configurable?\n> PFA patch with draft of these changes.\n>\n> Best regards, Andrey Borodin.\n>\n>\n> [0] https://www.postgresql.org/message-id/flat/20210115220744.GA24457%40alvherre.pgsql\n>\n\nThe patch doesn't apply anymore in master because of an error: patch failed: \nsrc/backend/utils/init/globals.c:150\n\n\nAnother remark about this patch is that it should be mentioned in the \ndocumentation (doc/src/sgml/config.sgml) that the new configuration \nvariables need a server restart, for example by adding \"This parameter \ncan only be set at server start.\" like for shared_buffers. 
My feeling is that performances gain related to patches 1 (shared lock) and 3 (conditional variable) do not have much to do with the performances gain compared to just tuning the multixact buffers. This is when the multixact contention is observed but perhaps they are delaying the contention. It's all the more frustrating that we had a test case to reproduce the contention but not the architecture apparently.\n\n\n\nHi! Thanks for the input.\nI think we have a consensus here that configuring SLRU size is beneficial for MultiXacts.\nThere is proposal in nearby thread [0] on changing default value of commit_ts SLRU buffers.\nIn my experience from time to time there can be problems with subtransactions cured by extending subtrans SLRU.\n\nLet's make all SLRUs configurable?\nPFA patch with draft of these changes.\n\nBest regards, Andrey Borodin.\n\n\n[0] https://www.postgresql.org/message-id/flat/20210115220744.GA24457%40alvherre.pgsql\n\n\n\n\n\nThe patch doesn't apply anymore in master cause of error: patch\n failed: src/backend/utils/init/globals.c:150\n\n\n\nAn other remark about this patch is that it should be mentionned\n in the documentation (doc/src/sgml/config.sgml) that the new\n configuration variables need a server restart, for example by\n adding \"This parameter can only be set at server start.\" like for\n shared_buffers. 
Patch on postgresql.conf mention it.\n\nAnd some typo to be fixed:\n\n\n\ns/Tipically/Typically/\n\ns/asincronous/asyncronous/\ns/confugured/configured/\ns/substrnsactions/substransactions/\n\n\n\n\n-- \nGilles Darold\nLzLabs GmbH", "msg_date": "Thu, 11 Mar 2021 16:50:48 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> 11 марта 2021 г., в 20:50, Gilles Darold <gilles@darold.net> написал(а):\n> \n> \n> The patch doesn't apply anymore in master cause of error: patch failed: src/backend/utils/init/globals.c:150\n> \n> \n> \n> An other remark about this patch is that it should be mentionned in the documentation (doc/src/sgml/config.sgml) that the new configuration variables need a server restart, for example by adding \"This parameter can only be set at server start.\" like for shared_buffers. Patch on postgresql.conf mention it.\n> \n> And some typo to be fixed:\n> \n> \n> \n> s/Tipically/Typically/\n> \n> s/asincronous/asyncronous/\n> \n> s/confugured/configured/\n> \n> s/substrnsactions/substransactions/\n> \n> \n\nThanks, Gilles! Fixed.\n\nBest regards, Andrey Borodin.", "msg_date": "Fri, 12 Mar 2021 17:44:02 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Le 12/03/2021 à 13:44, Andrey Borodin a écrit :\n>\n>> 11 марта 2021 г., в 20:50, Gilles Darold <gilles@darold.net> написал(а):\n>>\n>>\n>> The patch doesn't apply anymore in master cause of error: patch failed: src/backend/utils/init/globals.c:150\n>>\n>>\n>>\n>> An other remark about this patch is that it should be mentionned in the documentation (doc/src/sgml/config.sgml) that the new configuration variables need a server restart, for example by adding \"This parameter can only be set at server start.\" like for shared_buffers. 
Patch on postgresql.conf mention it.\n>>\n>> And some typo to be fixed:\n>>\n>>\n>>\n>> s/Tipically/Typically/\n>>\n>> s/asincronous/asyncronous/\n>>\n>> s/confugured/configured/\n>>\n>> s/substrnsactions/substransactions/\n>>\n>>\n> Thanks, Gilles! Fixed.\n>\n> Best regards, Andrey Borodin.\n>\n\nHi Andrey,\n\nI found two problems in this patch, first in src/include/miscadmin.h\nmultixact_members_slru_buffers is declared twice:\n\n\n extern PGDLLIMPORT int max_parallel_workers;\n+extern PGDLLIMPORT int multixact_offsets_slru_buffers;\n+extern PGDLLIMPORT int multixact_members_slru_buffers;\n+extern PGDLLIMPORT int multixact_members_slru_buffers;  <---------\n+extern PGDLLIMPORT int subtrans_slru_buffers;\n\n\nIn file src/backend/access/transam/multixact.c the second variable\nshould be multixact_buffers_slru_buffers and not\nmultixact_offsets_slru_buffers.\n\n\n@@ -1848,13 +1848,13 @@ MultiXactShmemInit(void)\n        MultiXactMemberCtl->PagePrecedes = MultiXactMemberPagePrecedes;\n\n        SimpleLruInit(MultiXactOffsetCtl,\n-                                 \"MultiXactOffset\",\nNUM_MULTIXACTOFFSET_BUFFERS, 0,\n+                                 \"MultiXactOffset\",\nmultixact_offsets_slru_buffers, 0,\n                                  MultiXactOffsetSLRULock,\n\"pg_multixact/offsets\",\n                                  LWTRANCHE_MULTIXACTOFFSET_BUFFER,\n                                  SYNC_HANDLER_MULTIXACT_OFFSET);\n        SlruPagePrecedesUnitTests(MultiXactOffsetCtl,\nMULTIXACT_OFFSETS_PER_PAGE);\n        SimpleLruInit(MultiXactMemberCtl,\n-                                 \"MultiXactMember\",\nNUM_MULTIXACTMEMBER_BUFFERS, 0,\n+                                 \"MultiXactMember\",\nmultixact_offsets_slru_buffers, 0,    <------------------\n                                  MultiXactMemberSLRULock,\n\"pg_multixact/members\",\n                                  LWTRANCHE_MULTIXACTMEMBER_BUFFER,\n                                  
SYNC_HANDLER_MULTIXACT_MEMBER);\n\n\n\nPlease fix them so that I can end the review.\n\n\n-- \nGilles Darold\nLzLabs GmbH\nhttp://www.lzlabs.com/\n\n\n\n", "msg_date": "Mon, 15 Mar 2021 16:41:04 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi Andrey,\n\nOn Sat, Mar 13, 2021 at 1:44 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> [v10]\n\n+int multixact_offsets_slru_buffers = 8;\n+int multixact_members_slru_buffers = 16;\n+int subtrans_slru_buffers = 32;\n+int notify_slru_buffers = 8;\n+int serial_slru_buffers = 16;\n+int clog_slru_buffers = 0;\n+int commit_ts_slru_buffers = 0;\n\nI don't think we should put \"slru\" (the name of the buffer replacement\nalgorithm, implementation detail) in the GUC names.\n\n+ It defaults to 0, in this case CLOG size is taken as\n<varname>shared_buffers</varname> / 512.\n\nWe already know that increasing the number of CLOG buffers above the\ncurrent number hurts as the linear search begins to dominate\n(according to the commit message for 5364b357), and it doesn't seem\ngreat to ship a new feature that melts your CPU when you turn it up.\nPerhaps, to ship this, we need to introduce a buffer mapping table? I\nhave attached a \"one coffee\" attempt at that, on top of your v10 patch\n(unmodified), for discussion. 
It survives basic testing but I don't\nknow how it performs.", "msg_date": "Thu, 25 Mar 2021 10:31:56 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Thu, Mar 25, 2021 at 10:31 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> We already know that increasing the number of CLOG buffers above the\n> current number hurts as the linear search begins to dominate\n> (according to the commit message for 5364b357), and it doesn't seem\n> great to ship a new feature that melts your CPU when you turn it up.\n> Perhaps, to ship this, we need to introduce a buffer mapping table? I\n> have attached a \"one coffee\" attempt at that, on top of your v10 patch\n> (unmodified), for discussion. It survives basic testing but I don't\n> know how it performs.\n\nHrrr... Cfbot showed an assertion failure. Here's the two coffee\nversion with a couple of silly mistakes fixed.", "msg_date": "Thu, 25 Mar 2021 14:03:38 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi Andrey, all,\n\nI propose some changes, and I'm attaching a new version:\n\nI renamed the GUCs as clog_buffers etc (no \"_slru_\"). I fixed some\ncopy/paste mistakes where the different GUCs were mixed up. I made\nsome changes to the .conf.sample. I rewrote the documentation so that\nit states the correct unit and defaults, and refers to the\nsubdirectories that are cached by these buffers instead of trying to\ngive a new definition of each of the SLRUs.\n\nDo you like those changes?\n\nSome things I thought about but didn't change:\n\nI'm not entirely sure if we should use the internal and historical\nnames well known to hackers (CLOG), or the visible directory names (I\nmean, we could use pg_xact_buffers instead of clog_buffers). 
I am not\nsure why these GUCs need to be PGDLLIMPORT, but I see that NBuffers is\nlike that.\n\nI wanted to do some very simple smoke testing of CLOG sizes on my\nlocal development machine:\n\n pgbench -i -s1000 postgres\n pgbench -t4000000 -c8 -j8 -Mprepared postgres\n\nI disabled autovacuum after running that just to be sure it wouldn't\ninterfere with my experiment:\n\n alter table pgbench_accounts set (autovacuum_enabled = off);\n\nThen I shut the cluster down and made a copy, so I could do some\nrepeated experiments from the same initial conditions each time. At\nthis point I had 30 files 0000-001E under pg_xact, holding 256kB = ~1\nmillion transactions each. It'd take ~960 buffers to cache it all.\nSo how long does VACUUM FREEZE pgbench_accounts take?\n\nI tested with just the 0001 patch, and also with the 0002 patch\n(improved version, attached):\n\nclog_buffers=128: 0001=2:28.499, 0002=2:17.891\nclog_buffers=1024: 0001=1:38.485, 0002=1:29.701\n\nI'm sure the speedup of the 0002 patch can be amplified by increasing\nthe number of transactions referenced in the table OR number of\nclog_buffers, considering that the linear search produces\nO(transactions * clog_buffers) work. That was 32M transactions and\n8MB of CLOG, but I bet if you double both of those numbers once or\ntwice things start to get hot. 
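[Editorial aside: the O(transactions * clog_buffers) behaviour discussed here can be sketched in a few lines. This is a deliberately simplified illustration of the idea, not the actual slru.c code:]

```c
#include <assert.h>

/*
 * Simplified illustration of SLRU's per-lookup cost: each page lookup
 * scans the page number of every buffer, so each of the many lookups
 * costs up to nbuffers comparisons, and a miss always costs exactly
 * nbuffers comparisons.  Sketch only; not the actual slru.c code.
 */
static int
slru_linear_lookup(const int *page_numbers, int nbuffers, int pageno)
{
    for (int slotno = 0; slotno < nbuffers; slotno++)
    {
        if (page_numbers[slotno] == pageno)
            return slotno;      /* hit: up to nbuffers comparisons */
    }
    return -1;                  /* miss: a full scan every time */
}
```

[A buffer mapping table replaces this scan with a constant-time lookup, which is what would make large clog_buffers settings safe to use.]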
I don't see why you shouldn't be able\nto opt to cache literally all of CLOG if you want (something like 50MB\nassuming default autovacuum_freeze_max_age, scale to taste, up to\n512MB for the theoretical maximum useful value).\n\nI'm not saying the 0002 patch is bug-free yet though, it's a bit finickity.", "msg_date": "Fri, 26 Mar 2021 16:46:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi Thomas, Gilles, all!\n\nThanks for reviewing this!\n\n> 25 марта 2021 г., в 02:31, Thomas Munro <thomas.munro@gmail.com> написал(а):\n> \n> I don't think we should put \"slru\" (the name of the buffer replacement\n> algorithm, implementation detail) in the GUC names.\n+1\n\n\n> + It defaults to 0, in this case CLOG size is taken as\n> <varname>shared_buffers</varname> / 512.\n> \n> We already know that increasing the number of CLOG buffers above the\n> current number hurts as the linear search begins to dominate\nUh, my intent was to copy original approach of CLOG SLRU size, I just missed that Min(,) thing in shared_buffers logic.\n\n\n> 26 марта 2021 г., в 08:46, Thomas Munro <thomas.munro@gmail.com> написал(а):\n> \n> Hi Andrey, all,\n> \n> I propose some changes, and I'm attaching a new version:\n> \n> I renamed the GUCs as clog_buffers etc (no \"_slru_\"). I fixed some\n> copy/paste mistakes where the different GUCs were mixed up. I made\n> some changes to the .conf.sample. 
I rewrote the documentation so that\n> it states the correct unit and defaults, and refers to the\n> subdirectories that are cached by these buffers instead of trying to\n> give a new definition of each of the SLRUs.\n> \n> Do you like those changes?\nYes!\n\n> Some things I thought about but didn't change:\n> \n> I'm not entirely sure if we should use the internal and historical\n> names well known to hackers (CLOG), or the visible directory names (I\n> mean, we could use pg_xact_buffers instead of clog_buffers).\nWhile it is a good idea to make notes about the directory name, I think the real naming criterion is to help users find this GUC when they encounter wait events in pg_stat_activity. I think there are no CLOG mentions in docs [0], only XactBuffer, XactSLRU etc.\n\n> I'm not saying the 0002 patch is bug-free yet though, it's a bit finickity.\nI think the idea of speeding up the linear search is really really good for scaling SLRUs. It's not even about improving normal performance of the cluster, but it's important for preventing pathological degradation under certain circumstances. 
Bigger cache really saves SLAs :) I'll look into the patch more closely this weekend. Thank you!\n\n\nSome thoughts on HashTable patch:\n1. Can we allocate bigger hashtable to reduce probability of collisions?\n2. Can we use specialised hashtable for this case? I'm afraid hash_search() does comparable number of CPU cycles as simple cycle from 0 to 128. We could inline everything and avoid hashp->hash(keyPtr, hashp->keysize) call. I'm not insisting on special hash though, just an idea.\n3. pageno in SlruMappingTableEntry seems to be unused.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 26 Mar 2021 20:52:22 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Sat, Mar 27, 2021 at 4:52 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Some thoughts on HashTable patch:\n> 1. Can we allocate bigger hashtable to reduce probability of collisions?\n\nYeah, good idea, might require some study.\n\n> 2. Can we use specialised hashtable for this case? I'm afraid hash_search() does comparable number of CPU cycles as simple cycle from 0 to 128. We could inline everything and avoid hashp->hash(keyPtr, hashp->keysize) call. I'm not insisting on special hash though, just an idea.\n\nI tried really hard to not fall into this rabbit h.... [hack hack\nhack], OK, here's a first attempt to use simplehash, Andres's\nsteampunk macro-based robinhood template that we're already using for\nseveral other things, and murmurhash which is inlineable and\nbranch-free. I had to tweak it to support \"in-place\" creation and\nfixed size (in other words, no allocators, for use in shared memory).\nThen I was annoyed that I had to add a \"status\" member to our struct,\nso I tried to fix that. 
Definitely needs more work to think about\nfailure modes when running out of memory, how much spare space you\nneed, etc.\n\nI have not experimented with this much beyond hacking until the tests\npass, but it *should* be more efficient...\n\n> 3. pageno in SlruMappingTableEntry seems to be unused.\n\nIt's the key (dynahash uses the first N bytes of your struct as the\nkey, but in this new simplehash version it's more explicit).", "msg_date": "Sat, 27 Mar 2021 09:26:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 27 марта 2021 г., в 01:26, Thomas Munro <thomas.munro@gmail.com> написал(а):\n> \n> On Sat, Mar 27, 2021 at 4:52 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> Some thoughts on HashTable patch:\n>> 1. Can we allocate bigger hashtable to reduce probability of collisions?\n> \n> Yeah, good idea, might require some study.\nIn a long run we always have this table filled with nslots. But the keys will be usually consecutive numbers (current working set of CLOG\\Multis\\etc). So in a happy hashing scenario collisions will only appear for some random backward jumps. I think just size = nslots * 2 will produce results which cannot be improved significantly.\nAnd this reflects original growth strategy SH_GROW(tb, tb->size * 2).\n\n>> 2. Can we use specialised hashtable for this case? I'm afraid hash_search() does comparable number of CPU cycles as simple cycle from 0 to 128. We could inline everything and avoid hashp->hash(keyPtr, hashp->keysize) call. I'm not insisting on special hash though, just an idea.\n> \n> I tried really hard to not fall into this rabbit h.... 
[hack hack\n> hack], OK, here's a first attempt to use simplehash,\n\n> Andres's\n> steampunk macro-based robinhood template\nSounds magnificent.\n\n> that we're already using for\n> several other things\nI could not find much tests to be sure that we do not break something...\n\n> , and murmurhash which is inlineable and\n> branch-free.\nI think pageno is a hash already. Why hash any further? And pages accessed together will have smaller access time due to colocation.\n\n> I had to tweak it to support \"in-place\" creation and\n> fixed size (in other words, no allocators, for use in shared memory).\nWe really need to have a test to know what happens when this structure goes out of memory, as you mentioned below. What would be apropriate place for simplehash tests?\n\n> Then I was annoyed that I had to add a \"status\" member to our struct,\n> so I tried to fix that.\nIndeed, sizeof(SlruMappingTableEntry) == 9 seems strange. Will simplehash align it well?\n\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 27 Mar 2021 10:31:54 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Sat, Mar 27, 2021 at 6:31 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > 27 марта 2021 г., в 01:26, Thomas Munro <thomas.munro@gmail.com> написал(а):\n> > , and murmurhash which is inlineable and\n> > branch-free.\n\n> I think pageno is a hash already. Why hash any further? And pages accessed together will have smaller access time due to colocation.\n\nYeah, if clog_buffers is large enough then it's already a \"perfect\nhash\", but if it's not then you might get some weird \"harmonic\"\neffects (not sure if that's the right word), basically higher or lower\ncollision rate depending on coincidences in the data. If you apply a\nhash, the collisions should be evenly spread out so at least it'll be\nsomewhat consistent. 
Does that make sense?\n\n(At some point I figured out that the syscaches have lower collision\nrates and perform better if you use oids directly instead of hashing\nthem... but then it's easy to create a pathological pattern of DDL\nthat turns your hash table into a linked list. Not sure what to think\nabout that.)\n\n> > I had to tweak it to support \"in-place\" creation and\n> > fixed size (in other words, no allocators, for use in shared memory).\n\n> We really need to have a test to know what happens when this structure goes out of memory, as you mentioned below. What would be apropriate place for simplehash tests?\n\nGood questions. This has to be based on being guaranteed to have\nenough space for all of the entries, so the question is really just\n\"how bad can performance get with different load factors\". FWIW there\nwere some interesting cases with clustering when simplehash was first\nused in the executor (see commits ab9f2c42 and parent) which required\nsome work on hashing quality to fix.\n\n> > Then I was annoyed that I had to add a \"status\" member to our struct,\n> > so I tried to fix that.\n\n> Indeed, sizeof(SlruMappingTableEntry) == 9 seems strange. Will simplehash align it well?\n\nWith that \"intrusive status\" patch, the size is back to 8. But I\nthink I made a mistake: I made it steal some key space to indicate\npresence, but I think the presence test should really get access to\nthe whole entry so that you can encode it in more ways. For example,\nwith slotno == -1.\n\nAlright, considering the date, if we want to get this into PostgreSQL\n14 it's time to make some decisions.\n\n1. Do we want customisable SLRU sizes in PG14?\n\n+1 from me, we have multiple reports of performance gains from\nincreasing various different SLRUs, and it's easy to find workloads\nthat go faster.\n\nOne thought: it'd be nice if the user could *see* the current size,\nwhen using the default. 
SHOW clog_buffers -> 0 isn't very helpful if\nyou want to increase it, but don't know what it's currently set to.\nNot sure off the top of my head how best to do that.\n\n2. What names do we want the GUCs to have? Here's what we have:\n\nProposed GUC               Directory              System views\nclog_buffers               pg_xact                Xact\nmultixact_offsets_buffers  pg_multixact/offsets   MultiXactOffset\nmultixact_members_buffers  pg_multixact/members   MultiXactMember\nnotify_buffers             pg_notify              Notify\nserial_buffers             pg_serial              Serial\nsubtrans_buffers           pg_subtrans            Subtrans\ncommit_ts_buffers          pg_commit_ts           CommitTs\n\nBy system views, I mean pg_stat_slru, pg_shmem_allocations and\npg_stat_activity (lock names add \"SLRU\" on the end).\n\nObservations:\n\nIt seems obvious that \"clog_buffers\" should be renamed to \"xact_buffers\".\nIt's not clear whether the multixact GUCs should have the extra \"s\"\nlike the directories, or not, like the system views.\nI see that we have \"Shared Buffer Lookup Table\" in\npg_shmem_allocations, so where I generated names like \"Subtrans\nMapping Table\" I should change that to \"Lookup\" to match.\n\n3. What recommendations should we make about how to set it?\n\nI think the answer depends partially on the next questions! I think\nwe should probably at least say something short about the pg_stat_slru\nview (cache miss rate) and pg_stat_activity view (waits on locks), and\nhow to tell if you might need to increase it. I think this probably\nneeds a new paragraph, separate from the docs for the individual GUC.\n\n4. Do we want to ship the dynahash patch?\n\n+0.9. The slight hesitation is that it's new code written very late\nin the cycle, so it may still have bugs or unintended consequences,\nand as you said, at small sizes the linear search must be faster than\nthe hash computation. 
Could you help test it, and try to break it?\nCan we quantify the scaling effect for some interesting workloads, to\nsee at what size the dynahash beats the linear search, so that we can\nmake an informed decision? Of course, without a hash table, large\nsizes will surely work badly, so it'd be tempting to restrict the size\nyou can set the GUC to.\n\nIf we do include the dynahash patch, then I think it would also be\nreasonable to change the formula for the default, to make it higher on\nlarge systems. The restriction to 128 buffers (= 1MB) doesn't make\nmuch sense on a high frequency OLTP system with 128GB of shared\nbuffers or even 4GB. I think \"unleashing better defaults\" would\nactually be bigger news than the GUC for typical users, because\nthey'll just see PG14 use a few extra MB and go faster without having\nto learn about these obscure new settings.\n\n5. Do we want to ship the simplehash patch?\n\n-0.5. It's a bit too exciting for the last minute, so I'd be inclined\nto wait until the next cycle to do some more research and testing. I\nknow it's a better idea in the long run.\n\n\n", "msg_date": "Mon, 29 Mar 2021 10:15:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 29 марта 2021 г., в 02:15, Thomas Munro <thomas.munro@gmail.com> написал(а):\n> \n> On Sat, Mar 27, 2021 at 6:31 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>>> 27 марта 2021 г., в 01:26, Thomas Munro <thomas.munro@gmail.com> написал(а):\n>>> , and murmurhash which is inlineable and\n>>> branch-free.\n> \n>> I think pageno is a hash already. Why hash any further? 
And pages accessed together will have smaller access time due to colocation.\n> \n> Yeah, if clog_buffers is large enough then it's already a \"perfect\n> hash\", but if it's not then you might get some weird \"harmonic\"\n> effects (not sure if that's the right word), basically higher or lower\n> collision rate depending on coincidences in the data. If you apply a\n> hash, the collisions should be evenly spread out so at least it'll be\n> somewhat consistent. Does that make sense?\nAs far as I understand \"Harmonic\" effects only make sense if the distribution is unknown. Hash protects from \"periodic\" data when periods are equal to hash table size. I don't think we need to protect from this case, SLRU data is expected to be localised...\nCost of this protection is necessity to calculate murmur hash on each SLRU lookup. Probably, 10-100ns. Seems like not a big deal.\n\n> (At some point I figured out that the syscaches have lower collision\n> rates and perform better if you use oids directly instead of hashing\n> them... but then it's easy to create a pathological pattern of DDL\n> that turns your hash table into a linked list. Not sure what to think\n> about that.)\n> \n>>> I had to tweak it to support \"in-place\" creation and\n>>> fixed size (in other words, no allocators, for use in shared memory).\n> \n>> We really need to have a test to know what happens when this structure goes out of memory, as you mentioned below. What would be apropriate place for simplehash tests?\n> \n> Good questions. This has to be based on being guaranteed to have\n> enough space for all of the entries, so the question is really just\n> \"how bad can performance get with different load factors\". 
FWIW there\n> were some interesting cases with clustering when simplehash was first\n> used in the executor (see commits ab9f2c42 and parent) which required\n> some work on hashing quality to fix.\nInteresting read, I didn't know much about simple hash, but it seems like there are still many cases where it can be used for good. I always wondered why Postgres uses only Larson's linear hash.\n\n> \n>>> Then I was annoyed that I had to add a \"status\" member to our struct,\n>>> so I tried to fix that.\n> \n>> Indeed, sizeof(SlruMappingTableEntry) == 9 seems strange. Will simplehash align it well?\n> \n> With that \"intrusive status\" patch, the size is back to 8. But I\n> think I made a mistake: I made it steal some key space to indicate\n> presence, but I think the presence test should really get access to\n> the whole entry so that you can encode it in more ways. For example,\n> with slotno == -1.\n> \n> Alright, considering the date, if we want to get this into PostgreSQL\n> 14 it's time to make some decisions.\n> \n> 1. Do we want customisable SLRU sizes in PG14?\n> \n> +1 from me, we have multiple reports of performance gains from\n> increasing various different SLRUs, and it's easy to find workloads\n> that go faster.\nYes, this is the main point of this discussion. So +1 from me too.\n\n> \n> One thought: it'd be nice if the user could *see* the current size,\n> when using the default. SHOW clog_buffers -> 0 isn't very helpful if\n> you want to increase it, but don't know what it's currently set to.\n> Not sure off the top of my head how best to do that.\nDon't we expect that the SHOW command indicates exactly the same value as in the config or a SET command? If this convention does not exist, showing the effective value is probably a good idea.\n\n\n> 2. What names do we want the GUCs to have? 
Here's what we have:\n> \n> Proposed GUC               Directory              System views\n> clog_buffers               pg_xact                Xact\n> multixact_offsets_buffers  pg_multixact/offsets   MultiXactOffset\n> multixact_members_buffers  pg_multixact/members   MultiXactMember\n> notify_buffers             pg_notify              Notify\n> serial_buffers             pg_serial              Serial\n> subtrans_buffers           pg_subtrans            Subtrans\n> commit_ts_buffers          pg_commit_ts           CommitTs\n> \n> By system views, I mean pg_stat_slru, pg_shmem_allocations and\n> pg_stat_activity (lock names add \"SLRU\" on the end).\n> \n> Observations:\n> \n> It seems obvious that \"clog_buffers\" should be renamed to \"xact_buffers\".\n+1\n> It's not clear whether the multixact GUCs should have the extra \"s\"\n> like the directories, or not, like the system views.\nI think we should break the ties by a native English speaker's ears or typing habits. I'm not a native speaker.\n\n> I see that we have \"Shared Buffer Lookup Table\" in\n> pg_shmem_allocations, so where I generated names like \"Subtrans\n> Mapping Table\" I should change that to \"Lookup\" to match.\n> \n> 3. What recommendations should we make about how to set it?\n> \n> I think the answer depends partially on the next questions! I think\n> we should probably at least say something short about the pg_stat_slru\n> view (cache miss rate) and pg_stat_activity view (waits on locks), and\n> how to tell if you might need to increase it. I think this probably\n> needs a new paragraph, separate from the docs for the individual GUC.\nI can only suggest an incident-driven approach.\n1. Observe a ridiculous amount of backends waiting on a particular SLRU.\n2. Double SLRU buffers for that SLRU.\n3. Goto 1.\nI don't think we should mention this approach in docs.\n\n> 4. Do we want to ship the dynahash patch?\n\nThis patch allows us to throw an infinite amount of memory at the problem of SLRU waiting for IO. So the scale of improvement is much higher. Do I want that we ship this patch? Definitely. Does this change much? I don't know.\n\n> \n> +0.9. 
The slight hesitation is that it's new code written very late\n> in the cycle, so it may still have bugs or unintended consequences,\n> and as you said, at small sizes the linear search must be faster than\n> the hash computation. Could you help test it, and try to break it?\nI'll test it and try to break it.\n\n> Can we quantify the scaling effect for some interesting workloads, to\n> see at what size the dynahash beats the linear search, so that we can\n> make an informed decision?\nI think we cannot statistically distinguish linear search from hash search by means of SLRU. But we can create some synthetic benchmarks.\n\n> Of course, without a hash table, large\n> sizes will surely work badly, so it'd be tempting to restrict the size\n> you can set the GUC to.\n> \n> If we do include the dynahash patch, then I think it would also be\n> reasonable to change the formula for the default, to make it higher on\n> large systems. The restriction to 128 buffers (= 1MB) doesn't make\n> much sense on a high frequency OLTP system with 128GB of shared\n> buffers or even 4GB. I think \"unleashing better defaults\" would\n> actually be bigger news than the GUC for typical users, because\n> they'll just see PG14 use a few extra MB and go faster without having\n> to learn about these obscure new settings.\nI agree. I don't see why we would need to limit buffers to 128 in the presence of hash search.\n\n> 5. Do we want to ship the simplehash patch?\n> \n> -0.5. It's a bit too exciting for the last minute, so I'd be inclined\n> to wait until the next cycle to do some more research and testing. I\n> know it's a better idea in the long run.\nOK, obviously, it's the safer decision.\n\n\nMy TODO list:\n1. Try to break patch set v13-[0001-0004]\n2. 
Think how to measure performance of linear search versus hash search in SLRU buffer mapping.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 29 Mar 2021 13:26:02 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n> 29 марта 2021 г., в 11:26, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> \n> My TODO list:\n> 1. Try to break patch set v13-[0001-0004]\n> 2. Think how to measure performance of linear search versus hash search in SLRU buffer mapping.\n\nHi Thomas!\nI'm still doing my homework. And to this moment all my catch is that \"utils/dynahash.h\" is not necessary.\n\nI'm thinking about hashtables, and measuring performance near the optimum of linear search does not seem like a good idea now.\nIt's impossible to prove that the difference is statistically significant on all platforms. But even on one platform measurements are just too noisy.\n\nThe shared buffers lookup table is indeed very similar to this SLRU lookup table. And it does not try to use more memory than needed. I could not find a pgbench-visible impact of growing the shared buffer lookup table. Obviously, because it's not a bottleneck on a regular workload. And it's hard to guess a representative pathological workload.\n\nIn fact, this thread started with a proposal to use a reader-writer lock for multis (instead of an exclusive lock), and this proposal encountered the same problem. It's very hard to create a stable reproduction of a pathological workload when this lock is heavily contended. Many people observed the problem, but still there is no open repro.\n\nI bet it will be hard to prove that simplehash is any better than HTAB. But if it is really better, shared buffers could benefit from the same technique.\n\nI think it's just fine to use HTAB with normal size, as long as shared buffers do so. But there we allocate slightly more space InitBufTable(NBuffers + NUM_BUFFER_PARTITIONS). 
Don't we need to allocate nslots + 1 ? It seems that we always do SlruMappingRemove() before SlruMappingAdd(), so it is not necessary.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 1 Apr 2021 00:09:43 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Thu, Apr 1, 2021 at 10:09 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > 29 марта 2021 г., в 11:26, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> > My TODO list:\n> > 1. Try to break patch set v13-[0001-0004]\n> > 2. Think how to measure performance of linear search versus hash search in SLRU buffer mapping.\n>\n> Hi Thomas!\n> I'm still doing my homework. And to this moment all my catch is that \"utils/dynahash.h\" is not necessary.\n\nThanks. Here's a new patch with that fixed, and also:\n\n1. New names (\"... Mapping Table\" -> \"... Lookup Table\" in\npg_shmem_allocations view, \"clog_buffers\" -> \"xact_buffers\") and a\ncouple of typos fixed here and there.\n\n2. Remove the cap of 128 buffers for xact_buffers as agreed. We\nstill need a cap though, to avoid a couple of kinds of overflow inside\nslru.c, both when computing the default value and accepting a\nuser-provided number. I introduced SLRU_MAX_ALLOWED_BUFFERS to keep\nit <= 1GB, and tested this on a 32 bit build with extreme block sizes.\n\nLikewise, I removed the cap of 16 buffers for commit_ts_buffers, but\nonly if you have track_commit_timestamp enabled. It seems silly to\nwaste 1MB per 1GB of shared_buffers on a feature you're not using. So\nthe default is capped at 16 in that case to preserve existing\nbehaviour, but otherwise can be set very large if you want.\n\nI think it's plausible that we'll want to make the multixact sizes\nadaptive too, but that might have to be a job for later. 
Likewise,\nI am sure that subtransaction-heavy workloads might be slower than\nthey need to be due to the current default, but I have not done the\nresearch. With these new GUCs, people are free to experiment and\ndevelop theories about what the defaults should be in v15.\n\n3. In the special case of xact_buffers, there is a lower cap of\n512MB, because there's no point trying to cache more xids than there\ncan be in existence, and that is computed by working backwards from\nCLOG_XACTS_PER_PAGE etc. It's not possible to do the same sort of\nthing for the other SLRUs without overflow problems, and it doesn't\nseem worth trying to fix that right now (1GB of cached commit\ntimestamps ought to be enough for anyone™, while the theoretical\nmaximum is 10 bytes for 2b xids = 20GB).\n\nTo make this more explicit for people not following our discussion in\ndetail: with shared_buffers=0...512MB, the behaviour doesn't change.\nBut for shared_buffers=1GB you'll get twice as many xact_buffers as\ntoday (2MB instead of being capped at 1MB), and it keeps scaling\nlinearly from there at 0.2%. In other words, all real world databases\nwill get a boost in this department.\n\n4. Change the default for commit_ts_buffers back to shared_buffers /\n1024 (with a minimum of 4), because I think you might have changed it\nby a copy and paste error -- or did you intend to make the default\nhigher?\n\n5. Improve descriptions for the GUCs, visible in pg_settings view, to\nmatch the documentation for related wait events. So \"for commit log\nSLRU\" -> \"for the transaction status SLRU cache\", and similar\ncorrections elsewhere. (I am tempted to try to find a better word\nthan \"SLRU\", which doesn't seem relevant to users, but for now\nconsistency is good.)\n\n6. 
Added a callback so that SHOW xact_buffers and SHOW\ncommit_ts_buffers display the real size in effect (instead of \"0\" for\ndefault).\n\nI tried running with xact_buffers=1 and soon saw why you change it to\ninterpret 1 the same as 0; with 1 you hit buffer starvation and get\nstuck. I wish there were some way to say \"the default for this GUC is\n0, but if it's not zero then it's got to be at least 4\". I didn't\nstudy the theoretical basis for the previous minimum value of 4, so I\nthink we should keep it that way, so that if you say 3 you get 4. I\nthought it was better to express that like so:\n\n /* Use configured value if provided. */\n if (xact_buffers > 0)\n return Max(4, xact_buffers);\n return Min(CLOG_MAX_ALLOWED_BUFFERS, Max(4, NBuffers / 512));\n\n> I'm thinking about hashtables and measuring performance near optimum of linear search does not seem a good idea now.\n> It's impossible to prove that difference is statistically significant on all platforms. But even on one platform measurements are just too noisy.\n>\n> Shared buffers lookup table is indeed very similar to this SLRU lookup table. And it does not try to use more memory than needed. I could not find pgbench-visible impact of growing shared buffer lookup table. Obviously, because it's not a bottleneck on regular workload. And it's hard to guess representative pathological workload.\n\nThanks for testing. I agree that it's a good idea to follow the main\nbuffer pool's approach for our first version of this. One small\ndifference is that the SLRU patch performs the hash computation while\nit holds the lock. 
If we computed the hash first and used\nhash_search_with_hash_value(), we could compute it before we obtain\nthe lock, like the main buffer pool.\n\nIf we computed the hash value first, we could also ignore the rule in\nthe documentation for hash_search_with_hash_value() that says that you\nmust calculate it with get_hash_value(), and just call the hash\nfunction ourselves, so that it's fully inlinable. The same\nopportunity exists for the main buffer pool. That'd get you one of\nthe micro-optimisations that simplehash.h offers. Whether it's worth\nbothering with, I don't know.\n\n> In fact, this thread started with proposal to use reader-writer lock for multis (instead of exclusive lock), and this proposal encountered same problem. It's very hard to create stable reproduction of pathological workload when this lock is heavily contented. Many people observed the problem, but still there is no open repro.\n>\n> I bet it will be hard to prove that simplehash is any better then HTAB. But if it is really better, shared buffers could benefit from the same technique.\n\nAgreed, there are a lot of interesting future projects in this area,\nwhen you compare the main buffer pool, these special buffer pools, and\nmaybe also the \"shared relfilenode\" pool patch I have proposed for v15\n(CF entry 2933). All have mapping tables and buffer replacement\nalgorithms (why should they be different?), one has partitions, some\nhave atomic-based header locks, they interlock with WAL differently\n(on page LSN, FPIs and checksum support), ... etc etc.\n\n> I think its just fine to use HTAB with normal size, as long as shared buffers do so. But there we allocate slightly more space InitBufTable(NBuffers + NUM_BUFFER_PARTITIONS). Don't we need to allocate nslots + 1 ? It seems that we always do SlruMappingRemove() before SlruMappingAdd() and it is not necessary.\n\nYeah, we never try to add more elements than allowed, because we have\na big lock around the mapping. 
The main buffer mapping table has a\nmore concurrent design and might temporarily have one extra entry per\npartition.", "msg_date": "Thu, 1 Apr 2021 16:40:16 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 1 апр. 2021 г., в 06:40, Thomas Munro <thomas.munro@gmail.com> написал(а):\n> \n> 2. Remove the cap of 128 buffers for xact_buffers as agreed. We\n> still need a cap though, to avoid a couple of kinds of overflow inside\n> slru.c, both when computing the default value and accepting a\n> user-provided number. I introduced SLRU_MAX_ALLOWED_BUFFERS to keep\n> it <= 1GB, and tested this on a 32 bit build with extreme block sizes.\nBTW we do not document maximum values right now.\nI was toying around with big values. For example if we set different big xact_buffers we can get something like\nFATAL: not enough shared memory for data structure \"Notify\" (72768 bytes requested)\nFATAL: not enough shared memory for data structure \"Async Queue Control\" (2492 bytes requested)\nFATAL: not enough shared memory for data structure \"Checkpointer Data\" (393280 bytes requested)\n\nBut never anything about xact_buffers. I don't think it's important, though.\n\n> \n> Likewise, I removed the cap of 16 buffers for commit_ts_buffers, but\n> only if you have track_commit_timestamp enabled. \nIs there a reason to leave 16 pages if commit_ts is disabled? They might be useful for some artefacts of previously enabled commit_ts?\n\n> 4. 
Change the default for commit_ts_buffers back to shared_buffers /\n> 1024 (with a minimum of 4), because I think you might have changed it\n> by a copy and paste error -- or did you intend to make the default\n> higher?\nI changed the default due to some experiments with https://www.postgresql.org/message-id/flat/20210115220744.GA24457%40alvherre.pgsql\nIn fact the most important part of that thread was removing the cap, which is done by the patchset now.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 3 Apr 2021 22:57:23 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Sun, Apr 4, 2021 at 7:57 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> I was toying around with big values. For example if we set different big\n> xact_buffers we can get something like\n> FATAL: not enough shared memory for data structure \"Notify\" (72768 bytes\n> requested)\n> FATAL: not enough shared memory for data structure \"Async Queue Control\"\n> (2492 bytes requested)\n> FATAL: not enough shared memory for data structure \"Checkpointer Data\"\n> (393280 bytes requested)\n>\n> But never anything about xact_buffers. I don't think it's important,\n> though.\n\nI had added the hash table size in SimpleLruShmemSize(), but then\nSimpleLruInit() passed that same value in when allocating the struct, so\nthe struct was oversized. Oops. Fixed.\n\n> > Likewise, I removed the cap of 16 buffers for commit_ts_buffers, but\n> > only if you have track_commit_timestamp enabled.\n\n> Is there a reason to leave 16 pages if commit_ts is disabled? They might\n> be useful for some artefacts of previously enabled commit_ts?\n\nAlvaro, do you have an opinion on that?\n\nThe remaining thing that bothers me about this patch set is that there is\nstill a linear search in the replacement algorithm, and it runs with an\nexclusive lock. 
That creates a serious problem for large caches that still\naren't large enough. I wonder if we can do something to improve that\nsituation in the time we have. I considered a bunch of ideas but could\nonly find one that fits with slru.c's simplistic locking while tracking\nrecency. What do you think about a hybrid of SLRU and random replacement,\nthat retains some characteristics of both? You could think of it as being\na bit like the tournament selection of the genetic algorithm, with a\ntournament size of (say) 8 or 16. Any ideas on how to evaluate this and\nchoose the number? See attached.", "msg_date": "Wed, 7 Apr 2021 17:59:19 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 7 апр. 2021 г., в 08:59, Thomas Munro <thomas.munro@gmail.com> написал(а):\n> \n> The remaining thing that bothers me about this patch set is that there is still a linear search in the replacement algorithm, and it runs with an exclusive lock. That creates a serious problem for large caches that still aren't large enough. I wonder if we can do something to improve that situation in the time we have. I considered a bunch of ideas but could only find one that fits with slru.c's simplistic locking while tracking recency. What do you think about a hybrid of SLRU and random replacement, that retains some characteristics of both? You could think of it as being a bit like the tournament selection of the genetic algorithm, with a tournament size of (say) 8 or 16. Any ideas on how to evaluate this and choose the number? See attached.\n> <v15-0001-Add-a-buffer-mapping-table-for-SLRUs.patch><v15-0002-Make-all-SLRU-buffer-sizes-configurable.patch><v15-0003-Use-hybrid-random-SLRU-replacement-for-SLRUs.patch>\n\nMaybe instead of fully associative cache with random replacement we could use 1-associative cache?\ni.e. each page can reside only in one specific buffer slot. 
If there's something else - evict it.\nI think this would be as efficient as RR cache. And it's soooo fast.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 7 Apr 2021 14:44:32 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 7 апр. 2021 г., в 14:44, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> \n> Maybe instead of fully associative cache with random replacement we could use 1-associative cache?\n> i.e. each page can reside only in one specific buffer slot. If there's something else - evict it.\n> I think this would be as efficient as RR cache. And it's soooo fast.\n\nI thought a bit more and understood that RR is protected from two competing pages in the working set, while a 1-associative cache is not. So, discard that idea.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 7 Apr 2021 15:13:26 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Thu, Apr 8, 2021 at 12:13 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > 7 апр. 2021 г., в 14:44, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> > Maybe instead of fully associative cache with random replacement we could use 1-associative cache?\n> > i.e. each page can reside only in one specific buffer slot. If there's something else - evict it.\n> > I think this would be as efficient as RR cache. And it's soooo fast.\n>\n> I thought a bit more and understood that RR is protected from two competing pages in the working set, while a 1-associative cache is not. So, discard that idea.\n\nIt's an interesting idea. 
I know that at least one proprietary fork\njust puts the whole CLOG in memory for direct indexing, which is what\nwe'd have here if we said \"oh, your xact_buffers setting is so large\nI'm just going to use slotno = pageno & mask\".\n\nHere's another approach that is a little less exciting than\n\"tournament RR\" (or whatever that should be called; I couldn't find an\nestablished name for it). This version is just our traditional linear\nsearch, except that it stops at 128, and remembers where to start from\nnext time (like a sort of Fisher-Price GCLOCK hand). This feels more\ncommittable to me. You can argue that all buffers above 128 are bonus\nbuffers that PostgreSQL 13 didn't have, so the fact that we can no\nlonger find the globally least recently used page when you set\nxact_buffers > 128 doesn't seem too bad to me, as an incremental step\n(but to be clear, of course we can do better than this with more work\nin later releases).", "msg_date": "Thu, 8 Apr 2021 12:30:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> 8 апр. 2021 г., в 03:30, Thomas Munro <thomas.munro@gmail.com> написал(а):\n> \n> Here's another approach that is a little less exciting than\n> \"tournament RR\" (or whatever that should be called; I couldn't find an\n> established name for it). This version is just our traditional linear\n> search, except that it stops at 128, and remembers where to start from\n> next time (like a sort of Fisher-Price GCLOCK hand). This feels more\n> committable to me. 
You can argue that all buffers above 128 are bonus\n> buffers that PostgreSQL 13 didn't have, so the fact that we can no\n> longer find the globally least recently used page when you set\n> xact_buffers > 128 doesn't seem too bad to me, as an incremental step\n> (but to be clear, of course we can do better than this with more work\n> in later releases).\nI agree that this version of eviction seems much more effective and less intrusive than RR. And it's still LRU, which is important for subsystem that is called SLRU.\nshared->search_slotno is initialized implicitly with memset(). But this seems like a common practice.\nAlso comment above \"max_search = Min(shared->num_slots, MAX_REPLACEMENT_SEARCH);\" does not reflect changes.\n\nBesides this patch looks good to me.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 8 Apr 2021 10:24:55 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Thu, Apr 8, 2021 at 7:24 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> I agree that this version of eviction seems much more effective and less intrusive than RR. And it's still LRU, which is important for subsystem that is called SLRU.\n> shared->search_slotno is initialized implicitly with memset(). But this seems like a common practice.\n> Also comment above \"max_search = Min(shared->num_slots, MAX_REPLACEMENT_SEARCH);\" does not reflect changes.\n>\n> Besides this patch looks good to me.\n\nThanks! I chickened out of committing a buffer replacement algorithm\npatch written 11 hours before the feature freeze, but I also didn't\nreally want to commit the GUC patch without that. Ahh, if only we'd\nlatched onto the real problems here just a little sooner, but there is\nalways PostgreSQL 15, I heard it's going to be amazing. 
Moved to next\nCF.\n\n\n", "msg_date": "Fri, 9 Apr 2021 00:22:39 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> 8 апр. 2021 г., в 15:22, Thomas Munro <thomas.munro@gmail.com> написал(а):\n> \n> On Thu, Apr 8, 2021 at 7:24 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> I agree that this version of eviction seems much more effective and less intrusive than RR. And it's still LRU, which is important for subsystem that is called SLRU.\n>> shared->search_slotno is initialized implicitly with memset(). But this seems like a common practice.\n>> Also comment above \"max_search = Min(shared->num_slots, MAX_REPLACEMENT_SEARCH);\" does not reflect changes.\n>> \n>> Besides this patch looks good to me.\n> \n> Thanks! I chickened out of committing a buffer replacement algorithm\n> patch written 11 hours before the feature freeze, but I also didn't\n> really want to commit the GUC patch without that. Ahh, if only we'd\n> latched onto the real problems here just a little sooner, but there is\n> always PostgreSQL 15, I heard it's going to be amazing. Moved to next\n> CF.\n\nI have one more idea inspired by CPU caches.\nLet's make SLRU n-associative, where n ~ 8.\nWe can divide buffers into \"banks\", number of banks must be power of 2.\nAll banks are of equal size. We choose bank size to approximately satisfy user's configured buffer size.\nEach page can live only within one bank. 
We use the same search and eviction algorithms as we used in SLRU, but we only need to search\\evict over 8 elements.\nAll SLRU data of a single bank will be colocated within at most 2 cache lines.\n\nI did not come up with an idea how to avoid the multiplication of bank_number * bank_size in case when the user configured 31337 buffers (any number that is radically not a power of 2).\n\nPFA patch implementing this idea.\n\nBest regards, Andrey Borodin.", "msg_date": "Sun, 11 Apr 2021 21:37:21 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Mon, 14 Jun 2021 at 15:07, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> PFA patch implementing this idea.\n>\n\nI benchmarked the v17 patches.\nTesting was done on a 96-core machine, with PGDATA completely placed in\ntmpfs.\nPostgreSQL was built with CFLAGS -O2.\n\nfor-update PgBench script:\n\\set aid random_zipfian(1, 100, 2)\nbegin;\nselect :aid from pgbench_accounts where aid = :aid for update;\nupdate pgbench_accounts set abalance = abalance + 1 where aid = :aid;\nupdate pgbench_accounts set abalance = abalance * 2 where aid = :aid;\nupdate pgbench_accounts set abalance = abalance - 2 where aid = :aid;\nend;\n\nBefore each test, sample data was filled with \"pgbench -i -s 100\"; testing\nwas performed 3 times for 1 hour each test.\nThe benchmark results are presented with changing\nmultixact_members_buffers and multixact_offsets_buffers (1:2 respectively):\nsettings                         tps\nmultixact_members_buffers_64Kb   693.2\nmultixact_members_buffers_128Kb  691.4\nmultixact_members_buffers_192Kb  696.3\nmultixact_members_buffers_256Kb  694.4\nmultixact_members_buffers_320Kb  692.3\nmultixact_members_buffers_448Kb  693.7\nmultixact_members_buffers_512Kb  693.3\nvanilla                          676.1\n\nBest regards, Dmitry Vasiliev.
", "msg_date": "Mon, 14 Jun 2021 20:40:16 +0300", "msg_from": "=?UTF-8?B?0JLQsNGB0LjQu9GM0LXQsiDQlNC80LjRgtGA0LjQuQ==?=\n <vadv.mkn@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": ">> On 8 Apr 2021, at 15:22, Thomas Munro <thomas.munro@gmail.com> wrote:\n>>\n> I have one more idea inspired by CPU caches.\n> Let's make SLRU n-associative, where n ~ 8.\n> We can divide buffers into \"banks\"; the number of banks must be a power of 2.\n> All banks are of equal size. We choose the bank size to approximately satisfy the user's configured buffer size.\n> Each page can live only within one bank.
We use the same search and eviction algorithms as we used in SLRU, but we only need to search\\evict over 8 elements.\n> All SLRU data of a single bank will be colocated within at most 2 cache lines.\n> \n> I did not come up with an idea how to avoid the multiplication of bank_number * bank_size in case when the user configured 31337 buffers (any number that is radically not a power of 2).\n\nWe can avoid this multiplication by using gapped memory under SLRU page_statuses, but from my POV the complexity here is not worth the possible performance gain.\n\nPFA rebase of the patchset. Also I've added a patch to combine page_number, page_status, and page_dirty together to touch less cachelines.\n\nBest regards, Andrey Borodin.", "msg_date": "Sun, 26 Dec 2021 15:09:59 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi,\nOn Sun, Dec 26, 2021 at 03:09:59PM +0500, Andrey Borodin wrote:\n> \n> \n> PFA rebase of the patchset.
Also I've added a patch to combine page_number, page_status, and page_dirty together to touch less cachelines.\n\nThe cfbot reports some errors on the latest version of the patch:\n\nhttps://cirrus-ci.com/task/6121317215764480\n[04:56:38.432] su postgres -c \"make -s -j${BUILD_JOBS} world-bin\"\n[04:56:48.270] In file included from async.c:134:\n[04:56:48.270] ../../../src/include/access/slru.h:47:28: error: expected identifier or ‘(’ before ‘:’ token\n[04:56:48.270] 47 | typedef enum SlruPageStatus:int16_t\n[04:56:48.270] | ^\n[04:56:48.270] ../../../src/include/access/slru.h:53:3: warning: data definition has no type or storage class\n[04:56:48.270] 53 | } SlruPageStatus;\n[04:56:48.270] | ^~~~~~~~~~~~~~\n[04:56:48.270] ../../../src/include/access/slru.h:53:3: warning: type defaults to ‘int’ in declaration of ‘SlruPageStatus’ [-Wimplicit-int]\n[04:56:48.270] ../../../src/include/access/slru.h:58:2: error: expected specifier-qualifier-list before ‘SlruPageStatus’\n[04:56:48.270] 58 | SlruPageStatus page_status;\n[04:56:48.270] | ^~~~~~~~~~~~~~\n[04:56:48.270] async.c: In function ‘asyncQueueAddEntries’:\n[04:56:48.270] async.c:1448:41: error: ‘SlruPageEntry’ has no member named ‘page_dirty’\n[04:56:48.270] 1448 | NotifyCtl->shared->page_entries[slotno].page_dirty = true;\n[04:56:48.270] | ^\n[04:56:48.271] make[3]: *** [<builtin>: async.o] Error 1\n[04:56:48.271] make[3]: *** Waiting for unfinished jobs....\n[04:56:48.297] make[2]: *** [common.mk:39: commands-recursive] Error 2\n[04:56:48.297] make[2]: *** Waiting for unfinished jobs....\n[04:56:54.554] In file included from clog.c:36:\n[04:56:54.554] ../../../../src/include/access/slru.h:47:28: error: expected identifier or ‘(’ before ‘:’ token\n[04:56:54.554] 47 | typedef enum SlruPageStatus:int16_t\n[04:56:54.554] | ^\n[04:56:54.554] ../../../../src/include/access/slru.h:53:3: warning: data definition has no type or storage class\n[04:56:54.554] 53 | } SlruPageStatus;\n[04:56:54.554] | 
^~~~~~~~~~~~~~\n[04:56:54.554] ../../../../src/include/access/slru.h:53:3: warning: type defaults to ‘int’ in declaration of ‘SlruPageStatus’ [-Wimplicit-int]\n[04:56:54.554] ../../../../src/include/access/slru.h:58:2: error: expected specifier-qualifier-list before ‘SlruPageStatus’\n[04:56:54.554] 58 | SlruPageStatus page_status;\n[04:56:54.554] | ^~~~~~~~~~~~~~\n[04:56:54.554] clog.c: In function ‘TransactionIdSetPageStatusInternal’:\n[04:56:54.554] clog.c:396:39: error: ‘SlruPageEntry’ has no member named ‘page_dirty’\n[04:56:54.554] 396 | XactCtl->shared->page_entries[slotno].page_dirty = true;\n[04:56:54.554] | ^\n[04:56:54.554] In file included from ../../../../src/include/postgres.h:46,\n[04:56:54.554] from clog.c:33:\n[04:56:54.554] clog.c: In function ‘BootStrapCLOG’:\n[04:56:54.554] clog.c:716:47: error: ‘SlruPageEntry’ has no member named ‘page_dirty’\n[04:56:54.554] 716 | Assert(!XactCtl->shared->page_entries[slotno].page_dirty);\n[04:56:54.554] | ^\n[04:56:54.554] ../../../../src/include/c.h:848:9: note: in definition of macro ‘Assert’\n[04:56:54.554] 848 | if (!(condition)) \\\n[04:56:54.554] | ^~~~~~~~~\n[04:56:54.554] clog.c: In function ‘TrimCLOG’:\n[04:56:54.554] clog.c:801:40: error: ‘SlruPageEntry’ has no member named ‘page_dirty’\n[04:56:54.554] 801 | XactCtl->shared->page_entries[slotno].page_dirty = true;\n[04:56:54.554] | ^\n[04:56:54.554] In file included from ../../../../src/include/postgres.h:46,\n[04:56:54.554] from clog.c:33:\n[04:56:54.554] clog.c: In function ‘clog_redo’:\n[04:56:54.554] clog.c:997:48: error: ‘SlruPageEntry’ has no member named ‘page_dirty’\n[04:56:54.554] 997 | Assert(!XactCtl->shared->page_entries[slotno].page_dirty);\n[04:56:54.554] | ^\n[04:56:54.554] ../../../../src/include/c.h:848:9: note: in definition of macro ‘Assert’\n[04:56:54.554] 848 | if (!(condition)) \\\n[04:56:54.554] | ^~~~~~~~~\n[04:56:54.555] make[4]: *** [<builtin>: clog.o] Error 1\n[04:56:54.555] make[3]: *** [../../../src/backend/common.mk:39: 
transam-recursive] Error 2\n[04:56:54.555] make[3]: *** Waiting for unfinished jobs....\n[04:56:56.405] make[2]: *** [common.mk:39: access-recursive] Error 2\n[04:56:56.405] make[1]: *** [Makefile:42: all-backend-recurse] Error 2\n[04:56:56.405] make: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n[04:56:56.407]\n[04:56:56.407] Exit status: 2\n\nCould you send a new version? In the meantime I will switch the patch status\nto Waiting on Author.\n\n\n", "msg_date": "Fri, 14 Jan 2022 17:28:38 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Fri, Jan 14, 2022 at 05:28:38PM +0800, Julien Rouhaud wrote:\n> > PFA rebase of the patchset. Also I've added a patch to combine \n> > page_number, page_status, and page_dirty together to touch less \n> > cachelines.\n> \n> The cfbot reports some errors on the latest version of the patch:\n> \n> https://cirrus-ci.com/task/6121317215764480\n> [...]\n> Could you send a new version? In the meantime I will switch the patch \n> status\n> to Waiting on Author.\n> \n\nI was planning on running a set of stress tests on these patches. Could \nwe confirm which ones we plan to include in the commitfest?\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n", "msg_date": "Fri, 14 Jan 2022 14:20:05 -0800", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> 15 янв. 2022 г., в 03:20, Shawn Debnath <sdn@amazon.com> написал(а):\n> \n> On Fri, Jan 14, 2022 at 05:28:38PM +0800, Julien Rouhaud wrote:\n>>> PFA rebase of the patchset. Also I've added a patch to combine \n>>> page_number, page_status, and page_dirty together to touch less \n>>> cachelines.\n>> \n>> The cfbot reports some errors on the latest version of the patch:\n>> \n>> https://cirrus-ci.com/task/6121317215764480\n>> [...]\n>> Could you send a new version? 
In the meantime I will switch the patch \n>> status\n>> to Waiting on Author.\n>> \n> \n> I was planning on running a set of stress tests on these patches. Could \n> we confirm which ones we plan to include in the commitfest?\n\nMany thanks for your interest. Here's the latest version.\n\nBest regards, Andrey Borodin.", "msg_date": "Sat, 15 Jan 2022 12:16:59 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Sat, Jan 15, 2022 at 12:16:59PM +0500, Andrey Borodin wrote:\n> > 15 янв. 2022 г., в 03:20, Shawn Debnath <sdn@amazon.com> написал(а):\n> > On Fri, Jan 14, 2022 at 05:28:38PM +0800, Julien Rouhaud wrote:\n> >>> PFA rebase of the patchset. Also I've added a patch to combine \n> >>> page_number, page_status, and page_dirty together to touch less \n> >>> cachelines.\n> >> \n> >> The cfbot reports some errors on the latest version of the patch:\n> >> \n> >> https://cirrus-ci.com/task/6121317215764480\n> >> [...]\n> >> Could you send a new version? In the meantime I will switch the patch \n> >> status to Waiting on Author.\n> > \n> > I was planning on running a set of stress tests on these patches. Could \n> > we confirm which ones we plan to include in the commitfest?\n> \n> Many thanks for your interest. 
Here's the latest version.\n\nThis is failing to compile under linux and windows due to bitfield syntax.\nhttp://cfbot.cputube.org/andrey-borodin.html\n\nAnd compile warnings:\n\nslru.c: In function ‘SlruAdjustNSlots’:\nslru.c:161:2: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\n 161 | int nbanks = 1;\n | ^~~\nslru.c: In function ‘SimpleLruReadPage_ReadOnly’:\nslru.c:530:2: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\n 530 | int bankstart = (pageno & shared->bank_mask) * shared->bank_size;\n | ^~~\n\nNote that you can test the CI result using any github account without waiting\nfor the cfbot. See ./src/tools/ci/README.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 15 Jan 2022 09:46:01 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact/SLRU buffers configuration" }, { "msg_contents": "> 15 янв. 2022 г., в 20:46, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n>>> \n>>> I was planning on running a set of stress tests on these patches. Could \n>>> we confirm which ones we plan to include in the commitfest?\n>> \n>> Many thanks for your interest. Here's the latest version.\n> \n> This is failing to compile under linux and windows due to bitfield syntax.\n> http://cfbot.cputube.org/andrey-borodin.html\n\nUh, sorry, I formatted a patch from wrong branch.\n\nJust tested Cirrus. It's wonderful, thanks! Really faster than doing stuff on my machines...\n\nBest regards, Andrey Borodin.", "msg_date": "Sun, 16 Jan 2022 10:36:08 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact/SLRU buffers configuration" }, { "msg_contents": "On Sat, Jan 15, 2022 at 12:16:59PM +0500, Andrey Borodin wrote:\n\n> > I was planning on running a set of stress tests on these patches. Could\n> > we confirm which ones we plan to include in the commitfest?\n> \n> Many thanks for your interest. 
Here's the latest version.\n\nHere are the results of the multixact perf test I ran on the patch that splits\nthe linear SLRU caches into banks. With my test setup, the binaries\nwith the patch applied performed marginally slower across the test\nmatrix than the unpatched binaries. Here are the results:\n\n+-------------------------------+---------------------+-----------------------+------------+\n| workload                      | patched average tps | unpatched average tps | difference |\n+-------------------------------+---------------------+-----------------------+------------+\n| create only                   | 10250.54396         | 10349.67487           | -1.0%      |\n| create and select             | 9677.711286         | 9991.065037           | -3.2%      |\n| large cache create only       | 10310.96646         | 10337.16455           | -0.3%      |\n| large cache create and select | 9654.24077          | 9924.270242           | -2.8%      |\n+-------------------------------+---------------------+-----------------------+------------+\n\nThe test was configured in the following manner:\n- AWS EC2 c5d.24xlarge instances, located in the same AZ, were used as\n the database host and the test driver. These systems have 96 vcpus and\n 184 GB memory. NVMe drives were configured as RAID5.\n- GUCs were changed from defaults to be the following:\n max_connections = 5000\n shared_buffers = 96GB\n max_wal_size = 2GB\n min_wal_size = 192MB\n- pgbench runs were done with -c 1000 -j 1000 and a scale of 10,000\n- Two multixact workloads were tested, first [0] was a create only\n script which selected 100 pgbench_account rows for share. Second\n workload [1] added a select statement to visit rows touched in the\n past which had multixacts generated for them.
pgbench test script [2]\n wraps the call to the functions inside an explicit transaction.\n- Large cache tests had the multixact offsets cache size hard coded to 128\n and the members cache size hard coded to 256.\n- Row selection is based on a time based approach that lets all client\n connections coordinate which rows to work with based on the\n millisecond they start executing. To allow for more multixacts to be\n generated and reduce contention, the workload uses offsets ahead of\n the start id based on a random number.\n- The one bummer about these runs was that they only ran for 600\n seconds for insert only and 400 seconds for insert and select. I\n consistently ran into checkpointer getting oom-killed on this instance\n after that timeframe. Will dig into this separately. But the TPS was \n consistent.\n- Each test was repeated at least 3 times and the average of those runs\n was used.\n- I am using the master branch and changes were applied on commit\n f47ed79cc8a0cfa154dc7f01faaf59822552363f\n\n\nI think patch 1 is a must-have. Regarding patch 2, I would propose we \navoid introducing more complexity into the SimpleLRU cache and instead focus \non making the SLRU to buffer cache effort [3] a reality. I would also \nadd that we have a few customers in our fleet who have been successfully \nrunning the large cache configuration on the regular SLRU without any \nissues. With cache sizes this small, the linear searches are still quite \nefficient.\n\nIf my test workload can be made better, please let me know.
Happy to \nre-run tests as needed.\n\n\n[0] https://gist.github.com/sdebnath/e015561811adf721dd40dd6638969c69\n[1] https://gist.github.com/sdebnath/2f3802e1fe288594b6661a7a59a7ca07\n[2] https://gist.github.com/sdebnath/6bbfd5f87945a7d819e30a9a1701bc97\n[3] https://www.postgresql.org/message-id/CA%2BhUKGKAYze99B-jk9NoMp-2BDqAgiRC4oJv%2BbFxghNgdieq8Q%40mail.gmail.com\n\n\n\n--\nShawn Debnath\nAmazon Web Services (AWS)", "msg_date": "Thu, 20 Jan 2022 07:44:54 -0800", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 20 Jan 2022, at 20:44, Shawn Debnath <sdn@amazon.com> wrote:\n> \n> If my test workload can be made better, please let me know. Happy to \n> re-run tests as needed.\n\nShawn, thanks for the benchmarking!\n\nCan you please also test the 2nd patch against large multixact SLRUs?\nThe 2nd patch is not intended to make things better on default buffer sizes. It must save the performance in case of really huge SLRU buffers.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 20 Jan 2022 21:21:24 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Thu, Jan 20, 2022 at 09:21:24PM +0500, Andrey Borodin wrote:\n> > On 20 Jan 2022, at 20:44, Shawn Debnath <sdn@amazon.com> wrote:\n> Can you please also test the 2nd patch against large multixact SLRUs?\n> The 2nd patch is not intended to make things better on default buffer sizes. It must save the performance in case of really huge SLRU buffers.\n\nTest was performed on 128/256 for multixact offset/members cache as \nstated in my previous email.
Sure I can test it for higher values - but \nwhat's a real world value that would make sense? We have been using this \nconfiguration successfully for a few of our customers that ran into \nMultiXact contention.\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)", "msg_date": "Thu, 20 Jan 2022 16:19:49 -0800", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 21 Jan 2022, at 05:19, Shawn Debnath <sdn@amazon.com> wrote:\n> \n> On Thu, Jan 20, 2022 at 09:21:24PM +0500, Andrey Borodin wrote:\n>>> On 20 Jan 2022, at 20:44, Shawn Debnath <sdn@amazon.com> wrote:\n>> Can you please also test the 2nd patch against large multixact SLRUs?\n>> The 2nd patch is not intended to make things better on default buffer sizes. It must save the performance in case of really huge SLRU buffers.\n> \n> Test was performed on 128/256 for multixact offset/members cache as \n> stated in my previous email. Sure I can test it for higher values - but \n> what's a real world value that would make sense? We have been using this \n> configuration successfully for a few of our customers that ran into \n> MultiXact contention.\n\nSorry, seems like I misinterpreted results yesterday.\nI had one concern about the 1st patch step: it makes the CLOG buffers size dependent on shared_buffers. But in your tests you seem to have already exercised xact_buffers = 24576 without noticeable degradation. Is it correct? I doubt a little bit that linear search among 24K elements on each CLOG access does not incur performance impact, but your tests seem to prove it.\n\nIMV splitting SLRU buffers into banks would make sense for values greater than 1<<10.
But you are right that 256 seems enough to cope with most of the problems of multixacts so far. I just thought about stressing SLRU buffers with multixacts to be sure that CLOG buffers will not suffer degradation. But yes, it's too indirect a test.\n\nMaybe, just to be sure, let's repeat tests with autovacuum turned off to stress xact_buffers? \n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 21 Jan 2022 12:02:36 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 8 Apr 2021, at 17:22, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> Thanks! I chickened out of committing a buffer replacement algorithm\n> patch written 11 hours before the feature freeze, but I also didn't\n> really want to commit the GUC patch without that. Ahh, if only we'd\n> latched onto the real problems here just a little sooner, but there is\n> always PostgreSQL 15, I heard it's going to be amazing. Moved to next\n> CF.\n\nHi Thomas!\n\nThere's feature freeze approaching again. I see that you are working on moving SLRUs to buffer pools, but it is not clear to which PG version it will land.
And there is 100% consensus that first patch is useful and helps to prevent big issues. Maybe let's commit 1'st step without lifting default xact_buffers limit? Or 1st patch as-is with any simple technique that prevents linear search in SLRU buffers.\n\nHi Andrey,\n\nYeah, the SLRU/buffer pool thing would be complete enough to propose\nfor 16 at the earliest. I posted the early prototype to see what sort\nof reaction the concept would get before doing more work; I know\nothers have investigated this topic too... maybe it can encourage more\npatches, experimental results, ideas to be shared... but this is not\nrelevant for 15.\n\nBack to this patch: assuming we can settle on a good-enough-for-now\nreplacement algorithm, do we want to add this set of 7 GUCs? Does\nanyone else want to weigh in on that? Concretely, this patch adds:\n\nmultixact_offsets_buffers\nmultixact_members_buffers\nsubtrans_buffers\nnotify_buffers\nserial_buffers\nxact_buffers\ncommit_ts_buffers\n\nI guess the people at\nhttps://ottertune.com/blog/postgresql-knobs-list/ would be happy if we\ndid. Hopefully we'd drop the settings in a future release once we\nfigure out the main buffer pool thing (or some other scheme to\nautomate sizing).\n\n\n", "msg_date": "Sun, 20 Feb 2022 10:38:53 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 2022-02-20 10:38:53 +1300, Thomas Munro wrote:\n> Back to this patch: assuming we can settle on a good-enough-for-now\n> replacement algorithm, do we want to add this set of 7 GUCs? 
Does\n> anyone else want to weigh in on that?\n\nI'm -0.2 on it, given that we have a better path forward.\n\n\n", "msg_date": "Sat, 19 Feb 2022 13:42:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 20 Feb 2022, at 02:42, Andres Freund <andres@anarazel.de> wrote:\n> \n> On 2022-02-20 10:38:53 +1300, Thomas Munro wrote:\n>> Back to this patch: assuming we can settle on a good-enough-for-now\n>> replacement algorithm, do we want to add this set of 7 GUCs? Does\n>> anyone else want to weigh in on that?\n> \n> I'm -0.2 on it, given that we have a better path forward.\nThat’s a really good path forward, but it's discussed at least for 3.5 years[0]. And guaranteed not to be there until 2023. Gilles, Shawn, Dmitry expressed their opinion in lines with that the patch “is a must-have” referring to real pathological performance degradation inflicted by SLRU cache starvation. And I can remember dozen of other incidents that would not happen if the patch was applied, e.g. this post is referring to the patch as a cure [1].\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/20180814213500.GA74618%4060f81dc409fc.ant.amazon.com\n[1] https://about.gitlab.com/blog/2021/09/29/why-we-spent-the-last-month-eliminating-postgresql-subtransactions/#what-can-we-do-about-getting-rid-of-nessie\t\n\n", "msg_date": "Sun, 20 Feb 2022 12:35:06 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 20 Feb 2022, at 02:38, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> Back to this patch: assuming we can settle on a good-enough-for-now\n> replacement algorithm, do we want to add this set of 7 GUCs? 
Does\n> anyone else want to weigh in on that?\n\nHi Thomas!\n\nIt seems we don't have any other input besides reviews and Andres's -0.2.\nIs there a chance to proceed?\n\nBest regards, Andrey Borodin.", "msg_date": "Fri, 18 Mar 2022 14:02:13 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Good day, all.\n\nI did a benchmark of the patch on a 2 socket Xeon 5220 CPU @ 2.20GHz.\nI used a \"benchmark\" used to reproduce problems with SLRU on our\ncustomers' setup.\nIn opposite to Shawn's tests I concentrated on the bad case: a lot\nof contention.\n\nslru-funcs.sql - function definitions\n - functions create a lot of subtransactions to stress subtrans\n - and select random rows for share to stress multixacts\n\nslru-call.sql - function call for benchmark\n\nslru-ballast.sql - randomly select 1000 consecutive rows\n \"for update skip locked\" to stress multixacts\n\npatch1 - make SLRU buffers configurable\npatch2 - make \"8-associative banks\"\n\nBenchmark done by pgbench.\nInited with scale 1 to induce contention.\n pgbench -i -s 1 testdb\n\nBenchmark 1:\n- low number of connections (50), 60% slru-call, 40% slru-ballast\n pgbench -f slru-call.sql@60 -f slru-ballast.sql@40 -c 50 -j 75 -P 1 -T 30 testdb\n\nversion | subtrans | multixact | tps\n        | buffers  | offs/memb | func+ballast\n--------+----------+-----------+------\nmaster  | 32       | 8/16      | 184+119\npatch1  | 32       | 8/16      | 184+119\npatch1  | 1024     | 8/16      | 121+77\npatch1  | 1024     | 512/1024  | 118+75\npatch2  | 32       | 8/16      | 190+122\npatch2  | 1024     | 8/16      | 190+125\npatch2  | 1024     | 512/1024  | 190+127\n\nAs you see, this test case degrades with a dumb increase of\nSLRU buffers.
But use of \"hash table\" in form of \"associative\nbuckets\" makes performance stable.\n\nBenchmark 2:\n- high connection number (600), 98% slru-call, 2% slru-ballast\n pgbench -f slru-call.sql@98 -f slru-ballast.sql@2 -c 600 -j 75 -P 1 -T 30 testdb\n\nI don't paste \"ballast\" tps here since 2% make them too small,\nand they're very noisy.\n\nversion | subtrans | multixact | tps\n | buffers | offs/memb | func\n--------+----------+-----------+------\nmaster | 32 | 8/16 | 13\npatch1 | 32 \n | 8/16 | 13\npatch1 | 1024 | 8/16 | 31\npatch1 | 1024 | 512/1024 | 53\npatch2 | 32 | 8/16 | 12\npatch2 | 1024 | 8/16 | 34\npatch2 | 1024 | 512/1024 | 67\n\nIn this case simple buffer increase does help. But \"buckets\"\nincrease performance gain.\n\nI didn't paste here results third part of patch (\"Pack SLRU...\")\nbecause I didn't see any major performance gain from it, and\nit consumes large part of patch diff.\n\nRebased versions of first two patch parts are attached.\n\nregards,\n\nYura Sokolov", "msg_date": "Thu, 21 Jul 2022 16:00:20 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 21 Jul 2022, at 18:00, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> \n> In this case simple buffer increase does help. But \"buckets\"\n> increase performance gain.\nYura, thank you for your benchmarks!\nWe already knew that patch can save the day on pathological workloads, now we have a proof of this.\nAlso there's the evidence that user can blindly increase size of SLRU if they want (with the second patch). So there's no need for hard explanations on how to tune the buffers size.\n\nThomas, do you still have any doubts? 
Or is it certain that SLRU will be replaced by any better subsystem in 16?\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 23 Jul 2022 13:39:50 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Sat, Jul 23, 2022 at 8:41 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Thomas, do you still have any doubts? Or is it certain that SLRU will be replaced by any better subsystem in 16?\n\nHi Andrey,\n\nSorry for my lack of replies on this and the other SLRU thread -- I'm\nthinking and experimenting. More soon.\n\n\n", "msg_date": "Sat, 23 Jul 2022 20:47:48 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> Andrey Borodin wrote 2022-07-23 11:39:\n> \n> Yura, thank you for your benchmarks!\n> We already knew that patch can save the day on pathological workloads,\n> now we have a proof of this.\n> Also there's the evidence that user can blindly increase size of SLRU\n> if they want (with the second patch). So there's no need for hard\n> explanations on how to tune the buffers size.\n\nHi @Andrey.Borodin, With some considerations and performance checks from \n@Yura.Sokolov we simplified your approach by the following:\n\n1. Preamble. We feel free to increase any SLRU's, since there's no \nperformance degradation on large Buffers count using your SLRU buckets \nsolution.\n2. `slru_buffers_size_scale` is only one config param introduced for all \nSLRUs. It scales SLRUs upper cap by power 2.\n3. All SLRU buffers count are capped by both `MBuffers (shared_buffers)` \nand `slru_buffers_size_scale`. see\n4. Magic initial constants `NUM_*_BUFFERS << slru_buffers_size_scale` \nare applied for every SLRU.\n5. All SLRU buffers are always sized as power of 2, their hash bucket \nsize is always 8.\n\nThere's attached patch for your consideration. 
It does gather and \nsimplify both `v21-0001-Make-all-SLRU-buffer-sizes-configurable.patch` \nand `v21-0002-Divide-SLRU-buffers-into-8-associative-banks.patch` to \nmuch simpler approach.\n\nThank you, Yours,\n- Ivan", "msg_date": "Tue, 16 Aug 2022 22:36:27 +0300", "msg_from": "i.lazarev@postgrespro.ru", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 17 Aug 2022, at 00:36, i.lazarev@postgrespro.ru wrote:\n> \n>> Andrey Borodin wrote 2022-07-23 11:39:\n>> Yura, thank you for your benchmarks!\n>> We already knew that patch can save the day on pathological workloads,\n>> now we have a proof of this.\n>> Also there's the evidence that user can blindly increase size of SLRU\n>> if they want (with the second patch). So there's no need for hard\n>> explanations on how to tune the buffers size.\n> \n> Hi @Andrey.Borodin, With some considerations and performance checks from @Yura.Sokolov we simplified your approach by the following:\n> \n> 1. Preamble. We feel free to increase any SLRU's, since there's no performance degradation on large Buffers count using your SLRU buckets solution.\n> 2. `slru_buffers_size_scale` is only one config param introduced for all SLRUs. It scales SLRUs upper cap by power 2.\n> 3. All SLRU buffers count are capped by both `MBuffers (shared_buffers)` and `slru_buffers_size_scale`. see\n> 4. Magic initial constants `NUM_*_BUFFERS << slru_buffers_size_scale` are applied for every SLRU.\n> 5. All SLRU buffers are always sized as power of 2, their hash bucket size is always 8.\n> \n> There's attached patch for your consideration. It does gather and simplify both `v21-0001-Make-all-SLRU-buffer-sizes-configurable.patch` and `v21-0002-Divide-SLRU-buffers-into-8-associative-banks.patch` to much simpler approach.\n\nI like the idea of one knob instead of one per each SLRU. Maybe we even could deduce sane value from NBuffers? 
That would effectively lead to 0 knobs :)\n\nYour patch has a prefix \"v22-0006\", does it mean there are 5 previous steps of the patchset?\n\nThank you!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 18 Aug 2022 08:35:15 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Andrey Borodin wrote 2022-08-18 06:35:\n> \n> I like the idea of one knob instead of one per each SLRU. Maybe we\n> even could deduce sane value from NBuffers? That would effectively\n> lead to 0 knobs :)\n> \n> Your patch has a prefix \"v22-0006\", does it mean there are 5 previous\n> steps of the patchset?\n> \n> Thank you!\n> \n> \n> Best regards, Andrey Borodin.\n\nNot sure it's possible to deduce from NBuffers only. \nslru_buffers_scale_shift looks like relief valve for systems with ultra \nscaled NBuffers.\n\nRegarding v22-0006 I just tried to choose index unique for this thread \nso now it's fixed to 0001 indexing.", "msg_date": "Fri, 19 Aug 2022 18:48:41 +0300", "msg_from": "i.lazarev@postgrespro.ru", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Sat, Jul 23, 2022 at 1:48 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sat, Jul 23, 2022 at 8:41 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > Thomas, do you still have any doubts? Or is it certain that SLRU will be replaced by any better subsystem in 16?\n>\n> Hi Andrey,\n>\n> Sorry for my lack of replies on this and the other SLRU thread -- I'm\n> thinking and experimenting. More soon.\n>\n\nHi Thomas,\n\nPostgreSQL 16 feature freeze is approaching again. 
Let's choose\nsomething from possible solutions, even if the chosen one is\ntemporary.\nThank you!\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Tue, 20 Dec 2022 10:39:29 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Fri, 19 Aug 2022 at 21:18, <i.lazarev@postgrespro.ru> wrote:\n>\n> Andrey Borodin wrote 2022-08-18 06:35:\n> >\n> > I like the idea of one knob instead of one per each SLRU. Maybe we\n> > even could deduce sane value from NBuffers? That would effectively\n> > lead to 0 knobs :)\n> >\n> > Your patch have a prefix \"v22-0006\", does it mean there are 5 previous\n> > steps of the patchset?\n> >\n> > Thank you!\n> >\n> >\n> > Best regards, Andrey Borodin.\n>\n> Not sure it's possible to deduce from NBuffers only.\n> slru_buffers_scale_shift looks like relief valve for systems with ultra\n> scaled NBuffers.\n>\n> Regarding v22-0006 I just tried to choose index unique for this thread\n> so now it's fixed to 0001 indexing.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n=== Applying patches on top of PostgreSQL commit ID\n325bc54eed4ea0836a0bb715bb18342f0c1c668a ===\n=== applying patch ./v23-0001-bucketed-SLRUs-simplified.patch\npatching file src/include/miscadmin.h\n...\npatching file src/backend/utils/misc/guc.c\nHunk #1 FAILED at 32.\nHunk #2 FAILED at 2375.\n2 out of 2 hunks FAILED -- saving rejects to file\nsrc/backend/utils/misc/guc.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_2627.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 3 Jan 2023 18:31:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Tue, Jan 3, 2023 at 5:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> does not apply on top of HEAD as in [1], please post a rebased patch:\n>\nThanks! 
Here's the rebase.\n\nBest regards, Andrey Borodin.", "msg_date": "Sun, 8 Jan 2023 20:19:14 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Mon, Jan 9, 2023 at 9:49 AM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> On Tue, Jan 3, 2023 at 5:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> > does not apply on top of HEAD as in [1], please post a rebased patch:\n> >\n> Thanks! Here's the rebase.\n\nI was looking into this patch, it seems like three different\noptimizations are squeezed in a single patch\n1) dividing buffer space in banks to reduce the seq search cost 2) guc\nparameter for buffer size scale 3) some of the buffer size values are\nmodified compared to what it is on the head. I think these are 3\npatches which should be independently committable.\n\nWhile looking into the first idea of dividing the buffer space in\nbanks, I see that it will speed up finding the buffers but OTOH while\nsearching the victim buffer it actually can hurt the performance if\nthe slru pages which are frequently accessed are not evenly\ndistributed across the banks. So imagine the cases where we have some\nbanks with a lot of empty slots and other banks from which we\nfrequently have to evict out the pages in order to get the new pages\nin.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Jan 2023 11:28:06 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "Hi Dilip! 
Thank you for the review!\n\nOn Tue, Jan 10, 2023 at 9:58 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jan 9, 2023 at 9:49 AM Andrey Borodin <amborodin86@gmail.com> wrote:\n> >\n> > On Tue, Jan 3, 2023 at 5:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > does not apply on top of HEAD as in [1], please post a rebased patch:\n> > >\n> > Thanks! Here's the rebase.\n>\n> I was looking into this patch, it seems like three different\n> optimizations are squeezed in a single patch\n> 1) dividing buffer space in banks to reduce the seq search cost 2) guc\n> parameter for buffer size scale 3) some of the buffer size values are\n> modified compared to what it is on the head. I think these are 3\n> patches which should be independently committable.\nThere's no point in dividing SLRU buffers in parts unless the buffer's\nsize is configurable.\nAnd it's only possible to enlarge default buffers size if SLRU buffers\nare divided into banks.\nSo the features can be viewed as independent commits, but make no\nsense separately.\n\nBut, probably, it's a good idea to split the patch back anyway, for\neasier review.\n\n>\n> While looking into the first idea of dividing the buffer space in\n> banks, I see that it will speed up finding the buffers but OTOH while\n> searching the victim buffer it will actually can hurt the performance\n> the slru pages which are frequently accessed are not evenly\n> distributed across the banks. So imagine the cases where we have some\n> banks with a lot of empty slots and other banks from which we\n> frequently have to evict out the pages in order to get the new pages\n> in.\n>\n\nYes. Despite the extremely low probability of such a case, this\npattern when a user accesses pages assigned to only one bank may\nhappen.\nThis case is equivalent to having just one bank, which means small\nbuffers. 
Just as we have now.\n\nThank you!\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Wed, 11 Jan 2023 09:13:43 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Mon, 9 Jan 2023 at 09:49, Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> On Tue, Jan 3, 2023 at 5:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> > does not apply on top of HEAD as in [1], please post a rebased patch:\n> >\n> Thanks! Here's the rebase.\n\nI'm seeing that there has been no activity in this thread for more\nthan 1 year now, I'm planning to close this in the current commitfest\nunless someone is planning to take it forward.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 20 Jan 2024 09:01:20 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 20 Jan 2024, at 08:31, vignesh C <vignesh21@gmail.com> wrote:\n> \n> On Mon, 9 Jan 2023 at 09:49, Andrey Borodin <amborodin86@gmail.com> wrote:\n>> \n>> On Tue, Jan 3, 2023 at 5:02 AM vignesh C <vignesh21@gmail.com> wrote:\n>>> does not apply on top of HEAD as in [1], please post a rebased patch:\n>>> \n>> Thanks! Here's the rebase.\n> \n> I'm seeing that there has been no activity in this thread for more\n> than 1 year now, I'm planning to close this in the current commitfest\n> unless someone is planning to take it forward.\n\nHi Vignesh,\n\nthanks for the ping! Most important parts of this patch set are discussed in [0]. If that patchset will be committed, I'll withdraw entry for this thread from commitfest.\nThere's a version of Multixact-specific optimizations [1], but I hope they will not be necessary with effective caches developed in [0]. It seems to me that most important part of those optimization is removing sleeps under SLRU lock on standby [2] by Kyotaro Horiguchi. 
But given that cache optimizations took 4 years to get closer to commit, I'm not sure we will get this optimization any time soon...\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n[0] https://www.postgresql.org/message-id/flat/CAFiTN-vzDvNz%3DExGXz6gdyjtzGixKSqs0mKHMmaQ8sOSEFZ33A%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/flat/2ECE132B-C042-4489-930E-DBC5D0DAB84A%40yandex-team.ru#5f7e7022647be9eeecfc2ae75d765500\n[2] https://www.postgresql.org/message-id/flat/20200515.090333.24867479329066911.horikyota.ntt%40gmail.com#855f8bb7205890579a363d2344b4484d\n\n", "msg_date": "Sat, 27 Jan 2024 09:58:18 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 2024-Jan-27, Andrey Borodin wrote:\n\n> thanks for the ping! Most important parts of this patch set are discussed in [0]. If that patchset will be committed, I'll withdraw entry for this thread from commitfest.\n> There's a version of Multixact-specific optimizations [1], but I hope they will not be necessary with effective caches developed in [0]. It seems to me that most important part of those optimization is removing sleeps under SLRU lock on standby [2] by Kyotaro Horiguchi. 
But given that cache optimizations took 4 years to get closer to commit, I'm not sure we will get this optimization any time soon...\n\nI'd appreciate it if you or Horiguchi-san can update his patch to remove\nuse of usleep in favor of a CV in multixact, and keep this CF entry to\ncover it.\n\nPerhaps a test to make the code reach the usleep(1000) can be written\nusing injection points (49cd2b93d7db)?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La experiencia nos dice que el hombre peló millones de veces las patatas,\npero era forzoso admitir la posibilidad de que en un caso entre millones,\nlas patatas pelarían al hombre\" (Ijon Tichy)\n\n\n", "msg_date": "Sun, 28 Jan 2024 13:49:31 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> On 28 Jan 2024, at 17:49, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> I'd appreciate it if you or Horiguchi-san can update his patch to remove\n> use of usleep in favor of a CV in multixact, and keep this CF entry to\n> cover it.\n\nSure! Sounds great!\n\n> Perhaps a test to make the code reach the usleep(1000) can be written\n> using injection points (49cd2b93d7db)?\n\nI've tried to prototype something like that. But interesting point between GetNewMultiXactId() and RecordNewMultiXact() is a critical section, and we cannot have injection points in critical sections...\nAlso, to implement such a test we need \"wait\" type of injection points, see step 2 in attachment. With this type of injection points I can stop a backend amidst entering information about new MultiXact.\n\n\nBest regards, Andrey Borodin.", "msg_date": "Sun, 28 Jan 2024 23:17:16 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> On 28 Jan 2024, at 23:17, Andrey M. 
Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n>> Perhaps a test to make the code reach the usleep(1000) can be written\n>> using injection points (49cd2b93d7db)?\n> \n> I've tried to prototype something like that. But interesting point between GetNewMultiXactId() and RecordNewMultiXact() is a critical section, and we cannot have injection points in critical sections...\n> Also, to implement such a test we need \"wait\" type of injection points, see step 2 in attachment. With this type of injection points I can stop a backend amidst entering information about new MultiXact.\n\nHere's the test draft. This test reliably reproduces sleep on CV when waiting next multixact to be filled into \"members\" SLRU.\nCost of having this test:\n1. We need a new injection point type \"wait\" (in addition to \"error\" and \"notice\"). It cannot be avoided, because we need to sync at least 3 processes to observe the condition we want.\n2. We need a new way to declare an injection point that can happen inside a critical section. I've called it \"prepared injection point\".\n\nComplexity of having this test is higher than complexity of CV-sleep patch itself. Do we want it? If so I can produce a cleaner version; currently all multixact tests are in the injection_points test module.\n\n\nBest regards, Andrey Borodin.", "msg_date": "Sat, 3 Feb 2024 22:32:45 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "At Sat, 3 Feb 2024 22:32:45 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n> Here's the test draft. This test reliably reproduces sleep on CV when waiting next multixact to be filled into \"members\" SLRU.\n\nBy the way, I raised a question about using multiple CVs\nsimultaneously [1]. 
That is, I suspect that the current CV\nimplementation doesn't allow us to use multiple condition variables at\nthe same time, because all CVs use the same PGPROC member cvWaitLink\nto accommodate different waiter sets. If this assumption is correct,\nwe should resolve the issue before spreading more uses of CVs.\n\n[1] https://www.postgresql.org/message-id/20240227.150709.1766217736683815840.horikyota.ntt%40gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 29 Feb 2024 10:59:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 29 Feb 2024, at 06:59, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> At Sat, 3 Feb 2024 22:32:45 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n>> Here's the test draft. This test reliably reproduces sleep on CV when waiting next multixact to be filled into \"members\" SLRU.\n> \n> By the way, I raised a question about using multiple CVs\n> simultaneously [1]. That is, I suspect that the current CV\n> implementation doesn't allow us to use multiple condition variables at\n> the same time, because all CVs use the same PGPROC member cvWaitLink\n> to accommodate different waiter sets. If this assumption is correct,\n> we should resolve the issue before spreading more uses of CVs.\n\nAlvaro, Kyotaro, what's our plan for this?\nIt seems too late to deal with this pg_usleep(1000L) for PG17.\nI propose the following course of action\n1. Close this long-standing CF item\n2. Start new thread with CV-sleep patch aimed at PG18\n3. Create new entry in July CF\n\nWhat do you think?\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 6 Apr 2024 16:24:12 +0500", "msg_from": "\"Andrey M. 
Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 6 Apr 2024, at 14:24, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> What do you think?\n\nOK, I'll follow this plan.\nAs long as most parts of this thread were committed, I'll mark CF item as \"committed\".\nThanks to everyone involved!\nSee you in a followup thread about sleeping on CV.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sun, 7 Apr 2024 15:52:16 +0300", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 2024-Feb-03, Andrey M. Borodin wrote:\n\n> Here's the test draft. This test reliably reproduces sleep on CV when waiting next multixact to be filled into \"members\" SLRU.\n> Cost of having this test:\n> 1. We need a new injection point type \"wait\" (in addition to \"error\" and \"notice\"). It cannot be avoided, because we need to sync at least 3 processed to observe condition we want.\n> 2. We need new way to declare injection point that can happen inside critical section. I've called it \"prepared injection point\".\n> \n> Complexity of having this test is higher than complexity of CV-sleep patch itself. Do we want it? If so I can produce cleaner version, currently all multixact tests are int injection_points test module.\n\nWell, it would be nice to have *some* test, but as you say it is way\nmore complex than the thing being tested, and it zooms in on the\nfunctioning of the multixact creation in insane quantum-physics ways ...\nto the point that you can no longer trust that multixact works the same\nway with the test than without it. 
So what I did is manually run other\ntests (pgbench) to verify that the corner case in multixact creation is\nbeing handled correctly, and pushed the patch after a few corrections,\nin particular so that it would follow the CV sleeping protocol a little\nmore closely to what the CV documentation suggests, which is not at all\nwhat the patch did. I also fixed a few comments that had been neglected\nand changed the name and description of the CV in the docs.\n\nNow, maybe we can still add the test later, but it needs a rebase.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n“Cuando no hay humildad las personas se degradan” (A. Christie)\n\n\n", "msg_date": "Sun, 7 Apr 2024 20:41:41 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 7 Apr 2024, at 21:41, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> Well, it would be nice to have *some* test, but as you say it is way\n> more complex than the thing being tested, and it zooms in on the\n> functioning of the multixact creation in insane quantum-physics ways ...\n> to the point that you can no longer trust that multixact works the same\n> way with the test than without it. So what I did is manually run other\n> tests (pgbench) to verify that the corner case in multixact creation is\n> being handled correctly, and pushed the patch after a few corrections,\n> in particular so that it would follow the CV sleeping protocol a little\n> more closely to what the CV documentation suggests, which is not at all\n> what the patch did. I also fixed a few comments that had been neglected\n> and changed the name and description of the CV in the docs.\n\nThat's excellent! Thank you!\n\n> Now, maybe we can still add the test later, but it needs a rebase.\n\nSure. 
'wait' injection points are there already, so I'll produce a patch with \"prepared\" injection points and re-implement the test on top of that. I'll put it on July CF.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sun, 7 Apr 2024 22:13:12 +0300", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "> On 5 Jul 2024, at 14:16, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Jun 10, 2024 at 03:10:33PM +0900, Michael Paquier wrote:\n>> OK, cool. I'll try to get that into the tree once v18 opens up.\n> \n> And I've spent more time on this one, and applied it to v18 after some\n> slight tweaks. Please feel free to re-post your tests with\n> multixacts, Andrey.\n\nThanks Michael!\n\n\n> On 7 Apr 2024, at 23:41, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> Now, maybe we can still add the test later, but it needs a rebase.\n\nAlvaro, please find attached the test.\nI’ve addressed some of Michael’s comments in a nearby thread: removed extra load, made injection point names lowercase, fixed some grammar issues.\n\n\nBest regards, Andrey Borodin.", "msg_date": "Fri, 5 Jul 2024 23:18:32 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "\n\n> On 5 Jul 2024, at 23:18, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Alvaro, please find attached the test.\n> I’ve addressed some of Michael’s comments in a nearby thread: removed extra load, made injection point names lowercase, fixed some grammar issues.\n\nI’ve made several runs on Github to test stability [0, 1, 2, 3]. 
CI seems to be stable.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/x4m/postgres_g/commit/c9c362679f244\n[1] https://github.com/x4m/postgres_g/commit/9d7e43cc1\n[2] https://github.com/x4m/postgres_g/commit/18cf186617\n[3] https://github.com/x4m/postgres_g/commit/4fbce73997\n\n", "msg_date": "Mon, 19 Aug 2024 12:35:53 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 2024-Aug-19, Andrey M. Borodin wrote:\n\n> > On 5 Jul 2024, at 23:18, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > \n> > Alvaro, please find attached the test.\n> > I’ve addressed some Michael’s comments in a nearby thread: removed\n> > extra load, made injection point names lowercase, fixed some grammar\n> > issues.\n> \n> I’ve made several runs on Github to test stability [0, 1, 2, 4]. CI seems to be stable.\n\nOK, I've made some minor adjustments and pushed. CI seemed OK for me,\nlet's see what does the BF have to say.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 20 Aug 2024 14:37:34 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Tue, Aug 20, 2024 at 02:37:34PM -0400, Alvaro Herrera wrote:\n> OK, I've made some minor adjustments and pushed. CI seemed OK for me,\n> let's see what does the BF have to say.\n\nI see that you've gone the way with the SQL function doing a load().\nWould it be worth switching the test to rely on the two macros for\nload and caching instead? I've mentioned that previously but never\ngot down to present a patch for the sake of this test.\n\nThis requires some more tweaks in the module to disable the stats when\nloaded through a GUC, and two shmem callbacks, then the test is able\nto work correctly. 
Please see attached.\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 21 Aug 2024 08:30:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 2024-Aug-21, Michael Paquier wrote:\n\n> I see that you've gone the way with the SQL function doing a load().\n> Would it be worth switching the test to rely on the two macros for\n> load and caching instead? I've mentioned that previously but never\n> got down to present a patch for the sake of this test.\n\nHmm, I have no opinion on which way is best. You probably have a better\nsense of what's better for the injection points interface, so I'm happy\nto defer to you on this.\n\n> +\t/* reset in case this is a restart within the postmaster */\n> +\tinj_state = NULL;\n\nI'm not sure that this assignment actually accomplishes anything ...\n\nI don't understand what the inj_stats_enabled stuff has to do with\nthis patch. I suspect it's a git operation error, i.e., you seem to have\nsquashed two different things together.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Industry suffers from the managerial dogma that for the sake of stability\nand continuity, the company should be independent of the competence of\nindividual employees.\" (E. Dijkstra)\n\n\n", "msg_date": "Tue, 20 Aug 2024 20:13:12 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Tue, Aug 20, 2024 at 08:13:12PM -0400, Alvaro Herrera wrote:\n> I don't understand what the inj_stats_enabled stuff has to do with\n> this patch. I suspect it's a git operation error, i.e., you seem to have\n> squashed two different things together.\n\nSorry, I should have split that for clarity (one patch for the GUC,\none to change the test to use CACHED/LOAD). 
It is not an error\nthough: if we don't have a controlled way to disable the stats of the\nmodule, then the test would fail when calling the cached callback\nbecause we'd try to allocate some memory for the dshash entry in\npgstats.\n\nThe second effect of initializing the shmem state of the module with\nshared_preload_libraries is that condition variables are set up for the\nsake of the test, removing the dependency on the SQL load() call.\nBoth are OK, but I'd prefer introducing one use case for these two\nmacros in the tree, so as these can be used as a reference in the\nfuture when developing new tests.\n--\nMichael", "msg_date": "Wed, 21 Aug 2024 12:46:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Wed, Aug 21, 2024 at 12:46:31PM +0900, Michael Paquier wrote:\n> Sorry, I should have split that for clarity (one patch for the GUC,\n> one to change the test to use CACHED/LOAD). 
It is not an error\n> though: if we don't have a controlled way to disable the stats of the\n> module, then the test would fail when calling the cached callback\n> because we'd try to allocate some memory for the dshash entry in\n> pgstats.\n> \n> The second effect of initializing the shmem state of the module with\n> shared_preload_libraries is condition variables are set up for the\n> sake if the test, removing the dependency to the SQL load() call.\n> Both are OK, but I'd prefer introducing one use case for these two\n> macros in the tree, so as these can be used as a reference in the\n> future when developing new tests.\n\nIn short, here is a better patch set, with 0001 and 0002 introducing\nthe pieces that the test would need to be able to use the LOAD() and\nCACHED() macros in 0003:\n- 0001: Add shmem callbacks to initialize shmem state of\ninjection_points with shared_preload_libraries.\n- 0002: Add a GUC to control if the stats of the module are enabled.\nBy default, they are disabled as they are only needed in the TAP test\nof injection_points for the stats.\n- 0003: Update the SLRU test to use INJECTION_POINT_LOAD and\nINJECTION_POINT_CACHED with injection_points loaded via\nshared_preload_libraries, removing the call to\ninjection_points_load() in the perl test.\n\nWhat do you think?\n--\nMichael", "msg_date": "Wed, 21 Aug 2024 15:26:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 2024-Aug-21, Michael Paquier wrote:\n\n> From fd8ab7b6845a2c56aa2c8d9c60f404f6b3407338 Mon Sep 17 00:00:00 2001\n> From: Michael Paquier <michael@paquier.xyz>\n> Date: Wed, 21 Aug 2024 15:16:06 +0900\n> Subject: [PATCH 2/3] injection_point: Add injection_points.stats\n\n> This GUC controls if statistics should be used or not in the module.\n> Custom statistics require the module to be loaded with\n> shared_preload_libraries, hence this GUC is made 
PGC_POSTMASTER. By\n> default, stats are disabled.\n> \n> This will be used by an upcoming change in a test where stats should not\n> be used, as the test has a dependency on a critical section.\n\nI find it's strange that the information that stats cannot be used with\ninjection points that have dependency on critical sections (?), is only\nin the commit message and not in the code.\n\nAlso, maybe it'd make sense for stats to be globally enabled, and that\nonly the tests that require it would disable them? (It's probably easy\nenough to have a value \"injection_points.stats=auto\" which means, if the\nmodule is loaded in shared_preload_libraries then set stats on,\notherwise turn them off.) TBH I don't understand why the issue that\nstats require shared_preload_libraries only comes up now ... Maybe\nanother approach is to say that if an injection point is loaded via\n_LOAD() rather than the normal way, then stats are disabled for that one\nrather than globally? Or give the _LOAD() macro a boolean argument to\nindicate whether to collect stats for that injection point or not.\n\nLastly, it's not clear to me what does it mean that the test has a\n\"dependency\" on a critical section. 
Do you mean to say that the\ninjection point runs inside a critical section?\n\n> From e5329d080b9d8436af8f65aac118745cf1f81ca2 Mon Sep 17 00:00:00 2001\n> From: Michael Paquier <michael@paquier.xyz>\n> Date: Wed, 21 Aug 2024 15:09:06 +0900\n> Subject: [PATCH 3/3] Rework new SLRU test with injection points\n> \n> Rather than the SQL injection_points_load, this commit makes the test\n> rely on the two macros to load and run an injection point from the\n> cache, acting as an example of how to use them.\n\nNo issues with this, feel free to go ahead.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)\n\n\n", "msg_date": "Wed, 21 Aug 2024 13:55:06 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Wed, Aug 21, 2024 at 01:55:06PM -0400, Alvaro Herrera wrote:\n> I find it's strange that the information that stats cannot be used with\n> injection points that have dependency on critical sections (?), is only\n> in the commit message and not in the code.\n\nA comment close to where inj_stats_enabled is declared in\ninjection_points.c may be adapted for that, say:\n\"This GUC is useful to control if statistics should be enabled or not\nduring a test with injection points, like for example if a test relies\non a callback run in a critical section where no allocation should\nhappen.\"\n\n> Also, maybe it'd make sense for stats to be globally enabled, and that\n> only the tests that require it would disable them? (It's probably easy\n> enough to have a value \"injection_points.stats=auto\" which means, if the\n> module is loaded in shared_preload_libraries them set stats on,\n> otherwise turn them off.)\n\nI'm not sure that we need to get down to that until somebody has a\ncase where they want to rely on stats of injection points for their\nstuff. 
At this stage, I only want the stats to be enabled to provide\nautomated checks for the custom pgstats APIs, so disabling it by\ndefault and enabling it only in the stats test of the module\ninjection_points sounds kind of enough to me for now. The module\ncould always be tweaked to do that in the future, if there's a case.\n\n> TBH I don't understand why the issue that\n> stats require shared_preload_libraries only comes up now ...\n\nBecause there was no need to, simply. It is the first test that\nrelies on a critical section, and we need allocations if we want to\nuse a wait condition. \n\n> Maybe another approach is to say that if an injection point is loaded via\n> _LOAD() rather than the normal way, then stats are disabled for that one\n> rather than globally?\n\nOne trick would be to force the GUC to be false for the duration of\nthe callback based on a check of CritSectionCount, a second one would\nbe to just skip the stats if we are under CritSectionCount. A third\noption, that I find actually interesting, would be to call\nMemoryContextAllowInCriticalSection in some strategic code paths of\nthe test module injection_points because we're OK to live with this\nrestriction in the module.\n\n> Or give the _LOAD() macro a boolean argument to\n> indicate whether to collect stats for that injection point or not.\n\nSticking some knowledge about the stats in the backend part of\ninjection points does not sound like a good idea to me.\n\n> Lastly, it's not clear to me what does it mean that the test has a\n> \"dependency\" on a critical section. 
Do you mean to say that the\n> injection point runs inside a critical section?\n\nYes.\n\n> No issues with this, feel free to go ahead.\n\nCool, thanks.\n--\nMichael", "msg_date": "Thu, 22 Aug 2024 09:29:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On 2024-Aug-22, Michael Paquier wrote:\n\n> On Wed, Aug 21, 2024 at 01:55:06PM -0400, Alvaro Herrera wrote:\n\n> > Also, maybe it'd make sense for stats to be globally enabled, and that\n> > only the tests that require it would disable them? (It's probably easy\n> > enough to have a value \"injection_points.stats=auto\" which means, if the\n> > module is loaded in shared_preload_libraries them set stats on,\n> > otherwise turn them off.)\n> \n> I'm not sure that we need to get down to that until somebody has a\n> case where they want to rely on stats of injection points for their\n> stuff. At this stage, I only want the stats to be enabled to provide\n> automated checks for the custom pgstats APIs, so disabling it by\n> default and enabling it only in the stats test of the module\n> injection_points sounds kind of enough to me for now.\n\nOh! I thought the stats were useful by themselves. That not being the\ncase, I agree with simplifying; and the other ways to enhance this point\nmight not be necessary for now.\n\n> > Or give the _LOAD() macro a boolean argument to\n> > indicate whether to collect stats for that injection point or not.\n> \n> Sticking some knowledge about the stats in the backend part of\n> injection points does not sound like a good idea to me.\n\nYou could flip this around: have the bool be for \"this injection point\nis going to be invoked inside a critical section\". 
Then core code just\nneeds to tell the injection points module what core code does, and it's\ninjection_points that decides what to do with that information.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Crear es tan difícil como ser libre\" (Elsa Triolet)\n\n\n", "msg_date": "Thu, 22 Aug 2024 10:36:38 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" }, { "msg_contents": "On Thu, Aug 22, 2024 at 10:36:38AM -0400, Alvaro Herrera wrote:\n> On 2024-Aug-22, Michael Paquier wrote:\n>> I'm not sure that we need to get down to that until somebody has a\n>> case where they want to rely on stats of injection points for their\n>> stuff. At this stage, I only want the stats to be enabled to provide\n>> automated checks for the custom pgstats APIs, so disabling it by\n>> default and enabling it only in the stats test of the module\n>> injection_points sounds kind of enough to me for now.\n> \n> Oh! I thought the stats were useful by themselves.\n\nYep, currently they're not, but I don't want to discard that they'll\nnever be, either. Perhaps there would be a case where somebody would\nlike to run a callback N times and trigger a condition? That's\nsomething where the stats could be useful, but I don't have a specific \ncase for that now. I'm just imagining possibilities.\n\n> That not being the case, I agree with simplifying; and the other\n> ways to enhance this point might not be necessary for now.\n\nOkay.\n\n>> Sticking some knowledge about the stats in the backend part of\n>> injection points does not sound like a good idea to me.\n> \n> You could flip this around: have the bool be for \"this injection point\n> is going to be invoked inside a critical section\". Then core code just\n> needs to tell the injection points module what core code does, and it's\n> injection_points that decides what to do with that information.\n\nHmm. 
We could do that, but I'm not really on board with anything that\nadds more code footprint into the backend. For this one, this is even\ninformation specific to the code path where the injection point is\nadded.\n--\nMichael", "msg_date": "Fri, 23 Aug 2024 09:29:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MultiXact\\SLRU buffers configuration" } ]
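The policy questions debated in the thread above (an on/off/`auto` stats GUC, and skipping statistics collection inside critical sections) reduce to a small decision function. The sketch below is purely illustrative Python — the real logic would live in C inside the `injection_points` module, and the names (`stats_enabled`, `module_preloaded`) are invented for this example:

```python
def stats_enabled(setting, module_preloaded, crit_section_count):
    """Decide whether injection-point statistics may be collected.

    setting            -- "on", "off", or the proposed "auto"
    module_preloaded   -- is the module listed in shared_preload_libraries?
    crit_section_count -- analogue of the backend's CritSectionCount
    """
    if crit_section_count > 0:
        # No allocation is allowed inside a critical section, so any
        # stats collection has to be skipped there.
        return False
    if setting == "auto":
        # Alvaro's proposal: "auto" means on only when preloaded.
        return module_preloaded
    return setting == "on"

assert stats_enabled("auto", True, 0)       # preloaded: stats available
assert not stats_enabled("auto", False, 0)  # not preloaded: stats off
assert not stats_enabled("on", True, 1)     # inside a critical section: skip
```

This is only a model of the semantics under discussion, not of any committed behavior.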
[ { "msg_contents": "psql currently supports HTML, CSV, etc output formats. I was wondering\nif supporting JSON format was requested or discussed in past. If there's\ndesire for this feature, perhaps we can add it to the TODO list on wiki so\nsomeone can pick it up and work on it in future.\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\n    psql currently supports HTML, CSV, etc output formats. I was wondering if supporting JSON format was requested or discussed in past. If there's desire for this feature, perhaps we can add it to the TODO list on wiki so someone can pick it up and work on it in future.Best regards,--Gurjeet Singh http://gurjeet.singh.im/", "msg_date": "Fri, 8 May 2020 11:17:46 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "JSON output from psql" }, { "msg_contents": "Hi\n\npá 8. 5. 2020 v 20:18 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:\n\n> psql currently supports HTML, CSV, etc output formats. I was wondering\n> if supporting JSON format was requested or discussed in past. If there's\n> desire for this feature, perhaps we can add it to the TODO list on wiki so\n> someone can pick it up and work on it in future.\n>\n\nis there some standardised format for output table?\n\nPavel\n\n\n> Best regards,\n> --\n> Gurjeet Singh http://gurjeet.singh.im/\n>\n\nHipá 8. 5. 2020 v 20:18 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:    psql currently supports HTML, CSV, etc output formats. I was wondering if supporting JSON format was requested or discussed in past. If there's desire for this feature, perhaps we can add it to the TODO list on wiki so someone can pick it up and work on it in future.is there some standardised format for output table? 
PavelBest regards,--Gurjeet Singh http://gurjeet.singh.im/", "msg_date": "Fri, 8 May 2020 21:01:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "On Fri, May 8, 2020 at 12:01 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> pá 8. 5. 2020 v 20:18 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:\n>\n>> psql currently supports HTML, CSV, etc output formats. I was\n>> wondering if supporting JSON format was requested or discussed in past. If\n>> there's desire for this feature, perhaps we can add it to the TODO list on\n>> wiki so someone can pick it up and work on it in future.\n>>\n>\n> is there some standardised format for output table?\n>\n\nI see \"-T, --table-attr=TEXT\" option in `psql --help` output, presumably\nthat's to create HTML tables.\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\nOn Fri, May 8, 2020 at 12:01 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:Hipá 8. 5. 2020 v 20:18 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:    psql currently supports HTML, CSV, etc output formats. I was wondering if supporting JSON format was requested or discussed in past. If there's desire for this feature, perhaps we can add it to the TODO list on wiki so someone can pick it up and work on it in future.is there some standardised format for output table? I see \"-T, --table-attr=TEXT\" option in `psql --help` output, presumably that's to create HTML tables.Best regards,--Gurjeet Singh http://gurjeet.singh.im/", "msg_date": "Fri, 8 May 2020 12:07:56 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "pá 8. 5. 2020 v 21:08 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:\n\n>\n> On Fri, May 8, 2020 at 12:01 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> pá 8. 5. 
2020 v 20:18 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:\n>>\n>>> psql currently supports HTML, CSV, etc output formats. I was\n>>> wondering if supporting JSON format was requested or discussed in past. If\n>>> there's desire for this feature, perhaps we can add it to the TODO list on\n>>> wiki so someone can pick it up and work on it in future.\n>>>\n>>\n>> is there some standardised format for output table?\n>>\n>\n> I see \"-T, --table-attr=TEXT\" option in `psql --help` output, presumably\n> that's to create HTML tables.\n>\n\nI though for JSON format. This format is too generic.\n\nPavel\n\n>\n> Best regards,\n> --\n> Gurjeet Singh http://gurjeet.singh.im/\n>\n>\n\npá 8. 5. 2020 v 21:08 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:On Fri, May 8, 2020 at 12:01 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:Hipá 8. 5. 2020 v 20:18 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:    psql currently supports HTML, CSV, etc output formats. I was wondering if supporting JSON format was requested or discussed in past. If there's desire for this feature, perhaps we can add it to the TODO list on wiki so someone can pick it up and work on it in future.is there some standardised format for output table? I see \"-T, --table-attr=TEXT\" option in `psql --help` output, presumably that's to create HTML tables.I though for JSON format. This format is too generic. Pavel Best regards,--Gurjeet Singh http://gurjeet.singh.im/", "msg_date": "Fri, 8 May 2020 21:10:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "On Fri, May 8, 2020 at 12:10 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> pá 8. 5. 2020 v 21:08 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:\n>\n>>\n>> On Fri, May 8, 2020 at 12:01 PM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>> Hi\n>>>\n>>> pá 8. 5. 
2020 v 20:18 odesílatel Gurjeet Singh <gurjeet@singh.im>\n>>> napsal:\n>>>\n>>>> psql currently supports HTML, CSV, etc output formats. I was\n>>>> wondering if supporting JSON format was requested or discussed in past. If\n>>>> there's desire for this feature, perhaps we can add it to the TODO list on\n>>>> wiki so someone can pick it up and work on it in future.\n>>>>\n>>>\n>>> is there some standardised format for output table?\n>>>\n>>\n>> I see \"-T, --table-attr=TEXT\" option in `psql --help` output, presumably\n>> that's to create HTML tables.\n>>\n>\n> I though for JSON format. This format is too generic.\n>\n\nI think I misunderstood your question earlier.\n\nThere's no standard format that comes to mind, but perhaps an output format\nsimilar to that of (array of row_to_json()) would be desirable. For\nexample, `select relname, relnamespace from pg_class;` would emit the\nfollowing:\n\n[\n{\"relname\": \"pgclass\", \"relnamespace\": 11},\n{\"relname\": \"pg_statistic\", \"relnamespace\": 11},\n]\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\nOn Fri, May 8, 2020 at 12:10 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:pá 8. 5. 2020 v 21:08 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:On Fri, May 8, 2020 at 12:01 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:Hipá 8. 5. 2020 v 20:18 odesílatel Gurjeet Singh <gurjeet@singh.im> napsal:    psql currently supports HTML, CSV, etc output formats. I was wondering if supporting JSON format was requested or discussed in past. If there's desire for this feature, perhaps we can add it to the TODO list on wiki so someone can pick it up and work on it in future.is there some standardised format for output table? I see \"-T, --table-attr=TEXT\" option in `psql --help` output, presumably that's to create HTML tables.I though for JSON format. This format is too generic. 
I think I misunderstood your question earlier.There's no standard format that comes to mind, but perhaps an output format similar to that of (array of row_to_json()) would be desirable. For example, `select relname, relnamespace from pg_class;` would emit the following:[{\"relname\": \"pgclass\", \"relnamespace\": 11},{\"relname\": \"pg_statistic\", \"relnamespace\": 11},]Best regards,--Gurjeet Singh http://gurjeet.singh.im/", "msg_date": "Fri, 8 May 2020 16:31:48 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "On Fri, May 8, 2020 at 7:32 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n> There's no standard format that comes to mind, but perhaps an output format similar to that of (array of row_to_json()) would be desirable. For example, `select relname, relnamespace from pg_class;` would emit the following:\n>\n> [\n> {\"relname\": \"pgclass\", \"relnamespace\": 11},\n> {\"relname\": \"pg_statistic\", \"relnamespace\": 11},\n> ]\n\nI don't see why psql needs any special support. You can already\ngenerate this using the existing server side functions, if you want\nit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 16:23:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "On Mon, May 11, 2020 at 1:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, May 8, 2020 at 7:32 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n> > There's no standard format that comes to mind, but perhaps an output\n> format similar to that of (array of row_to_json()) would be desirable. 
For\n> example, `select relname, relnamespace from pg_class;` would emit the\n> following:\n> >\n> > [\n> > {\"relname\": \"pgclass\", \"relnamespace\": 11},\n> > {\"relname\": \"pg_statistic\", \"relnamespace\": 11},\n> > ]\n>\n> I don't see why psql needs any special support. You can already\n> generate this using the existing server side functions, if you want\n> it.\n>\n\nThat's a good point! It might still be desirable, perhaps for performance\ntrade-off of JSON conversion on the client-side instead of on the\nserver-side.\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\nOn Mon, May 11, 2020 at 1:24 PM Robert Haas <robertmhaas@gmail.com> wrote:On Fri, May 8, 2020 at 7:32 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n> There's no standard format that comes to mind, but perhaps an output format similar to that of (array of row_to_json()) would be desirable. For example, `select relname, relnamespace from pg_class;` would emit the following:\n>\n> [\n> {\"relname\": \"pgclass\", \"relnamespace\": 11},\n> {\"relname\": \"pg_statistic\", \"relnamespace\": 11},\n> ]\n\nI don't see why psql needs any special support. You can already\ngenerate this using the existing server side functions, if you want\nit.That's a good point! It might still be desirable, perhaps for performance trade-off of JSON conversion on the client-side instead of on the server-side.Best regards,--Gurjeet Singh http://gurjeet.singh.im/", "msg_date": "Mon, 11 May 2020 13:42:41 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "On Mon, May 11, 2020 at 4:42 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n> That's a good point! 
It might still be desirable, perhaps for performance trade-off of JSON conversion on the client-side instead of on the server-side.\n\nIf there's a performance problem with the server's code here, we\nshould probably try to fix it, instead of adding the same feature on\nthe client side.\n\nBut also, we shouldn't start by deciding we need feature X and then\nlooking for the reason why we need it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 13 May 2020 15:50:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "On Wed, May 13, 2020 at 12:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, May 11, 2020 at 4:42 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n> > That's a good point! It might still be desirable, perhaps for performance trade-off of JSON conversion on the client-side instead of on the server-side.\n>\n> If there's a performance problem with the server's code here, we\n> should probably try to fix it, instead of adding the same feature on\n> the client side.\n\nPerformance problem is not just about how much CPU/RAM is used on\nserver-side but other resources like network consumption to get the\nresults to the client. 
Anecdotally, I have heard of a case where\nOracle implemented custom Huffman Encoding for a customer to speed up\ndelivery of their resultset that contained just rows of true/false.\n\nArguably, delivering JSON (with its repeating attribute names in every\nelement of the array, dquotes and commas) is more network intensive\nthan converting the resultset to JSON on network side.\n\n> But also, we shouldn't start by deciding we need feature X and then\n> looking for the reason why we need it.\n\nThat's better than, or at least on par with, the excuses like \"We\nshould do it because some other database does it, too\" :-)\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\n\n", "msg_date": "Wed, 13 May 2020 13:14:03 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "On Wed, May 13, 2020 at 1:14 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> Arguably, delivering JSON (with its repeating attribute names in every\n> element of the array, dquotes and commas) is more network intensive\n> than converting the resultset to JSON on network side.\n\ns/network side/client side/\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\n\n", "msg_date": "Wed, 13 May 2020 13:16:20 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "On 05/13/20 16:16, Gurjeet Singh wrote:\n> On Wed, May 13, 2020 at 1:14 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n>>\n>> Arguably, delivering JSON (with its repeating attribute names in every\n>> element of the array, dquotes and commas) is more network intensive\n>> than converting the resultset to JSON on [client] side.\n\nDoes this suggest perhaps some sort of hybrid approach, where jsonbc\ncould be available as a binary on-the-wire format and the client only\nneeds the ability to deparse it: a query on the server could marshal\nthe results into that 
form, the client negotiates the binary transfer\nformat, and deparses to normal JSON syntax on its end?\n\nIt seems the server-side \"compression\" to jsonbc should be optimizable\nwhen what is happening is marshaling of a tabular result: what the\nrepeating keys are going to be is known up front.\n\nMaybe could use a transient (or session lifetime?) 'external' dictionary\nthat gets generated and sent to the client. but not stored in\npg_jsonb_dict?\n\nSeems like a lot of work just to get json-shaped query results from psql,\nbut maybe the ability to receive jsonbc on the wire would be of interest\nto drivers generally.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 13 May 2020 17:01:18 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: JSON output from psql" }, { "msg_contents": "On Wed, May 13, 2020 at 2:01 PM Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> Seems like a lot of work just to get json-shaped query results from psql,\n\n+1. If we look at the amount of work needed for the hybrid approach\nyou describe, compared to running CSV result through something like\ncsv2json, there's a 100% chance of the idea being shot down :-)\n\n> but maybe the ability to receive jsonbc on the wire would be of interest\n> to drivers generally.\n\nI'm not sure of that, but then I don't have visibility into the needs\nof consumers of our drivers.\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\n\n", "msg_date": "Wed, 13 May 2020 18:24:39 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: JSON output from psql" } ]
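The "CSV result through something like csv2json" route Gurjeet mentions above can be sketched in a few lines. This is a hypothetical client-side helper, not part of psql or any of the thread's proposals; the sample rows mirror Gurjeet's `pg_class` example:

```python
import csv
import io
import json

def csv_to_json(csv_text):
    """Turn CSV output (as from `psql --csv`, header row first) into the
    array-of-objects shape shown earlier in the thread."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=1)

# Sample input mirroring: SELECT relname, relnamespace FROM pg_class
sample = "relname,relnamespace\npg_class,11\npg_statistic,11\n"
print(csv_to_json(sample))
```

Note that the CSV round trip loses type information — `relnamespace` comes back as the string `"11"`, not the number `11` — which is one practical argument for doing the conversion server-side with `row_to_json()`, as suggested upthread.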
[ { "msg_contents": "I happened to notice $subject while working on the release notes.\nAFAICS, it is 100% inappropriate for the parser to compute the\nset of generated columns affected by an UPDATE, because that set\ncould change before execution. It would be really easy to break\nthis for an UPDATE in a stored rule, for example.\n\nI think that that processing should be done by the planner, instead.\nI don't object too much to keeping the data in RTEs ... but there had\nbetter be a bold annotation that the set is not valid till after\nplanning.\n\nAn alternative solution is to keep the set in some executor data structure\nand compute it during executor startup; perhaps near to where the\nexpressions are prepared for execution, so as to save extra stringToNode\ncalls.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 May 2020 15:05:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "fill_extraUpdatedCols is done in completely the wrong place" }, { "msg_contents": "On 2020-05-08 21:05, Tom Lane wrote:\n> I happened to notice $subject while working on the release notes.\n> AFAICS, it is 100% inappropriate for the parser to compute the\n> set of generated columns affected by an UPDATE, because that set\n> could change before execution. It would be really easy to break\n> this for an UPDATE in a stored rule, for example.\n\nDo you have a specific situation in mind? How would a rule change the \nset of columns updated by a query? Something involving CTEs? Having a \ntest case would be good.\n\n> I think that that processing should be done by the planner, instead.\n> I don't object too much to keeping the data in RTEs ... 
but there had\n> better be a bold annotation that the set is not valid till after\n> planning.\n> \n> An alternative solution is to keep the set in some executor data structure\n> and compute it during executor startup; perhaps near to where the\n> expressions are prepared for execution, so as to save extra stringToNode\n> calls.\n\nYeah, really only the executor ended up needing this, so perhaps it \nshould be handled in the executor.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 May 2020 16:54:06 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: fill_extraUpdatedCols is done in completely the wrong place" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-05-08 21:05, Tom Lane wrote:\n>> I happened to notice $subject while working on the release notes.\n>> AFAICS, it is 100% inappropriate for the parser to compute the\n>> set of generated columns affected by an UPDATE, because that set\n>> could change before execution. It would be really easy to break\n>> this for an UPDATE in a stored rule, for example.\n\n> Do you have a specific situation in mind? How would a rule change the \n> set of columns updated by a query? Something involving CTEs? Having a \n> test case would be good.\n\nbroken-update-rule.sql, attached, shows the scenario I had in mind:\nthe rule UPDATE query knows nothing of the generated column that\ngets added after the rule is stored, so the UPDATE fails to update it.\n\nHowever, on the way to preparing that test case I discovered that\nauto-updatable views have the same disease even when the generated column\nexists from the get-go; see broken-updatable-views.sql. 
In the context\nof the existing design, I suppose this means that there needs to be\na fill_extraUpdatedCols call somewhere in the code path that constructs\nan auto-update query. But if we moved the whole thing to the executor\nthen the problem would go away.\n\nI observe also that the executor doesn't seem to need this bitmap at all\nunless (a) there are triggers or (b) there are generated columns.\nSo in a lot of simpler cases, the cost of doing fill_extraUpdatedCols\nat either parse or plan time would be quite wasted. That might be a good\nargument for moving it to executor start, even though we'd then have\nto re-do it when re-using a prepared plan.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 18 May 2020 13:57:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: fill_extraUpdatedCols is done in completely the wrong place" } ]
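Conceptually, what fill_extraUpdatedCols computes is "which stored generated columns depend on a column the UPDATE assigns". A toy Python model (hypothetical names; the real code works on bitmapsets over attribute numbers) shows why that set must be computed against the current table definition rather than the one in effect when a rule was parsed and stored:

```python
def extra_updated_cols(generated_deps, assigned_cols):
    """Return the generated columns that must be recomputed because the
    UPDATE assigns a column they depend on (toy model of
    fill_extraUpdatedCols)."""
    return {gcol for gcol, deps in generated_deps.items()
            if deps & assigned_cols}

# Table definition as of when an UPDATE rule was stored: no generated columns.
deps_when_rule_stored = {}
# Definition at execution time, after something like:
#   ALTER TABLE t ADD COLUMN b int GENERATED ALWAYS AS (a * 2) STORED
deps_at_execution = {"b": {"a"}}

assigned = {"a"}
# Computing the set at parse/rule-storage time misses b entirely:
print(extra_updated_cols(deps_when_rule_stored, assigned))  # set()
# Computing it at executor startup sees the current definition:
print(extra_updated_cols(deps_at_execution, assigned))      # {'b'}
```

The second call illustrates the fix direction discussed above: doing the computation at plan time or executor startup naturally picks up generated columns added after a rule (or updatable-view query) was stored.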
[ { "msg_contents": "I believe check_pg_config as used by src/test/ssl/t/002_scram.pl\nshouldn't rely on /usr/include/postgresql/pg_config.h but use the file\nfrom the build tree instead:\n\nsrc/test/perl/TestLib.pm:\n\n Return the number of matches of the given regular expression\n within the installation's C<pg_config.h>.\n\n =cut\n\n sub check_pg_config\n {\n my ($regexp) = @_;\n my ($stdout, $stderr);\n my $result = IPC::Run::run [ 'pg_config', '--includedir' ], '>',\n \\$stdout, '2>', \\$stderr\n or die \"could not execute pg_config\";\n chomp($stdout);\n $stdout =~ s/\\r$//;\n\n open my $pg_config_h, '<', \"$stdout/pg_config.h\" or die \"$!\"; <-- here\n my $match = (grep { /^$regexp/ } <$pg_config_h>);\n close $pg_config_h;\n return $match;\n }\n\nsrc/test/ssl/README claims that it is possible to run the \"ssl\" extra\ntest from make check (as opposed to installcheck).\n\nChristoph\n\n\n", "msg_date": "Fri, 8 May 2020 21:16:44 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "src/test/perl/TestLib.pm: check_pg_config needs\n /usr/include/postgresql/pg_config.h" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> I believe check_pg_config as used by src/test/ssl/t/002_scram.pl\n> shouldn't rely on /usr/include/postgresql/pg_config.h but use the file\n> from the build tree instead:\n\nBut during \"make check\", that should be executing pg_config from the\nthe temporary installation, so we should get the right answer no?\n\nConversely, in \"make installcheck\" scenarios, we definitely do want\nthe value from the installed file, or so I should think.\n\nDo you have a concrete scenario where you're getting wrong behavior?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 May 2020 15:34:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: src/test/perl/TestLib.pm: check_pg_config needs\n /usr/include/postgresql/pg_config.h" }, { "msg_contents": "Re: Tom 
Lane\n> But during \"make check\", that should be executing pg_config from the\n> the temporary installation, so we should get the right answer no?\n> \n> Conversely, in \"make installcheck\" scenarios, we definitely do want\n> the value from the installed file, or so I should think.\n> \n> Do you have a concrete scenario where you're getting wrong behavior?\n\nI just added the extra tests to the postgresql-13 package and got\nthis:\n\n$ cat build/src/test/ssl/tmp_check/log/regress_log_002_scram\nNo such file or directory at /srv/projects/postgresql/pg/master/build/../src/test/perl/TestLib.pm line 595.\n\nBut I just realized that the problem is caused by a Debian-specific\npatch that removes the find_my_exec logic from pg_config and replaces\nit with PGBINDIR. We need that patch because we install pg_config to\nboth /usr/bin and /usr/lib/postgresql/<version>/bin and want the same\noutput from both.\n\nI'll revisit that to see if we can come up with a different solution.\n\nChristoph\n\n\n", "msg_date": "Fri, 8 May 2020 21:45:33 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: src/test/perl/TestLib.pm: check_pg_config needs\n /usr/include/postgresql/pg_config.h" }, { "msg_contents": "> I just added the extra tests to the postgresql-13 package and got\n> this:\n\nSome other problem emerged here in the ldap test:\n\n17:28:58\n17:28:58 # Failed test 'StartTLS'\n17:28:58 # at t/001_auth.pl line 169.\n17:28:58 # got: '2'\n17:28:58 # expected: '0'\n17:28:58\n17:28:58 # Failed test 'LDAPS'\n17:28:58 # at t/001_auth.pl line 169.\n17:28:58 # got: '2'\n17:28:58 # expected: '0'\n17:28:59\n17:28:59 # Failed test 'LDAPS with URL'\n17:28:59 # at t/001_auth.pl line 169.\n17:28:59 # got: '2'\n17:28:59 # expected: '0'\n17:28:59 # Looks like you failed 3 tests of 22.\n17:28:59 t/001_auth.pl ..\n...\n17:28:59 # diagnostic message\n17:28:59 ok 18 - any attempt fails due to bad search pattern\n17:28:59 # TLS\n17:28:59 not ok 19 - 
StartTLS\n17:28:59 not ok 20 - LDAPS\n17:28:59 not ok 21 - LDAPS with URL\n17:28:59 ok 22 - bad combination of LDAPS and StartTLS\n17:28:59 Dubious, test returned 3 (wstat 768, 0x300)\n\nsrc/test/ldap/tmp_check/log/slapd.log is empty.\n\nIt consistently fails on the build server, but works on my notebook.\nMaybe that simply means slapd is crashing, but there's no slapd\noutput. Would it be possible to start slapd with \"-d 255\", even if\nthat means it doesn't background itself?\n\nThat'd be in src/test/ldap/t/001_auth.pl:\n\n system_or_bail $slapd, '-f', $slapd_conf, '-h', \"$ldap_url $ldaps_url\";\n\n END\n {\n kill 'INT', `cat $slapd_pidfile` if -f $slapd_pidfile;\n }\n\nServer and test output below:\n\n17:28:59 2020-05-13 15:28:58.136 UTC [31564] LOG: starting PostgreSQL 13devel (Debian 13~~devel~20200513.1505-1~801.git043e3e0.pgdg+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 9.3.0-12) 9.3.0, 64-bit\n17:28:59 2020-05-13 15:28:58.136 UTC [31564] LOG: listening on Unix socket \"/tmp/gM2rQtsMib/.s.PGSQL.53078\"\n17:28:59 2020-05-13 15:28:58.139 UTC [31565] LOG: database system was shut down at 2020-05-13 15:28:58 UTC\n17:28:59 2020-05-13 15:28:58.142 UTC [31564] LOG: database system is ready to accept connections\n17:28:59 2020-05-13 15:28:58.230 UTC [31573] [unknown] LOG: LDAP login failed for user \"?uid=test1\" on server \"localhost\": Invalid DN syntax\n17:28:59 2020-05-13 15:28:58.230 UTC [31573] [unknown] DETAIL: LDAP diagnostics: invalid DN\n17:28:59 2020-05-13 15:28:58.230 UTC [31573] [unknown] FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 2020-05-13 15:28:58.230 UTC [31573] [unknown] DETAIL: Connection matched pg_hba.conf line 1: \"local all all ldap ldapserver=localhost ldapport=53076 ldapprefix=\"?uid=\" ldapsuffix=\"\"\"\n17:28:59 2020-05-13 15:28:58.264 UTC [31564] LOG: received fast shutdown request\n17:28:59 2020-05-13 15:28:58.264 UTC [31564] LOG: aborting any active transactions\n17:28:59 2020-05-13 15:28:58.266 UTC [31564] 
LOG: background worker \"logical replication launcher\" (PID 31571) exited with exit code 1\n17:28:59 2020-05-13 15:28:58.266 UTC [31566] LOG: shutting down\n17:28:59 2020-05-13 15:28:58.271 UTC [31564] LOG: database system is shut down\n17:28:59 2020-05-13 15:28:58.384 UTC [31575] LOG: starting PostgreSQL 13devel (Debian 13~~devel~20200513.1505-1~801.git043e3e0.pgdg+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 9.3.0-12) 9.3.0, 64-bit\n17:28:59 2020-05-13 15:28:58.385 UTC [31575] LOG: listening on Unix socket \"/tmp/gM2rQtsMib/.s.PGSQL.53078\"\n17:28:59 2020-05-13 15:28:58.387 UTC [31576] LOG: database system was shut down at 2020-05-13 15:28:58 UTC\n17:28:59 2020-05-13 15:28:58.390 UTC [31575] LOG: database system is ready to accept connections\n17:28:59 2020-05-13 15:28:58.479 UTC [31584] [unknown] LOG: could not start LDAP TLS session: Connect error\n17:28:59 2020-05-13 15:28:58.479 UTC [31584] [unknown] FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 2020-05-13 15:28:58.479 UTC [31584] [unknown] DETAIL: Connection matched pg_hba.conf line 1: \"local all all ldap ldapserver=localhost ldapport=53076 ldapbasedn=\"dc=example,dc=net\" ldapsearchfilter=\"(uid=$username)\" ldaptls=1\"\n17:28:59 2020-05-13 15:28:58.514 UTC [31575] LOG: received fast shutdown request\n17:28:59 2020-05-13 15:28:58.514 UTC [31575] LOG: aborting any active transactions\n17:28:59 2020-05-13 15:28:58.516 UTC [31575] LOG: background worker \"logical replication launcher\" (PID 31582) exited with exit code 1\n17:28:59 2020-05-13 15:28:58.516 UTC [31577] LOG: shutting down\n17:28:59 2020-05-13 15:28:58.520 UTC [31575] LOG: database system is shut down\n17:28:59 2020-05-13 15:28:58.634 UTC [31586] LOG: starting PostgreSQL 13devel (Debian 13~~devel~20200513.1505-1~801.git043e3e0.pgdg+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 9.3.0-12) 9.3.0, 64-bit\n17:28:59 2020-05-13 15:28:58.634 UTC [31586] LOG: listening on Unix socket \"/tmp/gM2rQtsMib/.s.PGSQL.53078\"\n17:28:59 
2020-05-13 15:28:58.636 UTC [31587] LOG: database system was shut down at 2020-05-13 15:28:58 UTC\n17:28:59 2020-05-13 15:28:58.639 UTC [31586] LOG: database system is ready to accept connections\n17:28:59 2020-05-13 15:28:58.728 UTC [31595] [unknown] LOG: could not perform initial LDAP bind for ldapbinddn \"\" on server \"localhost\": Can't contact LDAP server\n17:28:59 2020-05-13 15:28:58.728 UTC [31595] [unknown] FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 2020-05-13 15:28:58.728 UTC [31595] [unknown] DETAIL: Connection matched pg_hba.conf line 1: \"local all all ldap ldapserver=localhost ldapscheme=ldaps ldapport=53077 ldapbasedn=\"dc=example,dc=net\" ldapsearchfilter=\"(uid=$username)\"\"\n17:28:59 2020-05-13 15:28:58.763 UTC [31586] LOG: received fast shutdown request\n17:28:59 2020-05-13 15:28:58.764 UTC [31586] LOG: aborting any active transactions\n17:28:59 2020-05-13 15:28:58.766 UTC [31586] LOG: background worker \"logical replication launcher\" (PID 31593) exited with exit code 1\n17:28:59 2020-05-13 15:28:58.766 UTC [31588] LOG: shutting down\n17:28:59 2020-05-13 15:28:58.772 UTC [31586] LOG: database system is shut down\n17:28:59 2020-05-13 15:28:58.886 UTC [31597] LOG: starting PostgreSQL 13devel (Debian 13~~devel~20200513.1505-1~801.git043e3e0.pgdg+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 9.3.0-12) 9.3.0, 64-bit\n17:28:59 2020-05-13 15:28:58.886 UTC [31597] LOG: listening on Unix socket \"/tmp/gM2rQtsMib/.s.PGSQL.53078\"\n17:28:59 2020-05-13 15:28:58.888 UTC [31598] LOG: database system was shut down at 2020-05-13 15:28:58 UTC\n17:28:59 2020-05-13 15:28:58.891 UTC [31597] LOG: database system is ready to accept connections\n17:28:59 2020-05-13 15:28:58.977 UTC [31606] [unknown] LOG: could not perform initial LDAP bind for ldapbinddn \"\" on server \"localhost\": Can't contact LDAP server\n17:28:59 2020-05-13 15:28:58.977 UTC [31606] [unknown] FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 2020-05-13 
15:28:58.977 UTC [31606] [unknown] DETAIL: Connection matched pg_hba.conf line 1: \"local all all ldap ldapurl=\"ldaps://localhost:53077/dc=example,dc=net??sub?(uid=$username)\"\"\n17:28:59 2020-05-13 15:28:59.014 UTC [31597] LOG: received fast shutdown request\n17:28:59 2020-05-13 15:28:59.014 UTC [31597] LOG: aborting any active transactions\n17:28:59 2020-05-13 15:28:59.017 UTC [31597] LOG: background worker \"logical replication launcher\" (PID 31604) exited with exit code 1\n17:28:59 2020-05-13 15:28:59.019 UTC [31599] LOG: shutting down\n17:28:59 2020-05-13 15:28:59.025 UTC [31597] LOG: database system is shut down\n17:28:59 2020-05-13 15:28:59.135 UTC [31608] LOG: starting PostgreSQL 13devel (Debian 13~~devel~20200513.1505-1~801.git043e3e0.pgdg+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 9.3.0-12) 9.3.0, 64-bit\n17:28:59 2020-05-13 15:28:59.135 UTC [31608] LOG: listening on Unix socket \"/tmp/gM2rQtsMib/.s.PGSQL.53078\"\n17:28:59 2020-05-13 15:28:59.137 UTC [31609] LOG: database system was shut down at 2020-05-13 15:28:59 UTC\n17:28:59 2020-05-13 15:28:59.140 UTC [31608] LOG: database system is ready to accept connections\n17:28:59 2020-05-13 15:28:59.228 UTC [31617] [unknown] LOG: could not start LDAP TLS session: Can't contact LDAP server\n17:28:59 2020-05-13 15:28:59.228 UTC [31617] [unknown] FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 2020-05-13 15:28:59.228 UTC [31617] [unknown] DETAIL: Connection matched pg_hba.conf line 1: \"local all all ldap ldapurl=\"ldaps://localhost:53077/dc=example,dc=net??sub?(uid=$username)\" ldaptls=1\"\n17:28:59 2020-05-13 15:28:59.264 UTC [31608] LOG: received immediate shutdown request\n17:28:59 2020-05-13 15:28:59.265 UTC [31613] WARNING: terminating connection because of crash of another server process\n17:28:59 2020-05-13 15:28:59.265 UTC [31613] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited 
abnormally and possibly corrupted shared memory.\n17:28:59 2020-05-13 15:28:59.265 UTC [31613] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n17:28:59 2020-05-13 15:28:59.268 UTC [31608] LOG: database system is shut down\n\n17:28:59 ******** build/src/test/ldap/tmp_check/log/regress_log_001_auth ********\n17:28:59 1..22\n17:28:59 # Checking port 53076\n17:28:59 # Found port 53076\n17:28:59 # Checking port 53077\n17:28:59 # Found port 53077\n17:28:59 # setting up slapd\n17:28:59 # Running: openssl req -new -nodes -keyout /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/ca.key -x509 -out /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/ca.crt -subj /CN=CA\n17:28:59 Generating a RSA private key\n17:28:59 ...........+++++\n17:28:59 ...............+++++\n17:28:59 writing new private key to '/<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/ca.key'\n17:28:59 -----\n17:28:59 # Running: openssl req -new -nodes -keyout /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/server.key -out /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/server.csr -subj /CN=server\n17:28:59 Generating a RSA private key\n17:28:59 ..............+++++\n17:28:59 .............+++++\n17:28:59 writing new private key to '/<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/server.key'\n17:28:59 -----\n17:28:59 # Running: openssl x509 -req -in /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/server.csr -CA /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/ca.crt -CAkey /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/ca.key -CAcreateserial -out /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd-certs/server.crt\n17:28:59 Signature ok\n17:28:59 subject=CN = server\n17:28:59 Getting CA Private Key\n17:28:59 # Running: /usr/sbin/slapd -f /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/slapd.conf -h ldap://localhost:53076 ldaps://localhost:53077\n17:28:59 # Running: ldapsearch -h 
localhost -p 53076 -s base -b dc=example,dc=net -D cn=Manager,dc=example,dc=net -y /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/ldappassword -n 'objectclass=*'\n17:28:59 # extended LDIF\n17:28:59 #\n17:28:59 # LDAPv3\n17:28:59 # base <dc=example,dc=net> with scope baseObject\n17:28:59 # filter: 'objectclass=*'\n17:28:59 # requesting: ALL\n17:28:59 #\n17:28:59 \n17:28:59 # loading LDAP data\n17:28:59 # Running: ldapadd -x -y /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/ldappassword -f authdata.ldif\n17:28:59 adding new entry \"dc=example,dc=net\"\n17:28:59 \n17:28:59 adding new entry \"uid=test1,dc=example,dc=net\"\n17:28:59 \n17:28:59 adding new entry \"uid=test2,dc=example,dc=net\"\n17:28:59 \n17:28:59 # Running: ldappasswd -x -y /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/ldappassword -s secret1 uid=test1,dc=example,dc=net\n17:28:59 # Running: ldappasswd -x -y /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/ldappassword -s secret2 uid=test2,dc=example,dc=net\n17:28:59 # setting up PostgreSQL instance\n17:28:59 # Checking port 53078\n17:28:59 # Found port 53078\n17:28:59 Name: node\n17:28:59 Data directory: /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata\n17:28:59 Backup directory: /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/backup\n17:28:59 Archive directory: /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/archives\n17:28:59 Connection string: port=53078 host=/tmp/gM2rQtsMib\n17:28:59 Log file: /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log\n17:28:59 # Running: initdb -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -A trust -N\n17:28:59 The files belonging to this database system will be owned by user \"buildd\".\n17:28:59 This user must also own the server process.\n17:28:59 \n17:28:59 The database cluster will be initialized with locale \"C\".\n17:28:59 The default database encoding has accordingly been set to \"SQL_ASCII\".\n17:28:59 
The default text search configuration will be set to \"english\".\n17:28:59 \n17:28:59 Data page checksums are disabled.\n17:28:59 \n17:28:59 creating directory /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata ... ok\n17:28:59 creating subdirectories ... ok\n17:28:59 selecting dynamic shared memory implementation ... posix\n17:28:59 selecting default max_connections ... 100\n17:28:59 selecting default shared_buffers ... 128MB\n17:28:59 selecting default time zone ... Etc/UTC\n17:28:59 creating configuration files ... ok\n17:28:59 running bootstrap script ... ok\n17:28:59 performing post-bootstrap initialization ... ok\n17:28:59 \n17:28:59 Sync to disk skipped.\n17:28:59 The data directory might become corrupt if the operating system crashes.\n17:28:59 \n17:28:59 Success. You can now start the database server using:\n17:28:59 \n17:28:59 pg_ctl -D '/<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata' -l logfile start\n17:28:59 \n17:28:59 # Running: /<<PKGBUILDDIR>>/build/src/test/ldap/../../../src/test/regress/pg_regress --config-auth /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata\n17:28:59 ### Starting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log -o --cluster-name=node start\n17:28:59 waiting for server to start.... done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31451\n17:28:59 # running tests\n17:28:59 # simple bind\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... 
done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31466\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test0\"\n17:28:59 ok 1 - simple bind authentication fails if user not found in LDAP\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 ok 2 - simple bind authentication fails with wrong password\n17:28:59 1\n17:28:59 ok 3 - simple bind authentication succeeds\n17:28:59 # search+bind\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31481\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test0\"\n17:28:59 ok 4 - search+bind authentication fails if user not found in LDAP\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 ok 5 - search+bind authentication fails with wrong password\n17:28:59 1\n17:28:59 ok 6 - search+bind authentication succeeds\n17:28:59 # multiple servers\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... 
done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31496\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test0\"\n17:28:59 ok 7 - search+bind authentication fails if user not found in LDAP\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 ok 8 - search+bind authentication fails with wrong password\n17:28:59 1\n17:28:59 ok 9 - search+bind authentication succeeds\n17:28:59 # LDAP URLs\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31511\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test0\"\n17:28:59 ok 10 - search+bind with LDAP URL authentication fails if user not found in LDAP\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 ok 11 - search+bind with LDAP URL authentication fails with wrong password\n17:28:59 1\n17:28:59 ok 12 - search+bind with LDAP URL authentication succeeds\n17:28:59 # search filters\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... 
done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31526\n17:28:59 1\n17:28:59 ok 13 - search filter finds by uid\n17:28:59 1\n17:28:59 ok 14 - search filter finds by mail\n17:28:59 # search filters in LDAP URLs\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31539\n17:28:59 1\n17:28:59 ok 15 - search filter finds by uid\n17:28:59 1\n17:28:59 ok 16 - search filter finds by mail\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31552\n17:28:59 1\n17:28:59 ok 17 - combined LDAP URL and search filter\n17:28:59 # diagnostic message\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... 
done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31564\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 ok 18 - any attempt fails due to bad search pattern\n17:28:59 # TLS\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31575\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 not ok 19 - StartTLS\n17:28:59 \n17:28:59 # Failed test 'StartTLS'\n17:28:59 # at t/001_auth.pl line 169.\n17:28:59 # got: '2'\n17:28:59 # expected: '0'\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31586\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 not ok 20 - LDAPS\n17:28:59 \n17:28:59 # Failed test 'LDAPS'\n17:28:59 # at t/001_auth.pl line 169.\n17:28:59 # got: '2'\n17:28:59 # expected: '0'\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... 
done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31597\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 not ok 21 - LDAPS with URL\n17:28:59 \n17:28:59 # Failed test 'LDAPS with URL'\n17:28:59 # at t/001_auth.pl line 169.\n17:28:59 # got: '2'\n17:28:59 # expected: '0'\n17:28:59 ### Restarting node \"node\"\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -l /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/log/001_auth_node.log restart\n17:28:59 waiting for server to shut down.... done\n17:28:59 server stopped\n17:28:59 waiting for server to start.... done\n17:28:59 server started\n17:28:59 # Postmaster PID for node \"node\" is 31608\n17:28:59 psql: error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n17:28:59 ok 22 - bad combination of LDAPS and StartTLS\n17:28:59 ### Stopping node \"node\" using mode immediate\n17:28:59 # Running: pg_ctl -D /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata -m immediate stop\n17:28:59 waiting for server to shut down.... 
done\n17:28:59 server stopped\n17:28:59 # No postmaster PID for node \"node\"\n17:28:59 # Looks like you failed 3 tests of 22.\n17:28:59 make[1]: *** [debian/rules:193: override_dh_auto_test-arch] Error 1\n\nChristoph\n\n\n", "msg_date": "Wed, 13 May 2020 18:05:44 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "ldap tls test fails in some environments" }, { "msg_contents": "On Thu, May 14, 2020 at 4:05 AM Christoph Berg <myon@debian.org> wrote:\n> Some other problem emerged here in the ldap test:\n\nHi Christoph,\n\n> 17:28:59 Data directory: /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata\n\nI know nothing about Debian package building so I could be missing\nsomething about how this works, but I wonder if our script variable\nhandling is hygienic enough for paths like that. Let's see... I get\nan error due to that when I run \"make -C src/test/ldap check\" from a\nsource tree under \"/<<PKGBUILDDIR>>/build\":\n\nsh: 1: cannot create /build/src/test/ldap/tmp_check/slapd.pid:\nDirectory nonexistent\n\nThat's fixable with:\n\n- kill 'INT', `cat $slapd_pidfile` if -f $slapd_pidfile;\n+ kill 'INT', `cat \"$slapd_pidfile\"` if -f \"$slapd_pidfile\";\n\nThat's a side issue, though.\n\n> 17:28:59 # TLS\n> 17:28:59 not ok 19 - StartTLS\n> 17:28:59 not ok 20 - LDAPS\n> 17:28:59 not ok 21 - LDAPS with URL\n\n> It consistently fails on the build server, but works on my notebook.\n> Maybe that simply means slapd is crashing, but there's no slapd\n> output. 
Would it be possible to start slapd with \"-d 255\", even if\n> that means it doesn't background itself?\n\nThat'd require more scripting to put it in the background...\n\n> 17:28:59 2020-05-13 15:28:58.479 UTC [31584] [unknown] LOG: could not start LDAP TLS session: Connect error\n\n> 17:28:59 2020-05-13 15:28:58.728 UTC [31595] [unknown] LOG: could not perform initial LDAP bind for ldapbinddn \"\" on server \"localhost\": Can't contact LDAP server\n\nHmm, I get exactly the same errors as this if I comment out the\nfollowing part of the test script:\n\n # don't bother to check the server's cert (though perhaps we should)\n append_to_file(\n $ldap_conf,\n qq{TLS_REQCERT never\n });\n\nThat's a file that we point to with the environment variable LDAPCONF.\nThe man page for ldap.conf says:\n\n Thus the following files and variables are read, in order:\n variable $LDAPNOINIT, and if that is not set:\n system file /etc/ldap/ldap.conf,\n user files $HOME/ldaprc, $HOME/.ldaprc, ./ldaprc,\n system file $LDAPCONF,\n user files $HOME/$LDAPRC, $HOME/.$LDAPRC, ./$LDAPRC,\n variables $LDAP<uppercase option name>.\n Settings late in the list override earlier ones.\n\nThis leads me to suspect that something in your build server's\nenvironment that comes later in that list is overriding the\nTLS_REQCERT setting. 
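The override order quoted from ldap.conf(5) can be sketched as a tiny resolver. This is a hypothetical stand-in for what libldap does internally; the `resolve_tls_reqcert` helper and file names are made up for illustration only:

```shell
#!/bin/sh
# Sketch of the ldap.conf(5) precedence rules quoted above: later
# sources override earlier ones, so an LDAPTLS_REQCERT environment
# variable beats a TLS_REQCERT line in the $LDAPCONF file.
# (libldap does this internally; this only mimics the documented order.)
resolve_tls_reqcert() {
    sysconf=$1
    value=""
    # 1. system file (earliest in the list, lowest priority)
    if [ -f "$sysconf" ]; then
        v=$(sed -n 's/^TLS_REQCERT[[:space:]]*//p' "$sysconf")
        [ -n "$v" ] && value=$v
    fi
    # 2. $LDAPCONF (later in the list, overrides the system file)
    if [ -n "$LDAPCONF" ] && [ -f "$LDAPCONF" ]; then
        v=$(sed -n 's/^TLS_REQCERT[[:space:]]*//p' "$LDAPCONF")
        [ -n "$v" ] && value=$v
    fi
    # 3. per-option environment variable (last in the list, wins)
    [ -n "$LDAPTLS_REQCERT" ] && value=$LDAPTLS_REQCERT
    echo "$value"
}

# demo: the test suite's LDAPCONF says "never", but something later in
# the chain (here, the environment variable) silently overrides it
conf=$(mktemp)
echo "TLS_REQCERT never" > "$conf"
winner=$(LDAPCONF=$conf LDAPTLS_REQCERT=demand
         export LDAPCONF LDAPTLS_REQCERT
         resolve_tls_reqcert /nonexistent/ldap.conf)
echo "effective setting: $winner"   # prints: effective setting: demand
rm -f "$conf"
```

In an environment like that, anything from a stray ldaprc to a per-option variable could be the thing winning over the test's LDAPCONF file.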
If that's the explanation, perhaps we should do\nthis, which seems to work OK on my system, since it comes last in the\nlist:\n\n-$ENV{'LDAPCONF'} = $ldap_conf;\n+$ENV{'LDAPTLS_REQCERT'} = \"never\";\n\n\n", "msg_date": "Thu, 14 May 2020 16:47:03 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ldap tls test fails in some environments" }, { "msg_contents": "Re: Thomas Munro\n> > 17:28:59 Data directory: /<<PKGBUILDDIR>>/build/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata\n> \n> I know nothing about Debian package building so I could be missing\n> something about how this works, but I wonder if our script variable\n> handling is hygienic enough for paths like that.\n\nThat's from the sbuild log filter, the special chars are not the\nproblem:\n\n17:06:41 I: NOTICE: Log filtering will replace 'build/postgresql-13-fdfISX/postgresql-13-13~~devel~20200513.1505' with '<<PKGBUILDDIR>>'\n17:06:41 I: NOTICE: Log filtering will replace 'build/postgresql-13-fdfISX' with '<<BUILDDIR>>'\n\n> > It consistently fails on the build server, but works on my notebook.\n> > Maybe that simply means slapd is crashing, but there's no slapd\n> > output. Would it be possible to start slapd with \"-d 255\", even if\n> > that means it doesn't background itself?\n> \n> That'd require more scripting to put it in the background...\n\nMaybe adding \"&\" is enough, provided it still writes the pid file for\nshutting down...\n\n> This leads me to suspect that something in your build server's\n> environment that comes later in that list is overridding the\n> TLS_REQCERT setting. 
If that's the explanation, perhaps we should do\n> this, which seems to work OK on my system, since it comes last in the\n> list:\n\nIt's not messing with environment variables in that area.\n\nI'll see if I can catch a shell in the environment where it fails.\n\nChristoph\n\n\n", "msg_date": "Thu, 14 May 2020 13:10:47 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: ldap tls test fails in some environments" }, { "msg_contents": "> I'll see if I can catch a shell in the environment where it fails.\n\nIt failed right away when I tried on the buildd machine:\n\nThe slapd debug log is mostly garbage to me, the error seems to be\nthis:\nldap_read: want=8 error=Resource temporarily unavailable\n\n\nsrc/test/ldap/t/001_auth.pl:\n\nsystem_or_bail \"sh\", \"-c\", \"$slapd -f $slapd_conf -h '$ldap_url $ldaps_url' -d 255 &\";\n\nEND\n{\n kill 'INT', `cat $slapd_pidfile` if -f $slapd_pidfile;\n}\n\n\ntmp_check/log/001_auth_node.log:\n\n2020-05-15 14:06:18.915 CEST [30486] [unknown] LOG: could not start LDAP TLS session: Connect error\n2020-05-15 14:06:18.916 CEST [30486] [unknown] FATAL: LDAP authentication failed for user \"test1\"\n2020-05-15 14:06:18.916 CEST [30486] [unknown] DETAIL: Connection matched pg_hba.conf line 1: \"local all all ldap ldapserver=localhost ldapport=65510 ldapbasedn=\"dc=example,dc=net\" ldapsearchfilter=\"(uid=$username)\" ldaptls=1\"\n\n\ntmp_check/log/regress_log_001_auth:\n\n# TLS\n### Restarting node \"node\"\n# Running: pg_ctl -D /home/myon/postgresql-13-13~~devel~20200515.0434/build/src/test/ldap/tmp_ch\neck/t_001_auth_node_data/pgdata -l /home/myon/postgresql-13-13~~devel~20200515.0434/build/src/te\nst/ldap/tmp_check/log/001_auth_node.log restart\nwaiting for server to shut down.... done\nserver stopped\nwaiting for server to start.... 
done\nserver started\n# Postmaster PID for node \"node\" is 30477\n5ebe85ba daemon: activity on 1 descriptor\n5ebe85ba daemon: activity on:\n5ebe85ba slap_listener_activate(6):\n5ebe85ba daemon: epoll: listen=6 busy\n5ebe85ba daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ebe85ba >>> slap_listener(ldap://localhost:65510)\n5ebe85ba daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ebe85ba daemon: accept() = 10\n5ebe85ba daemon: listen=6, new connection on 10\n5ebe85ba daemon: activity on 1 descriptor\n5ebe85ba daemon: activity on:\n5ebe85ba daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ebe85ba daemon: added 10r (active) listener=(nil)\n5ebe85ba daemon: activity on 1 descriptor\n5ebe85ba daemon: activity on: 10r\n5ebe85ba daemon: read active on 10\n5ebe85ba daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ebe85ba connection_get(10)\n5ebe85ba connection_get(10): got connid=1033\n5ebe85ba daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ebe85ba daemon: activity on 1 descriptor\n5ebe85ba connection_read(10): checking for input on id=1033\nber_get_next\n5ebe85ba daemon: activity on:\n5ebe85ba daemon: epoll: listen=6 active_threads=0 tvp=NULL\nldap_read: want=8, got=8\n 0000: 30 1d 02 01 01 77 18 80 0....w..\nldap_read: want=23, got=23\n5ebe85ba daemon: epoll: listen=7 active_threads=0 tvp=NULL\n 0000: 16 31 2e 33 2e 36 2e 31 2e 34 2e 31 2e 31 34 36 .1.3.6.1.4.1.146\n5ebe85ba daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=9 active_threads=0 tvp=NULL\n 0010: 36 2e 32 30 30 33 37 6.20037\nber_get_next: tag 0x30 len 29 contents:\nber_dump: buf=0x7fa8ec107910 
ptr=0x7fa8ec107910 end=0x7fa8ec10792d len=29\n 0000: 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 2e 34 ...w...1.3.6.1.4\n 0010: 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .1.1466.20037\n5ebe85ba op tag 0x77, time 1589544378\nber_get_next\nldap_read: want=8 error=Resource temporarily unavailable\n5ebe85ba conn=1033 op=0 do_extended\nber_scanf fmt ({m) ber:\n5ebe85ba daemon: activity on 1 descriptor\n5ebe85ba daemon: activity on:ber_dump: buf=0x7fa8ec107910 ptr=0x7fa8ec107913 end=0x7fa8ec10792d len=26\n 0000: 77 18 80 16 31 2e 33 2e 36 2e 31 2e 34 2e 31 2e w...1.3.6.1.4.1.\n 0010: 31 34 36 36 2e 32 30 30 33 37 1466.20037\n\n5ebe85ba daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ebe85ba do_extended: oid=1.3.6.1.4.1.1466.20037\n5ebe85ba daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ebe85ba send_ldap_extended: err=0 oid= len=0\n5ebe85ba send_ldap_response: msgid=1 tag=120 err=0\nber_flush2: 14 bytes to sd 10\n 0000: 30 0c 02 01 01 78 07 0a 01 00 04 00 04 00 0....x........\nldap_write: want=14, written=14\n 0000: 30 0c 02 01 01 78 07 0a 01 00 04 00 04 00 0....x........\n5ebe85ba daemon: activity on 1 descriptor\n5ebe85ba daemon: activity on: 10r\n5ebe85ba daemon: read active on 10\n5ebe85ba daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=7 active_threads=0 tvp=NULL\npsql:5ebe85ba connection_get(10)\n error: could not connect to server: FATAL: LDAP authentication failed for user \"test1\"\n5ebe85ba daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ebe85ba connection_get(10): got connid=1033\n5ebe85ba connection_read(10): checking for input on id=1033\ntls_read: want=5, got=5\n 0000: 30 05 02 01 02 0....\nTLS: can't accept: An unexpected TLS packet was received..\n5ebe85ba connection_read(10): TLS accept failure error=-1 id=1033, closing\n5ebe85ba 
connection_closing: readying conn=1033 sd=10 for close\n5ebe85ba connection_close: conn=1033 sd=10\n5ebe85ba daemon: removing 10\n5ebe85ba daemon: activity on 1 descriptor\n5ebe85ba daemon: activity on:\n5ebe85ba daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ebe85ba daemon: epoll: listen=9 active_threads=0 tvp=NULL\nnot ok 19 - StartTLS\n\n# Failed test 'StartTLS'\n# at t/001_auth.pl line 169.\n# got: '2'\n# expected: '0'\n### Restarting node \"node\"\n\n\nChristoph\n\n\n", "msg_date": "Fri, 15 May 2020 14:15:59 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: ldap tls test fails in some environments" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> The slapd debug log is mostly garbage to me, the error seems to be\n> this:\n> ldap_read: want=8 error=Resource temporarily unavailable\n\nHm, so EAGAIN (although that's a BSD-ish spelling of the strerror\nstring, which seems pretty odd in a Debian context). I don't think\nthat's actually an error, it's just the daemon's data collection\nlogic trying to read data that isn't there. 
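That "not actually an error" reading is easy to reproduce outside slapd: a non-blocking read on a descriptor that has a writer attached but no data queued fails with EAGAIN, whose strerror text is exactly the "Resource temporarily unavailable" in the log. A minimal sketch using a FIFO (GNU dd assumed, for its iflag=nonblock):

```shell
#!/bin/sh
# A non-blocking read with no data queued returns EAGAIN, the same
# "Resource temporarily unavailable" that slapd's ldap_read logged.
# Assumes GNU dd for iflag=nonblock.
fifo=$(mktemp -u)
mkfifo "$fifo"
exec 3<>"$fifo"   # keep a writer open, so reads see EAGAIN instead of EOF

# ask for 8 bytes, like slapd's "ldap_read: want=8", before any arrive
LC_ALL=C dd if="$fifo" bs=8 count=1 iflag=nonblock 2>&1 |
    grep -o 'Resource temporarily unavailable'

exec 3<&-
rm -f "$fifo"
```

Hitting this on every poll cycle just means the event loop probed an idle socket, consistent with the real failure being elsewhere.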
It then goes on and\nissues a response, so this must not indicate that the request is\nincomplete --- it's just a useless speculative read.\n\nSomebody should get out the LDAP RFCs and decode the packet contents\nthat this log helpfully provides, but I suspect that we're just looking\nat an authentication failure; there's still not much clue as to why.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 May 2020 10:02:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ldap tls test fails in some environments" }, { "msg_contents": "Re: Tom Lane\n> Somebody should get out the LDAP RFCs and decode the packet contents\n> that this log helpfully provides, but I suspect that we're just looking\n> at an authentication failure; there's still not much clue as to why.\n\nThe non-TLS tests work, so it's not a plain auth failure...\n\nI'm attaching the full logs from that test, maybe someone with more\ninsight can compare the non-TLS with the TLS bits.\n\nWould it help to re-run that with log_debug on the PG side?\n\nChristoph", "msg_date": "Fri, 15 May 2020 16:25:49 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: ldap tls test fails in some environments" }, { "msg_contents": "On Sat, May 16, 2020 at 2:25 AM Christoph Berg <myon@debian.org> wrote:\n> > Somebody should get out the LDAP RFCs and decode the packet contents\n> > that this log helpfully provides, but I suspect that we're just looking\n> > at an authentication failure; there's still not much clue as to why.\n>\n> The non-TLS tests work, so it's not a plain auth failure...\n>\n> I'm attaching the full logs from that test, maybe someone with more\n> insight can compare the non-TLS with the TLS bits.\n>\n> Would it help to re-run that with log_debug on the PG side?\n\nIn your transcript for test 20, it looks like the client (PostgreSQL)\nis hanging up without even sending a TLS ClientHello:\n\ntls_read: want=5, 
got=0\n...\npsql:TLS: can't accept: The TLS connection was non-properly terminated..\n\nWith 001_auth.tl hacked to enable debug as you suggested*, on a local\nDebian 10 system I see:\n\ntls_read: want=5, got=5\n 0000: 16 03 01 01 4c ....L\n\nThat's: 0x16 = handshake record, 0x03 0x01 = TLS 1.0, and then a\nrecord length for the following ClientHello. You're not even getting\nthat far, so I guess libdap is setting up the connection but then the\nGNU TLS library (what Debian links libldap against) is not happy, even\nbefore any negotiations begin. I wonder how to figure out why... does\nthis tell you anything?\n\n $ENV{'LDAPCONF'} = $ldap_conf;\n+$ENV{\"GNUTLS_DEBUG_LEVEL\"} = '99';\n\n* Like this:\n-system_or_bail $slapd, '-f', $slapd_conf, '-h', \"$ldap_url $ldaps_url\";\n+system(\"$slapd -d 255 -f $slapd_conf -h '$ldap_url $ldaps_url' &\");\n\n\n", "msg_date": "Sun, 17 May 2020 01:15:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ldap tls test fails in some environments" }, { "msg_contents": "Re: Thomas Munro\n> In your transcript for test 20, it looks like the client (PostgreSQL)\n> is hanging up without even sending a TLS ClientHello:\n\nMaybe tests 19 and 20 are failing because 18 was already bad. (But\nprobably not.)\n\n> I wonder how to figure out why... does this tell you anything?\n> \n> $ENV{'LDAPCONF'} = $ldap_conf;\n> +$ENV{\"GNUTLS_DEBUG_LEVEL\"} = '99';\n\nSorry, I had not seen your message until now. Logs attached.\n\nEven ldapsearch ... 
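The five header bytes decoded there (16 03 01 01 4c) follow the fixed TLS record layout: one content-type byte, a two-byte record-layer version, and a two-byte big-endian length. A throwaway decoder for such a header (the `decode_tls_header` helper is invented for illustration):

```shell
#!/bin/sh
# Decode a 5-byte TLS record header such as "16 03 01 01 4c":
# byte 0 = content type, bytes 1-2 = record-layer version,
# bytes 3-4 = big-endian length of the fragment that follows.
decode_tls_header() {
    set -- $1                 # split the hex bytes into $1..$5
    case $1 in
        14) type=change_cipher_spec ;;
        15) type=alert ;;
        16) type=handshake ;;
        17) type=application_data ;;
        *)  type=unknown ;;
    esac
    case "$2 $3" in
        "03 01") ver=TLS1.0 ;;
        "03 03") ver=TLS1.2 ;;
        *)       ver="$2.$3" ;;
    esac
    echo "type=$type version=$ver length=$(( 0x$4 * 256 + 0x$5 ))"
}

decode_tls_header "16 03 01 01 4c"
# prints: type=handshake version=TLS1.0 length=332
```

Seeing "want=5, got=0" before any such header means the client hung up before the handshake even began, which points at the client-side TLS setup rather than at slapd.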
-ZZ is failing with that slapd config, so it's not\na PG problem.\n\n$ GNUTLS_DEBUG_LEVEL=99 ldapsearch -h localhost -p 56118 -s base -b dc=example,dc=net -D cn=Manager,dc=example,dc=net -y /home/myon/postgresql-13-13~~devel~20200515.0434/build/src/test/ldap/tmp_check/ldappassword -n 'objectclass=*' -ZZ\ngnutls[2]: Enabled GnuTLS 3.6.13 logging...\ngnutls[2]: getrandom random generator was detected\ngnutls[2]: Intel SSSE3 was detected\ngnutls[2]: Intel AES accelerator was detected\ngnutls[2]: Intel GCM accelerator was detected\ngnutls[2]: cfg: unable to access: /etc/gnutls/config: 2\n5ec3ec38 daemon: activity on 1 descriptor\n5ec3ec38 daemon: activity on:\n5ec3ec38 slap_listener_activate(6):\n5ec3ec38 daemon: epoll: listen=6 busy\n5ec3ec38 daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ec3ec38 >>> slap_listener(ldap://localhost:56118)\n5ec3ec38 daemon: accept() = 10\n5ec3ec38 daemon: listen=6, new connection on 10\n5ec3ec38 daemon: activity on 1 descriptor\n5ec3ec38 daemon: activity on:\n5ec3ec38 daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ec3ec38 daemon: added 10r (active) listener=(nil)\n5ec3ec38 daemon: activity on 1 descriptor\n5ec3ec38 daemon: activity on: 10r\n5ec3ec38 daemon: read active on 10\n5ec3ec38 daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ec3ec38 connection_get(10)\n5ec3ec38 daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ec3ec38 connection_get(10): got connid=1000\n5ec3ec38 connection_read(10): checking for input on id=1000\n5ec3ec38 daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ec3ec38 daemon: activity on 1 descriptor\nber_get_next\n5ec3ec38 daemon: activity 
on:\nldap_read: want=8, got=8\n 0000: 30 1d 02 01 01 77 18 80 0....w..\n5ec3ec38 daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=9 active_threads=0 tvp=NULL\nldap_read: want=23, got=23\n 0000: 16 31 2e 33 2e 36 2e 31 2e 34 2e 31 2e 31 34 36 .1.3.6.1.4.1.146\n 0010: 36 2e 32 30 30 33 37 6.20037\nber_get_next: tag 0x30 len 29 contents:\nber_dump: buf=0x7f0c50000bc0 ptr=0x7f0c50000bc0 end=0x7f0c50000bdd len=29\n 0000: 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 2e 34 ...w...1.3.6.1.4\n 0010: 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .1.1466.20037\n5ec3ec38 op tag 0x77, time 1589898296\nber_get_next\nldap_read: want=8 error=Resource temporarily unavailable\n5ec3ec38 conn=1000 op=0 do_extended\n5ec3ec38 daemon: activity on 1 descriptor\n5ec3ec38 daemon: activity on:\nber_scanf fmt ({m) ber:\nber_dump: buf=0x7f0c50000bc0 ptr=0x7f0c50000bc3 end=0x7f0c50000bdd len=26\n5ec3ec38 daemon: epoll: listen=6 active_threads=0 tvp=NULL\n 0000: 77 18 80 16 31 2e 33 2e 36 2e 31 2e 34 2e 31 2e w...1.3.6.1.4.1.\n 0010: 31 34 36 36 2e 32 30 30 33 37 1466.20037\n5ec3ec38 daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ec3ec38 do_extended: oid=1.3.6.1.4.1.1466.20037\n5ec3ec38 send_ldap_extended: err=0 oid= len=0\n5ec3ec38 send_ldap_response: msgid=1 tag=120 err=0\nber_flush2: 14 bytes to sd 10\n 0000: 30 0c 02 01 01 78 07 0a 01 00 04 00 04 00 0....x........\nldap_write: want=14, written=14\n 0000: 30 0c 02 01 01 78 07 0a 01 00 04 00 04 00 0....x........\ngnutls[2]: added 6 protocols, 29 ciphersuites, 19 sig algos and 10 groups into priority list\ngnutls[3]: ASSERT: ../../../lib/x509/verify-high2.c[gnutls_x509_trust_list_add_trust_file]:361\nldap_start_tls: Connect error (-11)\n5ec3ec38 daemon: activity on 1 
descriptor\n5ec3ec38 daemon: activity on: 10r\n5ec3ec38 daemon: read active on 10\n5ec3ec38 daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ec3ec38 connection_get(10)\n5ec3ec38 connection_get(10): got connid=1000\n5ec3ec38 daemon: epoll: listen=9 active_threads=0 tvp=NULL\n5ec3ec38 connection_read(10): checking for input on id=1000\ntls_read: want=5, got=5\n 0000: 30 05 02 01 02 0....\nTLS: can't accept: An unexpected TLS packet was received..\n5ec3ec38 connection_read(10): TLS accept failure error=-1 id=1000, closing\n5ec3ec38 connection_closing: readying conn=1000 sd=10 for close\n5ec3ec38 connection_close: conn=1000 sd=10\n5ec3ec38 daemon: activity on 1 descriptor\n5ec3ec38 daemon: removing 10\n5ec3ec38 daemon: activity on:\n5ec3ec38 daemon: epoll: listen=6 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=7 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=8 active_threads=0 tvp=NULL\n5ec3ec38 daemon: epoll: listen=9 active_threads=0 tvp=NULL\n\nThere is an assertion failure:\n\ngnutls[3]: ASSERT: ../../../lib/x509/verify-high2.c[gnutls_x509_trust_list_add_trust_file]:361\n\nhttps://sources.debian.org/src/gnutls28/3.6.13-2/lib/x509/verify-high2.c/#L361\n\n... which doesn't make sense to me yet.\n\nChristoph", "msg_date": "Tue, 19 May 2020 16:30:12 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: ldap tls test fails in some environments" } ]
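A footnote to the thread above: Tom's suggestion to decode the packet contents is easy to carry out for the one client message slapd logs before the hangup — the 31 bytes `30 1d 02 01 01 77 18 80 16` followed by the ASCII OID. The sketch below is hand-rolled C for exactly this packet, not a general BER decoder; the function name is made up for illustration:

```c
#include <string.h>

/* The captured request, byte for byte, from the ber_dump lines above:
 * 30 1d     SEQUENCE, length 29         -- the LDAPMessage envelope
 * 02 01 01  INTEGER 1                   -- messageID
 * 77 18     [APPLICATION 23], length 24 -- ExtendedRequest
 * 80 16     [0], length 22              -- requestName follows as ASCII
 */
static const unsigned char starttls_pkt[] = {
    0x30, 0x1d,
    0x02, 0x01, 0x01,
    0x77, 0x18,
    0x80, 0x16,
    '1', '.', '3', '.', '6', '.', '1', '.', '4', '.', '1', '.',
    '1', '4', '6', '6', '.', '2', '0', '0', '3', '7'
};

/* Copies the NUL-terminated requestName OID into "oid" (which needs at
 * least 23 bytes) and returns the messageID. Offsets are hard-coded
 * from the dump; nothing here validates any packet but this one. */
static int
decode_starttls_request(char *oid)
{
    memcpy(oid, &starttls_pkt[9], 0x16);
    oid[0x16] = '\0';
    return starttls_pkt[4];
}
```

It confirms that the client's only LDAP-layer message is a well-formed ExtendedRequest with messageID 1 and requestName 1.3.6.1.4.1.1466.20037 (StartTLS), matching slapd's own `do_extended: oid=1.3.6.1.4.1.1466.20037` line — i.e. the LDAP exchange itself is fine, and the failure begins only after the switch to TLS.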
[ { "msg_contents": "See\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ac0e30e0d0fe402fbdb3099fd8b32e4bc6755a6a\n\nAs usual, please send any corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 May 2020 16:41:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Back-branch minor release notes are up for review" }, { "msg_contents": "On 2020/05/09 5:41, Tom Lane wrote:\n> See\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ac0e30e0d0fe402fbdb3099fd8b32e4bc6755a6a\n\nThanks for making the release note!\n\n> As usual, please send any corrections by Sunday\n\n+ Fix possible undercounting of deleted B-tree index pages\n+ in <command>VACUUM VERBOSE</command> output (Peter Geoghegan)\n+ </para>\n+\n+ <para>\n+ </para>\n\nThe empty paragraph needs to be removed.\n\nI'd like to add the note about the following commit that I pushed recently.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=683e0ef5530f449f0f913de579b4f7bcd31c91fd\n\nWhat about the attached patch?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 9 May 2020 13:59:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Back-branch minor release notes are up for review" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> The empty paragraph needs to be removed.\n\nAh, thanks for catching that.\n\n> I'd like to add the note about the following commit that I pushed recently.\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=683e0ef5530f449f0f913de579b4f7bcd31c91fd\n\nI revised this a bit and included it in today's updates.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 10 May 2020 15:07:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: 
Back-branch minor release notes are up for review" }, { "msg_contents": "\n\nOn 2020/05/11 4:07, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> The empty paragraph needs to be removed.\n> \n> Ah, thanks for catching that.\n> \n>> I'd like to add the note about the following commit that I pushed recently.\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=683e0ef5530f449f0f913de579b4f7bcd31c91fd\n> \n> I revised this a bit and included it in today's updates.\n\nThanks a lot!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 11 May 2020 10:16:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Back-branch minor release notes are up for review" } ]
[ { "msg_contents": "Hi,\n\nI'd like to propose a fairly major refactoring of the server's\nbasebackup.c. The current code isn't horrific or anything, but the\nbase backup mechanism has grown quite a few features over the years\nand all of the code knows about all of the features. This is going to\nmake it progressively more difficult to add additional features, and I\nhave a few in mind that I'd like to add, as discussed below and also\non several other recent threads.[1][2] The attached patch set shows\nwhat I have in mind. It needs more work, but I believe that there's\nenough here for someone to review the overall direction, and even some\nof the specifics, and hopefully give me some useful feedback.\n\nThis patch set is built around the idea of creating two new\nabstractions, a base backup sink -- or bbsink -- and a base backup\narchiver -- or bbarchiver. Each of these works like a foreign data\nwrapper or custom scan or TupleTableSlot. That is, there's a table of\nfunction pointers that act like method callbacks. Every implementation\ncan allocate a struct of sufficient size for its own bookkeeping data,\nand the first member of the struct is always the same, and basically\nholds the data that all implementations must store, including a\npointer to the table of function pointers. If we were using C++,\nbbarchiver and bbsink would be abstract base classes.\n\nThey represent closely-related concepts, so much so that I initially\nthought we could get by with just one new abstraction layer. I found\non experimentation that this did not work well, so I split it up into\ntwo and that worked a lot better. The distinction is this: a bbsink is\nsomething to which you can send a bunch of archives -- currently, each\nwould be a tarfile -- and also a backup manifest. A bbarchiver is\nsomething to which you send every file in the data directory\nindividually, or at least the ones that are getting backed up, plus\nany that are being injected into the backup (e.g. 
the backup_label).\nCommonly, a bbsink will do something with the data and then forward it\nto a subsequent bbsink, or a bbarchiver will do something with the\ndata and then forward it to a subsequent bbarchiver or bbsink. For\nexample, there's a bbarchiver_tar object which, like any bbarchiver,\nsees all the files and their contents as input. The output is a\ntarfile, which gets send to a bbsink. As things stand in the patch set\nnow, the tar archives are ultimately sent to the \"libpq\" bbsink, which\nsends them to the client.\n\nIn the future, we could have other bbarchivers. For example, we could\nadd \"pax\", \"zip\", or \"cpio\" bbarchiver which produces archives of that\nformat, and any given backup could choose which one to use. Or, we\ncould have a bbarchiver that runs each individual file through a\ncompression algorithm and then forwards the resulting data to a\nsubsequent bbarchiver. That would make it easy to produce a tarfile of\nindividually compressed files, which is one possible way of creating a\nseekable achive.[3] Likewise, we could have other bbsinks. For\nexample, we could have a \"localdisk\" bbsink that cause the server to\nwrite the backup somewhere in the local filesystem instead of\nstreaming it out over libpq. Or, we could have an \"s3\" bbsink that\nwrites the archives to S3. We could also have bbsinks that compresses\nthe input archives using some compressor (e.g. lz4, zstd, bzip2, ...)\nand forward the resulting compressed archives to the next bbsink in\nthe chain. I'm not trying to pass judgement on whether any of these\nparticular things are things we want to do, nor am I saying that this\npatch set solves all the problems with doing them. However, I believe\nit will make such things a whole lot easier to implement, because all\nof the knowledge about whatever new functionality is being added is\ncentralized in one place, rather than being spread across the entirety\nof basebackup.c. 
As an example of this, look at how 0010 changes\nbasebackup.c and basebackup_tar.c: afterwards, basebackup.c no longer\nknows anything that is tar-specific, whereas right now it knows about\ntar-specific things in many places.\n\nHere's an overview of this patch set:\n\n0001-0003 are cleanup patches that I have posted for review on\nseparate threads.[4][5] They are included here to make it easy to\napply this whole series if someone wishes to do so.\n\n0004 is a minor refactoring that reduces by 1 the number of functions\nin basebackup.c that know about the specifics of tarfiles. It is just\na preparatory patch and probably not very interesting.\n\n0005 invents the bbsink abstraction.\n\n0006 creates basebackup_libpq.c and moves all code that knows about\nthe details of sending archives via libpq there. The functionality is\nexposed for use by basebackup.c as a new type of bbsink, bbsink_libpq.\n\n0007 creates basebackup_throttle.c and moves all code that knows about\nthrottling backups there. The functionality is exposed for use by\nbasebackup.c as a new type of bbsink, bbsink_throttle. This means that\nthe throttling logic could be reused to throttle output to any final\ndestination. Essentially, this is a bbsink that just passes everything\nit gets through to the next bbsink, but with a rate limit. If\nthrottling's not enabled, no bbsink_throttle object is created, so all\nof the throttling code is completely out of the execution pipeline.\n\n0008 creates basebackup_progress.c and moves all code that knows about\nprogress reporting there. The functionality is exposed for use by\nbasebackup.c as a new type of bbsink, bbsink_progress. Since the\nabstraction doesn't fit perfectly in this case, some extra functions\nare added to work around the problem. 
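The callback-table layout described above might look roughly like this. Every name in the sketch is illustrative, invented for this example rather than taken from the patches, and the real structs carry more callbacks (begin/end of backup and archive, manifest handling, and so on):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct bbsink bbsink;

/* Per-implementation table of function pointers ("method callbacks"). */
typedef struct bbsink_ops
{
    void (*archive_contents) (bbsink *sink, const char *data, size_t len);
} bbsink_ops;

/* Common first member of every implementation's own struct. */
struct bbsink
{
    const bbsink_ops *ops;      /* pointer to the callback table */
    bbsink     *next;           /* next sink in the chain, if forwarding */
};

/* A concrete sink: counts the bytes it sees, then forwards them on. */
typedef struct bbsink_count
{
    bbsink      base;           /* must be first, so a bbsink * can be
                                 * cast back to the derived type */
    uint64_t    total_bytes;    /* this implementation's bookkeeping */
} bbsink_count;

static void
count_archive_contents(bbsink *sink, const char *data, size_t len)
{
    bbsink_count *mysink = (bbsink_count *) sink;

    mysink->total_bytes += len;
    if (sink->next != NULL)
        sink->next->ops->archive_contents(sink->next, data, len);
}

static const bbsink_ops count_ops = {
    .archive_contents = count_archive_contents
};
```

Chaining works by pointing `next` at the following stage, so an optional step like throttling can be dropped into the pipeline or left out entirely without the neighboring stages noticing.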
This is not entirely elegant,\nbut I don't think it's still an improvement over what we have now, and\nI don't have a better idea.\n\n0009 invents the bbarchiver abstraction.\n\n0010 invents two new bbarchivers, a tar bbarchiver and a tarsize\nbbarchiver, and refactors basebackup.c to make use of them. The tar\nbbarchiver puts the files it sees into tar archives and forwards the\nresulting archives to a bbsink. The tarsize bbarchiver is used to\nsupport the PROGRESS option to the BASE_BACKUP command. It just\nestimates the size of the backup by summing up the file sizes without\nreading them. This approach is good for a couple of reasons. First,\nwithout something like this, it's impossible to keep basebackup.c from\nknowing something about the tar format, because the PROGRESS option\ndoesn't just figure out how big the files to be backed up are: it\nfigures out how big it thinks the archives will be, and that involves\ntar-specific considerations. This area needs more work, as the whole\nidea of measuring progress by estimating the archive size is going to\nbreak down as soon as server-side compression is in the picture.\nSecond, this makes the code path that we use for figuring out the\nbackup size details much more similar to the path we use for\nperforming the actual backup. For instance, with this patch, we\ninclude the exact same files in the calculation that we will include\nin the backup, and in the same order, something that's not true today.\nThe basebackup_tar.c file added by this patch is sadly lacking in\ncomments, which I will add in a future version of the patch set. I\nthink, though, that it will not be too unclear what's going on here.\n\n0011 invents another new kind of bbarchiver. This bbarchiver just\neavesdrops on the stream of files to facilitate backup manifest\nconstruction, and then forwards everything through to a subsequent\nbbarchiver. Like bbsink_throttle, it can be entirely omitted if not\nused. 
This patch is a bit clunky at the moment and needs some polish,\nbut it is another demonstration of how these abstractions can be used\nto simplify basebackup.c, so that basebackup.c only has to worry about\ndetermining what should be backed up and not have to worry much about\nall the specific things that need to be done as part of that.\n\nAlthough this patch set adds quite a bit of code on net, it makes\nbasebackup.c considerably smaller and simpler, removing more than 400\nlines of code from that file, about 20% of the current total. There\nare some gratifying changes vs. the status quo. For example, in\nmaster, we have this:\n\nsendDir(const char *path, int basepathlen, bool sizeonly, List *tablespaces,\n bool sendtblspclinks, backup_manifest_info *manifest,\n const char *spcoid)\n\nNotably, the sizeonly flag makes the function not do what the name of\nthe function suggests that it does. Also, we've got to pass some extra\nfields through to enable specific features. With the patch set, the\nequivalent function looks like this:\n\narchive_directory(bbarchiver *archiver, const char *path, int basepathlen,\n List *tablespaces, bool sendtblspclinks)\n\nThe question \"what should I do with the directories and files we find\nas we recurse?\" is now answered by the choice of which bbarchiver to\npass to the function, rather than by the values of sizeonly, manifest,\nand spcoid. That's not night and day, but I think it's better,\nespecially as you imagine adding more features in the future. The\nreally important part, for me, is that you can make the bbarchiver do\nanything you like without needing to make any more changes to this\nfunction. It just arranges to invoke your callbacks. 
You take it from\nthere.\n\nOne pretty major question that this patch set doesn't address is what\nthe user interface for any of the hypothetical features mentioned\nabove ought to look like, or how basebackup.c ought to support them.\nThe syntax for the BASE_BACKUP command, like the contents of\nbasebackup.c, has grown organically, and doesn't seem to be very\nscalable. Also, the wire protocol - a series of CopyData results which\nthe client is entirely responsible for knowing how to interpret and\nabout which the server provides only minimal information - doesn't\nmuch lend itself to extensibility. Some careful design work is likely\nneeded in both areas, and this patch does not try to do any of it. I\nam quite interested in discussing those questions, but I felt that\nthey weren't the most important problems to solve first.\n\nWhat do you all think?\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n[1] http://postgr.es/m/CA+TgmoZubLXYR+Pd_gi3MVgyv5hQdLm-GBrVXkun-Lewaw12Kg@mail.gmail.com\n[2] http://postgr.es/m/CA+TgmoYr7+-0_vyQoHbTP5H3QGZFgfhnrn6ewDteF=kUqkG=Fw@mail.gmail.com\n[3] http://postgr.es/m/CA+TgmoZQCoCyPv6fGoovtPEZF98AXCwYDnSB0=p5XtxNY68r_A@mail.gmail.com\nand following\n[4] http://postgr.es/m/CA+TgmoYq+59SJ2zBbP891ngWPA9fymOqntqYcweSDYXS2a620A@mail.gmail.com\n[5] http://postgr.es/m/CA+TgmobWbfReO9-XFk8urR1K4wTNwqoHx_v56t7=T8KaiEoKNw@mail.gmail.com\n\n\n", "msg_date": "Fri, 8 May 2020 16:53:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "refactoring basebackup.c" }, { "msg_contents": "So it might be good if I'd remembered to attach the patches. 
Let's try\nthat again.\n\n...Robert", "msg_date": "Fri, 8 May 2020 16:55:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nOn 2020-05-08 16:53:09 -0400, Robert Haas wrote:\n> They represent closely-related concepts, so much so that I initially\n> thought we could get by with just one new abstraction layer. I found\n> on experimentation that this did not work well, so I split it up into\n> two and that worked a lot better. The distinction is this: a bbsink is\n> something to which you can send a bunch of archives -- currently, each\n> would be a tarfile -- and also a backup manifest. A bbarchiver is\n> something to which you send every file in the data directory\n> individually, or at least the ones that are getting backed up, plus\n> any that are being injected into the backup (e.g. the backup_label).\n> Commonly, a bbsink will do something with the data and then forward it\n> to a subsequent bbsink, or a bbarchiver will do something with the\n> data and then forward it to a subsequent bbarchiver or bbsink. For\n> example, there's a bbarchiver_tar object which, like any bbarchiver,\n> sees all the files and their contents as input. The output is a\n> tarfile, which gets send to a bbsink. As things stand in the patch set\n> now, the tar archives are ultimately sent to the \"libpq\" bbsink, which\n> sends them to the client.\n\nHm.\n\nI wonder if there's cases where recursively forwarding like this will\ncause noticable performance effects. The only operation that seems\nfrequent enough to potentially be noticable would be \"chunks\" of the\nfile. So perhaps it'd be good to make sure we read in large enough\nchunks?\n\n> 0010 invents two new bbarchivers, a tar bbarchiver and a tarsize\n> bbarchiver, and refactors basebackup.c to make use of them. The tar\n> bbarchiver puts the files it sees into tar archives and forwards the\n> resulting archives to a bbsink. 
The tarsize bbarchiver is used to\n> support the PROGRESS option to the BASE_BACKUP command. It just\n> estimates the size of the backup by summing up the file sizes without\n> reading them. This approach is good for a couple of reasons. First,\n> without something like this, it's impossible to keep basebackup.c from\n> knowing something about the tar format, because the PROGRESS option\n> doesn't just figure out how big the files to be backed up are: it\n> figures out how big it thinks the archives will be, and that involves\n> tar-specific considerations.\n\nISTM that it's not actually good to have the progress calculations\ninclude the tar overhead. As you say:\n\n> This area needs more work, as the whole idea of measuring progress by\n> estimating the archive size is going to break down as soon as\n> server-side compression is in the picture.\n\nThis, to me, indicates that we should measure the progress solely based\non how much of the \"source\" data was processed. The overhead of tar, the\nreduction due to compression, shouldn't be included.\n\n\n> What do you all think?\n\nI've not though enough about the specifics, but I think it looks like\nit's going roughly in a better direction.\n\nOne thing I wonder about is how stateful the interface is. Archivers\nwill pretty much always track which file is currently open etc. Somehow\nsuch a repeating state machine seems a bit ugly - but I don't really\nhave a better answer.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 8 May 2020 14:27:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, May 8, 2020 at 5:27 PM Andres Freund <andres@anarazel.de> wrote:\n> I wonder if there's cases where recursively forwarding like this will\n> cause noticable performance effects. The only operation that seems\n> frequent enough to potentially be noticable would be \"chunks\" of the\n> file. 
So perhaps it'd be good to make sure we read in large enough\n> chunks?\n\nYeah, that needs to be tested. Right now the chunk size is 32kB but it\nmight be a good idea to go larger. Another thing is that right now the\nchunk size is tied to the protocol message size, and I'm not sure\nwhether the size that's optimal for disk reads is also optimal for\nprotocol messages.\n\n> This, to me, indicates that we should measure the progress solely based\n> on how much of the \"source\" data was processed. The overhead of tar, the\n> reduction due to compression, shouldn't be included.\n\nI don't think it's a particularly bad thing that we include a small\namount of progress for sending an empty file, a directory, or a\nsymlink. That could make the results more meaningful if you have a\ndatabase with lots of empty relations in it. However, I agree that the\neffect of compression shouldn't be included. To get there, I think we\nneed to redesign the wire protocol. Right now, the server has no way\nof letting the client know how many uncompressed bytes it's sent, and\nthe client has no way of figuring it out without uncompressing, which\nseems like something we want to avoid.\n\nThere are some other problems with the current wire protocol, too:\n\n1. The syntax for the BASE_BACKUP command is large and unwieldy. We\nreally ought to adopt an extensible options syntax, like COPY, VACUUM,\nEXPLAIN, etc. do, rather than using a zillion ad-hoc bolt-ons, each\nwith bespoke lexer and parser support.\n\n2. The client is sent a list of tablespaces and is supposed to use\nthat to expect an equal number of archives, computing the name for\neach one on the client side from the tablespace info. However, I think\nwe should be able to support modes like \"put all the tablespaces in a\nsingle archive\" or \"send a separate archive for every 256GB\" or \"ship\nit all to the cloud and don't send me any archives\". 
To get there, I\nthink we should have the server send the archive name to the clients,\nand the client should just keep receiving the next archive until it's\ntold that there are no more. Then if there's one archive or ten\narchives or no archives, the client doesn't have to care. It just\nreceives what the server sends until it hears that there are no more.\nIt also doesn't know how the server is naming the archives; the server\ncan, for example, adjust the archive names based on which compression\nformat is being chosen, without knowledge of the server's naming\nconventions needing to exist on the client side.\n\nI think we should keep support for the current BASE_BACKUP command but\neither add a new variant using an extensible options, or else invent a\nwhole new command with a different name (BACKUP, SEND_BACKUP,\nwhatever) that takes extensible options. This command should send back\nall the archives and the backup manifest using a single COPY stream\nrather than multiple COPY streams. Within the COPY stream, we'll\ninvent a sub-protocol, e.g. based on the first letter of the message,\ne.g.:\n\nt = Tablespace boundary. No further message payload. Indicates, for\nprogress reporting purposes, that we are advancing to the next\ntablespace.\nf = Filename. The remainder of the message payload is the name of the\nnext file that will be transferred.\nd = Data. The next four bytes contain the number of uncompressed bytes\ncovered by this message, for progress reporting purposes. The rest of\nthe message is payload, possibly compressed. Could be empty, if the\ndata is being shipped elsewhere, and these messages are only being\nsent to update the client's notion of progress.\n\n> I've not though enough about the specifics, but I think it looks like\n> it's going roughly in a better direction.\n\nGood to hear.\n\n> One thing I wonder about is how stateful the interface is. Archivers\n> will pretty much always track which file is currently open etc. 
Somehow\n> such a repeating state machine seems a bit ugly - but I don't really\n> have a better answer.\n\nI thought about that a bit, too. There might be some way to unify that\nby having some common context object that's defined by basebackup.c\nand all archivers get it, so that they have some commonly-desired\ndetails without needing bespoke code, but I'm not sure at this point\nwhether that will actually produce a nicer result. Even if we don't\nhave it initially, it seems like it wouldn't be very hard to add it\nlater, so I'm not too stressed about it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 15:02:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\nPlease see my comments inline below.\n\nOn Tue, May 12, 2020 at 12:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Yeah, that needs to be tested. Right now the chunk size is 32kB but it\n> might be a good idea to go larger. Another thing is that right now the\n> chunk size is tied to the protocol message size, and I'm not sure\n> whether the size that's optimal for disk reads is also optimal for\n> protocol messages.\n\n\nOne needs a balance between the number of packets to be sent across the\nnetwork and so if the size\nof reading from the disk and the network packet size could be unified then\nit might provide a better optimization.\n\n\n>\n> I don't think it's a particularly bad thing that we include a small\n> amount of progress for sending an empty file, a directory, or a\n> symlink. That could make the results more meaningful if you have a\n> database with lots of empty relations in it. However, I agree that the\n> effect of compression shouldn't be included. To get there, I think we\n> need to redesign the wire protocol. 
Right now, the server has no way\n> of letting the client know how many uncompressed bytes it's sent, and\n> the client has no way of figuring it out without uncompressing, which\n> seems like something we want to avoid.\n>\n>\n I agree here too, except that if we have too many tar files one might\ncringe\n but sending the xtra amt from these tar files looks okay to me.\n\n\n> There are some other problems with the current wire protocol, too:\n>\n> 1. The syntax for the BASE_BACKUP command is large and unwieldy. We\n> really ought to adopt an extensible options syntax, like COPY, VACUUM,\n> EXPLAIN, etc. do, rather than using a zillion ad-hoc bolt-ons, each\n> with bespoke lexer and parser support.\n>\n> 2. The client is sent a list of tablespaces and is supposed to use\n> that to expect an equal number of archives, computing the name for\n> each one on the client side from the tablespace info. However, I think\n> we should be able to support modes like \"put all the tablespaces in a\n> single archive\" or \"send a separate archive for every 256GB\" or \"ship\n> it all to the cloud and don't send me any archives\". To get there, I\n> think we should have the server send the archive name to the clients,\n> and the client should just keep receiving the next archive until it's\n> told that there are no more. Then if there's one archive or ten\n> archives or no archives, the client doesn't have to care. 
It just\n> receives what the server sends until it hears that there are no more.\n> It also doesn't know how the server is naming the archives; the server\n> can, for example, adjust the archive names based on which compression\n> format is being chosen, without knowledge of the server's naming\n> conventions needing to exist on the client side.\n>\n> One thing to remember here could be that an optimization would need to\nbe made between the number of options\n we provide and people coming back and saying which combinations do not\nwork\n For example, if a user script has \"put all the tablespaces in a single\narchive\" and later on somebody makes some\n script changes to break it down at \"256 GB\" and there is a conflict then\nwhich one takes precedence needs to be chosen.\n When the number of options like this become very large this could lead to\nsome complications.\n\n\n> I think we should keep support for the current BASE_BACKUP command but\n> either add a new variant using an extensible options, or else invent a\n> whole new command with a different name (BACKUP, SEND_BACKUP,\n> whatever) that takes extensible options. This command should send back\n> all the archives and the backup manifest using a single COPY stream\n> rather than multiple COPY streams. Within the COPY stream, we'll\n> invent a sub-protocol, e.g. based on the first letter of the message,\n> e.g.:\n>\n> t = Tablespace boundary. No further message payload. Indicates, for\n> progress reporting purposes, that we are advancing to the next\n> tablespace.\n> f = Filename. The remainder of the message payload is the name of the\n> next file that will be transferred.\n> d = Data. The next four bytes contain the number of uncompressed bytes\n> covered by this message, for progress reporting purposes. The rest of\n> the message is payload, possibly compressed. 
Could be empty, if the\n> data is being shipped elsewhere, and these messages are only being\n> sent to update the client's notion of progress.\n>\n\n Here I support this.\n\n\n> I thought about that a bit, too. There might be some way to unify that\n> by having some common context object that's defined by basebackup.c\n> and all archivers get it, so that they have some commonly-desired\n> details without needing bespoke code, but I'm not sure at this point\n> whether that will actually produce a nicer result. Even if we don't\n> have it initially, it seems like it wouldn't be very hard to add it\n> later, so I'm not too stressed about it.\n>\n\n--Sumanta Mukherjee\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
    "msg_date": "Tue, 12 May 2020 10:56:40 +0530",
    "msg_from": "Sumanta Mukherjee <sumanta.mukherjee@enterprisedb.com>",
    "msg_from_op": false,
    "msg_subject": "Re: refactoring basebackup.c"
  },
  {
    "msg_contents": "On Sat, May 9, 2020 at 2:23 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Hi,\n>\n> I'd like to propose a fairly major refactoring of the server's\n> basebackup.c. The current code isn't horrific or anything, but the\n> base backup mechanism has grown quite a few features over the years\n> and all of the code knows about all of the features. This is going to\n> make it progressively more difficult to add additional features, and I\n> have a few in mind that I'd like to add, as discussed below and also\n> on several other recent threads.[1][2] The attached patch set shows\n> what I have in mind. It needs more work, but I believe that there's\n> enough here for someone to review the overall direction, and even some\n> of the specifics, and hopefully give me some useful feedback.\n>\n> This patch set is built around the idea of creating two new\n> abstractions, a base backup sink -- or bbsink -- and a base backup\n> archiver -- or bbarchiver. Each of these works like a foreign data\n> wrapper or custom scan or TupleTableSlot. That is, there's a table of\n> function pointers that act like method callbacks. 
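The callback-table pattern being described can be written down in plain C. The names below (demo_sink, counting_sink) are purely illustrative and are not the actual bbsink API from the patch set:

```c
#include <assert.h>
#include <stddef.h>

struct demo_sink;

/* The table of "method callbacks" shared by all implementations. */
typedef struct demo_sink_ops
{
	void		(*begin_archive) (struct demo_sink *sink, const char *name);
	void		(*archive_contents) (struct demo_sink *sink,
									 const char *data, size_t len);
} demo_sink_ops;

/* Common first member of every implementation's state struct. */
typedef struct demo_sink
{
	const demo_sink_ops *ops;
} demo_sink;

/* One concrete implementation: a sink that just counts bytes. */
typedef struct counting_sink
{
	demo_sink	base;			/* must be the first member */
	size_t		total_bytes;
} counting_sink;

static void
counting_begin_archive(struct demo_sink *sink, const char *name)
{
	(void) name;				/* a real sink would record the name */
	((counting_sink *) sink)->total_bytes = 0;
}

static void
counting_archive_contents(struct demo_sink *sink, const char *data, size_t len)
{
	(void) data;
	((counting_sink *) sink)->total_bytes += len;
}

static const demo_sink_ops counting_ops = {
	counting_begin_archive,
	counting_archive_contents
};
```

Because the common struct is the first member of each implementation, a pointer to any concrete sink can be passed wherever a demo_sink * is expected, and callers invoke behavior only through sink->ops.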
Every implementation\n> can allocate a struct of sufficient size for its own bookkeeping data,\n> and the first member of the struct is always the same, and basically\n> holds the data that all implementations must store, including a\n> pointer to the table of function pointers. If we were using C++,\n> bbarchiver and bbsink would be abstract base classes.\n>\n> They represent closely-related concepts, so much so that I initially\n> thought we could get by with just one new abstraction layer. I found\n> on experimentation that this did not work well, so I split it up into\n> two and that worked a lot better. The distinction is this: a bbsink is\n> something to which you can send a bunch of archives -- currently, each\n> would be a tarfile -- and also a backup manifest. A bbarchiver is\n> something to which you send every file in the data directory\n> individually, or at least the ones that are getting backed up, plus\n> any that are being injected into the backup (e.g. the backup_label).\n> Commonly, a bbsink will do something with the data and then forward it\n> to a subsequent bbsink, or a bbarchiver will do something with the\n> data and then forward it to a subsequent bbarchiver or bbsink. For\n> example, there's a bbarchiver_tar object which, like any bbarchiver,\n> sees all the files and their contents as input. The output is a\n> tarfile, which gets send to a bbsink. As things stand in the patch set\n> now, the tar archives are ultimately sent to the \"libpq\" bbsink, which\n> sends them to the client.\n>\n> In the future, we could have other bbarchivers. For example, we could\n> add \"pax\", \"zip\", or \"cpio\" bbarchiver which produces archives of that\n> format, and any given backup could choose which one to use. Or, we\n> could have a bbarchiver that runs each individual file through a\n> compression algorithm and then forwards the resulting data to a\n> subsequent bbarchiver. 
That would make it easy to produce a tarfile of\n> individually compressed files, which is one possible way of creating a\n> seekable achive.[3] Likewise, we could have other bbsinks. For\n> example, we could have a \"localdisk\" bbsink that cause the server to\n> write the backup somewhere in the local filesystem instead of\n> streaming it out over libpq. Or, we could have an \"s3\" bbsink that\n> writes the archives to S3. We could also have bbsinks that compresses\n> the input archives using some compressor (e.g. lz4, zstd, bzip2, ...)\n> and forward the resulting compressed archives to the next bbsink in\n> the chain. I'm not trying to pass judgement on whether any of these\n> particular things are things we want to do, nor am I saying that this\n> patch set solves all the problems with doing them. However, I believe\n> it will make such things a whole lot easier to implement, because all\n> of the knowledge about whatever new functionality is being added is\n> centralized in one place, rather than being spread across the entirety\n> of basebackup.c. As an example of this, look at how 0010 changes\n> basebackup.c and basebackup_tar.c: afterwards, basebackup.c no longer\n> knows anything that is tar-specific, whereas right now it knows about\n> tar-specific things in many places.\n>\n> Here's an overview of this patch set:\n>\n> 0001-0003 are cleanup patches that I have posted for review on\n> separate threads.[4][5] They are included here to make it easy to\n> apply this whole series if someone wishes to do so.\n>\n> 0004 is a minor refactoring that reduces by 1 the number of functions\n> in basebackup.c that know about the specifics of tarfiles. It is just\n> a preparatory patch and probably not very interesting.\n>\n> 0005 invents the bbsink abstraction.\n>\n> 0006 creates basebackup_libpq.c and moves all code that knows about\n> the details of sending archives via libpq there. 
The functionality is\n> exposed for use by basebackup.c as a new type of bbsink, bbsink_libpq.\n>\n> 0007 creates basebackup_throttle.c and moves all code that knows about\n> throttling backups there. The functionality is exposed for use by\n> basebackup.c as a new type of bbsink, bbsink_throttle. This means that\n> the throttling logic could be reused to throttle output to any final\n> destination. Essentially, this is a bbsink that just passes everything\n> it gets through to the next bbsink, but with a rate limit. If\n> throttling's not enabled, no bbsink_throttle object is created, so all\n> of the throttling code is completely out of the execution pipeline.\n>\n> 0008 creates basebackup_progress.c and moves all code that knows about\n> progress reporting there. The functionality is exposed for use by\n> basebackup.c as a new type of bbsink, bbsink_progress. Since the\n> abstraction doesn't fit perfectly in this case, some extra functions\n> are added to work around the problem. This is not entirely elegant,\n> but I don't think it's still an improvement over what we have now, and\n> I don't have a better idea.\n>\n> 0009 invents the bbarchiver abstraction.\n>\n> 0010 invents two new bbarchivers, a tar bbarchiver and a tarsize\n> bbarchiver, and refactors basebackup.c to make use of them. The tar\n> bbarchiver puts the files it sees into tar archives and forwards the\n> resulting archives to a bbsink. The tarsize bbarchiver is used to\n> support the PROGRESS option to the BASE_BACKUP command. It just\n> estimates the size of the backup by summing up the file sizes without\n> reading them. This approach is good for a couple of reasons. First,\n> without something like this, it's impossible to keep basebackup.c from\n> knowing something about the tar format, because the PROGRESS option\n> doesn't just figure out how big the files to be backed up are: it\n> figures out how big it thinks the archives will be, and that involves\n> tar-specific considerations. 
This area needs more work, as the whole\n> idea of measuring progress by estimating the archive size is going to\n> break down as soon as server-side compression is in the picture.\n> Second, this makes the code path that we use for figuring out the\n> backup size details much more similar to the path we use for\n> performing the actual backup. For instance, with this patch, we\n> include the exact same files in the calculation that we will include\n> in the backup, and in the same order, something that's not true today.\n> The basebackup_tar.c file added by this patch is sadly lacking in\n> comments, which I will add in a future version of the patch set. I\n> think, though, that it will not be too unclear what's going on here.\n>\n> 0011 invents another new kind of bbarchiver. This bbarchiver just\n> eavesdrops on the stream of files to facilitate backup manifest\n> construction, and then forwards everything through to a subsequent\n> bbarchiver. Like bbsink_throttle, it can be entirely omitted if not\n> used. This patch is a bit clunky at the moment and needs some polish,\n> but it is another demonstration of how these abstractions can be used\n> to simplify basebackup.c, so that basebackup.c only has to worry about\n> determining what should be backed up and not have to worry much about\n> all the specific things that need to be done as part of that.\n>\n> Although this patch set adds quite a bit of code on net, it makes\n> basebackup.c considerably smaller and simpler, removing more than 400\n> lines of code from that file, about 20% of the current total. There\n> are some gratifying changes vs. the status quo. For example, in\n> master, we have this:\n>\n> sendDir(const char *path, int basepathlen, bool sizeonly, List *tablespaces,\n> bool sendtblspclinks, backup_manifest_info *manifest,\n> const char *spcoid)\n>\n> Notably, the sizeonly flag makes the function not do what the name of\n> the function suggests that it does. 
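Abstractly, the improvement is replacing a behavior-changing flag with a caller-supplied behavior. A minimal sketch, with hypothetical names rather than the real bbarchiver interface:

```c
#include <assert.h>
#include <stddef.h>

/* A stand-in for a file discovered while walking the data directory. */
typedef struct demo_file
{
	const char *name;
	size_t		size;
} demo_file;

typedef void (*archive_file_cb) (const demo_file *file, void *state);

/*
 * The walker no longer changes behavior based on a flag; the caller
 * chooses the behavior by choosing the callback, just as basebackup.c
 * would choose which bbarchiver to pass down.
 */
static void
archive_files(const demo_file *files, int nfiles,
			  archive_file_cb callback, void *state)
{
	int			i;

	for (i = 0; i < nfiles; i++)
		callback(&files[i], state);
}

/* One "archiver" that only estimates the total backup size... */
static void
sum_sizes(const demo_file *file, void *state)
{
	*(size_t *) state += file->size;
}

/* ...and another that counts the files it would write. */
static void
count_files(const demo_file *file, void *state)
{
	(void) file;
	(*(int *) state)++;
}
```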
Also, we've got to pass some extra\n> fields through to enable specific features. With the patch set, the\n> equivalent function looks like this:\n>\n> archive_directory(bbarchiver *archiver, const char *path, int basepathlen,\n> List *tablespaces, bool sendtblspclinks)\n>\n> The question \"what should I do with the directories and files we find\n> as we recurse?\" is now answered by the choice of which bbarchiver to\n> pass to the function, rather than by the values of sizeonly, manifest,\n> and spcoid. That's not night and day, but I think it's better,\n> especially as you imagine adding more features in the future. The\n> really important part, for me, is that you can make the bbarchiver do\n> anything you like without needing to make any more changes to this\n> function. It just arranges to invoke your callbacks. You take it from\n> there.\n>\n> One pretty major question that this patch set doesn't address is what\n> the user interface for any of the hypothetical features mentioned\n> above ought to look like, or how basebackup.c ought to support them.\n> The syntax for the BASE_BACKUP command, like the contents of\n> basebackup.c, has grown organically, and doesn't seem to be very\n> scalable. Also, the wire protocol - a series of CopyData results which\n> the client is entirely responsible for knowing how to interpret and\n> about which the server provides only minimal information - doesn't\n> much lend itself to extensibility. Some careful design work is likely\n> needed in both areas, and this patch does not try to do any of it. I\n> am quite interested in discussing those questions, but I felt that\n> they weren't the most important problems to solve first.\n>\n> What do you all think?\n\nThe overall idea looks quite nice. I had a look at some of the\npatches at least 0005 and 0006. 
At first look, I have one comment.\n\n+/*\n+ * Each archive is set as a separate stream of COPY data, and thus begins\n+ * with a CopyOutResponse message.\n+ */\n+static void\n+bbsink_libpq_begin_archive(bbsink *sink, const char *archive_name)\n+{\n+ SendCopyOutResponse();\n+}\n\nSome of the bbsink_libpq_* functions are directly calling pq_* e.g.\nbbsink_libpq_begin_backup whereas others are calling SendCopy*\nfunctions and therein those are calling pq_* functions. I think\nbbsink_libpq_* function can directly call pq_* functions instead of\nadding one more level of the function call.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 May 2020 14:01:59 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, May 12, 2020 at 4:32 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Some of the bbsink_libpq_* functions are directly calling pq_* e.g.\n> bbsink_libpq_begin_backup whereas others are calling SendCopy*\n> functions and therein those are calling pq_* functions. 
I think\n> bbsink_libpq_* function can directly call pq_* functions instead of\n> adding one more level of the function call.\n\nI think all the helper functions have more than one caller, though.\nThat's why I created them - to avoid duplicating code.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 12 May 2020 16:26:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, May 13, 2020 at 1:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 12, 2020 at 4:32 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Some of the bbsink_libpq_* functions are directly calling pq_* e.g.\n> > bbsink_libpq_begin_backup whereas others are calling SendCopy*\n> > functions and therein those are calling pq_* functions. I think\n> > bbsink_libpq_* function can directly call pq_* functions instead of\n> > adding one more level of the function call.\n>\n> I think all the helper functions have more than one caller, though.\n> That's why I created them - to avoid duplicating code.\n\nYou are right, somehow I missed that part. 
Sorry for the noise.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
    "msg_date": "Wed, 13 May 2020 09:07:37 +0530",
    "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
    "msg_from_op": false,
    "msg_subject": "Re: refactoring basebackup.c"
  },
  {
    "msg_contents": "Hi,\n\nDid some performance testing by varying TAR_SEND_SIZE with Robert's\nrefactor patch and without the patch to check the impact.\n\nBelow are the details:\n\n*Backup type*: local backup using pg_basebackup\n*Data size*: Around 200GB (200 tables - each table around 1.05 GB)\n*different TAR_SEND_SIZE values*: 8kb, 32kb (default value), 128kB, 1MB (\n1024kB)\n\n*Server details:*\nRAM: 500 GB CPU details: Architecture: x86_64 CPU op-mode(s): 32-bit,\n64-bit Byte Order: Little Endian CPU(s): 128 Filesystem: ext4\n\n8kb 32kb (default value) 128kB 1024kB\nWithout refactor patch real 10m22.718s\nuser 1m23.629s\nsys 8m51.410s real 8m36.245s\nuser 1m8.471s\nsys 7m21.520s real 6m54.299s\nuser 0m55.690s\nsys 5m46.502s real 18m3.511s\nuser 1m38.197s\nsys 9m36.517s\nWith refactor patch (Robert's patch) real 10m11.350s\nuser 1m25.038s\nsys 8m39.226s real 8m56.226s\nuser 1m9.774s\nsys 7m41.032s real 7m26.678s\nuser 0m54.833s\nsys 6m20.057s real 18m17.230s\nuser 1m42.749s\nsys 9m53.704s\n\nThe above numbers are taken from the minimum of two runs of each scenario.\n\nI can see, when we have TAR_SEND_SIZE as 32kb or 128kb, it is giving us a\ngood performance whereas, for 1Mb it is taking 2.5x more time.\n\nPlease let me know your thoughts/suggestions on the same.\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
    "msg_date": "Wed, 13 May 2020 09:31:26 +0530",
    "msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
    "msg_from_op": false,
    "msg_subject": "Re: refactoring basebackup.c"
  },
  {
    "msg_contents": "Hi Suraj,\n\nTwo points I wanted to mention.\n\n\n 1. The max rate at which the transfer is happening when the tar size is\n 128 Kb is at most .48 GB/sec. Is there a possibility to understand what is\n the buffer size which is being used. That could help us explain some part\n of the puzzle.\n 2. Secondly the idea of taking just the min of two runs is a bit counter\n to the following. How do we justify the performance numbers and attribute\n that the differences is not related to noise. It might be better to do a\n few experiments for each of the kind and then try and fit a basic linear\n model and report the std deviation. \"Order statistics\" where you get the\n min(X1, X2, ... , Xn) is generally a biased estimator. 
A variance\n calculation of the biased statistics is a bit tricky and so the results\n could be corrupted by noise.\n\n\nWith Regards,\nSumanta Mukherjee.\nEnterpriseDB: http://www.enterprisedb.com\n\n\nOn Wed, May 13, 2020 at 9:31 AM Suraj Kharage <\nsuraj.kharage@enterprisedb.com> wrote:\n\n> Hi,\n>\n> Did some performance testing by varying TAR_SEND_SIZE with Robert's\n> refactor patch and without the patch to check the impact.\n>\n> Below are the details:\n>\n> *Backup type*: local backup using pg_basebackup\n> *Data size*: Around 200GB (200 tables - each table around 1.05 GB)\n> *different TAR_SEND_SIZE values*: 8kb, 32kb (default value), 128kB, 1MB (\n> 1024kB)\n>\n> *Server details:*\n> RAM: 500 GB CPU details: Architecture: x86_64 CPU op-mode(s): 32-bit,\n> 64-bit Byte Order: Little Endian CPU(s): 128 Filesystem: ext4\n>\n> 8kb 32kb (default value) 128kB 1024kB\n> Without refactor patch real 10m22.718s\n> user 1m23.629s\n> sys 8m51.410s real 8m36.245s\n> user 1m8.471s\n> sys 7m21.520s real 6m54.299s\n> user 0m55.690s\n> sys 5m46.502s real 18m3.511s\n> user 1m38.197s\n> sys 9m36.517s\n> With refactor patch (Robert's patch) real 10m11.350s\n> user 1m25.038s\n> sys 8m39.226s real 8m56.226s\n> user 1m9.774s\n> sys 7m41.032s real 7m26.678s\n> user 0m54.833s\n> sys 6m20.057s real 18m17.230s\n> user 1m42.749s\n> sys 9m53.704s\n>\n> The above numbers are taken from the minimum of two runs of each scenario.\n>\n> I can see, when we have TAR_SEND_SIZE as 32kb or 128kb, it is giving us a\n> good performance whereas, for 1Mb it is taking 2.5x more time.\n>\n> Please let me know your thoughts/suggestions on the same.\n>\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n> EnterpriseDB Corporation,\n> The Postgres Database Company.\n>",
    "msg_date": "Wed, 13 May 2020 17:24:15 +0530",
    "msg_from": "Sumanta Mukherjee <sumanta.mukherjee@enterprisedb.com>",
    "msg_from_op": false,
    "msg_subject": "Re: refactoring basebackup.c"
  },
  {
    "msg_contents": "On Wed, May 13, 2020 at 12:01 AM Suraj Kharage <\nsuraj.kharage@enterprisedb.com> wrote:\n\n> 8kb 32kb (default value) 128kB 1024kB\n> Without refactor patch real 10m22.718s\n> user 1m23.629s\n> sys 8m51.410s real 8m36.245s\n> user 1m8.471s\n> sys 7m21.520s real 6m54.299s\n> user 0m55.690s\n> sys 5m46.502s real 18m3.511s\n> user 1m38.197s\n> sys 9m36.517s\n> With refactor patch (Robert's patch) real 10m11.350s\n> user 1m25.038s\n> sys 8m39.226s real 8m56.226s\n> user 1m9.774s\n> sys 7m41.032s real 7m26.678s\n> user 0m54.833s\n> sys 6m20.057s real 18m17.230s\n> user 1m42.749s\n> sys 9m53.704s\n>\n> The above numbers are taken from the minimum of two runs of each scenario.\n>\n> I can see, when we have TAR_SEND_SIZE as 32kb or 128kb, it is giving us a\n> good performance whereas, for 1Mb it is taking 2.5x more time.\n>\n> Please let me know your thoughts/suggestions on the same.\n>\n\nSo the patch came out slightly faster at 8kB and slightly slower in the\nother tests. That's kinda strange. I wonder if it's just noise. How much do\nthe results vary run to run?\n\nI would've expected (and I think Andres thought the same) that a smaller\nblock size would be bad for the patch and a larger block size would be\ngood, but that's not what these numbers show.\n\nI wouldn't worry too much about the regression at 1MB. Probably what's\nhappening there is that we're losing some concurrency - perhaps with\nsmaller block sizes the OS can buffer the entire chunk in the pipe\nconnecting pg_basebackup to the server and start on the next one, but when\nyou go up to 1MB it doesn't fit and ends up spending a lot of time waiting\nfor data to be read from the pipe. Wait event profiling might tell you\nmore. 
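The interaction is easiest to see with a toy version of the send loop, where an in-memory "send" stands in for the protocol write. Only the effect of the chunk size on the number of sends is modeled here; the names are illustrative and not from basebackup.c:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * A minimal chunked-copy loop in the spirit of TAR_SEND_SIZE: read the
 * source in fixed-size chunks and hand each chunk to a send function.
 * Here the "send" just appends to an output buffer and counts calls, so
 * the effect of the chunk size on the number of sends can be observed.
 */
typedef struct copy_stats
{
	char	   *out;
	size_t		out_len;
	long		nsends;
} copy_stats;

static void
send_chunk(copy_stats *stats, const char *buf, size_t len)
{
	memcpy(stats->out + stats->out_len, buf, len);
	stats->out_len += len;
	stats->nsends++;
}

static void
copy_in_chunks(const char *src, size_t src_len, size_t chunk_size,
			   copy_stats *stats)
{
	size_t		off = 0;

	while (off < src_len)
	{
		size_t		n = src_len - off;

		if (n > chunk_size)
			n = chunk_size;
		send_chunk(stats, src + off, n);
		off += n;
	}
}
```

A larger chunk_size means fewer sends, but each individual write is more likely to exceed the OS pipe or socket buffer and block, which is the trade-off being discussed here.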
Probably what this suggests is that you want the largest buffer size\nthat doesn't cause you to overrun the network/pipe buffer and no larger.\nUnfortunately, I have no idea how we'd figure that out dynamically, and I\ndon't see a reason to believe that everyone will have the same size buffers.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
    "msg_date": "Wed, 13 May 2020 10:19:41 -0400",
    "msg_from": "Robert Haas <robertmhaas@gmail.com>",
    "msg_from_op": true,
    "msg_subject": "Re: refactoring basebackup.c"
  },
  {
    "msg_contents": "Hi,\n\nOn Wed, May 13, 2020 at 7:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> So the patch came out slightly faster at 8kB and slightly slower in the\n> other tests. That's kinda strange. I wonder if it's just noise. How much do\n> the results vary run to run?\n>\nIt is not varying much except for 8kB run. Please see below details for\nboth runs of each scenario.\n\n8kb 32kb (default value) 128kB 1024kB\nWIthout refactor\npatch 1st run real 10m50.924s\nuser 1m29.774s\nsys 9m13.058s real 8m36.245s\nuser 1m8.471s\nsys 7m21.520s real 7m8.690s\nuser 0m54.840s\nsys 6m1.725s real 18m16.898s\nuser 1m39.105s\nsys 9m42.803s\n2nd run real 10m22.718s\nuser 1m23.629s\nsys 8m51.410s real 8m44.455s\nuser 1m7.896s\nsys 7m28.909s real 6m54.299s\nuser 0m55.690s\nsys 5m46.502s real 18m3.511s\nuser 1m38.197s\nsys 9m36.517s\nWIth refactor\npatch 1st run real 10m11.350s\nuser 1m25.038s\nsys 8m39.226s real 8m56.226s\nuser 1m9.774s\nsys 7m41.032s real 7m26.678s\nuser 0m54.833s\nsys 6m20.057s real 19m5.218s\nuser 1m44.122s\nsys 10m17.623s\n2nd run real 11m30.500s\nuser 1m45.221s\nsys 9m37.815s real 9m4.103s\nuser 1m6.893s\nsys 7m49.393s real 7m26.713s\nuser 0m54.868s\nsys 6m19.652s real 18m17.230s\nuser 1m42.749s\nsys 9m53.704s\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
    "msg_date": "Thu, 14 May 2020 07:50:22 +0530",
    "msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
    "msg_from_op": false,
    "msg_subject": "Re: refactoring basebackup.c"
  },
  {
    "msg_contents": "Hi,\n\nI have repeated the experiment with 8K block size and found that the\nresults are not varying much after applying the patch.\nPlease find the details below.\n\n*Backup type*: local backup using pg_basebackup\n*Data size*: Around 200GB (200 tables - each table around 1.05 GB)\n*TAR_SEND_SIZE value*: 8kb\n\n*Server details:*\nRAM: 500 GB CPU details: Architecture: x86_64 CPU op-mode(s): 32-bit,\n64-bit Byte Order: Little Endian CPU(s): 128 Filesystem: ext4\n\n*Results:*\n\nIteration WIthout refactor\npatch WIth refactor\npatch\n1st run real 10m19.001s\nuser 1m37.895s\nsys 8m33.008s real 9m45.291s\nuser 1m23.192s\nsys 8m14.993s\n2nd run real 
9m33.970s\nuser 1m19.490s\nsys 8m6.062s real 9m30.560s\nuser 1m22.124s\nsys 8m0.979s\n3rd run real 9m19.327s\nuser 1m21.772s\nsys 7m50.613s real 8m59.241s\nuser 1m19.001s\nsys 7m32.645s\n4th run real 9m56.873s\nuser 1m22.370s\nsys 8m27.054s real 9m52.290s\nuser 1m22.175s\nsys 8m23.052s\n5th run real 9m45.343s\nuser 1m23.113s\nsys 8m15.418s real 9m49.633s\nuser 1m23.122s\nsys 8m19.240s\n\nLater I connected with Suraj to validate the experiment details and found\nthat the setup and steps followed are exactly the same in this\nexperiment when compared with the previous experiment.\n\nThanks,\nDipesh\n\nOn Thu, May 14, 2020 at 7:50 AM Suraj Kharage <\nsuraj.kharage@enterprisedb.com> wrote:\n\n> Hi,\n>\n> On Wed, May 13, 2020 at 7:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>>\n>> So the patch came out slightly faster at 8kB and slightly slower in the\n>> other tests. That's kinda strange. I wonder if it's just noise. How much do\n>> the results vary run to run?\n>>\n> It is not varying much except for 8kB run. 
Please see below details for\n> both runs of each scenario.\n>\n> 8kb 32kb (default value) 128kB 1024kB\n> WIthout refactor\n> patch 1st run real 10m50.924s\n> user 1m29.774s\n> sys 9m13.058s real 8m36.245s\n> user 1m8.471s\n> sys 7m21.520s real 7m8.690s\n> user 0m54.840s\n> sys 6m1.725s real 18m16.898s\n> user 1m39.105s\n> sys 9m42.803s\n> 2nd run real 10m22.718s\n> user 1m23.629s\n> sys 8m51.410s real 8m44.455s\n> user 1m7.896s\n> sys 7m28.909s real 6m54.299s\n> user 0m55.690s\n> sys 5m46.502s real 18m3.511s\n> user 1m38.197s\n> sys 9m36.517s\n> WIth refactor\n> patch 1st run real 10m11.350s\n> user 1m25.038s\n> sys 8m39.226s real 8m56.226s\n> user 1m9.774s\n> sys 7m41.032s real 7m26.678s\n> user 0m54.833s\n> sys 6m20.057s real 19m5.218s\n> user 1m44.122s\n> sys 10m17.623s\n> 2nd run real 11m30.500s\n> user 1m45.221s\n> sys 9m37.815s real 9m4.103s\n> user 1m6.893s\n> sys 7m49.393s real 7m26.713s\n> user 0m54.868s\n> sys 6m19.652s real 18m17.230s\n> user 1m42.749s\n> sys 9m53.704s\n>\n>\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n> EnterpriseDB Corporation,\n> The Postgres Database Company.\n>\n\nHi,I have repeated the experiment with 8K block size and found that the results are not varying much after applying the patch. 
Please find the details below.Backup type: local backup using pg_basebackupData size: Around 200GB (200 tables - each table around 1.05 GB)TAR_SEND_SIZE value: 8kbServer details:RAM: 500 GB\nCPU details:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 128\nFilesystem: ext4Results:IterationWIthout refactor patchWIth refactor patch1st runreal 10m19.001suser 1m37.895ssys 8m33.008sreal 9m45.291suser 1m23.192ssys 8m14.993s2nd runreal 9m33.970suser 1m19.490ssys 8m6.062sreal 9m30.560suser 1m22.124ssys 8m0.979s3rd runreal 9m19.327suser 1m21.772ssys 7m50.613sreal 8m59.241suser 1m19.001ssys 7m32.645s4th runreal 9m56.873suser 1m22.370ssys 8m27.054sreal 9m52.290suser 1m22.175ssys 8m23.052s5th runreal 9m45.343suser 1m23.113ssys 8m15.418sreal 9m49.633suser 1m23.122ssys 8m19.240sLater I connected with Suraj to validate the experiment details and found that the setup and steps followed are exactly the same in this experiment when compared with the previous experiment. Thanks,DipeshOn Thu, May 14, 2020 at 7:50 AM Suraj Kharage <suraj.kharage@enterprisedb.com> wrote:Hi,On Wed, May 13, 2020 at 7:49 PM Robert Haas <robertmhaas@gmail.com> wrote:So the patch came out slightly faster at 8kB and slightly slower in the other tests. That's kinda strange. I wonder if it's just noise. How much do the results vary run to run?It is not varying much except for 8kB run. 
Please see below details for both runs of each scenario.8kb 32kb (default value)128kB1024kBWIthout refactor patch1st runreal\t10m50.924suser\t1m29.774ssys\t9m13.058sreal 8m36.245suser 1m8.471ssys 7m21.520sreal\t7m8.690suser\t0m54.840ssys\t6m1.725sreal\t18m16.898suser\t1m39.105ssys\t9m42.803s2nd runreal 10m22.718suser 1m23.629ssys 8m51.410sreal\t8m44.455suser\t1m7.896ssys\t7m28.909sreal 6m54.299suser 0m55.690ssys 5m46.502sreal 18m3.511suser 1m38.197ssys 9m36.517sWIth refactor patch1st runreal 10m11.350suser 1m25.038ssys 8m39.226sreal 8m56.226suser 1m9.774ssys 7m41.032sreal 7m26.678suser 0m54.833ssys 6m20.057sreal 19m5.218suser 1m44.122ssys 10m17.623s2nd runreal 11m30.500suser 1m45.221ssys 9m37.815sreal 9m4.103suser 1m6.893ssys 7m49.393sreal\t7m26.713suser\t0m54.868ssys\t6m19.652sreal 18m17.230suser 1m42.749ssys 9m53.704s -- --Thanks & Regards, Suraj kharage, EnterpriseDB Corporation, The Postgres Database Company.", "msg_date": "Tue, 30 Jun 2020 10:45:43 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Jun 30, 2020 at 10:45 AM Dipesh Pandit <dipesh.pandit@gmail.com>\nwrote:\n\n> Hi,\n>\n> I have repeated the experiment with 8K block size and found that the\n> results are not varying much after applying the patch.\n> Please find the details below.\n>\n>\n> Later I connected with Suraj to validate the experiment details and found\n> that the setup and steps followed are exactly the same in this\n> experiment when compared with the previous experiment.\n>\n>\nThanks Dipesh.\nIt looks like the results are not varying much with your run as you\nfollowed the same steps.\nOne of my run with 8kb which took more time than others might be because of\nnoise at that time.\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com\n\nOn Tue, Jun 30, 2020 at 10:45 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:Hi,I have repeated the experiment with 8K 
block size and found that the results are not varying much after applying the patch. Please find the details below.Later I connected with Suraj to validate the experiment details and found that the setup and steps followed are exactly the same in this experiment when compared with the previous experiment. Thanks Dipesh.It looks like the results are not varying much with your run as you followed the same steps. One of my run with 8kb which took more time than others might be because of noise at that time.-- --Thanks & Regards, Suraj kharage,  edbpostgres.com", "msg_date": "Tue, 30 Jun 2020 11:49:34 +0530", "msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, May 8, 2020 at 4:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> So it might be good if I'd remembered to attach the patches. Let's try\n> that again.\n\nHere's an updated patch set. This is now rebased over master and\nincludes as 0001 the patch I posted separately at\nhttp://postgr.es/m/CA+TgmobAczXDRO_Gr2euo_TxgzaH1JxbNxvFx=HYvBinefNH8Q@mail.gmail.com\nbut drops some other patches that were committed meanwhile. 0002-0009\nof this series are basically the same as 0004-0011 from the previous\nseries, except for rebasing and fixing a bug I discovered in what's\nnow 0006. 0012 does a refactoring of pg_basebackup along similar lines\nto the server-side refactoring from patches earlier in the series.\n0012 is a really terrible, hacky, awful demonstration of how this\ninfrastructure can support server-side compression. If you apply it\nand take a tar-format backup without -R, you will get .tar files that\nare actually .tar.gz files. You can rename them, decompress them, and\nuse pg_verifybackup to check that everything is OK. 
If you try to do\nanything else with 0012 applied, everything will break.\n\nIn the process of working on this, I learned a lot about how\npg_basebackup actually works, and found out about a number of things\nthat, with the benefit of hindsight, seem like they might not have\nbeen the best way to go.\n\n1. pg_basebackup -R injects recovery.conf (on older versions) or\ninjects standby.signal and appends to postgresql.auto.conf (on newer\nversions) by parsing the tar file sent by the server and editing it on\nthe fly. From the point of view of server-side compression, this is\nnot ideal, because if you want to make these kinds of changes when\nserver-side compression is in use, you'd have to decompress the stream\non the client side in order to figure out where in the steam you ought\nto inject your changes. But having to do that is a major expense. If\nthe client instead told the server what to change when generating the\narchive, and the server did it, this expense could be avoided. It\nwould have the additional advantage that the backup manifest could\nreflect the effects of those changes; right now it doesn't, and\npg_verifybackup just knows to expect differences in those files.\n\n2. According to the comments, some tar programs require two tar blocks\n(i.e. 512-byte blocks) of zero bytes at the end of an archive. The\nserver does not generate these blocks of zero bytes, so it basically\ncreates a tar file that works fine with my copy of tar but might break\nwith somebody else's. Instead, the client appends 1024 zero bytes to\nthe end of every file it receives from the server. That is an odd way\nof fixing this problem, and it makes things rather inflexible. If the\nserver sends you any kind of a file OTHER THAN a tar file with the\nlast 1024 zero bytes stripped off, then adding 1024 zero bytes will be\nthe wrong thing to do. 
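For reference, the end-of-archive convention at issue is the POSIX ustar one: a tar stream is a sequence of 512-byte-aligned members followed by at least two 512-byte blocks of zeros. A minimal sketch (in Python rather than the server's C, with a made-up member name) showing that members plus that trailer form an archive other tools can read as-is:

```python
import io
import tarfile

BLOCK = 512

def ustar_member(name: str, data: bytes) -> bytes:
    """One tar member: a ustar header block plus data padded to 512 bytes."""
    hdr = bytearray(BLOCK)
    hdr[0:len(name)] = name.encode()         # file name, NUL padded
    hdr[100:108] = b"0000644\x00"            # mode
    hdr[108:116] = b"0000000\x00"            # uid
    hdr[116:124] = b"0000000\x00"            # gid
    hdr[124:136] = b"%011o\x00" % len(data)  # size, octal
    hdr[136:148] = b"00000000000\x00"        # mtime
    hdr[148:156] = b" " * 8                  # checksum field is spaces while summing
    hdr[156] = ord("0")                      # typeflag: regular file
    hdr[257:263] = b"ustar\x00"              # magic
    hdr[263:265] = b"00"                     # version
    hdr[148:156] = b"%06o\x00 " % sum(hdr)   # 6 octal digits, NUL, space
    pad = (-len(data)) % BLOCK
    return bytes(hdr) + data + b"\x00" * pad

# A conforming archive: members, then the two-zero-block trailer.
payload = ustar_member("hello.txt", b"hello, world\n")
archive = payload + b"\x00" * (2 * BLOCK)

with tarfile.open(fileobj=io.BytesIO(archive)) as tf:
    assert tf.extractfile("hello.txt").read() == b"hello, world\n"
```

The point of the complaint above is exactly this split: if the server emitted `archive` (trailer included) instead of just `payload`, the client could write the bytes through untouched whatever the archive format.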
It would be better if the server just generated\nfully correct tar files (whatever we think that means) and the client\nwrote out exactly what it got from the server. Then, we could have the\nserver generate cpio archives or zip files or gzip-compressed tar\nfiles or lz4-compressed tar files or anything we like, and the client\nwouldn't really need to care as long as it didn't need to extract\nthose archives. That seems a lot cleaner.\n\n3. The way that progress reporting works relies on the server knowing\nexactly how large the archive sent to the client is going to be.\nProgress as reckoned by the client is equal to the number of archive\npayload bytes the client has received. This works OK with a tar\nbecause we know how big the tar file is going to be based on the size\nof the input files we intend to send, but it's unsuitable for any sort\nof compressed archive (tar.gz, zip, whatever) because the compression\nratio cannot be predicted in advance. It would be better if the server\nsent the payload bytes (possibly compressed) interleaved with progress\nindicators, so that the client could correctly indicate that, say, the\nbackup is 30% complete because 30GB of 100GB has been processed on the\nserver side, even though the amount of data actually received by the\nclient might be 25GB or 20GB or 10GB or whatever because it got\ncompressed before transmission.\n\n4. A related consideration is that we might want to have an option to\ndo something with the backup other than send it to the client. For\nexample, it might be useful to have an option for pg_basebackup to\ntell the server to write the backup files to some specified server\ndirectory, or to, say, S3. There are security concerns there, and I'm\nnot proposing to do anything about this immediately, but it seems like\nsomething we might eventually want to have. 
In such a case, we are not\ngoing to send any payload to the client, but the client probably still\nwants progress indicators, so the current system of coupling progress\nto the number of bytes received by the client breaks down for that\nreason also.\n\n5. As things stand today, the client must know exactly how many\narchives it should expect to receive from the server and what each one\nis. It can do that, because it knows to expect one archive per\ntablespace, and the archive must be an uncompressed tarfile, so there\nis no ambiguity. But, if the server could send archives to other\nplaces, or send other kinds of archives to the client, then this would\nbecome more complex. There is no intrinsic reason why the logic on the\nclient side can't simply be made more complicated in order to cope,\nbut it doesn't seem like great design, because then every time you\nenhance the server, you've also got to enhance the client, and that\nlimits cross-version compatibility, and also seems more fragile. I\nwould rather that the server advertise the number of archives and the\nnames of each archive to the client explicitly, allowing the client to\nbe dumb unless it needs to post-process (e.g. extract) those archives.\n\nPutting all of the above together, what I propose - but have not yet\ntried to implement - is a new COPY sub-protocol for taking base\nbackups. Instead of sending a COPY stream per archive, the server\nwould send a single COPY stream where the first byte of each message\nis a type indicator, like we do with the replication sub-protocol\ntoday. For example, if the first byte is 'a' that could indicate that\nwe're beginning a new archive and the rest of the message would\nindicate the archive name and perhaps some flags or options. If the\nfirst byte is 'p' that could indicate that we're sending archive\npayload, perhaps with the first four bytes of the message being\nprogress, i.e. 
the number of newly-processed bytes on the server side\nprior to any compression, and the remaining bytes being payload. On\nreceipt of such a message, the client would increment the progress\nindicator by the value indicated in those first four bytes, and then\nprocess the remaining bytes by writing them to a file or whatever\nbehavior the user selected via -Fp, -Ft, -Z, etc. To be clear, I'm not\nsaying that this specific thing is the right thing, just something of\nthis sort. The server would need to continue supporting the current\nmulti-copy protocol for compatibility with older pg_basebackup\nversions, and pg_basebackup would need to continue to support it for\ncompatibility with older server versions, but we could use the new\napproach going forward. (Or, we could break compatibility, but that\nwould probably be unpopular and seems unnecessary and even risky to me\nat this point.)\n\nThe ideas in the previous paragraph would address #3-#5 directly, but\nthey also indirectly address #2 because while we're switching\nprotocols we could easily move the padding with zero bytes to the\nserver side, and I think we should. #1 is a bit of a separate\nconsideration. To tackle #1 along the lines proposed above, the client\nneeds a way to send the recovery.conf contents to the server so that\nthe server can inject them into the tar file. It's not exactly clear\nto me what the best way of permitting this is, and maybe there's a\ntotally different approach that would be altogether better. One thing\nto consider is that we might well want the client to be able to send\n*multiple* chunks of data to the server at the start of a backup. For\ninstance, suppose we want to support incremental backups. I think the\nright approach is for the client to send the backup_manifest file from\nthe previous full backup to the server. What exactly the server does\nwith it afterward depends on your preferred approach, but the\nnecessary information is there. 
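To make the 'a'/'p' framing sketched above concrete, here is a toy model of it (Python, with a hypothetical layout: one type byte, and for 'p' a 4-byte network-order progress delta before the payload — none of this is the actual protocol):

```python
import struct

def archive_start_msg(name: bytes) -> bytes:
    """'a' message: announces a new archive by name."""
    return b"a" + name

def payload_msg(progress_delta: int, payload: bytes) -> bytes:
    """'p' message: 4-byte big-endian progress delta, then payload bytes."""
    return b"p" + struct.pack("!I", progress_delta) + payload

def consume(messages):
    """Client-side sketch: track progress and collect per-archive payload."""
    archives = {}
    current = None
    progress = 0
    for msg in messages:
        kind, body = msg[:1], msg[1:]
        if kind == b"a":
            current = body.decode()
            archives[current] = bytearray()
        elif kind == b"p":
            (delta,) = struct.unpack("!I", body[:4])
            progress += delta        # pre-compression bytes processed server-side
            archives[current] += body[4:]
    return progress, archives

stream = [
    archive_start_msg(b"base.tar"),
    payload_msg(8192, b"<compressed chunk 1>"),
    payload_msg(8192, b"<compressed chunk 2>"),
]
progress, archives = consume(stream)
assert progress == 16384
assert archives["base.tar"] == b"<compressed chunk 1><compressed chunk 2>"
```

Note how the progress counter advances by the server-side delta even though the payload bytes the client actually received are fewer — which is the decoupling being argued for.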
Maybe incremental backup is based on\ncomparing cryptographic checksums, so the server looks at all the\nfiles and sends to the client those where the checksum (hopefully\nSHA-something!) does not match. I wouldn't favor this approach myself,\nbut I know some people like it. Or maybe it's based on finding blocks\nmodified since the LSN of the previous backup; the manifest has enough\ninformation for that to work, too. In such an approach, there can be\naltogether new files with old LSNs, because files can be flat-copied\nwithout changing block LSNs, so it's important to have the complete\nlist of files from the previous backup, and that too is in the\nmanifest. There are even timestamps for the bold among you. Anyway, my\npoint is to advocate for a design where the client says (1) I want a\nbackup with these options and then (2) here are 0, 1, or >1 files\n(recovery parameters and/or backup manifest and/or other things) in\nsupport of that and then the server hands back a stream of archives\nwhich the client may or may not choose to post-process.\n\nIt's tempting to think about solving this problem by appealing to\nCopyBoth, but I think that might be the wrong idea. The reason we use\nCopyBoth for the replication subprotocol is because there's periodic\nmessages flowing in both directions that are only loosely coupled to\neach other. Apart from reading frequently enough to avoid a deadlock\nbecause both sides have full write buffers, each end of the connection\ncan kind of do whatever it wants. But for the kinds of use cases I'm\ntalking about here, that's not so. First the client talks and the\nserver acknowledges, then the reverse. That doesn't mean we couldn't\nuse CopyBoth, but maybe a CopyIn followed by a CopyOut would be more\nstraightforward. I am not sure of the details here and am happy to\nhear the ideas of others.\n\nOne final thought is that the options framework for pg_basebackup is a\nlittle unfortunate. 
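As a sketch of the checksum-based incremental selection described just above (Python, with invented relation paths; a real implementation would parse the manifest's JSON and stream file contents rather than hold them in memory):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def files_to_resend(prev_manifest: dict, current_files: dict) -> set:
    """Checksum-based incremental: resend files that are new or whose
    contents changed since the manifest of the previous full backup."""
    resend = set()
    for path, data in current_files.items():
        if prev_manifest.get(path) != sha256(data):
            resend.add(path)
    return resend

prev = {"base/1/16384": sha256(b"old contents"),
        "base/1/16385": sha256(b"unchanged")}
now = {"base/1/16384": b"new contents",    # modified since last backup
       "base/1/16385": b"unchanged",       # identical, so skipped
       "base/1/16386": b"brand new file"}  # absent from manifest, so sent
assert files_to_resend(prev, now) == {"base/1/16384", "base/1/16386"}
```

The LSN-based variant mentioned above would instead compare each block's LSN against the start LSN of the previous backup, using the manifest's file list to catch flat-copied files whose block LSNs are old.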
As of today, what the client receives, always, is\na series of tar files. If you say -Fp, it doesn't change the backup\nformat; it just extracts the tar files. If you say -Ft, it doesn't. If\nyou say -Ft but also -Z, it compresses the tar files. Thinking just\nabout server-side compression and ignoring for the moment more remote\nfeatures like alternate archive formats (e.g. zip) or things like\nstoring the backup to an alternate location rather than returning it\nto the client, you probably want the client to be able to specify at\nleast (1) server-side compression (perhaps with one of several\nalgorithms) and the client just writes the results, (2) server-side\ncompression (still with a choice of algorithm) and the client\ndecompresses but does not extract, (3) server-side compression (still\nwith a choice of algorithms) and the client decompresses and extracts,\n(4) client-side compression (with a choice of algorithms), and (5)\nclient-side extraction. You might also want (6) server-side\ncompression (with a choice of algorithms) and the client decompresses\nand then re-compresses with a different algorithm (e.g. lz4 on the\nserver to save bandwidth at moderate CPU cost into parallel bzip2 on\nthe client for minimum archival storage). Or, as also discussed\nupthread, you might want (7) server-side compression of each file\nindividually, so that you get a seekable archive of individually\ncompressed files (e.g. to support fast delta restore). I think that\nwith these refactoring patches - and the wire protocol redesign\nmentioned above - it is very reasonable to offer as many of these\noptions as we believe users will find useful, but it is not very clear\nhow to extend the current command-line option framework to support\nthem. 
It probably would have been better if pg_basebackup, instead of\nhaving -Fp and -Ft, just had an --extract option that you could either\nspecify or omit, because that would not have presumed anything about\nthe archive format, but the existing structure is well-baked at this\npoint.\n\nThanks,\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 29 Jul 2020 11:31:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nOn 2020-07-29 11:31:26 -0400, Robert Haas wrote:\n> Here's an updated patch set. This is now rebased over master and\n> includes as 0001 the patch I posted separately at\n> http://postgr.es/m/CA+TgmobAczXDRO_Gr2euo_TxgzaH1JxbNxvFx=HYvBinefNH8Q@mail.gmail.com\n> but drops some other patches that were committed meanwhile. 0002-0009\n> of this series are basically the same as 0004-0011 from the previous\n> series, except for rebasing and fixing a bug I discovered in what's\n> now 0006. 0012 does a refactoring of pg_basebackup along similar lines\n> to the server-side refactoring from patches earlier in the series.\n\nHave you tested whether this still works against older servers? Or do\nyou think we should not have that as a goal?\n\n\n> 1. pg_basebackup -R injects recovery.conf (on older versions) or\n> injects standby.signal and appends to postgresql.auto.conf (on newer\n> versions) by parsing the tar file sent by the server and editing it on\n> the fly. From the point of view of server-side compression, this is\n> not ideal, because if you want to make these kinds of changes when\n> server-side compression is in use, you'd have to decompress the stream\n> on the client side in order to figure out where in the steam you ought\n> to inject your changes. But having to do that is a major expense. 
If\n> the client instead told the server what to change when generating the\n> archive, and the server did it, this expense could be avoided. It\n> would have the additional advantage that the backup manifest could\n> reflect the effects of those changes; right now it doesn't, and\n> pg_verifybackup just knows to expect differences in those files.\n\nHm. I don't think I terribly like the idea of things like -R having to\nbe processed server side. That'll be awfully annoying to keep working\nacross versions, for one. But perhaps the config file should just not be\nin the main tar file going forward?\n\nI think we should eventually be able to use one archive for multiple\npurposes, e.g. to set up a standby as well as using it for a base\nbackup. Or multiple standbys with different tablespace remappings.\n\n\n> 2. According to the comments, some tar programs require two tar blocks\n> (i.e. 512-byte blocks) of zero bytes at the end of an archive. The\n> server does not generate these blocks of zero bytes, so it basically\n> creates a tar file that works fine with my copy of tar but might break\n> with somebody else's. Instead, the client appends 1024 zero bytes to\n> the end of every file it receives from the server. That is an odd way\n> of fixing this problem, and it makes things rather inflexible. If the\n> server sends you any kind of a file OTHER THAN a tar file with the\n> last 1024 zero bytes stripped off, then adding 1024 zero bytes will be\n> the wrong thing to do. It would be better if the server just generated\n> fully correct tar files (whatever we think that means) and the client\n> wrote out exactly what it got from the server. Then, we could have the\n> server generate cpio archives or zip files or gzip-compressed tar\n> files or lz4-compressed tar files or anything we like, and the client\n> wouldn't really need to care as long as it didn't need to extract\n> those archives. That seems a lot cleaner.\n\nYea.\n\n\n> 5. 
As things stand today, the client must know exactly how many\n> archives it should expect to receive from the server and what each one\n> is. It can do that, because it knows to expect one archive per\n> tablespace, and the archive must be an uncompressed tarfile, so there\n> is no ambiguity. But, if the server could send archives to other\n> places, or send other kinds of archives to the client, then this would\n> become more complex. There is no intrinsic reason why the logic on the\n> client side can't simply be made more complicated in order to cope,\n> but it doesn't seem like great design, because then every time you\n> enhance the server, you've also got to enhance the client, and that\n> limits cross-version compatibility, and also seems more fragile. I\n> would rather that the server advertise the number of archives and the\n> names of each archive to the client explicitly, allowing the client to\n> be dumb unless it needs to post-process (e.g. extract) those archives.\n\nISTM that that can help to some degree, but things like tablespace\nremapping etc IMO aren't best done server side, so I think the client\nwill continue to need to know about the contents to a significant\ndegree?\n\n\n> Putting all of the above together, what I propose - but have not yet\n> tried to implement - is a new COPY sub-protocol for taking base\n> backups. Instead of sending a COPY stream per archive, the server\n> would send a single COPY stream where the first byte of each message\n> is a type indicator, like we do with the replication sub-protocol\n> today. For example, if the first byte is 'a' that could indicate that\n> we're beginning a new archive and the rest of the message would\n> indicate the archive name and perhaps some flags or options. If the\n> first byte is 'p' that could indicate that we're sending archive\n> payload, perhaps with the first four bytes of the message being\n> progress, i.e. 
the number of newly-processed bytes on the server side\n> prior to any compression, and the remaining bytes being payload. On\n> receipt of such a message, the client would increment the progress\n> indicator by the value indicated in those first four bytes, and then\n> process the remaining bytes by writing them to a file or whatever\n> behavior the user selected via -Fp, -Ft, -Z, etc.\n\nWonder if there's a way to get this to be less stateful. It seems a bit\nugly that the client would know what the last 'a' was for a 'p'? Perhaps\nwe could actually make 'a' include an identifier for each archive, and\nthen 'p' would append to a specific archive? That would then also\nallow for concurrent processing of those archives on the server side.\n\nI'd personally rather have a separate message type for progress and\npayload. Seems odd to have to send payload messages with 0 payload just\nbecause we want to update progress (in case of uploading to\ne.g. S3). And I think it'd be nice if we could have a more extensible\nprogress measurement approach than a fixed length prefix. E.g. it might\nbe nice to allow it to report both the overall progress, as well as a\nper archive progress. Or we might want to send progress when uploading\nto S3, even when not having pre-calculated the total size of the data\ndirectory.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 31 Jul 2020 09:49:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Jul 31, 2020 at 12:49 PM Andres Freund <andres@anarazel.de> wrote:\n> Have you tested whether this still works against older servers? Or do\n> you think we should not have that as a goal?\n\nI haven't tested that recently but I intended to keep it working. I'll\nmake sure to nail that down before I get to the point of committing\nanything, but I don't expect big problems. 
It's kind of annoying to\nhave so much backward compatibility stuff here but I think ripping any\nof that out should wait for another time.\n\n> Hm. I don't think I terribly like the idea of things like -R having to\n> be processed server side. That'll be awfully annoying to keep working\n> across versions, for one. But perhaps the config file should just not be\n> in the main tar file going forward?\n\nThat'd be a user-visible change, though, whereas what I'm proposing\nisn't. Instead of directly injecting stuff, the client can just send\nit to the server and have the server inject it, provided the server is\nnew enough. Cross-version issues don't seem to be any worse than now.\nThat being said, I don't love it, either. We could just suggest to\npeople that using -R together with server compression is a bad\ncombination.\n\n> I think we should eventually be able to use one archive for multiple\n> purposes, e.g. to set up a standby as well as using it for a base\n> backup. Or multiple standbys with different tablespace remappings.\n\nI don't think I understand your point here.\n\n> ISTM that that can help to some degree, but things like tablespace\n> remapping etc IMO aren't best done server side, so I think the client\n> will continue to need to know about the contents to a significant\n> degree?\n\nIf I'm not mistaken, those mappings are only applied with -Fp i.e. if\nwe're extracting. And it's no problem to jigger things in that case;\nwe can only do this if we understand the archive in the first place.\nThe problem is when you have to decompress and recompress to jigger\nthings.\n\n> Wonder if there's a way to get this to be less stateful. It seems a bit\n> ugly that the client would know what the last 'a' was for a 'p'? Perhaps\n> we could actually make 'a' include an identifier for each archive, and\n> then 'p' would append to a specific archive? 
Which would then also would\n> allow for concurrent processing of those archives on the server side.\n\n...says the guy working on asynchronous I/O. I don't know, it's not a\nbad idea, but I think we'd have to change a LOT of code to make it\nactually do something useful. I feel like this could be added as a\nlater extension of the protocol, rather than being something that we\nnecessarily need to do now.\n\n> I'd personally rather have a separate message type for progress and\n> payload. Seems odd to have to send payload messages with 0 payload just\n> because we want to update progress (in case of uploading to\n> e.g. S3). And I think it'd be nice if we could have a more extensible\n> progress measurement approach than a fixed length prefix. E.g. it might\n> be nice to allow it to report both the overall progress, as well as a\n> per archive progress. Or we might want to send progress when uploading\n> to S3, even when not having pre-calculated the total size of the data\n> directory.\n\nI don't mind a separate message type here, but if you want merging of\nshort messages with adjacent longer messages to generate a minimal\nnumber of system calls, that might have some implications for the\nother thread where we're talking about how to avoid extra memory\ncopies when generating protocol messages. If you don't mind them going\nout as separate network packets, then it doesn't matter.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 31 Jul 2020 13:50:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "\n\n> On Jul 29, 2020, at 8:31 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, May 8, 2020 at 4:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> So it might be good if I'd remembered to attach the patches. 
Let's try\n>> that again.\n> \n> Here's an updated patch set.\n\nHi Robert,\n\nv2-0001 through v2-0009 still apply cleanly, but v2-0010 no longer applies. It seems to be conflicting with Heikki's work from August. Could you rebase please?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 21 Oct 2020 09:14:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Oct 21, 2020 at 12:14 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> v2-0001 through v2-0009 still apply cleanly, but v2-0010 no longer applies. It seems to be conflicting with Heikki's work from August. Could you rebase please?\n\nHere at last is a new version. I've dropped the \"bbarchiver\" patch for\nnow, added a new patch that I'll talk about below, and revised the\nothers. I'm pretty happy with the code now, so I guess the main things\nthat I'd like feedback on are (1) whether design changes seem to be\nneeded and (2) the UI. Once we have that stuff hammered out, I'll work\non adding documentation, which is missing at present. The interesting\npatches in terms of functionality are 0006 and 0007; the rest is\npreparatory refactoring.\n\n0006 adds a concept of base backup \"targets,\" which means that it lets\nyou send the base backup to someplace other than the client. You\nspecify the target using a new \"-t\" option to pg_basebackup. By way of\nexample, 0006 adds a \"blackhole\" target which throws the backup away\ninstead of sending it anywhere, and also a \"server\" target which\nstores the backup to the server filesystem in lieu of streaming it to\nthe client. So you can say something like \"pg_basebackup -Xnone -Ft -t\nserver:/backup/2021-07-08\" and, provided that you're superuser, the\nserver will try to drop the backup there. 
At present, you can't use\n-Fp or -Xfetch or -Xstream with a backup target, because that\nfunctionality is implemented on the client side. I think that's an\nacceptable restriction. Eventually I imagine we will want to have\ntargets like \"aws\" or \"s3\" or maybe some kind of plug-in system for\nnew targets. I haven't designed anything like that yet, but I think\nit's probably not all that hard to generalize what I've got.\n\n0007 adds server-side compression; currently, it only supports\nserver-side compression using gzip, but I hope that it won't be hard\nto generalize that to support LZ4 as well, and Andres told me he\nthinks we should aim to support zstd since that library has built-in\nparallel compression which is very appealing in this context. So you\nsay something like \"pg_basebackup -Ft --server-compression=gzip -D\n/backup/2021-07-08\" or, if you want that compressed backup stored on\nthe server and compressed as hard as possible, you could say\n\"pg_basebackup -Xnone -Ft --server-compression=gzip9 -t\nserver:/backup/2021-07-08\". Unfortunately, here again there are a\nnumber of features that are implemented on the client side, and they\ndon't work in combination with this. -Fp could be made to work by\nteaching the client to decompress; I just haven't written the code to\ndo that. It's probably not very useful in general, but maybe there's a\nuse case if you're really tight on network bandwidth. Making -R work\nlooks outright useless, because the client would have to get the whole\ncompressed tarfile from the server and then uncompress it, edit the\ntar file, and recompress. That seems like a thing no one can possibly\nwant. Also, if you say pg_basebackup -Ft -D- >whatever.tar, the server\ninjects the backup manifest into the tarfile, which if you used\n--server-compression would require decompressing and recompressing the\nwhole thing, so it doesn't seem worth supporting. It's more likely to\nbe a footgun than to help anybody. 
This option can be used with\n-Xstream or -Xfetch, but it doesn't compress pg_wal.tar, because\nthat's generated on the client side.\n\nThe thing I'm really unhappy with here is the -F option to\npg_basebackup, which presently allows only p for plain or t for tar.\nFor purposes of these patches, I've essentially treated this as if -Fp\nmeans \"I want the tar files the server sends to be extracted\" and\n\"-Ft\" as if it means \"I'm happy with them the way they are.\" Under\nthat interpretation, it's fine for --server-compression to cause e.g.\nbase.tar.gz to be written, because that's what the server sent. But\nit's not really a \"tar\" output format; it's a \"tar.gz\" output format.\nHowever, it doesn't seem to make any sense to define -Fz to mean \"i\nwant tar.gz output\" because -Z or -z already produces tar.gz output\nwhen used with -Ft, and also because it would be redundant to make\npeople specify both -Fz and --server-compression. Similarly, when you\nuse --target, the output format is arguably, well, nothing. I mean,\nsome tar files got stored to the target, but you don't have them, but\nagain it seems redundant to have people specify --target and then also\nhave to change the argument to -F. Hindsight being 20-20, I think we\nwould have been better off not having a -Ft or -Fp option at all, and\nhaving an --extract option that says you want to extract what the\nserver sends you, but it's probably too late to make that change now.\nOr maybe it isn't, and we should just break command-line argument\ncompatibility for v15. I don't know. Opinions appreciated, especially\nif they are nuanced.\n\nIf you're curious about what the other patches in the series do,\nhere's a very fast recap; see commit messages for more. 0001 revises\nthe grammar for some replication commands to use an extensible-options\nsyntax. 0002 is a trivial refactoring of basebackup.c. 
0003 and 0004\nrefactor the server's basebackup.c and the client's pg_basebackup.c,\nrespectively, by introducing abstractions called bbsink and\nbbstreamer. 0005 introduces a new COPY sub-protocol for taking base\nbackups. I think it's worth mentioning that I believe that this\nrefactoring is quite powerful and could let us do a bunch of other\nthings that this patch set doesn't attempt. For instance, since this\nmakes it pretty easy to implement server-side compression, it could\nprobably also pretty easily be made to do server-side encryption, if\nyou're brave enough to want to have a discussion on pgsql-hackers\nabout how to design an encryption feature.\n\nThanks to my colleague Tushar Ahuja for helping test some of this code.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 8 Jul 2021 11:56:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 7/8/21 9:26 PM, Robert Haas wrote:\n> Here at last is a new version.\nPlease refer to this scenario, where a backup target using \n--server-compression is closing the server\nunexpectedly if we don't provide the --no-manifest option\n\n[tushar@localhost bin]$ ./pg_basebackup --server-compression=gzip4  -t \nserver:/tmp/data_1  -Xnone\nNOTICE:  WAL archiving is not enabled; you must ensure that all required \nWAL segments are copied through other means to complete the backup\npg_basebackup: error: could not read COPY data: server closed the \nconnection unexpectedly\n     This probably means the server terminated abnormally\n     before or while processing the request.\n\nif we try to check with -Ft, then this same scenario is working:\n\n[tushar@localhost bin]$ ./pg_basebackup --server-compression=gzip4  -Ft \n-D data_0 -Xnone\nNOTICE:  WAL archiving is not enabled; you must ensure that all required \nWAL segments are copied through other means to complete the backup\n[tushar@localhost bin]$\n\n-- 
\nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Mon, 12 Jul 2021 17:51:06 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Jul 12, 2021 at 5:51 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>\n> On 7/8/21 9:26 PM, Robert Haas wrote:\n> > Here at last is a new version.\n> Please refer this scenario ,where backup target using\n> --server-compression is closing the server\n> unexpectedly if we don't provide -no-manifest option\n>\n> [tushar@localhost bin]$ ./pg_basebackup --server-compression=gzip4 -t\n> server:/tmp/data_1 -Xnone\n> NOTICE: WAL archiving is not enabled; you must ensure that all required\n> WAL segments are copied through other means to complete the backup\n> pg_basebackup: error: could not read COPY data: server closed the\n> connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n>\n\nI think the problem is that bbsink_gzip_end_archive() is not\nforwarding the end request to the next bbsink. 
The attached patch should\nfix it.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 16 Jul 2021 12:43:27 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 7/8/21 9:26 PM, Robert Haas wrote:\n> Here at last is a new version.\nif i try to perform pg_basebackup using \"-t server \" option against \nlocalhost V/S remote machine ,\ni can see difference in backup size.\n\ndata directory whose size is\n\n[edb@centos7tushar bin]$ du -sch data/\n578M    data/\n578M    total\n\n-h=localhost\n\n[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/all_data2*-h \nlocalhost*   -Xnone --no-manifest -P -v\npg_basebackup: initiating base backup, waiting for checkpoint to complete\npg_basebackup: checkpoint completed\nNOTICE:  all required WAL segments have been archived\n329595/329595 kB (100%), 1/1 tablespace\npg_basebackup: base backup completed\n\n[edb@centos7tushar bin]$ du -sch /tmp/all_data2\n322M    /tmp/all_data2\n322M    total\n[edb@centos7tushar bin]$\n\n-h=remote\n\n[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/all_data2 *-h \n<remote IP>* -Xnone --no-manifest -P -v\npg_basebackup: initiating base backup, waiting for checkpoint to complete\npg_basebackup: checkpoint completed\nNOTICE:  all required WAL segments have been archived\n170437/170437 kB (100%), 1/1 tablespace\npg_basebackup: base backup completed\n\n[edb@0 bin]$ du -sch /tmp/all_data2\n167M    /tmp/all_data2\n167M    total\n[edb@0 bin]$\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Mon, 19 Jul 2021 16:34:27 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Jul 16, 2021 at 12:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 5:51 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> >\n> > On 7/8/21 9:26 PM, Robert Haas wrote:\n> > > Here at last is a new version.\n> > Please refer this scenario ,where backup target using\n> > --server-compression is closing the server\n> > unexpectedly if we don't provide -no-manifest option\n> >\n> > [tushar@localhost bin]$ ./pg_basebackup --server-compression=gzip4 -t\n> > server:/tmp/data_1 -Xnone\n> > NOTICE: WAL archiving is not enabled; 
you must ensure that all required\n> > WAL segments are copied through other means to complete the backup\n> > pg_basebackup: error: could not read COPY data: server closed the\n> > connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> >\n>\n> I think the problem is that bbsink_gzip_end_archive() is not\n> forwarding the end request to the next bbsink. The attached patch so\n> fix it.\n\nI was going through the patch; I think the refactoring made the base\nbackup code really clean and readable. I have a few minor\nsuggestions.\n\nv3-0003\n\n1.\n+ Assert(sink->bbs_next != NULL);\n+ bbsink_begin_archive(sink->bbs_next, gz_archive_name);\n\nI have noticed that the interface for forwarding the request to the next\nbbsink is not uniform; for example, bbsink_gzip_begin_archive() is\ncalling bbsink_begin_archive(sink->bbs_next, gz_archive_name); for\nforwarding the request to the next bbsink, whereas\nbbsink_progress_begin_backup() is calling\nbbsink_forward_begin_backup(sink); I think it will be good if we keep\nthe usage uniform.\n\n2.\nI have noticed that bbsink_copytblspc_* are not forwarding the\nrequest to the next sink; that's probably because we assume this should\nalways be the last sink. I agree that it's true for this patch, but the\ncommit message of the patch says that in the future this might change, so\nwouldn't it be good to keep the interface generic? I mean,\nbbsink_copytblspc_new() should take the next sink as an input and the\ncaller can pass it as NULL. And the other APIs can also try to\nforward the request if next is not NULL?\n\n3.\nIt would make more sense to order the functions in\nbasebackup_progress.c the same as in the other files, i.e.\nbbsink_progress_begin_backup, bbsink_progress_archive_contents and\nthen bbsink_progress_end_archive, and this will also be in sync with\nthe function pointer declarations in bbsink_ops.\n\nv3-0005-\n4.\n+ *\n+ * 'copystream' sends a starts a single COPY OUT operation and transmits\n+ * all the archives and the manifest if present during the course of that\n\ntypo 'copystream' sends a starts a single COPY OUT --> 'copystream'\nsends a single COPY OUT\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Jul 2021 17:03:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 7/16/21 12:43 PM, Dilip Kumar wrote:\n> I think the problem is that bbsink_gzip_end_archive() is not\n> forwarding the end request to the next bbsink. The attached patch so\n> fix it.\n\nThanks Dilip. 
Reported issue seems to be fixed now with your patch\n\n[edb@centos7tushar bin]$ ./pg_basebackup --server-compression=gzip4  -t \nserver:/tmp/data_2 -v  -Xnone -R\npg_basebackup: initiating base backup, waiting for checkpoint to complete\npg_basebackup: checkpoint completed\nNOTICE:  all required WAL segments have been archived\npg_basebackup: base backup completed\n[edb@centos7tushar bin]$\n\nOR\n\n[edb@centos7tushar bin]$ ./pg_basebackup   -t server:/tmp/pv1 -Xnone   \n--server-compression=gzip4 -r 1024  -P\nNOTICE:  all required WAL segments have been archived\n23133/23133 kB (100%), 1/1 tablespace\n[edb@centos7tushar bin]$\n\nPlease refer to this scenario, where the -R option is working with '-t server' \nbut not with -Ft\n\n--not working\n\n[edb@centos7tushar bin]$ ./pg_basebackup --server-compression=gzip4  \n-Ft  -D ccv   -Xnone  -R --no-manifest\npg_basebackup: error: unable to parse archive: base.tar.gz\npg_basebackup: only tar archives can be parsed\npg_basebackup: the -R option requires pg_basebackup to parse the archive\npg_basebackup: removing data directory \"ccv\"\n\n--working\n\n[edb@centos7tushar bin]$ ./pg_basebackup --server-compression=gzip4 -t   \nserver:/tmp/ccv    -Xnone  -R --no-manifest\nNOTICE:  all required WAL segments have been archived\n[edb@centos7tushar bin]$\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Mon, 19 Jul 2021 18:02:43 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Jul 19, 2021 at 6:02 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>\n> On 7/16/21 12:43 PM, Dilip Kumar wrote:\n> > I think the problem is that bbsink_gzip_end_archive() is not\n> > forwarding the end request to the next bbsink. The attached patch so\n> > fix it.\n>\n> Thanks Dilip. 
Reported issue seems to be fixed now with your patch\n\nThanks for the confirmation.\n\n> Please refer this scenario ,where -R option is working with '-t server'\n> but not with -Ft\n>\n> --not working\n>\n> [edb@centos7tushar bin]$ ./pg_basebackup --server-compression=gzip4\n> -Ft -D ccv -Xnone -R --no-manifest\n> pg_basebackup: error: unable to parse archive: base.tar.gz\n> pg_basebackup: only tar archives can be parsed\n> pg_basebackup: the -R option requires pg_basebackup to parse the archive\n> pg_basebackup: removing data directory \"ccv\"\n\nAs per the error message and the code, if we are giving -R then we need to\ninject the recovery configuration file, and that is only supported with the\ntar format; since you are enabling server compression, the output is no\nlonger a plain .tar, so it is giving an error.\n\n> --working\n>\n> [edb@centos7tushar bin]$ ./pg_basebackup --server-compression=gzip4 -t\n> server:/tmp/ccv -Xnone -R --no-manifest\n> NOTICE: all required WAL segments have been archived\n> [edb@centos7tushar bin]$\n\nI am not sure why this is working; from the code I could not find whether,\nwhen the backup target is the server, we are doing anything with the -R\noption or just silently ignoring it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Jul 2021 20:29:06 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "\n\n> On Jul 8, 2021, at 8:56 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> The interesting\n> patches in terms of functionality are 0006 and 0007;\n\nThe difficulty in v3-0007 with pg_basebackup only knowing how to parse tar archives seems to be a natural consequence of not sufficiently abstracting out the handling of the tar format.
If the bbsink and bbstreamer abstractions fully encapsulated a set of parsing callbacks, then pg_basebackup wouldn't contain things like:\n\n streamer = bbstreamer_tar_parser_new(streamer);\n\nbut instead would use the parser callbacks without knowledge of whether they were parsing tar vs. cpio vs. whatever. It just seems really odd that pg_basebackup is using the extensible abstraction layer and then defeating the purpose by knowing too much about the format. It might even be a useful exercise to write cpio support into this patch set rather than waiting until v16, just to make sure the abstraction layer doesn't have tar-specific assumptions left over.\n\n\n printf(_(\" -F, --format=p|t output format (plain (default), tar)\\n\"));\n\n printf(_(\" -z, --gzip compress tar output\\n\"));\n printf(_(\" -Z, --compress=0-9 compress tar output with given compression level\\n\"));\n\nThis is the pre-existing --help output, not changed by your patch, but if you anticipate that other output formats will be supported in future releases, perhaps it's better not to write the --help output in such a way as to imply that -z and -Z are somehow connected with the choice of tar format? Would changing the --help now make for less confusion later? I'm just asking...\n\nThe new options to pg_basebackup should have test coverage in src/bin/pg_basebackup/t/010_pg_basebackup.pl, though I expect you are waiting to hammer out the interface before writing the tests.\n\n> the rest is\n> preparatory refactoring.\n\npatch v3-0001:\n\nThe new function AppendPlainCommandOption writes too many spaces, which does no harm, but seems silly, resulting in lines like:\n\n LOG: received replication command: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, WAIT 0, MANIFEST 'yes')\n\n\npatch v3-0003:\n\nThe introduction of the sink abstraction seems incomplete, as basebackup.c still has knowledge of things like tar headers. Calls like _tarWriteHeader(sink, ...) 
feel like an abstraction violation. I expected perhaps this would get addressed in later patches, but it doesn't.\n\n+ * 'bbs_buffer' is the buffer into which data destined for the bbsink\n+ * should be stored. It must be a multiple of BLCKSZ.\n+ *\n+ * 'bbs_buffer_length' is the allocated length of the buffer.\n\nThe length must be a multiple of BLCKSZ, not the pointer.\n\n\npatch-v3-0005:\n\n+ * 'copystream' sends a starts a single COPY OUT operation and transmits\n\ntoo many verbs.\n\n+ * Regardless of which method is used, we sent a result set with\n\n\"is used\" vs. \"sent\" verb tense mismatch.\n\n+ * So we only check it after the number of bytes sine the last check reaches\n\ntypo. s/sine/since/\n\n- * (2) we need to inject backup_manifest or recovery configuration into it.\n+ * (2) we need to inject backup_manifest or recovery configuration into\n+ * it.\n\nsrc/bin/pg_basebackup/pg_basebackup.c contains word wrap changes like the above which would better be left to a different commit, if done at all.\n\n+ if (state.manifest_file !=NULL)\n\nNeed a space after !=\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 19 Jul 2021 11:51:44 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Jul 19, 2021 at 2:51 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The difficulty in v3-0007 with pg_basebackup only knowing how to parse tar archives seems to be a natural consequence of not sufficiently abstracting out the handling of the tar format. If the bbsink and bbstreamer abstractions fully encapsulated a set of parsing callbacks, then pg_basebackup wouldn't contain things like:\n>\n> streamer = bbstreamer_tar_parser_new(streamer);\n>\n> but instead would use the parser callbacks without knowledge of whether they were parsing tar vs. cpio vs. whatever. 
It just seems really odd that pg_basebackup is using the extensible abstraction layer and then defeating the purpose by knowing too much about the format. It might even be a useful exercise to write cpio support into this patch set rather than waiting until v16, just to make sure the abstraction layer doesn't have tar-specific assumptions left over.\n\nWell, I had a patch in an earlier patch set that tried to get\nknowledge of tar out of basebackup.c, but it couldn't use the bbsink\nabstraction; it needed a whole separate abstraction layer which I had\ncalled bbarchiver with a different API. So I dropped it, for fear of\nbeing told, not without some justification, that I was just changing\nthings for the sake of changing them, and also because having exactly\none implementation of some interface is really not great. I do\nconceptually like the idea of making the whole thing flexible enough\nto generate cpio or zip archives, because like you I think that having\ntar-specific stuff all over the place is grotty, but I have a feeling\nthere's little market demand for having pg_basebackup produce cpio,\npax, zip, iso, etc. archives. On the other hand, server-side\ncompression and server-side backup seem like functionality with real\nutility. Still, if you or others want to vote for resurrecting\nbbarchiver on the grounds that general code cleanup is worthwhile for\nits own sake, I'm OK with that, too.\n\nI don't really understand what your problem is with how the patch set\nleaves pg_basebackup. On the server side, because I dropped the\nbbarchiver stuff, basebackup.c still ends up knowing a bunch of stuff\nabout tar. pg_basebackup.c, however, really doesn't know anything much\nabout tar any more. It knows that if it's getting a tar file and needs\nto parse a tar file then it had better call the tar parsing code, but\nthat seems difficult to avoid. 
What we can avoid, and I think the\npatch set does, is pg_basebackup.c having any real knowledge of what\nthe tar parser is doing under the hood.\n\nThanks also for the detailed comments. I'll try to the right number of\nverbs in each sentence in the next version of the patch. I will also\nlook into the issues mentioned by Dilip and Tushar.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Jul 2021 14:57:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "\n\n> On Jul 20, 2021, at 11:57 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I don't really understand what your problem is with how the patch set\n> leaves pg_basebackup.\n\nI don't have a problem with how the patch set leaves pg_basebackup. \n\n> On the server side, because I dropped the\n> bbarchiver stuff, basebackup.c still ends up knowing a bunch of stuff\n> about tar. pg_basebackup.c, however, really doesn't know anything much\n> about tar any more. It knows that if it's getting a tar file and needs\n> to parse a tar file then it had better call the tar parsing code, but\n> that seems difficult to avoid.\n\nI was only imagining having a callback for injecting manifests or recovery configurations. It is not necessary that this be done in the current patch set, or perhaps ever.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 20 Jul 2021 13:03:40 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Jul 20, 2021 at 4:03 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I was only imagining having a callback for injecting manifests or recovery configurations. 
It is not necessary that this be done in the current patch set, or perhaps ever.\n\nA callback where?\n\nI actually think the ideal scenario would be if the server always did\nall the work and the client wasn't involved in editing the tarfile,\nbut it's not super-easy to get there from here. We could add an option\nto tell the server whether to inject the manifest into the archive,\nwhich probably wouldn't be too bad. For it to inject the recovery\nconfiguration, we'd have to send that configuration to the server\nsomehow. I thought about using COPY BOTH mode instead of COPY OUT mode\nto allow for stuff like that, but it seems pretty complicated, and I\nwasn't really sure that we'd get consensus that it was better even if\nI went to the trouble of coding it up.\n\nIf we don't do that and stick with the current system where it's\nhandled on the client side, then I agree that we want to separate the\ntar-specific concerns from the injection-type concerns, which the\npatch does by making those operations different kinds of bbstreamer\nthat know only a relatively limited amount about what each other are\ndoing. You get [server] => [tar parser] => [recovery injector] => [tar\narchiver], where the [recovery injector] step nukes the archive file\nheaders for the files it adds or modifies, and the [tar archiver] step\nfixes them up again. 
So the only thing that the [recovery injector]\npiece needs to know is that if it makes any changes to a file, it\nshould send that file to the next step with a 0-length archive header,\nand all the [tar archiver] piece needs to know is that already-valid\nheaders can be left alone and 0-length ones need to be regenerated.\n\nThere may be a better scheme; I don't think this is perfectly elegant.\nI do think it's better than what we've got now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Jul 2021 11:09:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "\n\n> On Jul 21, 2021, at 8:09 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> A callback where?\n\nIf you were going to support lots of formats, not just tar, you might want the streamer class for each format to have a callback which sets up the injector, rather than having CreateBackupStreamer do it directly. Even then, having now studied CreateBackupStreamer a bit more, the idea seems less appealing than it did initially. I don't think it makes things any cleaner when only supporting tar, and maybe not even when supporting multiple formats, so I'll withdraw the suggestion.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 21 Jul 2021 09:11:49 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Jul 21, 2021 at 12:11 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> If you were going to support lots of formats, not just tar, you might want the streamer class for each format to have a callback which sets up the injector, rather than having CreateBackupStreamer do it directly. 
Even then, having now studied CreateBackupStreamer a bit more, the idea seems less appealing than it did initially. I don't think it makes things any cleaner when only supporting tar, and maybe not even when supporting multiple formats, so I'll withdraw the suggestion.\n\nGotcha. I think if we had a lot of formats I'd probably make a\nseparate function where you passed in the file extension and archive\ntype and it hands you back a parser for the appropriate kind of\narchive, or something like that. And then maybe a second, similar\nfunction where you pass in the injector and archive type and it wraps\nan archiver of the right type around it and hands that back. But I\ndon't think that's worth doing until we have 2 or 3 formats, which may\nor may not happen any time in the forseeable future.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Jul 2021 12:21:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 7/19/21 8:29 PM, Dilip Kumar wrote:\n> I am not sure why this is working, from the code I could not find if\n> the backup target is server then are we doing anything with the -R\n> option or we are just silently ignoring it\n\nOK, in an  another scenario  I can see , \"-t server\" working with \n\"--server-compression\" option  but not with -z  or -Z ?\n\n\"-t  server\" with option \"-z\"  / or (-Z )\n\n[tushar@localhost bin]$ ./pg_basebackup -t server:/tmp/dataN -Xnone  -z  \n--no-manifest -p 9033\npg_basebackup: error: only tar mode backups can be compressed\nTry \"pg_basebackup --help\" for more information.\n\ntushar@localhost bin]$ ./pg_basebackup -t server:/tmp/dataNa -Z 1    \n-Xnone  --server-compression=gzip4  --no-manifest -p 9033\npg_basebackup: error: only tar mode backups can be compressed\nTry \"pg_basebackup --help\" for more information.\n\n\"-t server\" with \"server-compression\"  (working)\n\n[tushar@localhost 
bin]$ ./pg_basebackup -t server:/tmp/dataN -Xnone  \n--server-compression=gzip4  --no-manifest -p 9033\nNOTICE:  WAL archiving is not enabled; you must ensure that all required \nWAL segments are copied through other means to complete the backup\n[tushar@localhost bin]$\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 22 Jul 2021 22:44:31 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Thu, Jul 22, 2021 at 1:14 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> On 7/19/21 8:29 PM, Dilip Kumar wrote:\n> > I am not sure why this is working, from the code I could not find if\n> > the backup target is server then are we doing anything with the -R\n> > option or we are just silently ignoring it\n>\n> OK, in an another scenario I can see , \"-t server\" working with\n> \"--server-compression\" option but not with -z or -Z ?\n\nRight. 
The error messages or documentation might need some work, but\nit's expected that you won't be able to do client-side compression if\nthe backup is being sent someplace other than to the client.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Jul 2021 14:36:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": ">\n> 0007 adds server-side compression; currently, it only supports\n> server-side compression using gzip, but I hope that it won't be hard\n> to generalize that to support LZ4 as well, and Andres told me he\n> thinks we should aim to support zstd since that library has built-in\n> parallel compression which is very appealing in this context.\n>\n\nThanks, Robert for laying the foundation here.\nSo, I gave a try to LZ4 streaming API for server-side compression.\nLZ4 APIs are documented here[1].\n\nWith the attached WIP patch, I am now able to take the backup using the lz4\ncompression. 
The attached patch is basically applicable on top of Robert's\nV3\npatch-set[2].\n\nI could take the backup using the command:\npg_basebackup -t server:/tmp/data_lz4 -Xnone --server-compression=lz4\n\nFurther, when restored the backup `/tmp/data_lz4` and started the server, I\ncould see the tables I created, along with the data inserted on the original\nserver.\n\nWhen I tried to look into the binary difference between the original data\ndirectory and the backup `data_lz4` directory here is how it looked:\n\n$ diff -qr data/ /tmp/data_lz4\nOnly in /tmp/data_lz4: backup_label\nOnly in /tmp/data_lz4: backup_manifest\nOnly in data/base: pgsql_tmp\nOnly in /tmp/data_lz4: base.tar\nOnly in /tmp/data_lz4: base.tar.lz4\nFiles data/global/pg_control and /tmp/data_lz4/global/pg_control differ\nFiles data/logfile and /tmp/data_lz4/logfile differ\nOnly in data/pg_stat: db_0.stat\nOnly in data/pg_stat: global.stat\nOnly in data/pg_subtrans: 0000\nOnly in data/pg_wal: 000000010000000000000099.00000028.backup\nOnly in data/pg_wal: 00000001000000000000009A\nOnly in data/pg_wal: 00000001000000000000009B\nOnly in data/pg_wal: 00000001000000000000009C\nOnly in data/pg_wal: 00000001000000000000009D\nOnly in data/pg_wal: 00000001000000000000009E\nOnly in data/pg_wal/archive_status:\n000000010000000000000099.00000028.backup.done\nOnly in data/: postmaster.opts\n\nFor now, what concerns me here is, the following `LZ4F_compressUpdate()`\nAPI,\nis the one which is doing the core work of streaming compression:\n\nsize_t LZ4F_compressUpdate(LZ4F_cctx* cctx,\n void* dstBuffer, size_t dstCapacity,\n const void* srcBuffer, size_t srcSize,\n const LZ4F_compressOptions_t* cOptPtr);\n\nwhere, `dstCapacity`, is basically provided by the earlier call to\n`LZ4F_compressBound()` which provides minimum `dstCapacity` required to\nguarantee success of `LZ4F_compressUpdate()`, given a `srcSize` and\n`preferences`, for a worst-case scenario. 
`LZ4F_compressBound()` is:\n\nsize_t LZ4F_compressBound(size_t srcSize, const LZ4F_preferences_t*\nprefsPtr);\n\nNow, a hard lesson here is that the `dstCapacity` returned by\n`LZ4F_compressBound()` even for a single byte i.e. 1 as `srcSize` is about\n~256K (it seems to have something to do with the blockSize in the lz4 frame\nthat we chose; the minimum we can have is 64K), though the actual length of\ncompressed data produced by `LZ4F_compressUpdate()` is much smaller. Whereas,\nthe destination buffer length for us i.e.\n`mysink->base.bbs_next->bbs_buffer_length` is only 32K. In the function call\n`LZ4F_compressUpdate()`, if I directly try to provide\nthis `mysink->base.bbs_next->bbs_buffer + bytes_written` as `dstBuffer` and\nthe value returned by `LZ4F_compressBound()` as the `dstCapacity`, that\nsounds very much incorrect to me, since the actual output buffer length\nremaining is much less than what is calculated for the worst case by\n`LZ4F_compressBound()`.\n\nFor now, I am creating a temporary buffer of the required size, passing it\nfor compression, asserting that the actual compressed bytes are less than\nwhatever length we have available, and then copying it to our output buffer.\n\nTo give an example, I put some logging statements, and I can see in the log:\n\"\nbytes remaining in mysink->base.bbs_next->bbs_buffer: 16537\ninput size to be compressed: 512\nestimated size for compressed buffer by LZ4F_compressBound(): 262667\nactual compressed size: 16\n\"\n\nWill really appreciate any inputs, comments, suggestions here.\n\nRegards,\nJeevan Ladhe\n\n[1] https://fossies.org/linux/lz4/doc/lz4frame_manual.html\n[2]\nhttps://www.postgresql.org/message-id/CA+TgmoYgVN=-Yoh71r3P9N7eKysd7_9b9s+1QFfFcs3w7Z-tig@mail.gmail.com", "msg_date": "Wed, 8 Sep 2021 23:43:42 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Sep 8, 2021 at 2:14 PM Jeevan 
Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> To give an example, I put some logging statements, and I can see in the log:\n> \"\n> bytes remaining in mysink->base.bbs_next->bbs_buffer: 16537\n> input size to be compressed: 512\n> estimated size for compressed buffer by LZ4F_compressBound(): 262667\n> actual compressed size: 16\n> \"\n\nThat is pretty lame. I don't know why it needs a ~256k buffer to\nproduce 16 bytes of output.\n\nThe way the gzip APIs I used work, you tell it how big the output\nbuffer is and it writes until it fills that buffer, or until the input\nbuffer is empty, whichever happens first. But this seems to be the\nother way around: you tell it how much input you have, and it tells\nyou how big a buffer it needs. To handle that elegantly, I think I\nneed to make some changes to the design of the bbsink stuff. What I'm\nthinking is that each bbsink somehow tells the next bbsink how big to\nmake the buffer. So if the LZ4 buffer is told that its buffer should\nbe at least, I don't know, say 64kB. Then it can compute how large an\noutput buffer the LZ4 library requires for 64kB. Hopefully we can\nassume that liblz4 never needs a smaller buffer for a larger input.\nThen we can assume that if a 64kB input requires, say, a 300kB output\nbuffer, every possible input < 64kB also requires an output buffer <=\n300 kB.\n\nBut we can't just say, well, we were asked to create a 64kB buffer (or\nwhatever) so let's ask the next bbsink for a 300kB buffer (or\nwhatever), because then as soon as we write any data at all into it\nthe remaining buffer space might be insufficient for the next chunk.\nSo instead what I think we should do is have bbsink_lz4 set the size\nof the next sink's buffer to its own buffer size +\nLZ4F_compressBound(its own buffer size). So in this example if it's\nasked to create a 64kB buffer and LZ4F_compressBound(64kB) = 300kB\nthen it asks the next sink to set the buffer size to 364kB. 
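Spelled out as a toy sketch (sketch_compress_bound here is an invented stand-in for LZ4F_compressBound() -- the real bound depends on the frame preferences; the scheme only needs it to be a monotone worst case):

```c
#include <assert.h>
#include <stddef.h>

/* invented stand-in for LZ4F_compressBound(); any monotone worst case works */
static size_t
sketch_compress_bound(size_t src_size)
{
	return src_size + src_size / 255 + 16;
}

/* next sink's buffer: our own buffer size plus one worst-case update */
static size_t
next_sink_buffer_size(size_t own_size)
{
	return own_size + sketch_compress_bound(own_size);
}

/* flush once a full own_size worth of compressed bytes has accumulated */
static int
time_to_flush(size_t bytes_written, size_t own_size)
{
	return bytes_written >= own_size;
}
```

While time_to_flush() is false, at most own_size - 1 bytes are in the buffer, so at least sketch_compress_bound(own_size) + 1 bytes remain free -- enough for one more worst-case compressUpdate() over a full input buffer.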
Now, that\nmeans that there will always be at least 300 kB available in the\noutput buffer until we've accumulated a minimum of 64 kB of compressed\ndata, and then at that point we can flush.\n\nI think this would be relatively clean and would avoid the need for\nthe double copying that the current design forced you to do. What do\nyou think?\n\n+ /*\n+ * If we do not have enough space left in the output buffer for this\n+ * chunk to be written, first archive the already written contents.\n+ */\n+ if (nextChunkLen > mysink->base.bbs_next->bbs_buffer_length -\nmysink->bytes_written ||\n+ mysink->bytes_written >= mysink->base.bbs_next->bbs_buffer_length)\n+ {\n+ bbsink_archive_contents(sink->bbs_next, mysink->bytes_written);\n+ mysink->bytes_written = 0;\n+ }\n\nI think this is flat-out wrong. It assumes that the compressor will\nnever generate more than N bytes of output given N bytes of input,\nwhich is not true. Not sure there's much point in fixing it now\nbecause with the changes described above this code will have to change\nanyway, but I think it's just lucky that this has worked for you in\nyour testing.\n\n+ /*\n+ * LZ4F_compressUpdate() returns the number of bytes written into output\n+ * buffer. We need to keep track of how many bytes have been cumulatively\n+ * written into the output buffer(bytes_written). But,\n+ * LZ4F_compressUpdate() returns 0 in case the data is buffered and not\n+ * written to output buffer, set autoFlush to 1 to force the writing to the\n+ * output buffer.\n+ */\n+ prefs->autoFlush = 1;\n\nI don't see why this should be necessary. 
Elsewhere you have code that\ncaters to bytes being stuck inside LZ4's buffer, so why do we also\nrequire this?\n\nThanks for researching this!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Sep 2021 15:39:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Sep 8, 2021 at 3:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> The way the gzip APIs I used work, you tell it how big the output\n> buffer is and it writes until it fills that buffer, or until the input\n> buffer is empty, whichever happens first. But this seems to be the\n> other way around: you tell it how much input you have, and it tells\n> you how big a buffer it needs. To handle that elegantly, I think I\n> need to make some changes to the design of the bbsink stuff. What I'm\n> thinking is that each bbsink somehow tells the next bbsink how big to\n> make the buffer.\n\nHere's a new patch set with that design change (and a bug fix for 0001).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 9 Sep 2021 19:55:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Thanks, Robert for your response.\n\nOn Thu, Sep 9, 2021 at 1:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Sep 8, 2021 at 2:14 PM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:\n> > To give an example, I put some logging statements, and I can see in the\n> log:\n> > \"\n> > bytes remaining in mysink->base.bbs_next->bbs_buffer: 16537\n> > input size to be compressed: 512\n> > estimated size for compressed buffer by LZ4F_compressBound(): 262667\n> > actual compressed size: 16\n> > \"\n>\n> That is pretty lame. 
I don't know why it needs a ~256k buffer to\n> produce 16 bytes of output.\n>\n\nAs I mentioned earlier, I think it has something to do with the lz4\nblocksize. Currently, I have chosen it as 256kB, which is 262144 bytes,\nand here the LZ4F_compressBound() has returned 262667 for worst-case\naccommodation of 512 bytes i.e. 262144(256kB) + 512 + I guess some\nbook-keeping bytes. If I choose to have blocksize as 64K, then this turns\nout to be: 66059 which is 65536(64 kB) + 512 + bookkeeping bytes.\n\nThe way the gzip APIs I used work, you tell it how big the output\n> buffer is and it writes until it fills that buffer, or until the input\n> buffer is empty, whichever happens first. But this seems to be the\n> other way around: you tell it how much input you have, and it tells\n> you how big a buffer it needs. To handle that elegantly, I think I\n> need to make some changes to the design of the bbsink stuff. What I'm\n> thinking is that each bbsink somehow tells the next bbsink how big to\n> make the buffer. So if the LZ4 buffer is told that its buffer should\n> be at least, I don't know, say 64kB. Then it can compute how large an\n> output buffer the LZ4 library requires for 64kB. Hopefully we can\n> assume that liblz4 never needs a smaller buffer for a larger input.\n> Then we can assume that if a 64kB input requires, say, a 300kB output\n> buffer, every possible input < 64kB also requires an output buffer <=\n> 300 kB.\n>\n\n I agree, this assumption is fair enough.\n\nBut we can't just say, well, we were asked to create a 64kB buffer (or\n> whatever) so let's ask the next sink for a 300kB buffer (or\n> whatever), because then as soon as we write any data at all into it\n> the remaining buffer space might be insufficient for the next chunk.\n> So instead what I think we should do is have bbsink_lz4 set the size\n> of the next sink's buffer to its own buffer size +\n> LZ4F_compressBound(its own buffer size). 
So in this example if it's\n> asked to create a 64kB buffer and LZ4F_compressBound(64kB) = 300kB\n> then it asks the next sink to set the buffer size to 364kB. Now, that\n> means that there will always be at least 300 kB available in the\n> output buffer until we've accumulated a minimum of 64 kB of compressed\n> data, and then at that point we can flush.\n\nI think this would be relatively clean and would avoid the need for\n> the double copying that the current design forced you to do. What do\n> you think?\n>\n\nI think this should work.\n\n\n>\n> + /*\n> + * If we do not have enough space left in the output buffer for this\n> + * chunk to be written, first archive the already written contents.\n> + */\n> + if (nextChunkLen > mysink->base.bbs_next->bbs_buffer_length -\n> mysink->bytes_written ||\n> + mysink->bytes_written >= mysink->base.bbs_next->bbs_buffer_length)\n> + {\n> + bbsink_archive_contents(sink->bbs_next, mysink->bytes_written);\n> + mysink->bytes_written = 0;\n> + }\n>\n> I think this is flat-out wrong. It assumes that the compressor will\n> never generate more than N bytes of output given N bytes of input,\n> which is not true. Not sure there's much point in fixing it now\n> because with the changes described above this code will have to change\n> anyway, but I think it's just lucky that this has worked for you in\n> your testing.\n\n\nI see your point. But for it to be accurate, I think we would then need to\nconsider the return value of LZ4F_compressBound() to check if that\nmany bytes are available. But, as explained earlier our output buffer is\nalready way smaller than that.\n\n\n>\n>\n+ /*\n> + * LZ4F_compressUpdate() returns the number of bytes written into output\n> + * buffer. We need to keep track of how many bytes have been cumulatively\n> + * written into the output buffer(bytes_written). 
But,\n> + * LZ4F_compressUpdate() returns 0 in case the data is buffered and not\n> + * written to output buffer, set autoFlush to 1 to force the writing to\n> the\n> + * output buffer.\n> + */\n> + prefs->autoFlush = 1;\n>\n> I don't see why this should be necessary. Elsewhere you have code that\n> caters to bytes being stuck inside LZ4's buffer, so why do we also\n> require this?\n>\n\nThis is needed to know the actual bytes written in the output buffer. If it\nis\n\nset to 0, then LZ4F_compressUpdate() would randomly return 0 or actual\n\nbytes are written to the output buffer, depending on whether it has buffered\n\nor really flushed data to the output buffer.\n\nIIUC, you are referring to the following comment for\nbbsink_lz4_end_archive():\n\n\n\"\n\n* There might be some data inside lz4's internal buffers; we need to get\n\n\n * that flushed out, also finalize the lz4 frame and then get that forwarded\n\n\n * to the successor sink as archive content.\n\n\"\n\n\nI think it should be modified to:\n\n\n\"\n\n* Finalize the lz4 frame and then get that forwarded to the successor sink\nas\n\n* archive content.\n\n\"\n\n\n\nRegards,\nJeevan Ladhe.\n", "msg_date": "Mon, 13 Sep 2021 15:32:45 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Sep 10, 2021 at 5:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Sep 8, 2021 at 3:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > The way the gzip APIs I used work, you tell it how big the output\n> > buffer is and it writes until it fills that buffer, or until the input\n> > buffer is empty, whichever happens first. But this seems to be the\n> > other way around: you tell it how much input you have, and it tells\n> > you how big a buffer it needs. To handle that elegantly, I think I\n> > need to make some changes to the design of the bbsink stuff. 
What I'm\n> > thinking is that each bbsink somehow tells the next bbsink how big to\n> > make the buffer.\n>\n> Here's a new patch set with that design change (and a bug fix for 0001).\n\nSeems like nothing has been done about the issue reported in [1]\n\nThis one line change shall fix the issue,\n\n--- a/src/backend/replication/basebackup_gzip.c\n+++ b/src/backend/replication/basebackup_gzip.c\n@@ -264,6 +264,8 @@ bbsink_gzip_end_archive(bbsink *sink)\n bbsink_archive_contents(sink->bbs_next, mysink->bytes_written);\n mysink->bytes_written = 0;\n }\n+\n+ bbsink_forward_end_archive(sink);\n }\n\n\n[1] https://www.postgresql.org/message-id/CAFiTN-uhg4iKA7FGWxaG9J8WD_LTx655%2BAUW3_KiK1%3DSakQy4A%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Sep 2021 16:49:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Sep 13, 2021 at 6:03 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n>> + /*\n>> + * If we do not have enough space left in the output buffer for this\n>> + * chunk to be written, first archive the already written contents.\n>> + */\n>> + if (nextChunkLen > mysink->base.bbs_next->bbs_buffer_length -\n>> mysink->bytes_written ||\n>> + mysink->bytes_written >= mysink->base.bbs_next->bbs_buffer_length)\n>> + {\n>> + bbsink_archive_contents(sink->bbs_next, mysink->bytes_written);\n>> + mysink->bytes_written = 0;\n>> + }\n>>\n>> I think this is flat-out wrong. It assumes that the compressor will\n>> never generate more than N bytes of output given N bytes of input,\n>> which is not true. Not sure there's much point in fixing it now\n>> because with the changes described above this code will have to change\n>> anyway, but I think it's just lucky that this has worked for you in\n>> your testing.\n>\n> I see your point. 
But for it to be accurate, I think we need to then\n> considered the return value of LZ4F_compressBound() to check if that\n> many bytes are available. But, as explained earlier our output buffer is\n> already way smaller than that.\n\nWell, in your last version of the patch, you kind of had two output\nbuffers: a bigger one that you use internally and then the \"official\"\none which is associated with the next sink. With my latest patch set\nyou should be able to make that go away by just arranging for the next\nsink's buffer to be as big as you need it to be. But, if we were going\nto stick with using an extra buffer, then the solution would not be to\ndo this, but to copy the internal buffer to the official buffer in\nmultiple chunks if needed. So don't bother doing this here but just\nwait and see how much data you get and then chunk it to the next\nsink's buffer, calling bbsink_archive_contents() multiple times if\nrequired. That would be annoying and expensive so I'm glad we're not\ndoing it that way, but it could be done correctly.\n\n>> + /*\n>> + * LZ4F_compressUpdate() returns the number of bytes written into output\n>> + * buffer. We need to keep track of how many bytes have been cumulatively\n>> + * written into the output buffer(bytes_written). But,\n>> + * LZ4F_compressUpdate() returns 0 in case the data is buffered and not\n>> + * written to output buffer, set autoFlush to 1 to force the writing to the\n>> + * output buffer.\n>> + */\n>> + prefs->autoFlush = 1;\n>>\n>> I don't see why this should be necessary. Elsewhere you have code that\n>> caters to bytes being stuck inside LZ4's buffer, so why do we also\n>> require this?\n>\n> This is needed to know the actual bytes written in the output buffer. 
If it is\n> set to 0, then LZ4F_compressUpdate() would randomly return 0 or actual\n> bytes are written to the output buffer, depending on whether it has buffered\n> or really flushed data to the output buffer.\n\nThe problem is that if we autoflush, I think it will cause the\ncompression ratio to be less good. Try un-lz4ing a file that is\nproduced this way and then re-lz4 it and compare the size of the\nre-lz4'd file to the original one. Compressors rely on postponing\ndecisions about how to compress until they've seen as much of the\ninput as possible, and flushing forces them to decide earlier, and\nmaybe making a decision that isn't as good as it could have been. So I\nbelieve we should look for a way of avoiding this. Now I realize\nthere's a problem there with doing that and also making sure the\noutput buffer is large enough, and I'm not quite sure how we solve\nthat problem, but there is probably a way to do it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Sep 2021 12:04:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Sep 13, 2021 at 7:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Seems like nothing has been done about the issue reported in [1]\n>\n> This one line change shall fix the issue,\n\nOops. Try this version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Sep 2021 12:12:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Sep 13, 2021 at 9:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Sep 13, 2021 at 7:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Seems like nothing has been done about the issue reported in [1]\n> >\n> > This one line change shall fix the issue,\n>\n> Oops. 
Try this version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Sep 2021 12:12:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Sep 13, 2021 at 9:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Sep 13, 2021 at 7:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Seems like nothing has been done about the issue reported in [1]\n> >\n> > This one line change shall fix the issue,\n>\n> Oops. Try this version.\n\nThanks, this version works fine.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Sep 2021 20:00:45 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hello\n\nI found that in 0001 you propose to rename a few options. Probably we could rename another option for clarity? I think FAST (it's about some bw limits?) and WAIT (wait for what? checkpoint?) option names are confusing.\nCould we replace FAST with \"CHECKPOINT [fast|spread]\" and WAIT to WAIT_WAL_ARCHIVED? I think such names would be more descriptive.\n\n-\t\tif (PQserverVersion(conn) >= 100000)\n-\t\t\t/* pg_recvlogical doesn't use an exported snapshot, so suppress */\n-\t\t\tappendPQExpBufferStr(query, \" NOEXPORT_SNAPSHOT\");\n+\t\t/* pg_recvlogical doesn't use an exported snapshot, so suppress */\n+\t\tif (use_new_option_syntax)\n+\t\t\tAppendStringCommandOption(query, use_new_option_syntax,\n+\t\t\t\t\t\t\t\t\t \"SNAPSHOT\", \"nothing\");\n+\t\telse\n+\t\t\tAppendPlainCommandOption(query, use_new_option_syntax,\n+\t\t\t\t\t\t\t\t\t \"NOEXPORT_SNAPSHOT\");\n\nIn 0002, it looks like the condition for 9.x releases was lost?\n\nAlso my gcc version 8.3.0 is not happy with v5-0007-Support-base-backup-targets.patch and produces:\n\nbasebackup.c: In function ‘parse_basebackup_options’:\nbasebackup.c:970:7: error: ‘target_str’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n errmsg(\"target '%s' does not accept a target detail\",\n ^~~~~~\n\nregards, Sergei", "msg_date": "Tue, 14 Sep 2021 18:30:22 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Thanks for the newer set of the patches Robert!\n\nI was wondering if we should change the bbs_buffer_length in bbsink to\nbe size_t instead of int, because that's what most of the 
compression\nlibraries have their length variables defined as.\n\nRegards,\nJeevan Ladhe\n\nOn Mon, Sep 13, 2021 at 9:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Sep 13, 2021 at 7:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Seems like nothing has been done about the issue reported in [1]\n> >\n> > This one line change shall fix the issue,\n>\n> Oops. Try this version.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Tue, 21 Sep 2021 17:23:54 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": ">\n> >> + /*\n> >> + * LZ4F_compressUpdate() returns the number of bytes written into\n> output\n> >> + * buffer. We need to keep track of how many bytes have been\n> cumulatively\n> >> + * written into the output buffer(bytes_written). But,\n> >> + * LZ4F_compressUpdate() returns 0 in case the data is buffered and not\n> >> + * written to output buffer, set autoFlush to 1 to force the writing\n> to the\n> >> + * output buffer.\n> >> + */\n> >> + prefs->autoFlush = 1;\n> >>\n> >> I don't see why this should be necessary. Elsewhere you have code that\n> >> caters to bytes being stuck inside LZ4's buffer, so why do we also\n> >> require this?\n> >\n> > This is needed to know the actual bytes written in the output buffer. 
If\n> it is\n> set to 0, then LZ4F_compressUpdate() would randomly return 0 or actual\n> > bytes are written to the output buffer, depending on whether it has\n> buffered\n> > or really flushed data to the output buffer.\n>\n> The problem is that if we autoflush, I think it will cause the\n> compression ratio to be less good. Try un-lz4ing a file that is\n> produced this way and then re-lz4 it and compare the size of the\n> re-lz4'd file to the original one. Compressors rely on postponing\n> decisions about how to compress until they've seen as much of the\n> input as possible, and flushing forces them to decide earlier, and\n> maybe making a decision that isn't as good as it could have been. So I\n> believe we should look for a way of avoiding this. Now I realize\n> there's a problem there with doing that and also making sure the\n> output buffer is large enough, and I'm not quite sure how we solve\n> that problem, but there is probably a way to do it.\n>\n\nYes, you are right here, and I could verify this fact with an experiment.\nWhen autoflush is 1, the file gets less compressed i.e. the compressed file\nis of more size than the one generated when autoflush is set to 0.\nBut, as of now, I couldn't think of a solution as we need to really advance\nthe\nbytes written to the output buffer so that we can write into the output\nbuffer.\n\nRegards,\nJeevan Ladhe", "msg_date": "Tue, 21 Sep 2021 18:37:37 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\n\nHere is a patch for lz4 based on the v5 set of patches. 
The patch adapts\nwith the\nbbsink changes, and is now able to make the provision for the required\nlength\nfor the output buffer using the new callback\nfunction bbsink_lz4_begin_backup().\n\nSample command to take backup:\npg_basebackup -t server:/tmp/data_lz4 -Xnone --server-compression=lz4\n\nPlease let me know your thoughts.\n\nRegards,\nJeevan Ladhe\n\nOn Mon, Sep 13, 2021 at 9:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Sep 13, 2021 at 7:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Seems like nothing has been done about the issue reported in [1]\n> >\n> > This one line change shall fix the issue,\n>\n> Oops. Try this version.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Tue, 21 Sep 2021 19:05:01 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Sep 14, 2021 at 11:30 AM Sergei Kornilov <sk@zsrv.org> wrote:\n> I found that in 0001 you propose to rename few options. Probably we could rename another option for clarify? I think FAST (it's about some bw limits?) and WAIT (wait for what? checkpoint?) option names are confusing.\n> Could we replace FAST with \"CHECKPOINT [fast|spread]\" and WAIT to WAIT_WAL_ARCHIVED? I think such names would be more descriptive.\n\nI think CHECKPOINT { 'spread' | 'fast' } is probably a good idea; the\noptions logic for pg_basebackup uses the same convention, and if\nsomebody ever wanted to introduce a third kind of checkpoint, it would\nbe a lot easier if you could just make pg_basebackup -cbanana send\nCHECKPOINT 'banana' to the server. I don't think renaming WAIT ->\nWAIT_WAL_ARCHIVED has much value. The replication grammar isn't really\nintended to be consumed directly by end-users, and it's also not clear\nthat WAIT_WAL_ARCHIVED would attract more support than any of 5 or 10\nother possible variants. 
I'd rather leave it alone.\n\n> - if (PQserverVersion(conn) >= 100000)\n> - /* pg_recvlogical doesn't use an exported snapshot, so suppress */\n> - appendPQExpBufferStr(query, \" NOEXPORT_SNAPSHOT\");\n> + /* pg_recvlogical doesn't use an exported snapshot, so suppress */\n> + if (use_new_option_syntax)\n> + AppendStringCommandOption(query, use_new_option_syntax,\n> + \"SNAPSHOT\", \"nothing\");\n> + else\n> + AppendPlainCommandOption(query, use_new_option_syntax,\n> + \"NOEXPORT_SNAPSHOT\");\n>\n> In 0002, it looks like condition for 9.x releases was lost?\n\nGood catch, thanks.\n\nI'll post an updated version of these two patches on the thread\ndedicated to those two patches, which can be found at\nhttp://postgr.es/m/CA+Tgmob2cbCPNbqGoixp0J6aib0p00XZerswGZwx-5G=0M+BMA@mail.gmail.com\n\n> Also my gcc version 8.3.0 is not happy with v5-0007-Support-base-backup-targets.patch and produces:\n>\n> basebackup.c: In function ‘parse_basebackup_options’:\n> basebackup.c:970:7: error: ‘target_str’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> errmsg(\"target '%s' does not accept a target detail\",\n> ^~~~~~\n\nOK, I'll fix that. 
Thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Sep 2021 11:25:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Sep 21, 2021 at 7:54 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> I was wondering if we should change the bbs_buffer_length in bbsink to\n> be size_t instead of int, because that's what most of the compression\n> libraries have their length variables defined as.\n\nI looked into this and found that I was already using size_t or Size\nin a bunch of related places, so this seems to make sense.\n\nHere's a new patch set, responding also to Sergei's comments.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 21 Sep 2021 12:51:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Sep 21, 2021 at 9:08 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> Yes, you are right here, and I could verify this fact with an experiment.\n> When autoflush is 1, the file gets less compressed i.e. the compressed file\n> is of more size than the one generated when autoflush is set to 0.\n> But, as of now, I couldn't think of a solution as we need to really advance the\n> bytes written to the output buffer so that we can write into the output buffer.\n\nI don't understand why you think we need to do that. What happens if\nyou just change prefs->autoFlush = 1 to set it to 0 instead? What I\nthink will happen is that you'll call LZ4F_compressUpdate a bunch of\ntimes without outputting anything, and then suddenly one of the calls\nwill produce a bunch of output all at once. But so what? 
I don't see\nthat anything in bbsink_lz4_archive_contents() would get broken by\nthat.\n\nIt would be a problem if LZ4F_compressUpdate() didn't produce anything\nand also didn't buffer the data internally, and expected us to keep\nthe input around. That we would have difficulty doing, because we\nwouldn't be calling LZ4F_compressUpdate() if we didn't need to free up\nsome space in that sink's input buffer. But if it buffers the data\ninternally, I don't know why we care.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Sep 2021 12:56:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Sep 21, 2021 at 9:35 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> Here is a patch for lz4 based on the v5 set of patches. The patch adapts with the\n> bbsink changes, and is now able to make the provision for the required length\n> for the output buffer using the new callback function bbsink_lz4_begin_backup().\n>\n> Sample command to take backup:\n> pg_basebackup -t server:/tmp/data_lz4 -Xnone --server-compression=lz4\n>\n> Please let me know your thoughts.\n\nThis pretty much looks right, with the exception of the autoFlush\nthing about which I sent a separate email. I need to write docs for\nall of this, and ideally test cases. It might also be good if\npg_basebackup had an option to un-gzip or un-lz4 archives, but I\nhaven't thought too hard about what would be required to make that\nwork.\n\n+ if (opt->compression == BACKUP_COMPRESSION_LZ4)\n\nelse if\n\n+ /* First of all write the frame header to destination buffer. */\n+ Assert(CHUNK_SIZE >= LZ4F_HEADER_SIZE_MAX);\n+ headerSize = LZ4F_compressBegin(mysink->ctx,\n+ mysink->base.bbs_next->bbs_buffer,\n+ CHUNK_SIZE,\n+ prefs);\n\nI think this is wrong. I think you should be passing bbs_buffer_length\ninstead of CHUNK_SIZE, and I think you can just delete CHUNK_SIZE. 
If\nyou think otherwise, why?\n\n+ * sink's bbs_buffer of length that can accomodate the compressed input\n\nSpelling.\n\n+ * Make it next multiple of BLCKSZ since the buffer length is expected so.\n\nThe buffer length is expected to be a multiple of BLCKSZ, so round up.\n\n+ * If we are falling short of available bytes needed by\n+ * LZ4F_compressUpdate() per the upper bound that is decided by\n+ * LZ4F_compressBound(), send the archived contents to the next sink to\n+ * process it further.\n\nIf the number of available bytes has fallen below the value computed\nby LZ4F_compressBound(), ask the next sink to process the data so that\nwe can empty the buffer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Sep 2021 13:20:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Sep 21, 2021 at 10:27 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Sep 21, 2021 at 9:08 AM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:\n> > Yes, you are right here, and I could verify this fact with an experiment.\n> > When autoflush is 1, the file gets less compressed i.e. the compressed\n> file\n> > is of more size than the one generated when autoflush is set to 0.\n> > But, as of now, I couldn't think of a solution as we need to really\n> advance the\n> > bytes written to the output buffer so that we can write into the output\n> buffer.\n>\n> I don't understand why you think we need to do that. What happens if\n> you just change prefs->autoFlush = 1 to set it to 0 instead? What I\n> think will happen is that you'll call LZ4F_compressUpdate a bunch of\n> times without outputting anything, and then suddenly one of the calls\n> will produce a bunch of output all at once. But so what? 
I don't see\n> that anything in bbsink_lz4_archive_contents() would get broken by\n> that.\n>\n> It would be a problem if LZ4F_compressUpdate() didn't produce anything\n> and also didn't buffer the data internally, and expected us to keep\n> the input around. That we would have difficulty doing, because we\n> wouldn't be calling LZ4F_compressUpdate() if we didn't need to free up\n> some space in that sink's input buffer. But if it buffers the data\n> internally, I don't know why we care.\n>\n\nIf I set prefs->autoFlush to 0, then LZ4F_compressUpdate() returns an\nerror: ERROR_dstMaxSize_tooSmall after a few iterations.\n\nAfter digging a bit in the source of LZ4F_compressUpdate() in LZ4\nrepository, I\nsee that it throws this error when the destination buffer capacity, which in\nour case is mysink->base.bbs_next->bbs_buffer_length is less than the\ncompress bound which it calculates internally by calling\nLZ4F_compressBound()\ninternally for buffered_bytes + input buffer(CHUNK_SIZE in this case). Not\nsure\nhow can we control this.\n\nRegards,\nJeevan Ladhe", "msg_date": "Wed, 22 Sep 2021 22:10:32 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Sep 21, 2021 at 10:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> + if (opt->compression == BACKUP_COMPRESSION_LZ4)\n>\n> else if\n>\n> + /* First of all write the frame header to destination buffer. */\n> + Assert(CHUNK_SIZE >= LZ4F_HEADER_SIZE_MAX);\n> + headerSize = LZ4F_compressBegin(mysink->ctx,\n> + mysink->base.bbs_next->bbs_buffer,\n> + CHUNK_SIZE,\n> + prefs);\n>\n> I think this is wrong. I think you should be passing bbs_buffer_length\n> instead of CHUNK_SIZE, and I think you can just delete CHUNK_SIZE. 
If\n> you think otherwise, why?\n>\n> + * sink's bbs_buffer of length that can accomodate the compressed input\n>\n> Spelling.\n>\n> + * Make it next multiple of BLCKSZ since the buffer length is expected so.\n>\n> The buffer length is expected to be a multiple of BLCKSZ, so round up.\n>\n> + * If we are falling short of available bytes needed by\n> + * LZ4F_compressUpdate() per the upper bound that is decided by\n> + * LZ4F_compressBound(), send the archived contents to the next sink to\n> + * process it further.\n>\n> If the number of available bytes has fallen below the value computed\n> by LZ4F_compressBound(), ask the next sink to process the data so that\n> we can empty the buffer.\n>\n\nThanks for your comments, Robert.\nHere is the patch addressing the comments, except the one regarding the\nautoFlush flag setting.\n\nKindly have a look.\n\nRegards,\nJeevan Ladhe", "msg_date": "Wed, 22 Sep 2021 22:22:40 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Sep 22, 2021 at 12:41 PM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> If I set prefs->autoFlush to 0, then LZ4F_compressUpdate() returns an\n> error: ERROR_dstMaxSize_tooSmall after a few iterations.\n>\n> After digging a bit in the source of LZ4F_compressUpdate() in LZ4 repository, I\n> see that it throws this error when the destination buffer capacity, which in\n> our case is mysink->base.bbs_next->bbs_buffer_length is less than the\n> compress bound which it calculates internally by calling LZ4F_compressBound()\n> internally for buffered_bytes + input buffer(CHUNK_SIZE in this case). Not sure\n> how can we control this.\n\nUggh. It had been my guess was that the reason why\nLZ4F_compressBound() was returning such a large value was because it\nhad to allow for the possibility of bytes inside of its internal\nbuffers. 
But, if the amount of internally buffered data counts against\nthe argument that you have to pass to LZ4F_compressBound(), then that\nmakes it more complicated.\n\nStill, there's got to be a simple way to make this work, and it can't\ninvolve setting autoFlush. Like, look at this:\n\nhttps://github.com/lz4/lz4/blob/dev/examples/frameCompress.c\n\nThat uses the same APIs that we're here and a fixed-size input buffer\nand a fixed-size output buffer, just as we have here, to compress a\nfile. And it probably works, because otherwise it likely wouldn't be\nin the \"examples\" directory. And it sets autoFlush to 0.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Sep 2021 11:22:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": ">\n> Still, there's got to be a simple way to make this work, and it can't\n> involve setting autoFlush. Like, look at this:\n>\n> https://github.com/lz4/lz4/blob/dev/examples/frameCompress.c\n>\n> That uses the same APIs that we're here and a fixed-size input buffer\n> and a fixed-size output buffer, just as we have here, to compress a\n> file. And it probably works, because otherwise it likely wouldn't be\n> in the \"examples\" directory. And it sets autoFlush to 0.\n>\n\nThanks, Robert. I have seen this example, and it is similar to what we have.\nI went through each of the steps and appears that I have done it correctly.\nI am still trying to debug and figure out where it is going wrong.\n\nI am going to try hooking the pg_basebackup with the lz4 source and\ndebug both the sources.\n\nRegards,\nJeevan Ladhe", "msg_date": "Fri, 24 Sep 2021 18:27:44 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\n\nI have fixed the autoFlush issue. Basically, I was wrongly initializing\nthe lz4 preferences in bbsink_lz4_begin_archive() instead of\nbbsink_lz4_begin_backup(). I have fixed the issue in the attached\npatch, please have a look at it.\n\nRegards,\nJeevan Ladhe\n\n\nOn Fri, Sep 24, 2021 at 6:27 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> Still, there's got to be a simple way to make this work, and it can't\n>> involve setting autoFlush. Like, look at this:\n>>\n>> https://github.com/lz4/lz4/blob/dev/examples/frameCompress.c\n>>\n>> That uses the same APIs that we're here and a fixed-size input buffer\n>> and a fixed-size output buffer, just as we have here, to compress a\n>> file. And it probably works, because otherwise it likely wouldn't be\n>> in the \"examples\" directory. And it sets autoFlush to 0.\n>>\n>\n> Thanks, Robert. 
I have seen this example, and it is similar to what we\n> have.\n> I went through each of the steps and appears that I have done it correctly.\n> I am still trying to debug and figure out where it is going wrong.\n>\n> I am going to try hooking the pg_basebackup with the lz4 source and\n> debug both the sources.\n>\n> Regards,\n> Jeevan Ladhe\n>", "msg_date": "Tue, 5 Oct 2021 15:21:11 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\n\nI think the patch v6-0007-Support-base-backup-targets.patch has broken\nthe case for multiple tablespaces. When I tried to take the backup\nfor target 'none' and extract the base.tar I was not able to locate\ntablespace_map file.\n\nI debugged and figured out in normal tar backup i.e. '-Ft' case\npg_basebackup command is sent with TABLESPACE_MAP to the server:\nBASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS,\nTABLESPACE_MAP, MANIFEST 'yes', TARGET 'client')\n\nBut, with the target command i.e. 
\"pg_basebackup -t server:/tmp/data_v1\n-Xnone\", we are not sending the TABLESPACE_MAP, here is how the command\nis sent:\nBASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, MANIFEST\n'yes', TARGET 'server', TARGET_DETAIL '/tmp/data_none')\n\nI am attaching a patch to fix this issue.\n\nWith the patch the command sent is now:\nBASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, MANIFEST\n'yes', TABLESPACE_MAP, TARGET 'server', TARGET_DETAIL '/tmp/data_none')\n\nRegards,\nJeevan Ladhe\n\nOn Tue, Sep 21, 2021 at 10:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Sep 21, 2021 at 7:54 AM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:\n> > I was wondering if we should change the bbs_buffer_length in bbsink to\n> > be size_t instead of int, because that's what most of the compression\n> > libraries have their length variables defined as.\n>\n> I looked into this and found that I was already using size_t or Size\n> in a bunch of related places, so this seems to make sense.\n>\n> Here's a new patch set, responding also to Sergei's comments.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Thu, 7 Oct 2021 17:20:10 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Thu, Oct 7, 2021 at 7:50 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> I think the patch v6-0007-Support-base-backup-targets.patch has broken\n> the case for multiple tablespaces. When I tried to take the backup\n> for target 'none' and extract the base.tar I was not able to locate\n> tablespace_map file.\n>\n> I debugged and figured out in normal tar backup i.e. '-Ft' case\n> pg_basebackup command is sent with TABLESPACE_MAP to the server:\n> BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS,\n> TABLESPACE_MAP, MANIFEST 'yes', TARGET 'client')\n>\n> But, with the target command i.e. 
\"pg_basebackup -t server:/tmp/data_v1\n> -Xnone\", we are not sending the TABLESPACE_MAP, here is how the command\n> is sent:\n> BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, MANIFEST\n> 'yes', TARGET 'server', TARGET_DETAIL '/tmp/data_none')\n>\n> I am attaching a patch to fix this issue.\n\nThanks. Here's a new patch set incorporating that change. I committed\nthe preparatory patches to add an extensible options syntax for\nCREATE_REPLICATION_SLOT and BASE_BACKUP, so those patches are no\nlonger included in this patch set. Barring objections, I will also\npush 0001, a small preparatory refactoring patch, soon.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 Oct 2021 12:07:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Oct 5, 2021 at 5:51 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> I have fixed the autoFlush issue. Basically, I was wrongly initializing\n> the lz4 preferences in bbsink_lz4_begin_archive() instead of\n> bbsink_lz4_begin_backup(). I have fixed the issue in the attached\n> patch, please have a look at it.\n\nThanks for the new patch. Seems like this is getting closer, but:\n\n+/*\n+ * Read the input buffer in CHUNK_SIZE length in each iteration and pass it to\n+ * the lz4 compression. Defined as 8k, since the input buffer is multiple of\n+ * BLCKSZ i.e. multiple of 8k.\n+ */\n+#define CHUNK_SIZE 8192\n\nBLCKSZ does not have to be 8kB.\n\n+ size_t compressedSize;\n+ int nextChunkLen = CHUNK_SIZE;\n+\n+ /* Last chunk to be read from the input. */\n+ if (avail_in < CHUNK_SIZE)\n+ nextChunkLen = avail_in;\n\nThis is the only place where CHUNK_SIZE gets used, and I don't think I\nsee any point to it. I think the 5th argument to LZ4F_compressUpdate\ncould just be avail_in. And as soon as you do that then I think\nbbsink_lz4_archive_contents() no longer needs to be a loop. 
For gzip,\nthe output buffer isn't guaranteed to be big enough to write all the\ndata, so the compression step can fail to compress all the data. But\nLZ4 forces us to make the output buffer big enough that no such\nfailure can happen. Therefore, that can't happen here except if you\nartificially limit the amount of data that you pass to\nLZ4F_compressUpdate() to something less than the size of the input\nbuffer. And I don't see any reason to do that.\n\n+ /* First of all write the frame header to destination buffer. */\n+ headerSize = LZ4F_compressBegin(mysink->ctx,\n+ mysink->base.bbs_next->bbs_buffer,\n+ mysink->base.bbs_next->bbs_buffer_length,\n+ &mysink->prefs);\n\n+ compressedSize = LZ4F_compressEnd(mysink->ctx,\n+ mysink->base.bbs_next->bbs_buffer + mysink->bytes_written,\n+ mysink->base.bbs_next->bbs_buffer_length - mysink->bytes_written,\n+ NULL);\n\nI think there's some issue with these two chunks of code. What happens\nif one of these functions wants to write more data than will fit in\nthe output buffer? It seems like either there needs to be some code\nsomeplace that ensures adequate space in the output buffer at the time\nof these calls, or else there needs to be a retry loop that writes as\nmuch of the data as possible, flushes the output buffer, and then\nloops to generate more output data. But there's clearly no retry loop\nhere, and I don't see any code that guarantees that the output buffer\nhas to be large enough (and in the case of LZ4F_compressEnd, have\nenough remaining space) either. In other words, all the same concerns\nthat apply to LZ4F_compressUpdate() also apply here ... but in\nLZ4F_compressUpdate() you seem to BOTH have a retry loop and ALSO code\nto make sure that the buffer is certain to be large enough (which is\nmore than you need, you only need one of those) and here you seem to\nhave NEITHER of those things (which is not enough, you need one or the\nother).\n\n+ /* Initialize compressor object. 
*/\n+ prefs->frameInfo.blockSizeID = LZ4F_max256KB;\n+ prefs->frameInfo.blockMode = LZ4F_blockLinked;\n+ prefs->frameInfo.contentChecksumFlag = LZ4F_noContentChecksum;\n+ prefs->frameInfo.frameType = LZ4F_frame;\n+ prefs->frameInfo.contentSize = 0;\n+ prefs->frameInfo.dictID = 0;\n+ prefs->frameInfo.blockChecksumFlag = LZ4F_noBlockChecksum;\n+ prefs->compressionLevel = 0;\n+ prefs->autoFlush = 0;\n+ prefs->favorDecSpeed = 0;\n+ prefs->reserved[0] = 0;\n+ prefs->reserved[1] = 0;\n+ prefs->reserved[2] = 0;\n\nHow about instead using memset() to zero the whole thing and then\nomitting the zero initializations? That seems like it would be less\nfragile, if the upstream structure definition ever changes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Oct 2021 13:39:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Thanks, Robert for reviewing the patch.\n\n\nOn Tue, Oct 12, 2021 at 11:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\nThis is the only place where CHUNK_SIZE gets used, and I don't think I\n> see any point to it. I think the 5th argument to LZ4F_compressUpdate\n> could just be avail_in. And as soon as you do that then I think\n> bbsink_lz4_archive_contents() no longer needs to be a loop.\n>\n\nAgree. Removed the CHUNK_SIZE and the loop.\n\n\n>\n> + /* First of all write the frame header to destination buffer. */\n> + headerSize = LZ4F_compressBegin(mysink->ctx,\n> + mysink->base.bbs_next->bbs_buffer,\n> + mysink->base.bbs_next->bbs_buffer_length,\n> + &mysink->prefs);\n>\n> + compressedSize = LZ4F_compressEnd(mysink->ctx,\n> + mysink->base.bbs_next->bbs_buffer + mysink->bytes_written,\n> + mysink->base.bbs_next->bbs_buffer_length - mysink->bytes_written,\n> + NULL);\n>\n> I think there's some issue with these two chunks of code. 
What happens\n> if one of these functions wants to write more data than will fit in\n> the output buffer? It seems like either there needs to be some code\n> someplace that ensures adequate space in the output buffer at the time\n> of these calls, or else there needs to be a retry loop that writes as\n> much of the data as possible, flushes the output buffer, and then\n> loops to generate more output data. But there's clearly no retry loop\n> here, and I don't see any code that guarantees that the output buffer\n> has to be large enough (and in the case of LZ4F_compressEnd, have\n> enough remaining space) either. In other words, all the same concerns\n> that apply to LZ4F_compressUpdate() also apply here ... but in\n> LZ4F_compressUpdate() you seem to BOTH have a retry loop and ALSO code\n> to make sure that the buffer is certain to be large enough (which is\n> more than you need, you only need one of those) and here you seem to\n> have NEITHER of those things (which is not enough, you need one or the\n> other).\n>\n\nFair enough. I have made the change in the bbsink_lz4_begin_backup() to\nmake sure we reserve enough extra bytes for the header and the footer those\nare written by LZ4F_compressBegin() and LZ4F_compressEnd() respectively.\nThe LZ4F_compressBound() when passed the input size as \"0\", would give\nthe upper bound for output buffer needed by the LZ4F_compressEnd().\n\nHow about instead using memset() to zero the whole thing and then\n> omitting the zero initializations? 
That seems like it would be less\n> fragile, if the upstream structure definition ever changes.\n>\n\nMade this change.\n\nPlease review the patch, and let me know your comments.\n\nRegards,\nJeevan Ladhe", "msg_date": "Thu, 14 Oct 2021 22:50:55 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Thu, Oct 14, 2021 at 1:21 PM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> Agree. Removed the CHUNK_SIZE and the loop.\n\nTry harder. :-)\n\nThe loop is gone, but CHUNK_SIZE itself seems to have evaded the executioner.\n\n> Fair enough. I have made the change in the bbsink_lz4_begin_backup() to\n> make sure we reserve enough extra bytes for the header and the footer those\n> are written by LZ4F_compressBegin() and LZ4F_compressEnd() respectively.\n> The LZ4F_compressBound() when passed the input size as \"0\", would give\n> the upper bound for output buffer needed by the LZ4F_compressEnd().\n\nI think this is not the best way to accomplish the goal. Adding\nLZ4F_compressBound(0) to next_buf_len makes the buffer substantially\nbigger for something that's only going to happen once. We are assuming\nin any case, I think, that LZ4F_compressBound(0) <=\nLZ4F_compressBound(mysink->base.bbs_buffer_length), so all you need to\ndo is have bbsink_end_archive() empty the buffer, if necessary, before\ncalling LZ4F_compressEnd(). With just that change, you can set\nnext_buf_len = LZ4F_HEADER_SIZE_MAX + mysink->output_buffer_bound --\nbut that's also more than you need. You can instead do next_buf_len =\nMin(LZ4F_HEADER_SIZE_MAX, mysink->output_buffer_bound). Now, you're\nprobably thinking that won't work, because bbsink_lz4_begin_archive()\ncould fill up the buffer partway, and then the first call to\nbbsink_lz4_archive_contents() could overrun it. 
But that problem can\nbe solved by reversing the order of operations in\nbbsink_lz4_archive_contents(): before you call LZ4F_compressUpdate(),\ntest whether you need to empty the buffer first, and if so, do it.\n\nThat's actually less confusing than the way you've got it, because as\nyou have it written, we don't really know why we're emptying the\nbuffer -- is it to prepare for the next call to LZ4F_compressUpdate(),\nor is it to prepare for the call to LZ4F_compressEnd()? How do we know\nnow how much space the next person writing into the buffer is going to\nneed? It seems better if bbsink_lz4_archive_contents() empties the\nbuffer before calling LZ4F_compressUpdate() if that call might not\nhave enough space, and likewise bbsink_lz4_end_archive() empties the\nbuffer before calling LZ4F_compressEnd() if that's needed. That way,\neach callback makes the space *it* needs, not the space the *next*\ncaller needs. (bbsink_lz4_end_archive() still needs to ALSO empty the\nbuffer after LZ4F_compressEnd(), so we don't orphan any data.)\n\nOn another note, if the call to LZ4F_freeCompressionContext() is\nrequired in bbsink_lz4_end_archive(), then I think this code is going\nto just leak the memory used by the compression context if an error\noccurs before this code is reached. That kind of sucks. The way to fix\nit, I suppose, is a TRY/CATCH block, but I don't think that can be\nsomething internal to basebackup_lz4.c: I think the bbsink stuff would\nneed to provide some kind of infrastructure for basebackup_lz4.c to\nuse. It would be a lot better if we could instead get LZ4 to allocate\nmemory using palloc(), but a quick Google search suggests that you\ncan't accomplish that without recompiling liblz4, and that's not\nworkable since we don't want to require a liblz4 built specifically\nfor PostgreSQL. 
Do you see any other solution?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Oct 2021 14:14:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\n\n> The loop is gone, but CHUNK_SIZE itself seems to have evaded the\nexecutioner.\n\nI am sorry, but I did not really get it. Or it is what you have pointed\nin the following paragraphs?\n\n> I think this is not the best way to accomplish the goal. Adding\n> LZ4F_compressBound(0) to next_buf_len makes the buffer substantially\n> bigger for something that's only going to happen once.\n\nYes, you are right. I missed this.\n\n> We are assuming in any case, I think, that LZ4F_compressBound(0) <=\n> LZ4F_compressBound(mysink->base.bbs_buffer_length), so all you need to\n> do is have bbsink_end_archive() empty the buffer, if necessary, before\n> calling LZ4F_compressEnd().\n\nThis is a fair enough assumption.\n\n> With just that change, you can set\n> next_buf_len = LZ4F_HEADER_SIZE_MAX + mysink->output_buffer_bound --\n> but that's also more than you need. You can instead do next_buf_len =\n> Min(LZ4F_HEADER_SIZE_MAX, mysink->output_buffer_bound). Now, you're\n> probably thinking that won't work, because bbsink_lz4_begin_archive()\n> could fill up the buffer partway, and then the first call to\n> bbsink_lz4_archive_contents() could overrun it. But that problem can\n> be solved by reversing the order of operations in\n> bbsink_lz4_archive_contents(): before you call LZ4F_compressUpdate(),\n> test whether you need to empty the buffer first, and if so, do it.\n\nI am still not able to get - how can we survive with a mere\nsize of Min(LZ4F_HEADER_SIZE_MAX, mysink->output_buffer_bound).\nLZ4F_HEADER_SIZE_MAX is defined as 19 in lz4 library. 
With this\nproposal, it is almost guaranteed that the next buffer length will\nbe always set to 19, which will result in failure of a call to\nLZ4F_compressUpdate() with the error LZ4F_ERROR_dstMaxSize_tooSmall,\neven if we had called bbsink_archive_contents() before.\n\n> That's actually less confusing than the way you've got it, because as\n> you have it written, we don't really know why we're emptying the\n> buffer -- is it to prepare for the next call to LZ4F_compressUpdate(),\n> or is it to prepare for the call to LZ4F_compressEnd()? How do we know\n> now how much space the next person writing into the buffer is going to\n> need? It seems better if bbsink_lz4_archive_contents() empties the\n> buffer before calling LZ4F_compressUpdate() if that call might not\n> have enough space, and likewise bbsink_lz4_end_archive() empties the\n> buffer before calling LZ4F_compressEnd() if that's needed. That way,\n> each callback makes the space *it* needs, not the space the *next*\n> caller needs. (bbsink_lz4_end_archive() still needs to ALSO empty the\n> buffer after LZ4F_compressEnd(), so we don't orphan any data.)\n\nSure, I get your point here.\n\n> On another note, if the call to LZ4F_freeCompressionContext() is\n> required in bbsink_lz4_end_archive(), then I think this code is going\n> to just leak the memory used by the compression context if an error\n> occurs before this code is reached. That kind of sucks.\n\nYes, the LZ4F_freeCompressionContext() is needed to clear the\nLZ4F_cctx. The structure LZ4F_cctx_s maintains internal stages\nof compression, internal buffers, etc.\n\n> The way to fix\n> it, I suppose, is a TRY/CATCH block, but I don't think that can be\n> something internal to basebackup_lz4.c: I think the bbsink stuff would\n> need to provide some kind of infrastructure for basebackup_lz4.c to\n> use. 
It would be a lot better if we could instead get LZ4 to allocate\n> memory using palloc(), but a quick Google search suggests that you\n> can't accomplish that without recompiling liblz4, and that's not\n> workable since we don't want to require a liblz4 built specifically\n> for PostgreSQL. Do you see any other solution?\n\nYou mean the way gzip allows us to use our own alloc and free functions\nby means of providing the function pointers for them. Unfortunately,\nno, LZ4 does not have that kind of provision. Maybe that makes a\ngood proposal for LZ4 library ;-).\nI cannot think of another solution to it right away.\n\nRegards,\nJeevan Ladhe", "msg_date": "Fri, 15 Oct 2021 17:24:15 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Oct 15, 2021 at 7:54 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> > The loop is gone, but CHUNK_SIZE itself seems to have evaded the executioner.\n>\n> I am sorry, but I did not really get it. Or it is what you have pointed\n> in the following paragraphs?\n\nI mean #define CHUNK_SIZE is still in the patch.\n\n> > With just that change, you can set\n> > next_buf_len = LZ4F_HEADER_SIZE_MAX + mysink->output_buffer_bound --\n> > but that's also more than you need. You can instead do next_buf_len =\n> > Min(LZ4F_HEADER_SIZE_MAX, mysink->output_buffer_bound). Now, you're\n> > probably thinking that won't work, because bbsink_lz4_begin_archive()\n> > could fill up the buffer partway, and then the first call to\n> > bbsink_lz4_archive_contents() could overrun it. 
But that problem can\n> > be solved by reversing the order of operations in\n> > bbsink_lz4_archive_contents(): before you call LZ4F_compressUpdate(),\n> > test whether you need to empty the buffer first, and if so, do it.\n>\n> I am still not able to get - how can we survive with a mere\n> size of Min(LZ4F_HEADER_SIZE_MAX, mysink->output_buffer_bound).\n> LZ4F_HEADER_SIZE_MAX is defined as 19 in lz4 library. With this\n> proposal, it is almost guaranteed that the next buffer length will\n> be always set to 19, which will result in failure of a call to\n> LZ4F_compressUpdate() with the error LZ4F_ERROR_dstMaxSize_tooSmall,\n> even if we had called bbsink_archive_contents() before.\n\nSorry, should have been Max(), not Min().\n\n> You mean the way gzip allows us to use our own alloc and free functions\n> by means of providing the function pointers for them. Unfortunately,\n> no, LZ4 does not have that kind of provision. Maybe that makes a\n> good proposal for LZ4 library ;-).\n> I cannot think of another solution to it right away.\n\nOK. Will give it some thought.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 15 Oct 2021 08:05:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\n\nI mean #define CHUNK_SIZE is still in the patch.\n>\n\nOops, removed now.\n\n\n> > > With just that change, you can set\n> > > next_buf_len = LZ4F_HEADER_SIZE_MAX + mysink->output_buffer_bound --\n> > > but that's also more than you need. You can instead do next_buf_len =\n> > > Min(LZ4F_HEADER_SIZE_MAX, mysink->output_buffer_bound). Now, you're\n> > > probably thinking that won't work, because bbsink_lz4_begin_archive()\n> > > could fill up the buffer partway, and then the first call to\n> > > bbsink_lz4_archive_contents() could overrun it. 
But that problem can\n> > > be solved by reversing the order of operations in\n> > > bbsink_lz4_archive_contents(): before you call LZ4F_compressUpdate(),\n> > > test whether you need to empty the buffer first, and if so, do it.\n> >\n> > I am still not able to get - how can we survive with a mere\n> > size of Min(LZ4F_HEADER_SIZE_MAX, mysink->output_buffer_bound).\n> > LZ4F_HEADER_SIZE_MAX is defined as 19 in lz4 library. With this\n> > proposal, it is almost guaranteed that the next buffer length will\n> > be always set to 19, which will result in failure of a call to\n> > LZ4F_compressUpdate() with the error LZ4F_ERROR_dstMaxSize_tooSmall,\n> > even if we had called bbsink_archive_contents() before.\n>\n> Sorry, should have been Max(), not Min().\n>\n\nAhh ok.\nI looked into the code of LZ4F_compressBound() and the header size is\nalready included in the calculations, so when we compare\nLZ4F_HEADER_SIZE_MAX and mysink->output_buffer_bound, the latter\nwill be always greater, and hence sufficient length for the output buffer. I\nmade this change in the attached patch, and also cleared the buffer by\ncalling bbsink_archive_contents() before calling LZ4F_compressUpdate().\nSimilarly cleared the buffer before calling LZ4F_compressEnd().\n\n> You mean the way gzip allows us to use our own alloc and free functions\n> > by means of providing the function pointers for them. Unfortunately,\n> > no, LZ4 does not have that kind of provision. Maybe that makes a\n> > good proposal for LZ4 library ;-).\n> > I cannot think of another solution to it right away.\n>\n> OK. 
Will give it some thought.\n\n\nI have started a thread[1] on LZ4 community for this, but so far no\nreply on that.\n\nRegards,\nJeevan Ladhe\n\n[1]\nhttps://groups.google.com/g/lz4c/c/WnJkKwBWlcM/m/zszrla2mBQAJ?utm_medium=email&utm_source=footer", "msg_date": "Wed, 20 Oct 2021 18:18:55 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Oct 15, 2021 at 8:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > You mean the way gzip allows us to use our own alloc and free functions\n> > by means of providing the function pointers for them. Unfortunately,\n> > no, LZ4 does not have that kind of provision. Maybe that makes a\n> > good proposal for LZ4 library ;-).\n> > I cannot think of another solution to it right away.\n>\n> OK. Will give it some thought.\n\nHere's a new patch set. I've tried adding a \"cleanup\" callback to the\nbbsink method and ensuring that it gets called even in case of an\nerror. 
The code for that is untested since I have no use for it with\nthe existing basebackup sink types, so let me know how it goes when\nyou try to use it for LZ4.\n\nI've also added documentation for the new pg_basebackup options in\nthis version, and I fixed up a couple of these patches to be\npgindent-clean when they previously were not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 25 Oct 2021 16:15:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Thanks, Robert for the patches.\n\nI tried to take a backup using gzip compression and got a core.\n\n$ pg_basebackup -t server:/tmp/data_gzip -Xnone --server-compression=gzip\nNOTICE: WAL archiving is not enabled; you must ensure that all required\nWAL segments are copied through other means to complete the backup\npg_basebackup: error: could not read COPY data: server closed the\nconnection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\n\nThe backtrace:\n\n(gdb) bt\n#0 0x0000000000000000 in ?? 
()\n#1 0x0000558264bfc40a in bbsink_cleanup (sink=0x55826684b5f8) at\n../../../src/include/replication/basebackup_sink.h:268\n#2 0x0000558264bfc838 in bbsink_forward_cleanup (sink=0x55826684b710) at\nbasebackup_sink.c:124\n#3 0x0000558264bf4cab in bbsink_cleanup (sink=0x55826684b710) at\n../../../src/include/replication/basebackup_sink.h:268\n#4 0x0000558264bf7738 in SendBaseBackup (cmd=0x55826683bd10) at\nbasebackup.c:1020\n#5 0x0000558264c10915 in exec_replication_command (\n cmd_string=0x5582667bc580 \"BASE_BACKUP ( LABEL 'pg_basebackup base\nbackup', PROGRESS, MANIFEST 'yes', TABLESPACE_MAP, TARGET 'server',\n TARGET_DETAIL '/tmp/data_g\nzip', COMPRESSION 'gzip')\") at walsender.c:1731\n#6 0x0000558264c8a69b in PostgresMain (dbname=0x5582667e84d8 \"\",\nusername=0x5582667e84b8 \"hadoop\") at postgres.c:4493\n#7 0x0000558264bb10a6 in BackendRun (port=0x5582667de160) at\npostmaster.c:4560\n#8 0x0000558264bb098b in BackendStartup (port=0x5582667de160) at\npostmaster.c:4288\n#9 0x0000558264bacb55 in ServerLoop () at postmaster.c:1801\n#10 0x0000558264bac2ee in PostmasterMain (argc=3, argv=0x5582667b68c0) at\npostmaster.c:1473\n#11 0x0000558264aa0950 in main (argc=3, argv=0x5582667b68c0) at main.c:198\n\nbbsink_gzip_ops have the cleanup() callback set to NULL, and when the\nbbsink_cleanup() callback is triggered, it tries to invoke a function that\nis NULL. I think either bbsink_gzip_ops should set the cleanup callback\nto bbsink_forward_cleanup or we should be calling the cleanup() callback\nfrom PG_CATCH instead of PG_FINALLY()? 
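The former would look roughly like this — a hypothetical sketch only, following the field layout suggested by basebackup_sink.h and the backtrace above, not the committed code:

```c
/*
 * With cleanup left NULL, bbsink_cleanup() calls through a NULL function
 * pointer -- hence frame #0 at address 0x0 in the backtrace.  Pointing it
 * at the generic forwarding implementation avoids that.
 */
const bbsink_ops bbsink_gzip_ops = {
	/* ... other callbacks as before ... */
	.cleanup = bbsink_forward_cleanup,	/* was NULL */
};
```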
But in the latter case, even if\nwe call from PG_CATCH, it will have a similar problem for gzip and other\nsinks which may not need a custom cleanup() callback in case there is any\nerror before the backup could finish up normally.\n\nI have attached a patch to fix this.\n\nThoughts?\n\nRegards,\nJeevan Ladhe\n\nOn Tue, Oct 26, 2021 at 1:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Oct 15, 2021 at 8:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > You mean the way gzip allows us to use our own alloc and free functions\n> > > by means of providing the function pointers for them. Unfortunately,\n> > > no, LZ4 does not have that kind of provision. Maybe that makes a\n> > > good proposal for LZ4 library ;-).\n> > > I cannot think of another solution to it right away.\n> >\n> > OK. Will give it some thought.\n>\n> Here's a new patch set. I've tried adding a \"cleanup\" callback to the\n> bbsink method and ensuring that it gets called even in case of an\n> error. The code for that is untested since I have no use for it with\n> the existing basebackup sink types, so let me know how it goes when\n> you try to use it for LZ4.\n>\n> I've also added documentation for the new pg_basebackup options in\n> this version, and I fixed up a couple of these patches to be\n> pgindent-clean when they previously were not.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Fri, 29 Oct 2021 18:28:24 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Oct 29, 2021 at 8:59 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:>\n> bbsink_gzip_ops have the cleanup() callback set to NULL, and when the\n> bbsink_cleanup() callback is triggered, it tries to invoke a function that\n> is NULL. 
I think either bbsink_gzip_ops should set the cleanup callback\n> to bbsink_forward_cleanup or we should be calling the cleanup() callback\n> from PG_CATCH instead of PG_FINALLY()? But in the latter case, even if\n> we call from PG_CATCH, it will have a similar problem for gzip and other\n> sinks which may not need a custom cleanup() callback in case there is any\n> error before the backup could finish up normally.\n>\n> I have attached a patch to fix this.\n\nYes, this is the right fix. Apologies for the oversight.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Oct 2021 09:24:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "I have implemented the cleanup callback bbsink_lz4_cleanup() in the\nattached patch.\n\n\nPlease have a look and let me know of any comments.\n\n\nRegards,\n\nJeevan Ladhe\n\nOn Fri, Oct 29, 2021 at 6:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Oct 29, 2021 at 8:59 AM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:>\n> > bbsink_gzip_ops have the cleanup() callback set to NULL, and when the\n> > bbsink_cleanup() callback is triggered, it tries to invoke a function\n> that\n> > is NULL. I think either bbsink_gzip_ops should set the cleanup callback\n> > to bbsink_forward_cleanup or we should be calling the cleanup() callback\n> > from PG_CATCH instead of PG_FINALLY()? But in the latter case, even if\n> > we call from PG_CATCH, it will have a similar problem for gzip and other\n> > sinks which may not need a custom cleanup() callback in case there is any\n> > error before the backup could finish up normally.\n> >\n> > I have attached a patch to fix this.\n>\n> Yes, this is the right fix. 
Apologies for the oversight.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Tue, 2 Nov 2021 17:22:29 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Nov 2, 2021 at 7:53 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> I have implemented the cleanup callback bbsink_lz4_cleanup() in the attached patch.\n>\n> Please have a look and let me know of any comments.\n\nLooks pretty good. I think you should work on stuff like documentation\nand tests, and I need to do some work on that stuff, too. Also, I\nthink you should try to figure out how to support different\ncompression levels. For gzip, I did that by making gzip1..gzip9\npossible compression settings. But that might not have been the right\nidea because something like lz43 to mean lz4 at level 3 would be\nconfusing. Also, for the lz4 command line utility, there's not only\n\"lz4 -3\" which means LZ4 with level 3 compression, but also \"lz4\n--fast=3\" which selects \"ultra-fast compression level 3\" rather than\nregular old level 3. And apparently LZ4 levels go up to 12 rather than\njust 9 like gzip. I'm thinking maybe we should go with something like\n\"gzip@9\" rather than just \"gzip9\" to mean gzip with compression level\n9, and then things like \"lz4@3\" or \"lz4@fast3\" would select either the\nregular compression levels or the ultra-fast compression levels.\n\nMeanwhile, I think it's probably OK for me to go ahead and commit\n0001-0003 from my patches at this point, since it seems we have pretty\ngood evidence that the abstraction basically works, and there doesn't\nseem to be any value in holding off and maybe having to do a bunch\nmore rebasing. We may also want to look into making -Fp work with\n--server-compression, which would require pg_basebackup to know how to\ndecompress. 
I'm actually not sure if this is worthwhile; you'd need to\nhave a network connection slow enough that it's worth spending a lot\nof CPU time compressing on the server and decompressing on the client\nto make up for the cost of network transfer. But some people might\nhave that case. It might make it easier to test this, too, since we\nprobably can't rely on having an LZ4 binary installed. Another thing\nthat you probably need to investigate is also supporting client-side\nLZ4 compression. I think that is probably a really desirable addition\nto your patch set, since people might find it odd if that were\nexclusively a server-side option. Hopefully it's not that much work.\n\nOne minor nitpick in terms of the code:\n\n+ mysink->bytes_written = mysink->bytes_written + headerSize;\n\nI would use += here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Nov 2021 10:32:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Nov 2, 2021 at 10:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Looks pretty good. I think you should work on stuff like documentation\n> and tests, and I need to do some work on that stuff, too. Also, I\n> think you should try to figure out how to support different\n> compression levels.\n\nOn second thought, maybe we don't need to do this. 
There's a thread on\n\"Teach pg_receivewal to use lz4 compression\" which concluded that\nsupporting different compression levels was unnecessary.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Nov 2021 12:34:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Nov 2, 2021 at 10:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Meanwhile, I think it's probably OK for me to go ahead and commit\n> 0001-0003 from my patches at this point, since it seems we have pretty\n> good evidence that the abstraction basically works, and there doesn't\n> seem to be any value in holding off and maybe having to do a bunch\n> more rebasing.\n\nI went ahead and committed 0001 and 0002, but got nervous about\nproceeding with 0003. For those who may not have been following along\nclosely, what was 0003 and is now 0001 introduces a new COPY\nsubprotocol for taking backups. That probably needs to be documented\nand as of now the patch does not do that, but the bigger question is\nwhat to do about backward compatibility. I wrote the patch in such a\nway that, post-patch, the server can do backups either the way that we\ndo them now, or the new way that it introduces, but I'm wondering if I\nshould rip that out and just support the new way only. If you run a\nnewer pg_basebackup against an older server, it will work, and still\ndoes with the patch. If, however, you run an older pg_basebackup\nagainst a newer server, it complains. 
For example running a pg13\npg_basebackup against a pg14 cluster produces this:\n\npg_basebackup: error: incompatible server version 14.0\npg_basebackup: removing data directory \"pgstandby\"\n\nNow for all I know there is out-of-core software out there that speaks\nthe replication protocol and can take base backups using it and would\nlike it to continue working as it does today, and that's easy for me\nto do, because that's the way the patch works. But on the other hand\nsince the patch adapts the in-core tools to use the new method when\ntalking to a new server, we wouldn't have test coverage for the old\nmethod any more, which might possibly make it annoying to maintain.\nBut then again that is a problem we could leave for the future, and\nrip it out then rather than now. I'm not sure which way to jump.\nAnyone else have thoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 5 Nov 2021 11:50:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Nov 5, 2021 at 11:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I went ahead and committed 0001 and 0002, but got nervous about\n> proceeding with 0003.\n\nIt turns out that these commits are causing failures on prairiedog.\nPer email from Tom off-list, that's apparently because prairiedog has\na fussy version of tar that doesn't like it when you omit the trailing\nNUL blocks that are supposed to be part of a tar file. So how did this\nget broken?\n\nIt turns out that in the current state of the world, the server sends\nan almost-tarfile to the client. What I mean by an almost-tarfile is\nthat it sends something that looks like a valid tarfile except that\nthe two blocks of trailing NUL bytes are omitted. 
Prior to these\npatches, that was a very strategic omission, because the pg_basebackup\ncode wants to edit the tar files, and it wasn't smart enough to parse\nthem, so it just received all the data from the server, then added any\nmembers that it wanted to add (e.g. recovery.signal) and then added\nthe terminator itself. I would classify this as an ugly hack, but it\nworked. With these changes, the client is now capable of really\nparsing a tarfile, so it would have no problem injecting new files\ninto the archive whether or not the server terminates it properly. It\nalso has no problem adding the two blocks of terminating NUL bytes if\nthe server omits them, but not otherwise. All in all, it's\nsignificantly smarter code.\n\nHowever, I also set things up so that the client doesn't bother\nparsing the tar file from the server if it's not doing anything that\nrequires editing the tar file on the fly. That saves some overhead,\nand it's also important for the rest of the patch set, which wants to\nmake it so that the server could send us something besides a tarfile,\nlike maybe a .tar.gz. We can't just have a convention of adding 1024\nNUL bytes to any file the server sends us unless what the server sends\nus is always and precisely an unterminated tarfile. Unfortunately,\nthat means that in the case where the tar parsing logic isn't used,\nthe tar file ends up with the proper terminator. Because most 'tar'\nimplementations are happy to ignore that defect, the tests pass on my\nmachine, but not on prairiedog. I think I realized this problem at\nsome point during the development process of this patch, but then I\nforgot about it again and ended up committing something that has a\nproblem of which, at some earlier point in time, I had been entirely\naware. 
Oops.\n\nIt's tempting to try to fix this problem by changing the server so\nthat it properly terminates the tar files it sends to the client.\nHonestly, I don't know how we ever thought it was OK to design a\nprotocol for base backups that involved the server sending something\nthat is almost but not quite a valid tarfile. However, that's not\nquite good enough, because pg_basebackup is supposed to be backward\ncompatible, so we'd still have the same problem if a new version of\npg_basebackup were used with an old server. So what I'm inclined to do\nis fix both the server and pg_basebackup. On the server side, properly\nterminate the tarfile. On the client side, if we're talking to a\npre-v15 server and don't need to parse the tarfile, blindly add 1024\nNUL bytes at the end.\n\nI think I can get patches for this done today. Please let me know ASAP\nif you have objections to this line of attack.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Nov 2021 09:52:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It turns out that these commits are causing failures on prairiedog.\n> Per email from Tom off-list, that's apparently because prairiedog has\n> a fussy version of tar that doesn't like it when you omit the trailing\n> NUL blocks that are supposed to be part of a tar file.\n\nFTR, prairiedog is green. It's Noah's AIX menagerie that's complaining.\n\nIt's actually a little bit disturbing that we're only seeing a failure\non that one platform, because that means that nothing else is anchoring\nus to the strict POSIX specification for tarfile format. 
We knew that\nGNU tar is forgiving about missing trailing zero blocks, but apparently\nso is BSD tar.\n\nOne part of me wants to add some explicit test for the trailing blocks.\nAnother says, well, the *de facto* tar standard seems not to require\nthe trailing blocks, never mind the letter of POSIX --- so when AIX\ndies, will anyone care anymore? Maybe not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Nov 2021 10:59:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Nov 8, 2021 at 10:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > It turns out that these commits are causing failures on prairiedog.\n> > Per email from Tom off-list, that's apparently because prairiedog has\n> > a fussy version of tar that doesn't like it when you omit the trailing\n> > NUL blocks that are supposed to be part of a tar file.\n>\n> FTR, prairiedog is green. It's Noah's AIX menagerie that's complaining.\n\nWoops.\n\n> It's actually a little bit disturbing that we're only seeing a failure\n> on that one platform, because that means that nothing else is anchoring\n> us to the strict POSIX specification for tarfile format. We knew that\n> GNU tar is forgiving about missing trailing zero blocks, but apparently\n> so is BSD tar.\n\nYeah.\n\n> One part of me wants to add some explicit test for the trailing blocks.\n> Another says, well, the *de facto* tar standard seems not to require\n> the trailing blocks, never mind the letter of POSIX --- so when AIX\n> dies, will anyone care anymore? Maybe not.\n\nFWIW, I think both of those are pretty defensible positions. Honestly,\nI'm not sure how likely the bug is to recur once we fix it here,\neither. The only reason this is a problem is because of the kludge of\nhaving the server generate the entire output file except for the last\n1kB. 
If we eliminate that behavior I don't know that this particular\nproblem is especially likely to come back. But adding a test isn't\nstupid either, just a bit tricky to write. When I was testing locally\nthis morning I found that there were considerably more than 1024 zero\nbytes at the end of the file because the last file it backs up is\npg_control which ends with lots of zero bytes. So it's not sufficient\nto just write a test that checks for non-zero bytes in the last 1kB of\nthe file. What I think you'd need to do is figure out the number of\nfiles in the archive and the sizes of each one, and based on that work\nout how big the tar archive should be: 512 bytes per file or directory\nor symlink plus enough extra 512 byte chunks to cover the contents of\neach file plus an extra 1024 bytes at the end. That doesn't seem\nparticularly simple to code. We could run 'tar tvf' and parse the\noutput to get the number of files and their lengths, but that seems\nlikely to cause more portability headaches than the underlying issue.\nSince pg_basebackup now has the logic to do all of this parsing\ninternally, we could make it complain if it receives from a v15+\nserver an archive trailer that is not 1024 bytes of zeroes, but that\nwouldn't help with this exact problem, because the issue in this case\nis when pg_basebackup decides it doesn't need to parse in the first\nplace. We could add a pg_basebackup option\n--force-parsing-and-check-if-the-server-seems-broken, but that seems\nlike overkill to me. So overall I'm inclined to just do nothing about\nthis unless someone has a better idea how to write a reasonable test.\n\nAnyway, here's my proposal for fixing the issue immediately before us.\n0001 adds logic to pad out the unterminated tar archives, and 0002\nmakes the server terminate its tar archives while preserving the logic\nadded by 0001 for cases where we're talking to an older server. 
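To make the size arithmetic above concrete, here is a sketch of the two quantities involved — how much space one tar member occupies, and what the end-of-archive padding looks like. The helper names are invented for illustration; this is not code from the patch, only the POSIX ustar 512-byte-block rules:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define TAR_BLOCK_SIZE 512

/*
 * Space one archive member occupies: a 512-byte header block, plus the
 * member contents rounded up to a whole number of 512-byte blocks.
 * Directories and symlinks have no contents, so they pass file_size = 0
 * and cost exactly one block.
 */
size_t
tar_member_size(size_t file_size)
{
	size_t	padded;

	padded = ((file_size + TAR_BLOCK_SIZE - 1) / TAR_BLOCK_SIZE)
		* TAR_BLOCK_SIZE;
	return TAR_BLOCK_SIZE + padded;
}

/*
 * The POSIX end-of-archive marker: two 512-byte blocks of zeroes.
 * Writing this out is essentially all the client-side padding fix
 * amounts to when an older server sent an unterminated archive.
 */
size_t
tar_write_terminator(char *buf)
{
	memset(buf, 0, 2 * TAR_BLOCK_SIZE);
	return 2 * TAR_BLOCK_SIZE;
}
```

Under those rules, an archive holding members of sizes s1…sn should come out to exactly sum(tar_member_size(si)) + 1024 bytes, which is the figure a size-based test would have to predict.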
I\nassume that it's best to get something committed quickly here so will\ndo that in ~4 hours if there are no major objections, or sooner if I\nhear some enthusiastic endorsement.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 8 Nov 2021 11:34:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Nov 8, 2021 at 11:34 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Anyway, here's my proposal for fixing the issue immediately before us.\n> 0001 adds logic to pad out the unterminated tar archives, and 0002\n> makes the server terminate its tar archives while preserving the logic\n> added by 0001 for cases where we're talking to an older server. I\n> assume that it's best to get something committed quickly here so will\n> do that in ~4 hours if there are no major objections, or sooner if I\n> hear some enthusiastic endorsement.\n\nI have now committed 0001 and will wait to see what the buildfarm\nthinks about that before doing anything more.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Nov 2021 16:41:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Nov 8, 2021 at 4:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Nov 8, 2021 at 11:34 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Anyway, here's my proposal for fixing the issue immediately before us.\n> > 0001 adds logic to pad out the unterminated tar archives, and 0002\n> > makes the server terminate its tar archives while preserving the logic\n> > added by 0001 for cases where we're talking to an older server. 
I\n> > assume that it's best to get something committed quickly here so will\n> > do that in ~4 hours if there are no major objections, or sooner if I\n> > hear some enthusiastic endorsement.\n>\n> I have now committed 0001 and will wait to see what the buildfarm\n> thinks about that before doing anything more.\n\nIt seemed OK, so I have now committed 0002 as well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Nov 2021 14:36:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "> On Fri, Nov 05, 2021 at 11:50:01AM -0400, Robert Haas wrote:\n> On Tue, Nov 2, 2021 at 10:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Meanwhile, I think it's probably OK for me to go ahead and commit\n> > 0001-0003 from my patches at this point, since it seems we have pretty\n> > good evidence that the abstraction basically works, and there doesn't\n> > seem to be any value in holding off and maybe having to do a bunch\n> > more rebasing.\n>\n> I went ahead and committed 0001 and 0002, but got nervous about\n> proceeding with 0003.\n\nHi,\n\nI'm observing a strange issue which I can only relate to bef47ff85d\nwhere bbsink abstraction was introduced. 
The problem is about failing\nassertion when doing:\n\n DETAIL: Failed process was running: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, WAIT 0, MAX_RATE 102400, MANIFEST 'yes')\n\nWalsender tries to send a backup manifest, but crashes on the trottling sink:\n\n #2 0x0000560857b551af in ExceptionalCondition (conditionName=0x560857d15d27 \"sink->bbs_next != NULL\", errorType=0x560857d15c23 \"FailedAssertion\", fileName=0x560857d15d15 \"basebackup_sink.c\", lineNumber=91) at assert.c:69\n #3 0x0000560857918a94 in bbsink_forward_manifest_contents (sink=0x5608593f73f8, len=32768) at basebackup_sink.c:91\n #4 0x0000560857918d68 in bbsink_throttle_manifest_contents (sink=0x5608593f7450, len=32768) at basebackup_throttle.c:125\n #5 0x00005608579186d0 in bbsink_manifest_contents (sink=0x5608593f7450, len=32768) at ../../../src/include/replication/basebackup_sink.h:240\n #6 0x0000560857918b1b in bbsink_forward_manifest_contents (sink=0x5608593f74e8, len=32768) at basebackup_sink.c:94\n #7 0x0000560857911edc in bbsink_manifest_contents (sink=0x5608593f74e8, len=32768) at ../../../src/include/replication/basebackup_sink.h:240\n #8 0x00005608579129f6 in SendBackupManifest (manifest=0x7ffdaea9d120, sink=0x5608593f74e8) at backup_manifest.c:373\n\nLooking at the similar bbsink_throttle_archive_contents it's not clear\nwhy comments for both functions (archive and manifest throttling) say\n\"pass archive contents to next sink\", but only bbsink_throttle_manifest_contents\ndoes pass bbs_next into the bbsink_forward_manifest_contents. Is it\nsupposed to be like that? 
Passing the same sink object instead the next\none into bbsink_forward_manifest_contents seems to solve the problem in\nthis case.\n\n\n", "msg_date": "Mon, 15 Nov 2021 17:26:41 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Nov 15, 2021 at 11:25 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Walsender tries to send a backup manifest, but crashes on the trottling sink:\n>\n> #2 0x0000560857b551af in ExceptionalCondition (conditionName=0x560857d15d27 \"sink->bbs_next != NULL\", errorType=0x560857d15c23 \"FailedAssertion\", fileName=0x560857d15d15 \"basebackup_sink.c\", lineNumber=91) at assert.c:69\n> #3 0x0000560857918a94 in bbsink_forward_manifest_contents (sink=0x5608593f73f8, len=32768) at basebackup_sink.c:91\n> #4 0x0000560857918d68 in bbsink_throttle_manifest_contents (sink=0x5608593f7450, len=32768) at basebackup_throttle.c:125\n> #5 0x00005608579186d0 in bbsink_manifest_contents (sink=0x5608593f7450, len=32768) at ../../../src/include/replication/basebackup_sink.h:240\n> #6 0x0000560857918b1b in bbsink_forward_manifest_contents (sink=0x5608593f74e8, len=32768) at basebackup_sink.c:94\n> #7 0x0000560857911edc in bbsink_manifest_contents (sink=0x5608593f74e8, len=32768) at ../../../src/include/replication/basebackup_sink.h:240\n> #8 0x00005608579129f6 in SendBackupManifest (manifest=0x7ffdaea9d120, sink=0x5608593f74e8) at backup_manifest.c:373\n>\n> Looking at the similar bbsink_throttle_archive_contents it's not clear\n> why comments for both functions (archive and manifest throttling) say\n> \"pass archive contents to next sink\", but only bbsink_throttle_manifest_contents\n> does pass bbs_next into the bbsink_forward_manifest_contents. Is it\n> supposed to be like that? Passing the same sink object instead the next\n> one into bbsink_forward_manifest_contents seems to solve the problem in\n> this case.\n\nYeah, that's what it should be doing. 
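For what it's worth, the forwarding convention at issue can be shown with a toy model (invented names, not the actual bbsink code): the wrapper hands the forward helper *itself*, and the helper is what steps to the next sink. If the wrapper passes its next sink instead, one link is skipped, the walk falls off the end of the chain, and the equivalent of the bbs_next != NULL assertion fires.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of a chain of sinks, loosely mimicking bbsink. */
typedef struct ToySink ToySink;
struct ToySink
{
	ToySink    *next;			/* like bbs_next */
	void		(*contents) (ToySink *sink, int len);
	int			received;		/* bytes that reached this sink */
};

/* The forward helper: it, not the caller, steps to the next sink. */
static void
toy_forward_contents(ToySink *sink, int len)
{
	assert(sink->next != NULL);	/* mirrors Assert(sink->bbs_next != NULL) */
	sink->next->contents(sink->next, len);
}

/* A pass-through wrapper such as the throttling sink. */
static void
toy_throttle_contents(ToySink *sink, int len)
{
	/* ... throttling bookkeeping would happen here ... */
	toy_forward_contents(sink, len);	/* correct: pass sink, not sink->next */
}

/* The terminal sink just consumes the data. */
static void
toy_final_contents(ToySink *sink, int len)
{
	sink->received += len;
}
```

With a two-sink chain final ← throttle, calling toy_throttle_contents(&throttle, 100) delivers all 100 bytes to the terminal sink; had the wrapper passed sink->next to the helper instead, the helper would have run on the terminal sink, whose next pointer is NULL.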
I'll commit a fix, thanks for\nthe report and diagnosis.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Nov 2021 14:23:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Nov 15, 2021 at 2:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah, that's what it should be doing. I'll commit a fix, thanks for\n> the report and diagnosis.\n\nHere's a new patch set.\n\n0001 - When I committed the patch to add the missing 2 blocks of zero\nbytes to the tar archives generated by the server, I failed to adjust\nthe documentation. So 0001 does that. This is the only new patch in\nthe series. I was not sure whether to just remove the statement from\nthe documentation saying that those blocks aren't included, or whether\nto mention that we used to include them and no longer do. I went for\nthe latter; opinions welcome.\n\n0002 - This adds a new COPY subprotocol for taking base backups. I've\nimproved it over the previous version by adding documentation. I'm\nstill seeking comments on the points I raised in\nhttp://postgr.es/m/CA+TgmobrOXbDh+hCzzVkD3weV3R-QRy3SPa=FRb_Rv9wF5iPJw@mail.gmail.com\nbut what I'm leaning toward doing is committing the patch as is and\nthen submitting - or maybe several patches - later to rip some this\nand a few other old things out. That way the debate - or lack thereof\n- about what to do here doesn't have to block the main patch set, and\nalso, it feels safer to make removing the existing stuff a separate\neffort rather than doing it now.\n\n0003 - This adds \"server\" and \"blackhole\" as backup targets. In this\nversion, I've improved the documentation. Also, the previous version\nonly let you use a backup target with -Xnone, and I realized that was\nstupid. -Xfetch is OK too. -Xstream still doesn't work, since that's\nimplemented via client-side logic. 
I think this still needs some work\nto be committable, like adding tests, but I don't expect to make any\nmajor changes.\n\n0004 - Server-side gzip compression. Similar level of maturity to 0003.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 16 Nov 2021 16:47:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\n\nPlease find the lz4 compression patch here that basically has:\n\n1. Documentation\n2. pgindent run over it.\n3. your comments addressed for using \"+=\"\n\nI have not included the compression level per your comment below:\n---------\n> \"On second thought, maybe we don't need to do this. There's a thread on\n> \"Teach pg_receivewal to use lz4 compression\" which concluded that\n> supporting different compression levels was unnecessary.\"\n---------\n\nRegards,\nJeevan Ladhe\n\nOn Wed, Nov 17, 2021 at 3:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Nov 15, 2021 at 2:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Yeah, that's what it should be doing. I'll commit a fix, thanks for\n> > the report and diagnosis.\n>\n> Here's a new patch set.\n>\n> 0001 - When I committed the patch to add the missing 2 blocks of zero\n> bytes to the tar archives generated by the server, I failed to adjust\n> the documentation. So 0001 does that. This is the only new patch in\n> the series. I was not sure whether to just remove the statement from\n> the documentation saying that those blocks aren't included, or whether\n> to mention that we used to include them and no longer do. I went for\n> the latter; opinions welcome.\n>\n> 0002 - This adds a new COPY subprotocol for taking base backups. I've\n> improved it over the previous version by adding documentation. 
I'm\n> still seeking comments on the points I raised in\n>\n> http://postgr.es/m/CA+TgmobrOXbDh+hCzzVkD3weV3R-QRy3SPa=FRb_Rv9wF5iPJw@mail.gmail.com\n> but what I'm leaning toward doing is committing the patch as is and\n> then submitting - or maybe several patches - later to rip some this\n> and a few other old things out. That way the debate - or lack thereof\n> - about what to do here doesn't have to block the main patch set, and\n> also, it feels safer to make removing the existing stuff a separate\n> effort rather than doing it now.\n>\n> 0003 - This adds \"server\" and \"blackhole\" as backup targets. In this\n> version, I've improved the documentation. Also, the previous version\n> only let you use a backup target with -Xnone, and I realized that was\n> stupid. -Xfetch is OK too. -Xstream still doesn't work, since that's\n> implemented via client-side logic. I think this still needs some work\n> to be committable, like adding tests, but I don't expect to make any\n> major changes.\n>\n> 0004 - Server-side gzip compression. 
Similar level of maturity to 0003.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Mon, 22 Nov 2021 23:05:51 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 11/22/21 11:05 PM, Jeevan Ladhe wrote:\n> Please find the lz4 compression patch here that basically has:\nThanks, Could you please rebase your patch, it is failing at my end -\n\n[edb@centos7tushar pg15_lz]$ git apply /tmp/v8-0001-LZ4-compression.patch\nerror: patch failed: doc/src/sgml/ref/pg_basebackup.sgml:230\nerror: doc/src/sgml/ref/pg_basebackup.sgml: patch does not apply\nerror: patch failed: src/backend/replication/Makefile:19\nerror: src/backend/replication/Makefile: patch does not apply\nerror: patch failed: src/backend/replication/basebackup.c:64\nerror: src/backend/replication/basebackup.c: patch does not apply\nerror: patch failed: src/include/replication/basebackup_sink.h:285\nerror: src/include/replication/basebackup_sink.h: patch does not apply\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Mon, 27 Dec 2021 19:01:26 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Tushar,\n\nYou need to apply Robert's v10 version patches 0002, 0003 and 0004, before\napplying the lz4 patch(v8 version).\nPlease let me know if you still face any issues.\n\nRegards,\nJeevan Ladhe\n\nOn Mon, Dec 27, 2021 at 7:01 PM tushar <tushar.ahuja@enterprisedb.com>\nwrote:\n\n> On 11/22/21 11:05 PM, Jeevan Ladhe wrote:\n> > Please find the lz4 compression patch here that basically has:\n> Thanks, Could you please rebase your patch, it is failing at my end -\n>\n> [edb@centos7tushar pg15_lz]$ git apply /tmp/v8-0001-LZ4-compression.patch\n> error: patch failed: doc/src/sgml/ref/pg_basebackup.sgml:230\n> error: 
doc/src/sgml/ref/pg_basebackup.sgml: patch does not apply\n> error: patch failed: src/backend/replication/Makefile:19\n> error: src/backend/replication/Makefile: patch does not apply\n> error: patch failed: src/backend/replication/basebackup.c:64\n> error: src/backend/replication/basebackup.c: patch does not apply\n> error: patch failed: src/include/replication/basebackup_sink.h:285\n> error: src/include/replication/basebackup_sink.h: patch does not apply\n>\n> --\n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company\n>\n>\n\nHi Tushar,You need to apply Robert's v10 version patches 0002, 0003 and 0004, before applying the lz4 patch(v8 version).Please let me know if you still face any issues.Regards,Jeevan LadheOn Mon, Dec 27, 2021 at 7:01 PM tushar <tushar.ahuja@enterprisedb.com> wrote:On 11/22/21 11:05 PM, Jeevan Ladhe wrote:\n> Please find the lz4 compression patch here that basically has:\nThanks, Could you please rebase your patch, it is failing at my end -\n\n[edb@centos7tushar pg15_lz]$ git apply /tmp/v8-0001-LZ4-compression.patch\nerror: patch failed: doc/src/sgml/ref/pg_basebackup.sgml:230\nerror: doc/src/sgml/ref/pg_basebackup.sgml: patch does not apply\nerror: patch failed: src/backend/replication/Makefile:19\nerror: src/backend/replication/Makefile: patch does not apply\nerror: patch failed: src/backend/replication/basebackup.c:64\nerror: src/backend/replication/basebackup.c: patch does not apply\nerror: patch failed: src/include/replication/basebackup_sink.h:285\nerror: src/include/replication/basebackup_sink.h: patch does not apply\n\n-- \nregards,tushar\nEnterpriseDB  https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 28 Dec 2021 13:11:53 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 12/28/21 1:11 PM, Jeevan Ladhe wrote:\n> You need to apply Robert's v10 
version patches 0002, 0003 and 0004, \n> before applying the lz4 patch(v8 version).\nThanks, able to apply now.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Tue, 28 Dec 2021 22:46:15 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 11/22/21 11:05 PM, Jeevan Ladhe wrote:\n> Please find the lz4 compression patch here that basically has:\nOne small issue, in the \"pg_basebackup --help\", we are not displaying\nlz4 value under --server-compression option\n\n\n[edb@tusharcentos7-v14 bin]$ ./pg_basebackup --help | grep \nserver-compression\n       --server-compression=none|gzip|gzip[1-9]\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Mon, 3 Jan 2022 19:39:58 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 11/22/21 11:05 PM, Jeevan Ladhe wrote:\n> Please find the lz4 compression patch here that basically has:\nPlease refer to this  scenario , where --server-compression is only \ncompressing\nbase backup into lz4 format but not pg_wal directory\n\n[edb@centos7tushar bin]$ ./pg_basebackup -Ft --server-compression=lz4 \n-Xstream -D foo\n\n[edb@centos7tushar bin]$ ls foo\nbackup_manifest  base.tar.lz4  pg_wal.tar\n\nthis same is valid for gzip as well if server-compression is set to gzip\n\nedb@centos7tushar bin]$ ./pg_basebackup -Ft --server-compression=gzip4 \n-Xstream -D foo1\n\n[edb@centos7tushar bin]$ ls foo1\nbackup_manifest  base.tar.gz  pg_wal.tar\n\nif this scenario is valid then both the folders format should be in lz4 \nformat otherwise we should\nget an error something like - not a valid option ?\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL 
Company\n\n\n\n", "msg_date": "Mon, 3 Jan 2022 22:42:03 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Jan 3, 2022 at 12:12 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> On 11/22/21 11:05 PM, Jeevan Ladhe wrote:\n> > Please find the lz4 compression patch here that basically has:\n> Please refer to this scenario , where --server-compression is only\n> compressing\n> base backup into lz4 format but not pg_wal directory\n>\n> [edb@centos7tushar bin]$ ./pg_basebackup -Ft --server-compression=lz4\n> -Xstream -D foo\n>\n> [edb@centos7tushar bin]$ ls foo\n> backup_manifest base.tar.lz4 pg_wal.tar\n>\n> this same is valid for gzip as well if server-compression is set to gzip\n>\n> edb@centos7tushar bin]$ ./pg_basebackup -Ft --server-compression=gzip4\n> -Xstream -D foo1\n>\n> [edb@centos7tushar bin]$ ls foo1\n> backup_manifest base.tar.gz pg_wal.tar\n>\n> if this scenario is valid then both the folders format should be in lz4\n> format otherwise we should\n> get an error something like - not a valid option ?\n\nBefore sending an email like this, it would be a good idea to read the\ndocumentation for the --server-compression option.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Jan 2022 09:37:44 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 1/4/22 8:07 PM, Robert Haas wrote:\n> Before sending an email like this, it would be a good idea to read the\n> documentation for the --server-compression option.\nSure, Thanks Robert.\n\nOne scenario where I feel error message is confusing and if it is not \nsupported at all then error message need to be a little bit more clear\n\nif we use -z  (or -Z ) with -t , we are getting this error\n[edb@centos7tushar bin]$  ./pg_basebackup -t server:/tmp/test0 -Xfetch -z\npg_basebackup: 
error: only tar mode backups can be compressed\nTry \"pg_basebackup --help\" for more information.\n\nbut after removing -z option  backup is in tar mode only\n\nedb@centos7tushar bin]$  ./pg_basebackup -t server:/tmp/test0 -Xfetch\n[edb@centos7tushar bin]$ ls /tmp/test0\nbackup_manifest  base.tar\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 5 Jan 2022 15:41:38 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Jan 5, 2022 at 5:11 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> One scenario where I feel error message is confusing and if it is not\n> supported at all then error message need to be a little bit more clear\n>\n> if we use -z (or -Z ) with -t , we are getting this error\n> [edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/test0 -Xfetch -z\n> pg_basebackup: error: only tar mode backups can be compressed\n> Try \"pg_basebackup --help\" for more information.\n>\n> but after removing -z option backup is in tar mode only\n>\n> edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/test0 -Xfetch\n> [edb@centos7tushar bin]$ ls /tmp/test0\n> backup_manifest base.tar\n\nOK, fair enough, I can adjust the error message for that case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jan 2022 10:05:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Dec 28, 2021 at 1:12 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> Hi Tushar,\n>\n> You need to apply Robert's v10 version patches 0002, 0003 and 0004, before\n> applying the lz4 patch(v8 version).\n> Please let me know if you still face any issues.\n>\n\nThanks, Jeevan.\nI tested —server-compression option using different other options of\npg_basebackup, also checked 
-t/—server-compression from pg_basebackup of\nv15 will\nthrow an error if the server version is v14 or below. Things are looking\ngood to me.\nTwo open issues -\n1)lz4 value is missing for --server-compression in pg_basebackup --help\n2)Error messages need to improve if using -t server with -z/-Z\n\nregards,\n\nOn Tue, Dec 28, 2021 at 1:12 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com> wrote:Hi Tushar,You need to apply Robert's v10 version patches 0002, 0003 and 0004, before applying the lz4 patch(v8 version).Please let me know if you still face any issues.Thanks, Jeevan. I tested —server-compression option using different other options of pg_basebackup, also checked -t/—server-compression from pg_basebackup of v15 will throw an error if the server version is v14 or below. Things are looking good to me. Two open  issues -1)lz4 value is missing for --server-compression in pg_basebackup --help2)Error messages need to improve if using -t server with -z/-Z  regards,", "msg_date": "Wed, 5 Jan 2022 22:24:01 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nSimilar to LZ4 server-side compression, I have also tried to add a ZSTD\nserver-side compression in the attached patch. I have done some initial\ntesting and things seem to be working.\n\nExample run:\npg_basebackup -t server:/tmp/data_zstd -Xnone --server-compression=zstd\n\nThe patch surely needs some grooming, but I am expecting some initial\nreview, specially in the area where we are trying to close the zstd stream\nin bbsink_zstd_end_archive(). We need to tell the zstd library to end the\ncompression by calling ZSTD_compressStream2() thereby sending a\nZSTD_e_end flag. But, this also needs some input string, which per\nexample[1] line # 686, I have taken as an empty ZSTD_inBuffer.\n\nThanks, Tushar for testing the LZ4 patch. 
I have added the LZ4 option in\nthe pg_basebackup help now.\n\nNote: Before applying these patches please apply Robert's v10 version\nof patches 0002, 0003, and 0004.\n\n[1]\nhttps://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/zircon/tools/zbi/zbi.cc\n\nRegards,\nJeevan Ladhe\n\nOn Wed, Jan 5, 2022 at 10:24 PM tushar <tushar.ahuja@enterprisedb.com>\nwrote:\n\n>\n>\n> On Tue, Dec 28, 2021 at 1:12 PM Jeevan Ladhe <\n> jeevan.ladhe@enterprisedb.com> wrote:\n>\n>> Hi Tushar,\n>>\n>> You need to apply Robert's v10 version patches 0002, 0003 and 0004,\n>> before applying the lz4 patch(v8 version).\n>> Please let me know if you still face any issues.\n>>\n>\n> Thanks, Jeevan.\n> I tested —server-compression option using different other options of\n> pg_basebackup, also checked -t/—server-compression from pg_basebackup of\n> v15 will\n> throw an error if the server version is v14 or below. Things are looking\n> good to me.\n> Two open issues -\n> 1)lz4 value is missing for --server-compression in pg_basebackup --help\n> 2)Error messages need to improve if using -t server with -z/-Z\n>\n> regards,\n>", "msg_date": "Tue, 18 Jan 2022 20:12:44 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Nov 16, 2021 at 4:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Here's a new patch set.\n\nAnd here's another one.\n\nI've committed the first two patches from the previous set, the second\nof those just today, and so we're getting down to the meat of the\npatch set.\n\n0001 adds \"server\" and \"blackhole\" as backup targets. It now has some\ntests. This might be more or less ready to ship, unless somebody else\nsees a problem, or I find one.\n\n0002 adds server-side gzip compression. 
This one hasn't got tests yet.\nAlso, it's going to need some adjustment based on the parallel\ndiscussion on the new options structure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 18 Jan 2022 13:55:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Jan 18, 2022 at 9:43 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> The patch surely needs some grooming, but I am expecting some initial\n> review, specially in the area where we are trying to close the zstd stream\n> in bbsink_zstd_end_archive(). We need to tell the zstd library to end the\n> compression by calling ZSTD_compressStream2() thereby sending a\n> ZSTD_e_end flag. But, this also needs some input string, which per\n> example[1] line # 686, I have taken as an empty ZSTD_inBuffer.\n\nAs far as I can see, this is correct. I found\nhttps://zstd.docsforge.com/dev/api-documentation/#streaming-compression-howto\nwhich seems to endorse what you've done here.\n\nOne (minor) thing that I notice is that, the way you've written the\nloop in bbsink_zstd_end_archive(), I think it will typically call\nbbsink_archive_contents() twice. It will flush whatever is already\npresent in the next sink's buffer as a result of the previous calls to\nbbsink_zstd_archive_contents(), and then it will call\nZSTD_compressStream2() which will partially refill the buffer you just\nemptied, and then there will be nothing left in the internal buffer,\nso it will call bbsink_archive_contents() again. But ... the initial\nflush may not have been necessary. It could be that there was enough\nspace already in the output buffer for the ZSTD_compressStream2() call\nto succeed without a prior flush. 
So maybe:\n\ndo\n{\n yet_to_flush = ZSTD_compressStream2(..., ZSTD_e_end);\n check ZSTD_isError here;\n if (mysink->zstd_outBuf.pos > 0)\n bbsink_archive_contents();\n} while (yet_to_flush > 0);\n\nI believe this might be very slightly more efficient.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jan 2022 16:27:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nI have added support for decompressing a gzip compressed tar file\nat client. pg_basebackup can enable server side compression for\nplain format backup with this change.\n\nAdded a gzip extractor which decompresses the compressed archive\nand forwards it to the next streamer. I have done initial testing and\nworking on updating the test coverage.\n\nNote: Before applying the patch, please apply Robert's v11 version\nof the patches 0001 and 0002.\n\nThanks,\nDipesh", "msg_date": "Wed, 19 Jan 2022 17:46:47 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Jan 19, 2022 at 7:16 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> I have added support for decompressing a gzip compressed tar file\n> at client. pg_basebackup can enable server side compression for\n> plain format backup with this change.\n>\n> Added a gzip extractor which decompresses the compressed archive\n> and forwards it to the next streamer. I have done initial testing and\n> working on updating the test coverage.\n\nCool. It's going to need some documentation changes, too.\n\nI don't like the way you coded this in CreateBackupStreamer(). I would\nlike the decision about whether to use\nbbstreamer_gzip_extractor_new(), and/or throw an error about not being\nable to parse an archive, to based on the file type i.e. 
\"did we get a\n.tar.gz file?\" rather than on whether we asked for server-side\ncompression. Notice that the existing logic checks whether we actually\ngot a .tar file from the server rather than assuming that's what must\nhave happened.\n\nAs a matter of style, I don't think it's good for the only thing\ninside of an \"if\" statement to be another \"if\" statement. The two\ncould be merged, but we also don't want to have the \"if\" conditional\nbe too complex. I am imagining that this should end up saying\nsomething like if (must_parse_archive && !is_tar && !is_tar_gz) {\npg_log_error(...\n\n+ * \"windowBits\" must be greater than or equal to \"windowBits\" value\n+ * provided to deflateInit2 while compressing.\n\nIt would be nice to clarify why we know the value we're using is safe.\nMaybe we're using the maximum possible value, in which case you could\njust add that to the end of the comment: \"...so we use the maximum\npossible value for safety.\"\n\n+ /*\n+ * End of the stream, if there is some pending data in output buffers then\n+ * we must forward it to next streamer.\n+ */\n+ if (res == Z_STREAM_END) {\n+ bbstreamer_content(mystreamer->base.bbs_next, member,\nmystreamer->base.bbs_buffer.data,\n+ mystreamer->bytes_written, context);\n+ }\n\nUncuddle the brace.\n\nIt probably doesn't make much difference, but I would be inclined to\ndo the final flush in bbstreamer_gzip_extractor_finalize() rather than\nhere. That way we rely on our own notion of when there's no more input\ndata rather than zlib's notion. Probably terrible things are going to\nhappen if those two ideas don't match up .... but there might be some\nother compression algorithm that doesn't return a distinguishing code\nat end-of-stream. 
Such an algorithm would have to take care of any\nleftover data in the finalize function, so I think we should do that\nhere too, so the code can be similar in all cases.\n\nPerhaps we should move all the gzip stuff to a new file bbstreamer_gzip.c.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jan 2022 10:56:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 1/18/22 8:12 PM, Jeevan Ladhe wrote:\n> Similar to LZ4 server-side compression, I have also tried to add a ZSTD\n> server-side compression in the attached patch.\nThanks Jeevan. While testing I found one scenario where the server \ncrashes while performing pg_basebackup\nwith server-compression=zstd for huge data a second time.\n\nSteps to reproduce\n--PG sources ( apply v11-0001,v11-0001,v9-0001,v9-0002 , configure \n--with-lz4,--with-zstd, make/install, initdb, start server)\n--insert huge data (./pgbench -i -s 2000 postgres)\n--restart the server (./pg_ctl -D data restart)\n--pg_basebackup ( ./pg_basebackup  -t server:/tmp/yc1 \n--server-compression=zstd -R  -Xnone -n -N -l 'ccc' --no-estimate-size -v)\n--insert huge data (./pgbench -i -s 1000 postgres)\n--restart the server (./pg_ctl -D data restart)\n--run pg_basebackup again (./pg_basebackup  -t server:/tmp/yc11 \n--server-compression=zstd -v  -Xnone )\n\n[edb@centos7tushar bin]$ ./pg_basebackup  -t server:/tmp/yc11 \n--server-compression=zstd -v  -Xnone\npg_basebackup: initiating base backup, waiting for checkpoint to complete\n2022-01-19 21:23:26.508 IST [30219] LOG:  checkpoint starting: force wait\n2022-01-19 21:23:26.608 IST [30219] LOG:  checkpoint complete: wrote 0 \nbuffers (0.0%); 0 WAL file(s) added, 1 removed, 0 recycled; write=0.001 \ns, sync=0.001 s, total=0.101 s; sync files=0, longest=0.000 s, \naverage=0.000 s; distance=16369 kB, estimate=16369 kB\npg_basebackup: checkpoint completed\nTRAP: 
FailedAssertion(\"len > 0 && len <= sink->bbs_buffer_length\", File: \n\"../../../src/include/replication/basebackup_sink.h\", Line: 208, PID: 30226)\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"(ExceptionalCondition+0x7a)[0x94ceca]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"[0x7b9a08]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"[0x7b9be2]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"[0x7b5b30]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"(SendBaseBackup+0x563)[0x7b7053]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"(exec_replication_command+0x961)[0x7c9a41]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"(PostgresMain+0x92f)[0x81ca3f]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"[0x48e430]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"(PostmasterMain+0xfd2)[0x785702]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"(main+0x1c6)[0x48fb96]\n/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f63642c8555]\npostgres: walsender edb [local] sending backup \"pg_basebackup base \nbackup\"[0x48feb5]\npg_basebackup: error: could not read COPY data: server closed the \nconnection unexpectedly\n     This probably means the server terminated abnormally\n     before or while processing the request.\n2022-01-19 21:25:34.485 IST [30205] LOG:  server process (PID 30226) was \nterminated by signal 6: Aborted\n2022-01-19 21:25:34.485 IST [30205] DETAIL:  Failed process was running: \nBASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS,  MANIFEST \n'yes',  TABLESPACE_MAP,  TARGET 'server', TARGET_DETAIL '/tmp/yc11',  \nCOMPRESSION 'zstd')\n2022-01-19 21:25:34.485 IST [30205] LOG:  terminating any other active \nserver processes\n[edb@centos7tushar bin]$ 
2022-01-19 21:25:34.489 IST [30205] LOG: all \nserver processes terminated; reinitializing\n2022-01-19 21:25:34.536 IST [30228] LOG:  database system was \ninterrupted; last known up at 2022-01-19 21:23:26 IST\n2022-01-19 21:25:34.669 IST [30228] LOG:  database system was not \nproperly shut down; automatic recovery in progress\n2022-01-19 21:25:34.671 IST [30228] LOG:  redo starts at 9/7000028\n2022-01-19 21:25:34.671 IST [30228] LOG:  invalid record length at \n9/7000148: wanted 24, got 0\n2022-01-19 21:25:34.671 IST [30228] LOG:  redo done at 9/7000110 system \nusage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n2022-01-19 21:25:34.673 IST [30229] LOG:  checkpoint starting: \nend-of-recovery immediate wait\n2022-01-19 21:25:34.713 IST [30229] LOG:  checkpoint complete: wrote 3 \nbuffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.003 \ns, sync=0.001 s, total=0.041 s; sync files=2, longest=0.001 s, \naverage=0.001 s; distance=0 kB, estimate=0 kB\n2022-01-19 21:25:34.718 IST [30205] LOG:  database system is ready to \naccept connections\n\nObservation -\n\nif we change server-compression method to lz4 from zstd then it is NOT \nhappening.\n\n[edb@centos7tushar bin]$ ./pg_basebackup  -t server:/tmp/ycc1 \n--server-compression=lz4 -v  -Xnone\npg_basebackup: initiating base backup, waiting for checkpoint to complete\n2022-01-19 21:27:51.642 IST [30229] LOG:  checkpoint starting: force wait\n2022-01-19 21:27:51.687 IST [30229] LOG:  checkpoint complete: wrote 0 \nbuffers (0.0%); 0 WAL file(s) added, 1 removed, 0 recycled; write=0.001 \ns, sync=0.001 s, total=0.046 s; sync files=0, longest=0.000 s, \naverage=0.000 s; distance=16383 kB, estimate=16383 kB\npg_basebackup: checkpoint completed\n\nNOTICE:  WAL archiving is not enabled; you must ensure that all required \nWAL segments are copied through other means to complete the backup\npg_basebackup: base backup completed\n[edb@centos7tushar bin]$\n\n-- \nregards,tushar\nEnterpriseDB 
https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 19 Jan 2022 22:00:42 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Jan 19, 2022 at 7:16 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> I have done initial testing and\n> working on updating the test coverage.\n\nI spent some time thinking about test coverage for the server-side\nbackup code today and came up with the attached (v12-0003). It does an\nend-to-end test that exercises server-side backup and server-side\ncompression and then untars the backup and validity-checks it using\npg_verifybackup. In addition to being good test coverage for these\npatches, it also plugs a gap in the test coverage of pg_verifybackup,\nwhich currently has no test case that untars a tar-format backup and\nthen verifies the result. I couldn't figure out a way to do that back\nat the time I was working on pg_verifybackup, because I didn't think\nwe had any existing precedent for using 'tar' from a TAP test. But it\nwas pointed out to me that we do, so I used that as the model for this\ntest. It should be easy to generalize this test case to test lz4 and\nzstd as well, I think. But I guess we'll still need something\ndifferent to test what your patch is doing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 19 Jan 2022 16:26:58 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nThanks for the feedback, I have incorporated the suggestions and\nupdated a new patch v2.\n\n> I spent some time thinking about test coverage for the server-side\n> backup code today and came up with the attached (v12-0003). 
It does an\n> end-to-end test that exercises server-side backup and server-side\n> compression and then untars the backup and validity-checks it using\n> pg_verifybackup. In addition to being good test coverage for these\n> patches, it also plugs a gap in the test coverage of pg_verifybackup,\n> which currently has no test case that untars a tar-format backup and\n> then verifies the result. I couldn't figure out a way to do that back\n> at the time I was working on pg_verifybackup, because I didn't think\n> we had any existing precedent for using 'tar' from a TAP test. But it\n> was pointed out to me that we do, so I used that as the model for this\n> test. It should be easy to generalize this test case to test lz4 and\n> zstd as well, I think. But I guess we'll still need something\n> different to test what your patch is doing.\n\nI tried to add the test coverage for server side gzip compression with\nplain format backup using pg_verifybackup. I have modified the test\nto use a flag specific to plain format. If this flag is set then it takes a\nplain format backup (with server compression enabled) and verifies\nthis using pg_verifybackup. I have updated (v2-0002) for the test\ncoverage.\n\n> It's going to need some documentation changes, too.\nyes, I am working on it.\n\nNote: Before applying the patches, please apply Robert's v12 version\nof the patches 0001, 0002 and 0003.\n\nThanks,\nDipesh", "msg_date": "Thu, 20 Jan 2022 18:30:43 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Thu, Jan 20, 2022 at 8:00 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> Thanks for the feedback, I have incorporated the suggestions and\n> updated a new patch v2.\n\nCool. I'll do a detailed review later, but I think this is going in a\ngood direction.\n\n> I tried to add the test coverage for server side gzip compression with\n> plain format backup using pg_verifybackup. 
I have modified the test\n> to use a flag specific to plain format. If this flag is set then it takes a\n> plain format backup (with server compression enabled) and verifies\n> this using pg_verifybackup. I have updated (v2-0002) for the test\n> coverage.\n\nInteresting approach. This unfortunately has the effect of making that\ntest case file look a bit incoherent -- the comment at the top of the\nfile isn't really accurate any more, for example, and the plain_format\nflag does more than just cause us to use -Fp; it also causes us NOT to\nuse --target server:X. However, that might be something we can figure\nout a way to clean up. Alternatively, we could have a new test case\nfile that is structured like 002_algorithm.pl but looping over\ncompression methods rather than checksum algorithms, and testing each\none with --server-compress and -Fp. It might be easier to make that\nlook nice (but I'm not 100% sure).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jan 2022 11:10:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Jan 19, 2022 at 4:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I spent some time thinking about test coverage for the server-side\n> backup code today and came up with the attached (v12-0003).\n\nI committed the base backup target patch yesterday, and today I\nupdated the remaining code in light of Michael Paquier's commit\n5c649fe153367cdab278738ee4aebbfd158e0546. 
Here is the resulting patch.\n\nMichael, I am proposing that we remove this message as part of this commit:\n\n- pg_log_info(\"no value specified for compression\nlevel, switching to default\");\n\nI think most people won't want to specify a compression level, so\nemitting a message when they don't seems too verbose.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 21 Jan 2022 13:33:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Thu, Jan 20, 2022 at 11:10 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Jan 20, 2022 at 8:00 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> > Thanks for the feedback, I have incorporated the suggestions and\n> > updated a new patch v2.\n>\n> Cool. I'll do a detailed review later, but I think this is going in a\n> good direction.\n\nHere is a more detailed review.\n\n+ if (inflateInit2(zs, 15 + 16) != Z_OK)\n+ {\n+ pg_log_error(\"could not initialize compression library\");\n+ exit(1);\n+\n+ }\n\nExtra blank line.\n\n+ /* At present, we only know how to parse tar and gzip archives. */\n\ngzip -> tar.gz. You can gzip something that is not a tar.\n\n+ * Extract the gzip compressed archive using a gzip extractor and then\n+ * forward it to next streamer.\n\nThis comment is not good. 
First, we're not necessarily doing it.\nSecond, it just describes what the code does, not why it does it.\nMaybe something like \"If the user requested both that the server\ncompress the backup and also that we extract the backup, we need to\ndecompress it.\"\n\n+ if (server_compression != NULL)\n+ {\n+ if (strcmp(server_compression, \"gzip\") == 0)\n+ server_compression_type = BACKUP_COMPRESSION_GZIP;\n+ else if (strlen(server_compression) == 5 &&\n+ strncmp(server_compression, \"gzip\", 4) == 0 &&\n+ server_compression[4] >= '1' && server_compression[4] <= '9')\n+ {\n+ server_compression_type = BACKUP_COMPRESSION_GZIP;\n+ server_compression_level = server_compression[4] - '0';\n+ }\n+ }\n+ else\n+ server_compression_type = BACKUP_COMPRESSION_NONE;\n\nI think this is not required any more. I think probably some other\nthings need to be adjusted as well, based on Michael's changes and the\nupdates in my patch to match.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Jan 2022 14:30:55 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\n> Here is a more detailed review.\n\nThanks for the feedback, I have incorporated the suggestions\nand updated a new version of the patch (v3-0001).\n\nThe required documentation changes are also incorporated in\nupdated patch (v3-0001).\n\n> Interesting approach. This unfortunately has the effect of making that\n> test case file look a bit incoherent -- the comment at the top of the\n> file isn't really accurate any more, for example, and the plain_format\n> flag does more than just cause us to use -Fp; it also causes us NOT to\n> use --target server:X. However, that might be something we can figure\n> out a way to clean up. 
Alternatively, we could have a new test case\n> file that is structured like 002_algorithm.pl but looping over\n> compression methods rather than checksum algorithms, and testing each\n> one with --server-compress and -Fp. It might be easier to make that\n> look nice (but I'm not 100% sure).\n\nAdded a new test case file \"009_extract.pl\" to test server compressed plain\nformat backup (v3-0002).\n\n> I committed the base backup target patch yesterday, and today I\n> updated the remaining code in light of Michael Paquier's commit\n> 5c649fe153367cdab278738ee4aebbfd158e0546. Here is the resulting patch.\n\nv13 patch does not apply on the latest head, it requires a rebase. I have\napplied\nit on commit dc43fc9b3aa3e0fa9c84faddad6d301813580f88 to validate gzip\ndecompression patches.\n\nThanks,\nDipesh", "msg_date": "Mon, 24 Jan 2022 19:59:55 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Jan 24, 2022 at 9:30 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> v13 patch does not apply on the latest head, it requires a rebase. I have applied\n> it on commit dc43fc9b3aa3e0fa9c84faddad6d301813580f88 to validate gzip\n> decompression patches.\n\nIt only needed trivial rebasing; I have committed it after doing that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jan 2022 15:14:59 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi, \r\nThank you for committing a great feature. I have tested the committed features. \r\nThe attached small patch fixes the output of the --help message. 
In the previous commit, only gzip and none were output, but in the attached patch, client-gzip and server-gzip are added.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n-----Original Message-----\r\nFrom: Robert Haas <robertmhaas@gmail.com> \r\nSent: Saturday, January 22, 2022 3:33 AM\r\nTo: Dipesh Pandit <dipesh.pandit@gmail.com>; Michael Paquier <michael@paquier.xyz>\r\nCc: Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>; tushar <tushar.ahuja@enterprisedb.com>; Dmitry Dolgov <9erthalion6@gmail.com>; Mark Dilger <mark.dilger@enterprisedb.com>; pgsql-hackers@postgresql.org\r\nSubject: Re: refactoring basebackup.c\r\n\r\nOn Wed, Jan 19, 2022 at 4:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> I spent some time thinking about test coverage for the server-side \r\n> backup code today and came up with the attached (v12-0003).\r\n\r\nI committed the base backup target patch yesterday, and today I updated the remaining code in light of Michael Paquier's commit 5c649fe153367cdab278738ee4aebbfd158e0546. Here is the resulting patch.\r\n\r\nMichael, I am proposing to that we remove this message as part of this commit:\r\n\r\n- pg_log_info(\"no value specified for compression\r\nlevel, switching to default\");\r\n\r\nI think most people won't want to specify a compression level, so emitting a message when they don't seems too verbose.\r\n\r\n--\r\nRobert Haas\r\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 25 Jan 2022 03:54:52 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: refactoring basebackup.c" }, { "msg_contents": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com> writes:\n\n> Hi, \n> Thank you for committing a great feature. I have tested the committed features. \n> The attached small patch fixes the output of the --help message. 
In the\n> previous commit, only gzip and none were output, but in the attached\n> patch, client-gzip and server-gzip are added.\n\nI think it would be better to write that as `[{client,server}-]gzip`,\nespecially as we add more compression algorithms, where it would\npresumably become `[{client,server}-]METHOD` (assuming all methods are\nsupported on both the client and server side).\n\nI also noticed that in the docs, the `client` and `server` are marked up\nas replaceable parameters, when they are actually literals, plus the\nhyphen is misplaced. The `--checkpoint` option also has the `fast` and\n`spread` literals marked up as parameters.\n\nAll of these are fixed in the attached patch.\n\n- ilmari", "msg_date": "Tue, 25 Jan 2022 13:06:27 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> \"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com> writes:\n>\n>> Hi, \n>> Thank you for committing a great feature. I have tested the committed features. \n>> The attached small patch fixes the output of the --help message. In the\n>> previous commit, only gzip and none were output, but in the attached\n>> patch, client-gzip and server-gzip are added.\n>\n> I think it would be better to write that as `[{client,server}-]gzip`,\n> especially as we add more compression algorithms, where it would\n> presumably become `[{client,server}-]METHOD` (assuming all methods are\n> supported on both the client and server side).\n>\n> I also noticed that in the docs, the `client` and `server` are marked up\n> as replaceable parameters, when they are actually literals, plus the\n> hyphen is misplaced. 
The `--checkpoint` option also has the `fast` and\n> `spread` literals marked up as parameters.\n>\n> All of these are fixed in the attached patch.\n\nI just noticed there was a superfluous [ in the SGML documentation, and\nthat the short form was missing the [{client|server}-] part. Updated\npatch attached.\n\n- ilmari", "msg_date": "Tue, 25 Jan 2022 13:42:27 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 1/22/22 12:03 AM, Robert Haas wrote:\n> I committed the base backup target patch yesterday, and today I\n> updated the remaining code in light of Michael Paquier's commit\n> 5c649fe153367cdab278738ee4aebbfd158e0546. Here is the resulting patch.\nThanks Robert, I tested against the latest PG Head and found a few issues -\n\nA) Getting a syntax error if -z is used along with -t\n\n[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/data902 -z -Xfetch\npg_basebackup: error: could not initiate base backup: ERROR:  syntax error\n\nOR\n\n[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/t2 \n--compress=server-gzip:9 -Xfetch -v -z\npg_basebackup: initiating base backup, waiting for checkpoint to complete\npg_basebackup: error: could not initiate base backup: ERROR:  syntax error\n\nB) No information about \"client-gzip\" or \"server-gzip\" is added under the \n\"--compress\" option/method of ./pg_basebackup --help.\n\nC) The -R option is silently ignored\n\n[edb@centos7tushar bin]$  ./pg_basebackup  -Z 4  -v  -t server:/tmp/pp \n-Xfetch -R\npg_basebackup: initiating base backup, waiting for checkpoint to complete\npg_basebackup: checkpoint completed\npg_basebackup: write-ahead log start point: 0/30000028 on timeline 1\npg_basebackup: write-ahead log end point: 0/30000100\npg_basebackup: base backup completed\n[edb@centos7tushar bin]$\n\nGo to the /tmp/pp folder and extract it - there is no \"standby.signal\" file \nand if we start the cluster against this 
data directory, it will not be in \nstandby mode.\n\nIf this is not supported, then I think we should throw an error.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Tue, 25 Jan 2022 21:52:12 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Jan 25, 2022 at 8:42 AM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> I just noticed there was a superfluous [ in the SGML documentation, and\n> that the short form was missing the [{client|server}-] part. Updated\n> patch attached.\n\nCommitted, thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jan 2022 15:12:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Jan 25, 2022 at 03:54:52AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> Michael, I am proposing that we remove this message as part of\n> this commit: \n> \n> - pg_log_info(\"no value specified for compression\n> level, switching to default\");\n> \n> I think most people won't want to specify a compression level, so\n> emitting a message when they don't seems too verbose. 
\n\n(Just noticed this message as I am not in CC.)\nRemoving this message is fine by me, thanks!\n--\nMichael", "msg_date": "Wed, 26 Jan 2022 10:23:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Jan 25, 2022 at 09:52:12PM +0530, tushar wrote:\n> C) -R option is silently ignoring\n> \n> go to /tmp/pp folder and extract it - there is no \"standby.signal\" file and\n> if we start cluster against this data directory,it will not be in slave\n> mode.\n\nYeah, I don't think it's good to silently ignore the option, and we\nshould not generate the file on the server-side. Rather than erroring\nin this case, you'd better add the file to the existing compressed\nfile of the base data folder on the client-side.\n\nThis makes me wonder whether we should begin tracking any open items\nfor v15.. We don't want to lose track of any issue with features\ncommitted already in the tree.\n--\nMichael", "msg_date": "Wed, 26 Jan 2022 10:29:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Jan 25, 2022 at 8:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jan 25, 2022 at 03:54:52AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> > Michael, I am proposing to that we remove this message as part of\n> > this commit:\n> >\n> > - pg_log_info(\"no value specified for compression\n> > level, switching to default\");\n> >\n> > I think most people won't want to specify a compression level, so\n> > emitting a message when they don't seems too verbose.\n>\n> (Just noticed this message as I am not in CC.)\n> Removing this message is fine by me, thanks!\n\nOh, I thought I'd CC'd you. 
I know I meant to do so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Jan 2022 12:31:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Jan 25, 2022 at 11:22 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> A)Getting syntax error if -z is used along with -t\n>\n> [edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/data902 -z -Xfetch\n> pg_basebackup: error: could not initiate base backup: ERROR: syntax error\n\nOops. The attached patch should fix this.\n\n> B)No information of \"client-gzip\" or \"server-gzip\" added under\n> \"--compress\" option/method of ./pg_basebackup --help.\n\nAlready fixed by e1f860f13459e186479319aa9f65ef184277805f.\n\n> C) -R option is silently ignoring\n\nThe attached patch should fix this, too.\n\nThanks for finding these issues.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 26 Jan 2022 15:45:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\n> It only needed trivial rebasing; I have committed it after doing that.\n\nI have updated the patches to support server compression (gzip) for\nplain format backup. 
Please find attached v4 patches.\n\nThanks,\nDipesh", "msg_date": "Thu, 27 Jan 2022 13:07:45 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 1/27/22 2:15 AM, Robert Haas wrote:\n> The attached patch should fix this, too.\nThanks, the issues seem to be fixed now.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 27 Jan 2022 17:45:16 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Thu, Jan 27, 2022 at 7:15 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> On 1/27/22 2:15 AM, Robert Haas wrote:\n> > The attached patch should fix this, too.\n> Thanks, the issues seem to be fixed now.\n\nCool. I committed that patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 Jan 2022 11:47:47 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 1/27/22 10:17 PM, Robert Haas wrote:\n> Cool. 
I committed that patch.\nThanks. Please refer to this scenario where the level is set to 0 for \nserver-gzip but the directory is still compressed:\n\n[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/11 --gzip \n--compress=0 -Xnone\nNOTICE:  all required WAL segments have been archived\n[edb@centos7tushar bin]$ ls /tmp/11\n16384.tar  backup_manifest  base.tar\n\n\n[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/10 --gzip \n--compress=server-gzip:0 -Xnone\nNOTICE:  all required WAL segments have been archived\n[edb@centos7tushar bin]$ ls /tmp/10\n16384.tar.gz  backup_manifest  base.tar.gz\n\n0 means no compression, so the directory should not be compressed if we \nmention server-gzip:0, and both of the\nabove scenarios should match?\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 27 Jan 2022 22:38:15 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Thu, Jan 27, 2022 at 12:08 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> On 1/27/22 10:17 PM, Robert Haas wrote:\n> > Cool. 
I committed that patch.\n> Thanks. Please refer to this scenario where the level is set to 0 for\n> server-gzip but the directory is still compressed\n>\n> [edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/11 --gzip\n> --compress=0 -Xnone\n> NOTICE: all required WAL segments have been archived\n> [edb@centos7tushar bin]$ ls /tmp/11\n> 16384.tar backup_manifest base.tar\n>\n>\n> [edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/10 --gzip\n> --compress=server-gzip:0 -Xnone\n> NOTICE: all required WAL segments have been archived\n> [edb@centos7tushar bin]$ ls /tmp/10\n> 16384.tar.gz backup_manifest base.tar.gz\n>\n> 0 is for no compression so the directory should not be compressed if we\n> mention server-gzip:0 and both these\n> above scenarios should match?\n\nWell what's weird here is that you are using both --gzip and also\n--compress. Those both control the same behavior, so it's a surprising\nidea to specify both. But I guess if someone does, we should make the\nsecond one fully override the first one. Here's a patch to try to do\nthat.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 27 Jan 2022 12:42:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Thu, Jan 27, 2022 at 2:37 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> I have updated the patches to support server compression (gzip) for\n> plain format backup. Please find attached v4 patches.\n\nI made a pass over these patches today and made a bunch of minor\ncorrections. New version attached. The two biggest things I changed\nare (1) s/gzip_extractor/gzip_decompressor/, because I feel like you\nextract an archive like a tarfile, but that is not what is happening\nhere, this is not an archive, and (2) I took a few bits out of the\ntest case that didn't seem to be necessary. 
There wasn't any reason\nthat I could see why testing for PG_VERSION needed to be skipped when\nthe compression method is 'none', so my first thought was to just take\nout the 'if' statement around that, but then after more thought that\ntest and the one for pg_verifybackup are certainly going to fail if\nthose files are not present, so why have an extra test? It might make\nsense if we were only conditionally able to run pg_verifybackup and\nwanted to have some test coverage even when we can't, but that's not\nthe case here, so I see no point.\n\nI studied this a bit to see whether I needed to make any adjustments\nalong the lines of 4f0bcc735038e96404fae59aa16ef9beaf6bb0aa in order\nfor this to work on msys. I think I don't, because 002_algorithm.pl\nand 003_corruption.pl both pass $backup_path, not $real_backup_path,\nto command_ok -- and I think something inside there does the\ntranslation, which is weird, but we might as well be consistent.\n008_untar.pl and 4f0bcc735038e96404fae59aa16ef9beaf6bb0aa needed to do\nsomething different because --target server:X confused the msys magic,\nbut I think that shouldn't be an issue for this patch. However, I\nmight be wrong.\n\nBarring objections or problems, I plan to commit this version\ntomorrow. I'd do it today, but I have plans for tonight that are\nincompatible with discovering that the build farm hates this ....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 27 Jan 2022 14:13:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\n> I made a pass over these patches today and made a bunch of minor\n> corrections. New version attached. 
The two biggest things I changed\n> are (1) s/gzip_extractor/gzip_compressor/, because I feel like you\n> extract an archive like a tarfile, but that is not what is happening\n> here, this is not an archive and (2) I took a few bits of out of the\n> test case that didn't seem to be necessary. There wasn't any reason\n> that I could see why testing for PG_VERSION needed to be skipped when\n> the compression method is 'none', so my first thought was to just take\n> out the 'if' statement around that, but then after more thought that\n> test and the one for pg_verifybackup are certainly going to fail if\n> those files are not present, so why have an extra test? It might make\n> sense if we were only conditionally able to run pg_verifybackup and\n> wanted to have some test coverage even when we can't, but that's not\n> the case here, so I see no point.\n\nThanks. This makes sense.\n\n+#ifdef HAVE_LIBZ\n+ /*\n+ * If the user has requested a server compressed archive along with\narchive\n+ * extraction at client then we need to decompress it.\n+ */\n+ if (format == 'p' && compressmethod == COMPRESSION_GZIP &&\n+ compressloc == COMPRESS_LOCATION_SERVER)\n+ streamer = bbstreamer_gzip_decompressor_new(streamer);\n+#endif\n\nI think it is not required to have HAVE_LIBZ check in pg_basebackup.c\nwhile creating a new gzip writer/decompressor. This check is already\nin place in bbstreamer_gzip_writer_new() and\nbbstreamer_gzip_decompressor_new()\nand it throws an error in case the build does not have required library\nsupport. I have removed this check from pg_basebackup.c and updated\na delta patch. The patch can be applied on v5 patch.\n\nThanks,\nDipesh", "msg_date": "Fri, 28 Jan 2022 14:24:38 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 1/27/22 11:12 PM, Robert Haas wrote:\n> Well what's weird here is that you are using both --gzip and also\n> --compress. 
Those both control the same behavior, so it's a surprising\n> idea to specify both. But I guess if someone does, we should make the\n> second one fully override the first one. Here's a patch to try to do\n> that.\nright, the current behavior was  -\n\n[edb@centos7tushar bin]$ ./pg_basebackup  -t server:/tmp/y101 --gzip -Z \nnone  -Xnone\npg_basebackup: error: cannot use compression level with method none\nTry \"pg_basebackup --help\" for more information.\n\nand even this was not matching with PG v14 behavior too\ne.g\n  ./pg_basebackup -Ft -z -Z none  -D /tmp/test1  ( working in PG v14 but \nthrowing above error on PG HEAD)\n\nand somewhere we were breaking the backward compatibility.\n\nnow with your patch -this seems working fine\n\n[edb@centos7tushar bin]$ ./pg_basebackup  -t server:/tmp/y101 --gzip*-Z \nnone*  -Xnone\nNOTICE:  WAL archiving is not enabled; you must ensure that all required \nWAL segments are copied through other means to complete the backup\n[edb@centos7tushar bin]$ ls /tmp/y101\nbackup_manifest *base.tar*\n\nOR\n\n[edb@centos7tushar bin]$  ./pg_basebackup  -t server:/tmp/y0p -Z none  \n-Xfetch *-z*\n[edb@centos7tushar bin]$ ls /tmp/y0p\nbackup_manifest *base.tar.gz*\n\nbut what about server-gzip:0? should it allow compressing the directory?\n\n[edb@centos7tushar bin]$  ./pg_basebackup  -t server:/tmp/1 \n--compress=server-gzip:0  -Xfetch\n[edb@centos7tushar bin]$ ls /tmp/1\nbackup_manifest  base.tar.gz\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n\n\nOn 1/27/22 11:12 PM, Robert Haas wrote:\n\n\nWell what's weird here is that you are using both --gzip and also\n--compress. Those both control the same behavior, so it's a surprising\nidea to specify both. But I guess if someone does, we should make the\nsecond one fully override the first one. 
Here's a patch to try to do\nthat.\n\n right, the current behavior was  -\n\n [edb@centos7tushar bin]$ ./pg_basebackup  -t server:/tmp/y101 --gzip\n -Z none  -Xnone\n pg_basebackup: error: cannot use compression level with method none\n Try \"pg_basebackup --help\" for more information.\n\n and even this was not matching with PG v14 behavior too\n e.g\n  ./pg_basebackup -Ft -z -Z none  -D /tmp/test1  ( working in PG v14\n but throwing above error on PG HEAD)\n\n and somewhere we were breaking the backward compatibility.\n\n now with your patch -this seems working fine\n\n [edb@centos7tushar bin]$ ./pg_basebackup  -t server:/tmp/y101 --gzip\n -Z none  -Xnone\n NOTICE:  WAL archiving is not enabled; you must ensure that all\n required WAL segments are copied through other means to complete the\n backup\n [edb@centos7tushar bin]$ ls /tmp/y101\n backup_manifest  base.tar\n\n OR\n\n [edb@centos7tushar bin]$  ./pg_basebackup  -t server:/tmp/y0p -Z\n none  -Xfetch -z\n [edb@centos7tushar bin]$ ls /tmp/y0p\n backup_manifest  base.tar.gz\n\n but what about server-gzip:0? should it allow compressing the\n directory?\n\n [edb@centos7tushar bin]$  ./pg_basebackup  -t server:/tmp/1\n --compress=server-gzip:0  -Xfetch\n [edb@centos7tushar bin]$ ls /tmp/1\n backup_manifest  base.tar.gz\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 28 Jan 2022 16:15:41 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Jan 28, 2022 at 3:54 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> Thanks. 
This makes sense.\n>\n> +#ifdef HAVE_LIBZ\n> + /*\n> + * If the user has requested a server compressed archive along with archive\n> + * extraction at client then we need to decompress it.\n> + */\n> + if (format == 'p' && compressmethod == COMPRESSION_GZIP &&\n> + compressloc == COMPRESS_LOCATION_SERVER)\n> + streamer = bbstreamer_gzip_decompressor_new(streamer);\n> +#endif\n>\n> I think it is not required to have HAVE_LIBZ check in pg_basebackup.c\n> while creating a new gzip writer/decompressor. This check is already\n> in place in bbstreamer_gzip_writer_new() and bbstreamer_gzip_decompressor_new()\n> and it throws an error in case the build does not have required library\n> support. I have removed this check from pg_basebackup.c and updated\n> a delta patch. The patch can be applied on v5 patch.\n\nRight, makes sense. Committed with that change, plus I realized the\nskip count in the test case file was wrong after the changes I made\nyesterday, so I fixed that as well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jan 2022 08:42:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\n\nI have attached the latest rebased version of the LZ4 server-side\ncompression\npatch on the recent commits. This patch also introduces the compression\nlevel\nand adds a tap test.\n\nAlso, while adding the lz4 case in the pg_verifybackup/t/008_untar.pl, I\nfound\nan unused variable {have_zlib}. 
I have attached a cleanup patch for that as\nwell.\n\nPlease review and let me know your thoughts.\n\nRegards,\nJeevan Ladhe", "msg_date": "Fri, 28 Jan 2022 23:18:15 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Jan 28, 2022 at 12:48 PM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> I have attached the latest rebased version of the LZ4 server-side compression\n> patch on the recent commits. This patch also introduces the compression level\n> and adds a tap test.\n\nIn view of this morning's commit of\nd45099425eb19e420433c9d81d354fe585f4dbd6 I think the threshold for\ncommitting this patch has gone up. We need to make it support\ndecompression with LZ4 on the client side, as we now have for gzip.\n\nOther comments:\n\n- Even if we were going to support LZ4 only on the server side, surely\nit's not right to refuse --compress lz4 and --compress client-lz4 at\nthe parsing stage. I don't even think the message you added to main()\nis reachable.\n\n- In the new test case you set decompress_flags but according to the\ndocumentation I have here, -m is for multiple files (and so should not\nbe needed here) and -d is for decompression (which is what we want\nhere). So I'm confused why this is like this.\n\nOther than that this seems like it's in pretty good shape.\n\n> Also, while adding the lz4 case in the pg_verifybackup/t/008_untar.pl, I found\n> an unused variable {have_zlib}. 
I have attached a cleanup patch for that as well.\n\nThis part seems clearly correct, so I have committed it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jan 2022 14:49:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Sat, Jan 29, 2022 at 1:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Jan 28, 2022 at 12:48 PM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:\n> > I have attached the latest rebased version of the LZ4 server-side\n> compression\n> > patch on the recent commits. This patch also introduces the compression\n> level\n> > and adds a tap test.\n>\n> In view of this morning's commit of\n> d45099425eb19e420433c9d81d354fe585f4dbd6 I think the threshold for\n> committing this patch has gone up. We need to make it support\n> decompression with LZ4 on the client side, as we now have for gzip.\n>\n\nFair enough. Makes sense.\n\n\n> - In the new test case you set decompress_flags but according to the\n> documentation I have here, -m is for multiple files (and so should not\n> be needed here) and -d is for decompression (which is what we want\n> here). So I'm confused why this is like this.\n>\n>\n'-d' is the default when we have a .lz4 extension, which is true in our\ncase,\nhence eliminated that. About the '-m' introduction: without any option, or\neven\nafter providing the explicit '-d' option, the lz4 command was weirdly throwing\nthe decompressed tar on the console; that's when in my lz4 man version I saw\nthese 2 lines and tried adding the '-m' option, and it worked:\n\n\" It is considered bad practice to rely on implicit output in scripts.\n because the script's environment may change. Always use explicit\n output in scripts. -c ensures that output will be stdout. 
Conversely,\n providing a destination name, or using -m ensures that the output will\n be either the specified name, or filename.lz4 respectively.\"\n\nand\n\n\"Similarly, lz4 -m -d can decompress multiple *.lz4 files.\"\n\n\n> This part seems clearly correct, so I have committed it.\n\n\nThanks for pushing it.\n\nRegards,\nJeevan Ladhe\n\nOn Sat, Jan 29, 2022 at 1:20 AM Robert Haas <robertmhaas@gmail.com> wrote:On Fri, Jan 28, 2022 at 12:48 PM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> I have attached the latest rebased version of the LZ4 server-side compression\n> patch on the recent commits. This patch also introduces the compression level\n> and adds a tap test.\n\nIn view of this morning's commit of\nd45099425eb19e420433c9d81d354fe585f4dbd6 I think the threshold for\ncommitting this patch has gone up. We need to make it support\ndecompression with LZ4 on the client side, as we now have for gzip.Fair enough. Makes sense. \n- In the new test case you set decompress_flags but according to the\ndocumentation I have here, -m is for multiple files (and so should not\nbe needed here) and -d is for decompression (which is what we want\nhere). So I'm confused why this is like this.'-d' is the default when we have a .lz4 extension, which is true in our case,hence elimininated that. About, '-m' introduction, without any option, or evenafter providing the explicit '-d' option, weirdly lz4 command was throwingdecompressed tar on the console, that's when in my lz4 man version I sawthese 2 lines and tried adding '-m' option, and it worked:\" It is considered bad practice to rely on implicit output in scripts. because the script´s environment may change. Always use explicit output in scripts. -c ensures that output will be stdout. 
Conversely, providing a destination name, or using -m ensures that the output will be either the specified name, or filename.lz4 respectively.\"and\"Similarly, lz4 -m -d can decompress multiple *.lz4 files.\" \nThis part seems clearly correct, so I have committed it.Thanks for pushing it.Regards,Jeevan Ladhe", "msg_date": "Sat, 29 Jan 2022 09:08:51 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\n\nI had an offline discussion with Dipesh, and he will be working on the\nlz4 client side decompression part.\n\nPlease find the attached patch with the following changes:\n\n- Even if we were going to support LZ4 only on the server side, surely\n> it's not right to refuse --compress lz4 and --compress client-lz4 at\n> the parsing stage. I don't even think the message you added to main()\n> is reachable.\n>\n\nI think you are right, I have removed the message and again introduced\nthe Assert() back.\n\n- In the new test case you set decompress_flags but according to the\n> documentation I have here, -m is for multiple files (and so should not\n> be needed here) and -d is for decompression (which is what we want\n> here). So I'm confused why this is like this.\n>\n\nAs explained earlier in the tap test the 'lz4 -d base.tar.lz4' command was\nthrowing the decompression to stdout. Now, I have removed the '-m',\nadded '-d' for decompression, and also added the target file explicitly in\nthe command.\n\nRegards,\nJeevan Ladhe", "msg_date": "Mon, 31 Jan 2022 16:40:25 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Jan 31, 2022 at 6:11 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> I had an offline discussion with Dipesh, and he will be working on the\n> lz4 client side decompression part.\n\nOK. 
I guess we should also be thinking about client-side LZ4\ncompression. It's probably best to focus on that before worrying about\nZSTD, even though ZSTD would be really cool to have.\n\n>> - In the new test case you set decompress_flags but according to the\n>> documentation I have here, -m is for multiple files (and so should not\n>> be needed here) and -d is for decompression (which is what we want\n>> here). So I'm confused why this is like this.\n>\n> As explained earlier in the tap test the 'lz4 -d base.tar.lz4' command was\n> throwing the decompression to stdout. Now, I have removed the '-m',\n> added '-d' for decompression, and also added the target file explicitly in\n> the command.\n\nI don't see the behavior you describe here. For me:\n\n[rhaas ~]$ lz4 q.lz4\nDecoding file q\nq.lz4 : decoded 3785 bytes\n[rhaas ~]$ rm q\n[rhaas ~]$ lz4 -m q.lz4\n[rhaas ~]$ ls q\nq\n[rhaas ~]$ rm q\n[rhaas ~]$ lz4 -d q.lz4\nDecoding file q\nq.lz4 : decoded 3785 bytes\n[rhaas ~]$ rm q\n[rhaas ~]$ lz4 -d -m q.lz4\n[rhaas ~]$ ls q\nq\n\nIn other words, on my system, the file gets decompressed with or\nwithout -d, and with or without -m. The only difference I see is that\nusing -m makes it happen silently, without printing anything on the\nterminal. Anyway, I wasn't saying that using -m was necessarily wrong,\njust that I didn't understand why you had it like that. Now that I'm\nmore informed, I recommend that we use -d -m, the former to be\nexplicit about wanting to decompress and the latter because it either\nmakes it less noisy (on my system) or makes it work at all (on yours).\nIt's surprising that the command behavior would be different like that\non different systems, but it is what it is. 
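As an aside, the point raised earlier in the thread, that a compression level of 0 still produces a valid compressed container rather than no container at all, can be illustrated with Python's standard library (gzip here, since lz4 has no stdlib binding; this is just an illustrative sketch, not anything from the patches):

```python
import gzip

data = b"example archive contents " * 100

# Level 0 means "no size reduction", not "no container": the result is
# still a well-formed gzip stream, just built from stored blocks.
stored = gzip.compress(data, compresslevel=0)

assert stored[:2] == b"\x1f\x8b"        # gzip magic bytes are present
assert gzip.decompress(stored) == data  # and the stream round-trips
assert len(stored) > len(data)          # stored blocks add framing overhead
```

That is consistent with why --compress=server-gzip:0 still yields base.tar.gz: the file is a real gzip file, it just isn't any smaller.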
I think any set of flags\nwe put here is better than adding more logic in perl, as it keeps\nthings simpler.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 31 Jan 2022 08:38:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "> I think you are right, I have removed the message and again introduced\n> the Assert() back.\n>\nIn my previous version of the patch this was a problem: basically, there should\nnot be an assert, as the code is still reachable whether it is server-lz4 or\nclient-lz4.\nI removed the assert and added the level range check there, similar to gzip.\n\nRegards,\nJeevan Ladhe", "msg_date": "Mon, 31 Jan 2022 19:54:28 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Jan 18, 2022 at 1:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> 0001 adds \"server\" and \"blackhole\" as backup targets. It now has some\n> tests. This might be more or less ready to ship, unless somebody else\n> sees a problem, or I find one.\n\nI played around with this a bit and it seems quite easy to extend this\nfurther. So please find attached a couple more patches to generalize\nthis mechanism.\n\n0001 adds an extensibility framework for backup targets. The idea is\nthat an extension loaded via shared_preload_libraries can call\nBaseBackupAddTarget() to define a new base backup target, which the\nuser can then access via pg_basebackup --target TARGET_NAME, or if\nthey want to pass a detail string, pg_basebackup --target\nTARGET_NAME:DETAIL. There might be slightly better ways of hooking\nthis into the system. I'm not unhappy with this approach, but there\nmight be a better idea out there.\n\n0002 adds an example contrib module called basebackup_to_shell. 
The\nsystem administrator can set basebackup_to_shell.command='SOMETHING'.\nA backup directed to the 'shell' target will cause the server to\nexecute the configured command once per generated archive, and once\nfor the backup_manifest, if any. When executing the command, %f gets\nreplaced with the archive filename (e.g. base.tar) and %d gets\nreplaced with the detail. The actual contents of the file are passed\nto the command's standard input, and it can then do whatever it likes\nwith that data. Clearly, this is not state of the art; for instance,\nif what you really want is to upload the backup files someplace via\nHTTP, using this to run 'curl' is probably not so good of an idea as\nusing an extension module that links with libcurl. That would likely\nlead to better error checking, better performance, nicer\nconfiguration, and just generally fewer things that can go wrong. On\nthe other hand, writing an integration in C is kind of tricky, and\nthis thing is quite easy to use -- and it does work.\n\nThere are a couple of things to be concerned about with 0002 from a\nsecurity perspective. First, in a backend environment, we have a\nfunction to spawn a subprocess via popen(), namely OpenPipeStream(),\nbut there is no function to spawn a subprocess with execve() and end\nup with a socket connected to its standard input. And that means that\nwhatever command the administrator configures is being interpreted by\nthe shell, which is a potential problem given that we're interpolating\nthe target detail string supplied by the user, who must have at least\nreplication privileges but need not be the superuser. I chose to\nhandle this by allowing the target detail to contain only alphanumeric\ncharacters. Refinement is likely possible, but whether the effort is\nworthwhile seems questionable. Second, what if the superuser wants to\nallow the use of this module to only some of the users who have\nreplication privileges? 
That seems a bit unlikely but it's possible,\nso I added a GUC basebackup_to_shell.required_role. If set, the\nfunctionality is only usable by members of the named role. If unset,\nanyone with replication privilege can use it. I guess someone could\ncriticize this as defaulting to the least secure setting, but\nconsidering that you have to have replication privileges to use this\nat all, I don't find that argument much to get excited about.\n\nI have to say that I'm incredibly happy with how easy these patches\nwere to write. I think this is going to make adding new base backup\ntargets as accessible as we can realistically hope to make it. There\nis some boilerplate code, as an examination of the patches will\nreveal, but it's not a lot, and at least IMHO it's pretty\nstraightforward. Granted, coding up a new base backup target is\nsomething only experienced C hackers are likely to do, but the fact\nthat I was able to throw this together so quickly suggests to me that\nI've got the design basically right, and that anyone who does want to\nplug into the new mechanism shouldn't have too much trouble doing so.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Feb 2022 10:55:53 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "At 2022-02-02 10:55:53 -0500, robertmhaas@gmail.com wrote:\n>\n> On Tue, Jan 18, 2022 at 1:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > 0001 adds \"server\" and \"blackhole\" as backup targets. It now has some\n> > tests. This might be more or less ready to ship, unless somebody else\n> > sees a problem, or I find one.\n> \n> I played around with this a bit and it seems quite easy to extend this\n> further. So please find attached a couple more patches to generalize\n> this mechanism.\n\nIt took me a while to assimilate these patches, including the backup\ntargets one, which I hadn't looked at before. 
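As a rough illustration of the %f/%d substitution and the alphanumeric-only detail restriction described for basebackup_to_shell above (the real module is written in C; the function name and behavior details here are an illustrative sketch, not the module's actual API):

```python
import re

def build_shell_command(template, filename, detail):
    """Expand %f (archive filename) and %d (target detail) in a command
    template. Detail strings that are not purely alphanumeric are
    rejected, since the resulting command is interpreted by a shell."""
    if detail and not re.fullmatch(r"[A-Za-z0-9]+", detail):
        raise ValueError("target detail must contain only alphanumeric characters")
    out = []
    i = 0
    while i < len(template):
        if template[i] == "%" and i + 1 < len(template):
            nxt = template[i + 1]
            if nxt == "f":
                out.append(filename)
                i += 2
                continue
            if nxt == "d":
                out.append(detail or "")
                i += 2
                continue
        out.append(template[i])
        i += 1
    return "".join(out)
```

For example, build_shell_command("cat > /backups/%d/%f", "base.tar", "mybackup") yields "cat > /backups/mybackup/base.tar", while a detail like "a;b" is refused before any shell ever sees it.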
Now that I've wrapped my\nhead around how to put the pieces together, I really like the idea. As\nyou say, writing non-trivial integrations in C will take some effort,\nbut it seems worthwhile. It's also nice that one can continue to use\npg_basebackup to trigger the backups and see progress information.\n\n> Granted, coding up a new base backup target is\n> something only experienced C hackers are likely to do, but the fact\n> that I was able to throw this together so quickly suggests to me that\n> I've got the design basically right, and that anyone who does want to\n> plug into the new mechanism shouldn't have too much trouble doing so.\n> \n> Thoughts?\n\nYes, it looks simple to follow the example set by basebackup_to_shell to\nwrite a custom target. The complexity will be in whatever we need to do\nto store/forward the backup data, rather than in obtaining the data in\nthe first place, which is exactly as it should be.\n\nThanks!\n\n-- Abhijit\n\n\n", "msg_date": "Wed, 9 Feb 2022 19:11:27 +0530", "msg_from": "Abhijit Menon-Sen <ams@toroid.org>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\n> On Mon, Jan 31, 2022 at 4:41 PM Jeevan Ladhe <\njeevan.ladhe@enterprisedb.com> wrote:\n\n> Hi Robert,\n>\n> I had an offline discussion with Dipesh, and he will be working on the\n> lz4 client side decompression part.\n>\n\nPlease find the attached patch to support client side compression\nand decompression using lz4.\n\nAdded a new lz4 bbstreamer to compress the archive chunks at the\nclient if the user has specified the --compress=client-lz4:[LEVEL] option\nin pg_basebackup. 
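The chained-streamer pattern being described can be sketched in miniature with Python's standard library (zlib stands in for lz4, which is not in the stdlib; the class names are illustrative only and do not reflect the actual bbstreamer API):

```python
import zlib

class CollectStreamer:
    """Terminal streamer: simply collects whatever is forwarded to it."""
    def __init__(self):
        self.buf = bytearray()
        self.done = False

    def content(self, chunk):
        self.buf.extend(chunk)

    def finalize(self):
        self.done = True

class CompressorStreamer:
    """Accepts archive chunks, compresses them incrementally, and
    forwards the compressed bytes to the next streamer in the chain."""
    def __init__(self, next_streamer, level=6):
        self.next = next_streamer
        self.compressor = zlib.compressobj(level)

    def content(self, chunk):
        out = self.compressor.compress(chunk)
        if out:
            self.next.content(out)

    def finalize(self):
        # Flush any buffered compressed data, then finalize downstream.
        self.next.content(self.compressor.flush())
        self.next.finalize()
```

The same shape works in reverse for a decompressor streamer placed in front of a tar extractor.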
The new streamer accepts archive chunks,\ncompresses them, and forwards them to the plain-writer.\n\nSimilarly, if a user has specified a server-compressed lz4 archive\nwith plain format (-F p) backup, then it requires decompressing\nthe compressed archive chunks before forwarding them to the tar extractor.\nAdded a new bbstreamer to decompress the compressed archive\nand forward it to the tar extractor.\n\nNote: This patch can be applied on Jeevan Ladhe's v12 patch\nfor lz4 compression.\n\nThanks,\nDipesh", "msg_date": "Thu, 10 Feb 2022 18:10:52 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Thanks for the patch, Dipesh.\nWith a quick look at the patch I have the following observations:\n\n----------------------------------------------------------\nIn bbstreamer_lz4_compressor_new(), I think this alignment is not needed\non client side:\n\n    /* Align the output buffer length. */\n    compressed_bound += compressed_bound + BLCKSZ - (compressed_bound %\nBLCKSZ);\n----------------------------------------------------------\n\nbbstreamer_lz4_compressor_content(), avail_in and len variables both are\nnot changed. 
I think we can simply change the len to avail_in in the\nargument list.\n----------------------------------------------------------\n\nComment:\n+ * Update the offset and capacity of output buffer based on based\non number\n+ * of bytes written to output buffer.\n\nI think it is thinko:\n\n+ * Update the offset and capacity of output buffer based on number\nof\n+ * bytes written to output buffer.\n----------------------------------------------------------\n\nIndentation:\n\n+ if ((mystreamer->base.bbs_buffer.maxlen -\nmystreamer->bytes_written) <=\n+ footer_bound)\n\n----------------------------------------------------------\nI think similar to bbstreamer_lz4_compressor_content() in\nbbstreamer_lz4_decompressor_content() we can change len to avail_in.\n\n\nRegards,\nJeevan Ladhe\n\nOn Thu, 10 Feb 2022 at 18:11, Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n\n> Hi,\n>\n> > On Mon, Jan 31, 2022 at 4:41 PM Jeevan Ladhe <\n> jeevan.ladhe@enterprisedb.com> wrote:\n>\n>> Hi Robert,\n>>\n>> I had an offline discussion with Dipesh, and he will be working on the\n>> lz4 client side decompression part.\n>>\n>\n> Please find the attached patch to support client side compression\n> and decompression using lz4.\n>\n> Added a new lz4 bbstreamer to compress the archive chunks at\n> client if user has specified --compress=clinet-lz4:[LEVEL] option\n> in pg_basebackup. 
The new streamer accepts archive chunks\n> compresses it and forwards it to plain-writer.\n>\n> Similarly, If a user has specified a server compressed lz4 archive\n> with plain format (-F p) backup then it requires decompressing\n> the compressed archive chunks before forwarding it to tar extractor.\n> Added a new bbstreamer to decompress the compressed archive\n> and forward it to tar extractor.\n>\n> Note: This patch can be applied on Jeevan Ladhe's v12 patch\n> for lz4 compression.\n>\n> Thanks,\n> Dipesh\n>\n", "msg_date": "Thu, 10 Feb 2022 20:01:50 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nThanks for the feedback, I have incorporated the suggestions\nand updated a new patch. 
PFA v2 patch.\n>\n> > I think similar to bbstreamer_lz4_compressor_content() in\n> > bbstreamer_lz4_decompressor_content() we can change len to avail_in.\n>\n> In bbstreamer_lz4_decompressor_content(), we are modifying avail_in\n> based on the number of bytes decompressed in each iteration. I think\n> we cannot replace it with \"len\" here.\n>\n> Jeevan, Your v12 patch does not apply on HEAD, it requires a\n> rebase. I have applied it on commit\n> 400fc6b6487ddf16aa82c9d76e5cfbe64d94f660\n> to validate my v2 patch.\n>\n> Thanks,\n> Dipesh\n>", "msg_date": "Fri, 11 Feb 2022 16:27:51 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "> Sure, please find the rebased patch attached.\n\nThanks, I have validated v2 patch on top of rebased patch.\n\nThanks,\nDipesh\n\n> Sure, please find the rebased patch attached.Thanks, I have validated v2 patch on top of rebased patch.Thanks,Dipesh", "msg_date": "Fri, 11 Feb 2022 17:50:30 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Feb 11, 2022 at 5:58 AM Jeevan Ladhe <jeevanladhe.os@gmail.com> wrote:\n> >Jeevan, Your v12 patch does not apply on HEAD, it requires a\n> rebase.\n>\n> Sure, please find the rebased patch attached.\n\nIt's Friday today, but I'm feeling brave, and it's still morning here,\nso ... 
committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 08:55:23 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Feb 11, 2022 at 7:20 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> > Sure, please find the rebased patch attached.\n>\n> Thanks, I have validated v2 patch on top of rebased patch.\n\nI'm still feeling brave, so I committed this too after fixing a few\nthings. In the process I noticed that we don't have support for LZ4\ncompression of streamed WAL (cf. CreateWalTarMethod). It would be good\nto fix that. I'm not quite sure whether\nhttp://postgr.es/m/pm1bMV6zZh9_4tUgCjSVMLxDX4cnBqCDGTmdGlvBLHPNyXbN18x_k00eyjkCCJGEajWgya2tQLUDpvb2iIwlD22IcUIrIt9WnMtssNh-F9k=@pm.me\nis basically what we need or whether something else is required.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 10:01:07 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Thanks Robert for the bravity :-)\n\nRegards,\nJeevan Ladhe\n\n\nOn Fri, 11 Feb 2022, 20:31 Robert Haas, <robertmhaas@gmail.com> wrote:\n\n> On Fri, Feb 11, 2022 at 7:20 AM Dipesh Pandit <dipesh.pandit@gmail.com>\n> wrote:\n> > > Sure, please find the rebased patch attached.\n> >\n> > Thanks, I have validated v2 patch on top of rebased patch.\n>\n> I'm still feeling brave, so I committed this too after fixing a few\n> things. In the process I noticed that we don't have support for LZ4\n> compression of streamed WAL (cf. CreateWalTarMethod). It would be good\n> to fix that. 
I'm not quite sure whether\n>\n> http://postgr.es/m/pm1bMV6zZh9_4tUgCjSVMLxDX4cnBqCDGTmdGlvBLHPNyXbN18x_k00eyjkCCJGEajWgya2tQLUDpvb2iIwlD22IcUIrIt9WnMtssNh-F9k=@pm.me\n> is basically what we need or whether something else is required.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\nThanks Robert for the bravity :-)Regards,Jeevan LadheOn Fri, 11 Feb 2022, 20:31 Robert Haas, <robertmhaas@gmail.com> wrote:On Fri, Feb 11, 2022 at 7:20 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> > Sure, please find the rebased patch attached.\n>\n> Thanks, I have validated v2 patch on top of rebased patch.\n\nI'm still feeling brave, so I committed this too after fixing a few\nthings. In the process I noticed that we don't have support for LZ4\ncompression of streamed WAL (cf. CreateWalTarMethod). It would be good\nto fix that. I'm not quite sure whether\nhttp://postgr.es/m/pm1bMV6zZh9_4tUgCjSVMLxDX4cnBqCDGTmdGlvBLHPNyXbN18x_k00eyjkCCJGEajWgya2tQLUDpvb2iIwlD22IcUIrIt9WnMtssNh-F9k=@pm.me\nis basically what we need or whether something else is required.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 11 Feb 2022 20:35:25 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Feb 11, 2022 at 08:35:25PM +0530, Jeevan Ladhe wrote:\n> Thanks Robert for the bravity :-)\n\nFYI: there's a couple typos in the last 2 patches.\n\nI added them to my typos branch; feel free to wait until April if you'd prefer\nto see them fixed in bulk.\n\ndiff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml\nindex 53aa40dcd19..649b91208f3 100644\n--- a/doc/src/sgml/ref/pg_basebackup.sgml\n+++ b/doc/src/sgml/ref/pg_basebackup.sgml\n@@ -419,7 +419,7 @@ PostgreSQL documentation\n <para>\n The compression method can be set to <literal>gzip</literal> or\n <literal>lz4</literal>, or <literal>none</literal> for no\n- compression. 
A compression level can be optionally specified, by\n+ compression. A compression level can optionally be specified, by\n appending the level number after a colon (<literal>:</literal>). If no\n level is specified, the default compression level will be used. If\n only a level is specified without mentioning an algorithm,\n@@ -440,7 +440,7 @@ PostgreSQL documentation\n <literal>-Xstream</literal>, <literal>pg_wal.tar</literal> will\n be compressed using <literal>gzip</literal> if client-side gzip\n compression is selected, but will not be compressed if server-side\n- compresion or LZ4 compresion is selected.\n+ compression or LZ4 compression is selected.\n </para>\n </listitem>\n </varlistentry>\n\n\n", "msg_date": "Fri, 11 Feb 2022 09:29:44 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Feb 11, 2022 at 10:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> FYI: there's a couple typos in the last 2 patches.\n\nHmm. OK. 
But I don't consider \"can be optionally specified\" incorrect\nor worse than \"can optionally be specified\".\n\nI do agree that spelling words correctly is a good idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 10:50:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi, Hackers.\r\nThank you for developing a great feature.\r\nThe current help message shown below does not seem to be able to specify the 'client-' or 'server-' for lz4 compression.\r\n --compress = {[{client, server}-]gzip, lz4, none}[:LEVEL]\r\n\r\nThe attached small patch fixes the help message as follows:\r\n --compress = {[{client, server}-]{gzip, lz4}, none}[:LEVEL]\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n-----Original Message-----\r\nFrom: Robert Haas <robertmhaas@gmail.com> \r\nSent: Saturday, February 12, 2022 12:50 AM\r\nTo: Justin Pryzby <pryzby@telsasoft.com>\r\nCc: Jeevan Ladhe <jeevanladhe.os@gmail.com>; Dipesh Pandit <dipesh.pandit@gmail.com>; Abhijit Menon-Sen <ams@toroid.org>; Dmitry Dolgov <9erthalion6@gmail.com>; Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>; Mark Dilger <mark.dilger@enterprisedb.com>; pgsql-hackers@postgresql.org; tushar <tushar.ahuja@enterprisedb.com>\r\nSubject: Re: refactoring basebackup.c\r\n\r\nOn Fri, Feb 11, 2022 at 10:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\r\n> FYI: there's a couple typos in the last 2 patches.\r\n\r\nHmm. OK. 
But I don't consider \"can be optionally specified\" incorrect or worse than \"can optionally be specified\".\r\n\r\nI do agree that spelling words correctly is a good idea.\r\n\r\n--\r\nRobert Haas\r\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 12 Feb 2022 06:01:15 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: refactoring basebackup.c" }, { "msg_contents": "The LZ4 patches caused new compiler warnings.\nIt's the same issue that was fixed at 71cbbbbe8 for gzip.\nI think they would've been visible in the CI environment, too.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=wrasse&dt=2022-02-12%2005%3A08%3A48&stg=make\n\"/export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/backend/replication/basebackup_lz4.c\", line 87: warning: Function has no return statement : bbsink_lz4_new\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=bowerbird&dt=2022-02-12%2013%3A11%3A20&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hamerkop&dt=2022-02-12%2010%3A04%3A08&stg=make\nwarning C4715: 'bbsink_lz4_new': not all control paths return a value\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=anole&dt=2022-02-12%2005%3A46%3A44&stg=make\n\"basebackup_lz4.c\", line 87: warning #2940-D: missing return statement at end of non-void function \"bbsink_lz4_new\"\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=wrasse&dt=2022-02-12%2005%3A08%3A48&stg=make\n\"/export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/backend/replication/basebackup_lz4.c\", line 87: warning: Function has no return statement : bbsink_lz4_new\n\n\n", "msg_date": "Sat, 12 Feb 2022 15:12:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nOn 2022-02-12 15:12:21 -0600, Justin Pryzby wrote:\n> I think they 
would've been visible in the CI environment, too.\n\nYea, but only if you looked carefully enough. The postgres github repo has CI\nenabled, and it's green. But the windows build step does show the warnings:\n\nhttps://cirrus-ci.com/task/6185407539838976?logs=build#L2066\nhttps://cirrus-ci.com/github/postgres/postgres/\n\n[19:08:09.086] c:\\cirrus\\src\\backend\\replication\\basebackup_lz4.c(87): warning C4715: 'bbsink_lz4_new': not all control paths return a value [c:\\cirrus\\postgres.vcxproj]\n\nProbably worth scripting something to make the windows task error out if there\nhad been warnings, but only after running the tests.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 13:23:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Sat, Feb 12, 2022 at 1:01 AM Shinoda, Noriyoshi (PN Japan FSIP)\n<noriyoshi.shinoda@hpe.com> wrote:\n> Thank you for developing a great feature.\n> The current help message shown below does not seem to be able to specify the 'client-' or 'server-' for lz4 compression.\n> --compress = {[{client, server}-]gzip, lz4, none}[:LEVEL]\n>\n> The attached small patch fixes the help message as follows:\n> --compress = {[{client, server}-]{gzip, lz4}, none}[:LEVEL]\n\nHmm. After studying this a bit more closely, I think this might\nactually need a bit more revision than what you propose here. 
In most\nplaces, we use vertical bars to separate alternatives:\n\n -X, --wal-method=none|fetch|stream\n\nBut here, we're using commas in some places and the word \"or\" in one\ncase as well:\n\n -Z, --compress={[{client,server}-]gzip,lz4,none}[:LEVEL] or [LEVEL]\n\nWe're also not consistently using braces for grouping, which makes the\norder of operations a bit unclear, and it makes no sense to put\nbrackets around LEVEL when it's the only thing that's part of that\nalternative.\n\nA more consistent way of writing the supported syntax would be like this:\n\n -Z, --compress={[{client|server}-]{gzip|lz4}}[:LEVEL]|LEVEL|none}\n\nI would be somewhat inclined to leave the level-only variant\nundocumented and instead write it like this:\n\n -Z, --compress={[{client|server}-]{gzip|lz4}}[:LEVEL]|none}\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 11:00:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nPlease find the attached updated version of patch for ZSTD server side\ncompression.\n\nThis patch has following changes:\n\n- Fixes the issue Tushar reported[1].\n- Adds a tap test.\n- Makes document changes related to zstd.\n- Updates pg_basebackup help for pg_basebackup. 
Here I have chosen the\nsuggestion by Robert upthread (as given below):\n\n>> I would be somewhat inclined to leave the level-only variant\n>> undocumented and instead write it like this:\n>> -Z, --compress={[{client|server}-]{gzip|lz4}}[:LEVEL]|none}\n\n- pg_indent on basebackup_zstd.c.\n\nThanks Tushar, for offline help for testing the patch.\n\n[1]\nhttps://www.postgresql.org/message-id/6c3f1558-1e56-9946-78a2-c59340da1dbf%40enterprisedb.com\n\nRegards,\nJeevan Ladhe\n\nOn Mon, 14 Feb 2022 at 21:30, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sat, Feb 12, 2022 at 1:01 AM Shinoda, Noriyoshi (PN Japan FSIP)\n> <noriyoshi.shinoda@hpe.com> wrote:\n> > Thank you for developing a great feature.\n> > The current help message shown below does not seem to be able to specify\n> the 'client-' or 'server-' for lz4 compression.\n> > --compress = {[{client, server}-]gzip, lz4, none}[:LEVEL]\n> >\n> > The attached small patch fixes the help message as follows:\n> > --compress = {[{client, server}-]{gzip, lz4}, none}[:LEVEL]\n>\n> Hmm. After studying this a bit more closely, I think this might\n> actually need a bit more revision than what you propose here. 
In most\n> places, we use vertical bars to separate alternatives:\n>\n> -X, --wal-method=none|fetch|stream\n>\n> But here, we're using commas in some places and the word \"or\" in one\n> case as well:\n>\n> -Z, --compress={[{client,server}-]gzip,lz4,none}[:LEVEL] or [LEVEL]\n>\n> We're also not consistently using braces for grouping, which makes the\n> order of operations a bit unclear, and it makes no sense to put\n> brackets around LEVEL when it's the only thing that's part of that\n> alternative.\n>\n> A more consistent way of writing the supported syntax would be like this:\n>\n> -Z, --compress={[{client|server}-]{gzip|lz4}}[:LEVEL]|LEVEL|none}\n>\n> I would be somewhat inclined to leave the level-only variant\n> undocumented and instead write it like this:\n>\n> -Z, --compress={[{client|server}-]{gzip|lz4}}[:LEVEL]|none}\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Tue, 15 Feb 2022 18:48:41 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Feb 9, 2022 at 8:41 AM Abhijit Menon-Sen <ams@toroid.org> wrote:\n> It took me a while to assimilate these patches, including the backup\n> targets one, which I hadn't looked at before. Now that I've wrapped my\n> head around how to put the pieces together, I really like the idea. As\n> you say, writing non-trivial integrations in C will take some effort,\n> but it seems worthwhile. It's also nice that one can continue to use\n> pg_basebackup to trigger the backups and see progress information.\n\nCool. Thanks for having a look.\n\n> Yes, it looks simple to follow the example set by basebackup_to_shell to\n> write a custom target. 
The complexity will be in whatever we need to do\n> to store/forward the backup data, rather than in obtaining the data in\n> the first place, which is exactly as it should be.\n\nYeah, that's what made me really happy with how this came out.\n\nHere's v2, rebased and with documentation added.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 15 Feb 2022 11:26:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On 2/15/22 6:48 PM, Jeevan Ladhe wrote:\n> Please find the attached updated version of patch for ZSTD server side\nThanks, Jeevan, I again tested with the attached patch, and as mentioned \nthe crash is fixed now.\n\nalso, I tested with different labels with gzip V/s zstd against data \ndirectory size which is 29GB and found these results\n\n====\n./pg_basebackup  -t server:/tmp/<directory> \n--compress=server-zstd:<label>  -Xnone -n -N --no-estimate-size -v\n\n--compress=server-zstd:1 =  compress directory size is  1.3GB\n--compress=server-zstd:4 = compress  directory size is  1.3GB\n--compress=server-zstd:7 = compress  directory size is  1.2GB\n--compress=server-zstd:12 = compress directory size is 1.2GB\n====\n\n===\n./pg_basebackup  -t server:/tmp/<directooy> \n--compress=server-gzip:<label>  -Xnone -n -N --no-estimate-size -v\n\n--compress=server-gzip:1 =  compress directory size is  1.8GB\n--compress=server-gzip:4 = compress  directory size is  1.6GB\n--compress=server-gzip:9 = compress  directory size is  1.6GB\n===\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Tue, 15 Feb 2022 22:24:48 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "+++ b/configure\n\n@@ -801,6 +805,7 @@ infodir\n docdir\n oldincludedir\n includedir\n+runstatedir\n\nThere's 
superfluous changes to ./configure unrelated to the changes in\nconfigure.ac. Probably because you're using a different version of autotools,\nor a vendor's patched copy. You can remove the changes with git checkout -p or\nsimilar.\n\n+++ b/src/backend/replication/basebackup_zstd.c\n+bbsink *\n+bbsink_zstd_new(bbsink *next, int compresslevel)\n+{\n+#ifndef HAVE_LIBZSTD\n+\tereport(ERROR,\n+\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+\t\t\t errmsg(\"zstd compression is not supported by this build\")));\n+#else\n\nThis should have an return; like what's added by 71cbbbbe8 and 302612a6c.\nAlso, the parens() around errcode aren't needed since last year.\n\n+\tbbsink_zstd *sink;\n+\n+\tAssert(next != NULL);\n+\tAssert(compresslevel >= 0 && compresslevel <= 22);\n+\n+\tif (compresslevel < 0 || compresslevel > 22)\n+\t\tereport(ERROR,\n\nThis looks like dead code in assert builds.\nIf it's unreachable, it can be elog().\n\n+ * Compress the input data to the output buffer until we run out of input\n+ * data. Each time the output buffer falls below the compression bound for\n+ * the input buffer, invoke the archive_contents() method for then next sink.\n\n*the next sink ?\n\nDoes anyone plan to include this for pg15 ? If so, I think at least the WAL\ncompression should have support added too. I'd plan to rebase Michael's patch.\nhttps://www.postgresql.org/message-id/YNqWd2GSMrnqWIfx@paquier.xyz\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 15 Feb 2022 11:59:44 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd)" }, { "msg_contents": "Thanks Tushar for the testing.\n\nI further worked on ZSTD and now have implemented client side\ncompression as well. 
Attached are the patches for both server-side and\nclient-side compression.\n\nThe patch 0001 is a server-side patch, and has not changed since the\nlast patch version - v10, but, just bumping the version number.\n\nPatch 0002 is the client-side compression patch.\n\nRegards,\nJeevan Ladhe\n\nOn Tue, 15 Feb 2022 at 22:24, tushar <tushar.ahuja@enterprisedb.com> wrote:\n\n> On 2/15/22 6:48 PM, Jeevan Ladhe wrote:\n> > Please find the attached updated version of patch for ZSTD server side\n> Thanks, Jeevan, I again tested with the attached patch, and as mentioned\n> the crash is fixed now.\n>\n> also, I tested with different labels with gzip V/s zstd against data\n> directory size which is 29GB and found these results\n>\n> ====\n> ./pg_basebackup -t server:/tmp/<directory>\n> --compress=server-zstd:<label> -Xnone -n -N --no-estimate-size -v\n>\n> --compress=server-zstd:1 = compress directory size is 1.3GB\n> --compress=server-zstd:4 = compress directory size is 1.3GB\n> --compress=server-zstd:7 = compress directory size is 1.2GB\n> --compress=server-zstd:12 = compress directory size is 1.2GB\n> ====\n>\n> ===\n> ./pg_basebackup -t server:/tmp/<directooy>\n> --compress=server-gzip:<label> -Xnone -n -N --no-estimate-size -v\n>\n> --compress=server-gzip:1 = compress directory size is 1.8GB\n> --compress=server-gzip:4 = compress directory size is 1.6GB\n> --compress=server-gzip:9 = compress directory size is 1.6GB\n> ===\n>\n> --\n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company\n>\n>", "msg_date": "Tue, 15 Feb 2022 23:33:50 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Feb 15, 2022 at 12:59 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> There's superfluous changes to ./configure unrelated to the changes in\n> configure.ac. 
Probably because you're using a different version of autotools,\n> or a vendor's patched copy. You can remove the changes with git checkout -p or\n> similar.\n\nI noticed this already and fixed it in the version of the patch I\nposted on the other thread.\n\n> +++ b/src/backend/replication/basebackup_zstd.c\n> +bbsink *\n> +bbsink_zstd_new(bbsink *next, int compresslevel)\n> +{\n> +#ifndef HAVE_LIBZSTD\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"zstd compression is not supported by this build\")));\n> +#else\n>\n> This should have an return; like what's added by 71cbbbbe8 and 302612a6c.\n> Also, the parens() around errcode aren't needed since last year.\n\nThe parens are still acceptable style, though. The return I guess is needed.\n\n> + bbsink_zstd *sink;\n> +\n> + Assert(next != NULL);\n> + Assert(compresslevel >= 0 && compresslevel <= 22);\n> +\n> + if (compresslevel < 0 || compresslevel > 22)\n> + ereport(ERROR,\n>\n> This looks like dead code in assert builds.\n> If it's unreachable, it can be elog().\n\nActually, the right thing to do here is remove the assert, I think. I\ndon't believe that the code is unreachable. If I'm wrong and it is\nunreachable then the test-and-ereport should be removed.\n\n> + * Compress the input data to the output buffer until we run out of input\n> + * data. Each time the output buffer falls below the compression bound for\n> + * the input buffer, invoke the archive_contents() method for then next sink.\n>\n> *the next sink ?\n\nYeah.\n\n> Does anyone plan to include this for pg15 ? If so, I think at least the WAL\n> compression should have support added too. I'd plan to rebase Michael's patch.\n> https://www.postgresql.org/message-id/YNqWd2GSMrnqWIfx@paquier.xyz\n\nYes, I'd like to get this into PG15. 
It's very similar to the LZ4\ncompression support which was already committed, so it feels like\nfinishing it up and including it in the release makes a lot of sense.\nI'm not against the idea of using ZSTD in other places where it makes\nsense as well, but I think that's a separate issue from this patch. As\nfar as I'm concerned, either basebackup compression with ZSTD or WAL\ncompression with ZSTD could be committed even if the other is not, and\nI plan to spend my time on this project, not that project. However, if\nyou're saying you want to work on the WAL compression stuff, I've got\nno objection to that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 10:51:36 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd)" }, { "msg_contents": "On 2022-Feb-14, Robert Haas wrote:\n\n> A more consistent way of writing the supported syntax would be like this:\n> \n> -Z, --compress={[{client|server}-]{gzip|lz4}}[:LEVEL]|LEVEL|none}\n> \n> I would be somewhat inclined to leave the level-only variant\n> undocumented and instead write it like this:\n> \n> -Z, --compress={[{client|server}-]{gzip|lz4}}[:LEVEL]|none}\n\nThis is hard to interpret for humans though because of the nested\nbrackets and braces. 
It gets considerably easier if you split it in\nseparate variants:\n\n -Z, --compress=[{client|server}-]{gzip|lz4}[:LEVEL]\n -Z, --compress=LEVEL\n -Z, --compress=none\n compress tar output with given compression method or level\n\n\nor, if you choose to leave the level-only variant undocumented, then\n\n -Z, --compress=[{client|server}-]{gzip|lz4}[:LEVEL]\n -Z, --compress=none\n compress tar output with given compression method or level\n\nThere still are some nested brackets and braces, but the scope is\nreduced enough that interpreting seems quite a bit simpler.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 16 Feb 2022 13:11:31 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Feb 16, 2022 at 11:11 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> This is hard to interpret for humans though because of the nested\n> brackets and braces. It gets considerably easier if you split it in\n> separate variants:\n>\n> -Z, --compress=[{client|server}-]{gzip|lz4}[:LEVEL]\n> -Z, --compress=LEVEL\n> -Z, --compress=none\n> compress tar output with given compression method or level\n>\n>\n> or, if you choose to leave the level-only variant undocumented, then\n>\n> -Z, --compress=[{client|server}-]{gzip|lz4}[:LEVEL]\n> -Z, --compress=none\n> compress tar output with given compression method or level\n>\n> There still are some nested brackets and braces, but the scope is\n> reduced enough that interpreting seems quite a bit simpler.\n\nI could go for that. I'm also just noticing that \"none\" is not really\na compression method or level, and the statement that it can only\ncompress \"tar\" output is no longer correct, because server-side\ncompression can be used together with -Fp. 
So maybe we should change\nthe sentence afterward to something a bit more generic, like \"specify\nwhether and how to compress the backup\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 11:16:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Everyone,\n\nSo, I went ahead and have now also implemented client side decompression\nfor zstd.\n\nRobert separated[1] the ZSTD configure switch from my original patch\nof server side compression and also added documentation related to\nthe switch. I have included that patch here in the patch series for\nsimplicity.\n\nThe server side compression patch\n0002-ZSTD-add-server-side-compression-support.patch has also taken care\nof Justin Pryzby's comments[2]. Also, made changes to pg_basebackup help\nas suggested by Álvaro Herrera.\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmobRisF-9ocqYDcMng6iSijGj1EZX99PgXA%3D3VVbWuahog%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/20220215175944.GY31460%40telsasoft.com\n\nRegards,\nJeevan Ladhe\n\nOn Wed, 16 Feb 2022 at 21:46, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Feb 16, 2022 at 11:11 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> > This is hard to interpret for humans though because of the nested\n> > brackets and braces. 
It gets considerably easier if you split it in\n> > separate variants:\n> >\n> > -Z, --compress=[{client|server}-]{gzip|lz4}[:LEVEL]\n> > -Z, --compress=LEVEL\n> > -Z, --compress=none\n> > compress tar output with given compression\n> method or level\n> >\n> >\n> > or, if you choose to leave the level-only variant undocumented, then\n> >\n> > -Z, --compress=[{client|server}-]{gzip|lz4}[:LEVEL]\n> > -Z, --compress=none\n> > compress tar output with given compression\n> method or level\n> >\n> > There still are some nested brackets and braces, but the scope is\n> > reduced enough that interpreting seems quite a bit simpler.\n>\n> I could go for that. I'm also just noticing that \"none\" is not really\n> a compression method or level, and the statement that it can only\n> compress \"tar\" output is no longer correct, because server-side\n> compression can be used together with -Fp. So maybe we should change\n> the sentence afterward to something a bit more generic, like \"specify\n> whether and how to compress the backup\".\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Wed, 16 Feb 2022 23:15:56 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Feb 16, 2022 at 12:46 PM Jeevan Ladhe <jeevanladhe.os@gmail.com> wrote:\n> So, I went ahead and have now also implemented client side decompression\n> for zstd.\n>\n> Robert separated[1] the ZSTD configure switch from my original patch\n> of server side compression and also added documentation related to\n> the switch. I have included that patch here in the patch series for\n> simplicity.\n>\n> The server side compression patch\n> 0002-ZSTD-add-server-side-compression-support.patch has also taken care\n> of Justin Pryzby's comments[2]. 
Also, made changes to pg_basebackup help\n> as suggested by Álvaro Herrera.\n\nThe first hunk of the documentation changes is missing a comma between\ngzip and lz4.\n\n+ * At the start of each archive we reset the state to start a new\n+ * compression operation. The parameters are sticky and they would stick\n+ * around as we are resetting with option ZSTD_reset_session_only.\n\nI don't think \"would\" is what you mean here. If you say something\nwould stick around, that means it could be that way it isn't. (\"I\nwould go to the store and buy some apples, but I know they don't have\nany so there's no point.\") I think you mean \"will\".\n\n- printf(_(\" -Z,\n--compress={[{client,server}-]gzip,lz4,none}[:LEVEL] or [LEVEL]\\n\"\n- \" compress tar output with given\ncompression method or level\\n\"));\n+ printf(_(\" -Z, --compress=[{client|server}-]{gzip|lz4|zstd}[:LEVEL]\\n\"));\n+ printf(_(\" -Z, --compress=none\\n\"));\n\nYou deleted a line that you should have preserved here.\n\nOverall there doesn't seem to be much to complain about here on a\nfirst read-through. It will be good if we can also fix\nCreateWalTarMethod to support LZ4 and ZSTD.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 16:07:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Thanks for the comments Robert. 
I have addressed your comments in the\nattached patch v13-0002-ZSTD-add-server-side-compression-support.patch.\nRest of the patches are similar to v12, but just bumped the version number.\n\n> It will be good if we can also fix\n> CreateWalTarMethod to support LZ4 and ZSTD.\nOk we will see, either Dipesh or I will take care of it.\n\nRegards,\nJeevan Ladhe\n\n\nOn Thu, 17 Feb 2022 at 02:37, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Feb 16, 2022 at 12:46 PM Jeevan Ladhe <jeevanladhe.os@gmail.com>\n> wrote:\n> > So, I went ahead and have now also implemented client side decompression\n> > for zstd.\n> >\n> > Robert separated[1] the ZSTD configure switch from my original patch\n> > of server side compression and also added documentation related to\n> > the switch. I have included that patch here in the patch series for\n> > simplicity.\n> >\n> > The server side compression patch\n> > 0002-ZSTD-add-server-side-compression-support.patch has also taken care\n> > of Justin Pryzby's comments[2]. Also, made changes to pg_basebackup help\n> > as suggested by Álvaro Herrera.\n>\n> The first hunk of the documentation changes is missing a comma between\n> gzip and lz4.\n>\n> + * At the start of each archive we reset the state to start a new\n> + * compression operation. The parameters are sticky and they would\n> stick\n> + * around as we are resetting with option ZSTD_reset_session_only.\n>\n> I don't think \"would\" is what you mean here. If you say something\n> would stick around, that means it could be that way it isn't. 
(\"I\n> would go to the store and buy some apples, but I know they don't have\n> any so there's no point.\") I think you mean \"will\".\n>\n> - printf(_(\" -Z,\n> --compress={[{client,server}-]gzip,lz4,none}[:LEVEL] or [LEVEL]\\n\"\n> - \" compress tar output with given\n> compression method or level\\n\"));\n> + printf(_(\" -Z,\n> --compress=[{client|server}-]{gzip|lz4|zstd}[:LEVEL]\\n\"));\n> + printf(_(\" -Z, --compress=none\\n\"));\n>\n> You deleted a line that you should have preserved here.\n>\n> Overall there doesn't seem to be much to complain about here on a\n> first read-through. It will be good if we can also fix\n> CreateWalTarMethod to support LZ4 and ZSTD.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Thu, 17 Feb 2022 07:16:22 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\n> > It will be good if we can also fix\n> > CreateWalTarMethod to support LZ4 and ZSTD.\n> Ok we will see, either Dipesh or I will take care of it.\n\nI took a look at the CreateWalTarMethod to support LZ4 compression\nfor WAL files. The current implementation involves a 3 step to backup\na WAL file to a tar archive. For each file:\n\n 1. It first writes the header in the function tar_open_for_write,\n flushes the contents of tar to disk and stores the header offset.\n 2. Next, the contents of WAL are written to the tar archive.\n 3. In the end, it recalculates the checksum in function tar_close() and\n overwrites the header at an offset stored in step #1.\n\nThe need for overwriting header in CreateWalTarMethod is mainly related to\npartial WAL files where the size of the WAL file < WalSegSize. The file is\nbeing\npadded and checksum is recalculated after adding pad bytes.\n\nIf we go ahead and implement LZ4 support for CreateWalTarMethod then\nwe have a problem here at step #3. 
In order to achieve a better compression ratio, compressed LZ4 blocks are\nlinked to each other and these blocks are decoded sequentially. If we\noverwrite the header as part of step #3 then it corrupts the link between\ncompressed LZ4 blocks. LZ4 does provide an option to write compressed\nblocks independently (using the blockMode option set to\nLZ4F_blockIndependent), but it is still a problem because we don't know\nwhether overwriting the header after recalculating the checksum will\noverlap the boundary of the next block.\n\nGZIP manages to overcome this problem as it provides an option to turn\ncompression on/off on the fly while writing a compressed archive, with the\nhelp of the zlib library function deflateParams(). The current gzip\nimplementation for CreateWalTarMethod uses this library function to turn\noff compression just before step #1 and it writes the uncompressed header\nof size equal to TAR_BLOCK_SIZE. It uses the same library function to turn\non the compression for writing the contents of the WAL file as part of\nstep #2. It again turns off the compression just before step #3 to\noverwrite the header. The header is overwritten at the same offset with\nsize equal to TAR_BLOCK_SIZE.\n\nSince GZIP provides this option to enable/disable compression, it is\npossible to control the size of the data we are writing to a compressed\narchive. Even if we overwrite an already written block in a compressed\narchive there is no risk of it overlapping with the boundary of the next\nblock. This mechanism is not available in LZ4 and ZSTD.\n\nIn order to support LZ4 and ZSTD compression for CreateWalTarMethod we may\nneed to refactor this code unless I am missing something. We need to\nsomehow add the padding bytes in the case of a partial WAL file before we\nsend it to the compressed archive. This will make sure that none of the\nfiles being compressed require any padding, as the size is always equal to\nWalSegSize. 
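The padding idea described here can be illustrated in isolation. The following is a self-contained sketch with hypothetical names (not the actual walmethods.c code), assuming a small segment size for demonstration; the real WalSegSize is typically 16MB:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical segment size; the real WalSegSize is typically 16MB. */
#define WAL_SEG_SIZE 8192

/*
 * Zero-pad a partial WAL segment to the full segment size before it is
 * handed to the compressor.  With every archive member a fixed
 * WAL_SEG_SIZE bytes, the tar header can be emitted once, up front, and
 * never needs to be rewritten inside the compressed stream.
 */
static char *
pad_wal_segment(const char *data, size_t len)
{
    char   *seg;

    assert(len <= WAL_SEG_SIZE);
    seg = malloc(WAL_SEG_SIZE);
    memcpy(seg, data, len);
    memset(seg + len, 0, WAL_SEG_SIZE - len);   /* pad with zero bytes */
    return seg;
}
```

Because the member size is known before any bytes reach the compressor, the checksum can be computed over the padded buffer once, up front.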
There is no\nneed to\nrecalculate the checksum and we can avoid overwriting the header as part of\nstep #3.\n\nThoughts?\n\nThanks,\nDipesh", "msg_date": "Fri, 4 Mar 2022 14:01:50 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Mar 4, 2022 at 3:32 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> GZIP manages to overcome this problem as it provides an option to turn on/off\n> compression on the fly while writing a compressed archive with the help of zlib\n> library function deflateParams(). 
The current gzip implementation for\n> CreateWalTarMethod uses this library function to turn off compression just before\n> step #1 and it writes the uncompressed header of size equal to TAR_BLOCK_SIZE.\n> It uses the same library function to turn on the compression for writing the contents\n> of the WAL file as part of step #2. It again turns off the compression just before step\n> #3 to overwrite the header. The header is overwritten at the same offset with size\n> equal to TAR_BLOCK_SIZE.\n\nThis is a real mess. To me, it seems like a pretty big hack to use\ndeflateParams() to shut off compression in the middle of the\ncompressed data stream so that we can go back and overwrite that part\nof the data later. It appears that the only reason we need that hack\nis because we don't know the file size starting out. Except we kind of\ndo know the size, because pad_to_size specifies a minimum size for the\nfile. It's true that the maximum file size is unbounded, but I'm not\nsure why that's important. I wonder if anyone else has an idea why we\ndidn't just set the file size to pad_to_size exactly when we write the\ntar header the first time, instead of this IMHO kind of nutty approach\nwhere we back up. I'd try to figure it out from the comments, but\nthere basically aren't any. I also had a look at the relevant commit\nmessages and didn't see anything relevant there either. If I'm missing\nsomething, please point it out.\n\nWhile I'm complaining, I noticed while looking at this code that it is\ndocumented that \"The caller must ensure that only one method is\ninstantiated in any given program, and that it's only instantiated\nonce!\" As far as I can see, this is because somebody thought about\nputting all of the relevant data into a struct and then decided on an\nalternative strategy of storing some of it there, and the rest in a\nglobal variable. I can't quite imagine why anyone would think that was\na good idea. 
There may be some reason that I can't see right now, but\nhere again there appear to be no relevant code comments.\n\nI'm somewhat inclined to wonder whether we could just get rid of\nwalmethods.c entirely and use the new bbstreamer stuff instead. That\ncode also knows how to write plain files into a directory, and write\ntar archives, and compress stuff, but in my totally biased opinion as\nthe author of most of that code, it's better code. It has no\nrestriction on using at most one method per program, or of\ninstantiating that method only once, and it already has LZ4 support,\nand there's a pending patch for ZSTD support that I intend to get\ncommitted soon as well. It also has, and I know I might be beating a\ndead horse here, comments. Now, admittedly, it does need to know the\nsize of each archive member up front in order to work, so if we can't\nsolve the problem then we can't go this route. But if we can't solve\nthat problem, then we also can't add LZ4 and ZSTD support to\nwalmethods.c, because random access to compressed data is not really a\nthing, even if we hacked it to work for gzip.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Mar 2022 09:31:59 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "walmethods.c is kind of a mess (was Re: refactoring basebackup.c)" }, { "msg_contents": "On Wed, Feb 16, 2022 at 8:46 PM Jeevan Ladhe <jeevanladhe.os@gmail.com> wrote:\n> Thanks for the comments Robert. I have addressed your comments in the\n> attached patch v13-0002-ZSTD-add-server-side-compression-support.patch.\n> Rest of the patches are similar to v12, but just bumped the version number.\n\nOK, here's a consolidated patch with all your changes from 0002-0004\nas 0001 plus a few proposed edits of my own in 0002. 
By and large I\nthink this is fine.\n\nMy proposed changes are largely cosmetic, but one thing that isn't is\nrevising the size - pos <= bound tests to instead check size - pos <\nbound. My reasoning for that change is: if the number of bytes\nremaining in the buffer is exactly equal to the maximum number we can\nwrite, we don't need to flush it yet. If that sounds correct, we\nshould fix the LZ4 code the same way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 7 Mar 2022 16:25:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi Robert,\n\nMy proposed changes are largely cosmetic, but one thing that isn't is\n> revising the size - pos <= bound tests to instead check size - pos <\n> bound. My reasoning for that change is: if the number of bytes\n> remaining in the buffer is exactly equal to the maximum number we can\n> write, we don't need to flush it yet. If that sounds correct, we\n> should fix the LZ4 code the same way.\n>\n\nI agree with your patch. The patch looks good to me.\nYes, the LZ4 flush check should also be fixed. Please find the attached\npatch to fix the LZ4 code.\n\nRegards,\nJeevan Ladhe", "msg_date": "Tue, 8 Mar 2022 15:19:07 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Mar 8, 2022 at 4:49 AM Jeevan Ladhe <jeevanladhe.os@gmail.com> wrote:\n> I agree with your patch. The patch looks good to me.\n> Yes, the LZ4 flush check should also be fixed. Please find the attached\n> patch to fix the LZ4 code.\n\nOK, committed all that stuff.\n\nI think we also need to fix one other thing. Right now, for LZ4\nsupport we test HAVE_LIBLZ4, but TOAST and XLOG compression are\ntesting USE_LZ4, so I think we should be doing the same here. 
And\nsimilarly I think we should be testing USE_ZSTD not HAVE_LIBZSTD.\n\nPatch for that attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 8 Mar 2022 10:28:23 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": ">\n> OK, committed all that stuff.\n>\n\nThanks for the commit Robert.\n\n\n> I think we also need to fix one other thing. Right now, for LZ4\n> support we test HAVE_LIBLZ4, but TOAST and XLOG compression are\n> testing USE_LZ4, so I think we should be doing the same here. And\n> similarly I think we should be testing USE_ZSTD not HAVE_LIBZSTD.\n>\n\nI reviewed the patch, and it seems to be capturing and replacing all the\nplaces of HAVE_LIB* with USE_* correctly.\nJust curious, apart from consistency, do you see other problems as well\nwhen testing one vs the other?\n\nRegards,\nJeevan Ladhe", "msg_date": "Tue, 8 Mar 2022 22:02:31 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Mar 8, 2022 at 11:32 AM Jeevan Ladhe <jeevanladhe.os@gmail.com> wrote:\n> I reviewed the patch, and it seems to be capturing and replacing all the\n> places of HAVE_LIB* with USE_* correctly.\n> Just curious, apart from consistency, do you see other problems as well\n> when testing one vs the other?\n\nSo, the kind of problem you would worry about in a case like this is:\nsuppose that configure detects LIBLZ4, but the user specifies\n--without-lz4. Then maybe there is some way for HAVE_LIBLZ4 to be\ntrue, while USE_LIBLZ4 is false, and therefore we should not be\ncompiling code that uses LZ4 but do anyway. As configure.ac is\ncurrently coded, I think that's impossible, because we only search for\nliblz4 if the user says --with-lz4, and if they do that, then USE_LZ4\nwill be set. Therefore, I don't think there is a live problem here,\njust an inconsistency.\n\nProbably still best to clean it up before an angry Andres chases me\ndown, since I know he's working on the build system...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 8 Mar 2022 11:53:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "ok got it. Thanks for your insights.\n\nRegards,\nJeevan Ladhe\n\nOn Tue, 8 Mar 2022 at 22:23, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Mar 8, 2022 at 11:32 AM Jeevan Ladhe <jeevanladhe.os@gmail.com>\n> wrote:\n> > I reviewed the patch, and it seems to be capturing and replacing all the\n> > places of HAVE_LIB* with USE_* correctly.\n> > Just curious, apart from consistency, do you see other problems as well\n> > when testing one vs the other?\n>\n> So, the kind of problem you would worry about in a case like this is:\n> suppose that configure detects LIBLZ4, but the user specifies\n> --without-lz4. Then maybe there is some way for HAVE_LIBLZ4 to be\n> true, while USE_LIBLZ4 is false, and therefore we should not be\n> compiling code that uses LZ4 but do anyway. As configure.ac is\n> currently coded, I think that's impossible, because we only search for\n> liblz4 if the user says --with-lz4, and if they do that, then USE_LZ4\n> will be set. Therefore, I don't think there is a live problem here,\n> just an inconsistency.\n>\n> Probably still best to clean it up before an angry Andres chases me\n> down, since I know he's working on the build system...\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Tue, 8 Mar 2022 22:28:34 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "I'm getting errors from pg_basebackup when using both -D- and --compress=server-*\nThe issue seems to go away if I use --no-manifest.\n\n$ ./src/bin/pg_basebackup/pg_basebackup -h /tmp -Ft -D- --wal-method none --compress=server-gzip >/dev/null ; echo $?\npg_basebackup: error: tar member has empty name\n1\n\n$ ./src/bin/pg_basebackup/pg_basebackup -h /tmp -Ft -D- --wal-method none --compress=server-gzip >/dev/null ; echo $?\nNOTICE: 
WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup\n> pg_basebackup: error: COPY stream ended before last file was finished\n> 1\n\nThanks for the report. The problem here is that, when the output is\nstandard output (-D -), pg_basebackup can only produce a single output\nfile, so the manifest gets injected into the tar file on the client\nside rather than being written separately as we do in normal cases.\nHowever, that only works if we're receiving a tar file that we can\nparse from the server, and here the server is sending a compressed\ntarfile. The current code mistakely attempts to parse the compressed\ntarfile as if it were an uncompressed tarfile, which causes the error\nmessages that you are seeing (and which I can also reproduce here). We\nactually have enough infrastructure available in pg_basebackup now\nthat we could do the \"right thing\" in this case: decompress the data\nreceived from the server, parse the resulting tar file, inject the\nbackup manifest, construct a new tar file, and recompress. However, I\nthink that's probably not a good idea, because it's unlikely that the\nuser will understand that the data is being compressed on the server,\nthen decompressed, and then recompressed again, and the performance of\nthe resulting pipeline will probably not be very good. So I think we\nshould just refuse this command. Patch for that attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 11 Mar 2022 10:19:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Mar 11, 2022 at 10:19:29AM -0500, Robert Haas wrote:\n> So I think we should just refuse this command. 
Patch for that attached.\n\nSounds right.\n\nAlso, I think the magic 8 for .gz should actually be a 7.\n\nI'm not sure why it tests for \".gz\" but not \".tar.gz\", which would help to make\nthem all less magic.\n\ncommit 1fb1e21ba7a500bb2b85ec3e65f59130fcdb4a7e\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu Mar 10 21:22:16 2022 -0600\n\n pg_basebackup: make magic numbers less magic\n \n The magic 8 for .gz should actually be a 7.\n \n .tar.gz\n 1234567\n \n .tar.lz4\n .tar.zst\n 12345678\n \n See d45099425, 751b8d23b, 7cf085f07.\n\ndiff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c\nindex 9f3ecc60fbe..8dd9721323d 100644\n--- a/src/bin/pg_basebackup/pg_basebackup.c\n+++ b/src/bin/pg_basebackup/pg_basebackup.c\n@@ -1223,17 +1223,17 @@ CreateBackupStreamer(char *archive_name, char *spclocation,\n \tis_tar = (archive_name_len > 4 &&\n \t\t\t strcmp(archive_name + archive_name_len - 4, \".tar\") == 0);\n \n-\t/* Is this a gzip archive? */\n-\tis_tar_gz = (archive_name_len > 8 &&\n-\t\t\t\t strcmp(archive_name + archive_name_len - 3, \".gz\") == 0);\n+\t/* Is this a .tar.gz archive? */\n+\tis_tar_gz = (archive_name_len > 7 &&\n+\t\t\t\t strcmp(archive_name + archive_name_len - 7, \".tar.gz\") == 0);\n \n-\t/* Is this a LZ4 archive? */\n+\t/* Is this a .tar.lz4 archive? */\n \tis_tar_lz4 = (archive_name_len > 8 &&\n-\t\t\t\t strcmp(archive_name + archive_name_len - 4, \".lz4\") == 0);\n+\t\t\t\t strcmp(archive_name + archive_name_len - 8, \".tar.lz4\") == 0);\n \n-\t/* Is this a ZSTD archive? */\n+\t/* Is this a .tar.zst archive? 
*/\n \tis_tar_zstd = (archive_name_len > 8 &&\n-\t\t\t\t strcmp(archive_name + archive_name_len - 4, \".zst\") == 0);\n+\t\t\t\t strcmp(archive_name + archive_name_len - 8, \".tar.zst\") == 0);\n \n \t/*\n \t * We have to parse the archive if (1) we're suppose to extract it, or if\n\n\n", "msg_date": "Fri, 11 Mar 2022 10:29:11 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Mar 11, 2022 at 11:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Sounds right.\n\nOK, committed.\n\n> Also, I think the magic 8 for .gz should actually be a 7.\n>\n> I'm not sure why it tests for \".gz\" but not \".tar.gz\", which would help to make\n> them all less magic.\n>\n> commit 1fb1e21ba7a500bb2b85ec3e65f59130fcdb4a7e\n> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Thu Mar 10 21:22:16 2022 -0600\n\nYeah, your patch looks right. Committed that, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 12:37:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Feb 15, 2022 at 11:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Feb 9, 2022 at 8:41 AM Abhijit Menon-Sen <ams@toroid.org> wrote:\n> > It took me a while to assimilate these patches, including the backup\n> > targets one, which I hadn't looked at before. Now that I've wrapped my\n> > head around how to put the pieces together, I really like the idea. As\n> > you say, writing non-trivial integrations in C will take some effort,\n> > but it seems worthwhile. It's also nice that one can continue to use\n> > pg_basebackup to trigger the backups and see progress information.\n>\n> Cool. Thanks for having a look.\n>\n> > Yes, it looks simple to follow the example set by basebackup_to_shell to\n> > write a custom target. 
The complexity will be in whatever we need to do\n> > to store/forward the backup data, rather than in obtaining the data in\n> > the first place, which is exactly as it should be.\n>\n> Yeah, that's what made me really happy with how this came out.\n>\n> Here's v2, rebased and with documentation added.\n\nI don't hear many comments on this, but I'm pretty sure that it's a\ngood idea, and there haven't been many objections to this patch series\nas a whole, so I'd like to proceed with it. If nobody objects\nvigorously, I'll commit this next week.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 13:39:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nOn 2022-03-11 10:19:29 -0500, Robert Haas wrote:\n> Thanks for the report. The problem here is that, when the output is\n> standard output (-D -), pg_basebackup can only produce a single output\n> file, so the manifest gets injected into the tar file on the client\n> side rather than being written separately as we do in normal cases.\n> However, that only works if we're receiving a tar file that we can\n> parse from the server, and here the server is sending a compressed\n> tarfile. The current code mistakely attempts to parse the compressed\n> tarfile as if it were an uncompressed tarfile, which causes the error\n> messages that you are seeing (and which I can also reproduce here). We\n> actually have enough infrastructure available in pg_basebackup now\n> that we could do the \"right thing\" in this case: decompress the data\n> received from the server, parse the resulting tar file, inject the\n> backup manifest, construct a new tar file, and recompress. 
However, I\n> think that's probably not a good idea, because it's unlikely that the\n> user will understand that the data is being compressed on the server,\n> then decompressed, and then recompressed again, and the performance of\n> the resulting pipeline will probably not be very good. So I think we\n> should just refuse this command. Patch for that attached.\n\nYou could also just append a manifest as a compressed tar to the compressed tar\nstream. Unfortunately GNU tar requires -i to read concatenated compressed\narchives, so perhaps that's not quite an alternative.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 11 Mar 2022 17:52:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Mar 11, 2022 at 8:52 PM Andres Freund <andres@anarazel.de> wrote:\n> You could also just append a manifest as a compressed tar to the compressed tar\n> stream. Unfortunately GNU tar requires -i to read concatenated compressed\n> archives, so perhaps that's not quite an alternative.\n\ns/Unfortunately/Fortunately/ :-p\n\nI think we've already gone way too far in the direction of making this\nstuff rely on specific details of the tar format. What if someday we\nwanted to switch to pax, cpio, zip, 7zip, whatever, or even just have\none of those things as an option? It's not that I'm dying to have\nPostgreSQL produce rar or arj files, but I think we box ourselves into\na corner when we just assume tar everywhere. As an example of a\nsimilar issue with real consequences, consider the recent discovery\nthat we can't easily add support for LZ4 or ZSTD compression of\npg_wal.tar. The problem is that the existing code tells the gzip\nlibrary to emit the tar header as part of the compressed stream\nwithout actually compressing it, and then it goes back and overwrites\nthat data later! 
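Stripped of the compression layer, that seek-back-and-overwrite pattern looks roughly like this (a self-contained stdio sketch with made-up header contents, not the actual walmethods.c code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define HDR_SIZE 512            /* tar header block size */

/*
 * Write a placeholder header, stream the member data, then seek back and
 * overwrite the header once the final size (and, in real tar, the
 * checksum) is known.  This only works when the header bytes land in the
 * output uncompressed and at a known offset -- which is why the gzip code
 * has to disable compression around the header, and why an LZ4 or ZSTD
 * stream cannot support it.
 */
static void
write_member(FILE *out, const char *data, size_t len)
{
    long        hdr_off = ftell(out);
    char        hdr[HDR_SIZE] = {0};

    fwrite(hdr, 1, HDR_SIZE, out);          /* placeholder header */
    fwrite(data, 1, len, out);              /* member contents */

    snprintf(hdr, sizeof(hdr), "len=%zu", len); /* size now known */
    fseek(out, hdr_off, SEEK_SET);
    fwrite(hdr, 1, HDR_SIZE, out);          /* overwrite in place */
    fseek(out, 0, SEEK_END);
}
```

The fseek() back to hdr_off is exactly the step that has no analogue once those 512 bytes have been folded into a compressed stream.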
Unsurprisingly, that's not a feature every\ncompression library offers.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 14 Mar 2022 09:27:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Hi,\n\nI tried to implement support for parallel ZSTD compression. The\nlibrary provides an option (ZSTD_c_nbWorkers) to specify the\nnumber of compression workers. The number of parallel\nworkers can be set as part of the compression parameters, and if this\noption is specified then the library performs parallel compression\nbased on the specified number of workers.\n\nA user can specify the number of parallel workers as part of the\n--compress option by appending an integer value after an at sign (@).\n(-Z, --compress=[{client|server}-]{gzip|lz4|zstd}[:LEVEL][@WORKERS])\n\nPlease find the attached patch v1 with the above changes.\n\nNote: ZSTD library version 1.5.x supports parallel compression\nby default, and if the library version is lower than 1.5.x then\nparallel compression is enabled only if the source is compiled with the\nbuild macro ZSTD_MULTITHREAD. If the linked library version doesn't\nsupport parallel compression then setting the parameter\nZSTD_c_nbWorkers to a value other than 0 will be a no-op and\nreturn an error.\n\nThanks,\nDipesh", "msg_date": "Mon, 14 Mar 2022 21:41:35 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Mon, Mar 14, 2022 at 09:41:35PM +0530, Dipesh Pandit wrote:\n> I tried to implement support for parallel ZSTD compression. The\n> library provides an option (ZSTD_c_nbWorkers) to specify the\n> number of compression workers. 
The number of parallel\n> workers can be set as part of compression parameter and if this\n> option is specified then the library performs parallel compression\n> based on the specified number of workers.\n> \n> User can specify the number of parallel worker as part of\n> --compress option by appending an integer value after at sign (@).\n> (-Z, --compress=[{client|server}-]{gzip|lz4|zstd}[:LEVEL][@WORKERS])\n\nI suggest to use a syntax that's more general than that, maybe something like\n\n:[level=]N,parallel=N,flag,flag,...\n\nFor example, someone may want to use zstd \"long\" mode or (when it's released)\nrsyncable mode, or specify fine-grained compression parameters (strategy,\nwindowLog, hashLog, etc).\n\nI hope the same syntax will be shared with wal_compression and pg_dump.\nAnd libpq, if that patch progresses.\n\nBTW, I think this may be better left for PG16.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 14 Mar 2022 11:35:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 14, 2022 at 12:35 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I suggest to use a syntax that's more general than that, maybe something like\n>\n> :[level=]N,parallel=N,flag,flag,...\n>\n> For example, someone may want to use zstd \"long\" mode or (when it's released)\n> rsyncable mode, or specify fine-grained compression parameters (strategy,\n> windowLog, hashLog, etc).\n\nThat's an interesting idea. I wonder what the replication protocol\nought to look like in that case. Should we have a COMPRESSION_DETAIL\nargument that is just a string, and let the server parse it out? Or\nseparate protocol-level options? 
It does feel reasonable to have both\nCOMPRESSION_LEVEL and COMPRESSION_WORKERS as first-class options, but\nI don't know that we want COMPRESSION_HASHLOG true as part of our\nfirst-class grammar.\n\n> I hope the same syntax will be shared with wal_compression and pg_dump.\n> And libpq, if that patch progresses.\n>\n> BTW, I think this may be better left for PG16.\n\nPossibly so ... but if we're thinking of any revisions to the\nnewly-added grammar, we had better take care of that now, before it's\nset in stone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 13:02:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 14, 2022 at 01:02:20PM -0400, Robert Haas wrote:\n> On Mon, Mar 14, 2022 at 12:35 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I suggest to use a syntax that's more general than that, maybe something like\n> >\n> > :[level=]N,parallel=N,flag,flag,...\n> >\n> > For example, someone may want to use zstd \"long\" mode or (when it's released)\n> > rsyncable mode, or specify fine-grained compression parameters (strategy,\n> > windowLog, hashLog, etc).\n> \n> That's an interesting idea. I wonder what the replication protocol\n> ought to look like in that case. Should we have a COMPRESSION_DETAIL\n> argument that is just a string, and let the server parse it out? Or\n> separate protocol-level options? It does feel reasonable to have both\n> COMPRESSION_LEVEL and COMPRESSION_WORKERS as first-class options, but\n> I don't know that we want COMPRESSION_HASHLOG true as part of our\n> first-class grammar.\n\nI was only referring to the user-facing grammar.\n\nInternally, I was thinking they'd all be handled as first-class options, with\nseparate struct fields and separate replication protocol options. 
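A minimal sketch of such a client-side parser, with one first-class struct field per known option, might look like this (all names and the exact option syntax are hypothetical, not actual pg_basebackup code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical struct: one first-class field per known option. */
typedef struct
{
    char        algorithm[16];
    int         level;          /* -1 = not specified */
    int         workers;        /* -1 = not specified */
} compress_spec;

/*
 * Parse "algo[:level=N[,workers=N]]".  Unknown keywords are rejected
 * here, on the client side, instead of producing a server error.
 */
static bool
parse_compress_spec(const char *arg, compress_spec *spec)
{
    char        copy[128];
    char       *opts;
    char       *tok;

    spec->level = -1;
    spec->workers = -1;
    snprintf(copy, sizeof(copy), "%s", arg);

    opts = strchr(copy, ':');
    if (opts)
        *opts++ = '\0';
    snprintf(spec->algorithm, sizeof(spec->algorithm), "%s", copy);

    for (tok = opts ? strtok(opts, ",") : NULL; tok != NULL;
         tok = strtok(NULL, ","))
    {
        if (strncmp(tok, "level=", 6) == 0)
            spec->level = atoi(tok + 6);
        else if (strncmp(tok, "workers=", 8) == 0)
            spec->workers = atoi(tok + 8);
        else
            return false;       /* unknown keyword: reject on the client */
    }
    return true;
}
```

In this shape, each known keyword maps onto its own struct field (and, in turn, its own replication-protocol option), while anything unrecognized fails fast before a request ever reaches the server.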
If an option\nisn't known, it'd be rejected on the client side, rather than causing an error\non the server.\n\nMaybe there'd be an option parser for this in common/ (I think that might\nrequire having new data structure there too, maybe one for each compression\nmethod, or maybe a union{} to handles them all). Most of the ~100 lines to\nsupport wal_compression='zstd:N' are to parse out the N.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 14 Mar 2022 12:11:52 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 14, 2022 at 1:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Internally, I was thinking they'd all be handled as first-class options, with\n> separate struct fields and separate replication protocol options. If an option\n> isn't known, it'd be rejected on the client side, rather than causing an error\n> on the server.\n\nThere's some appeal to that, but one downside is that it means that\nthe client can't be used to fetch data that is compressed in a way\nthat the server knows about and the client doesn't. I don't think\nthat's great. Why should, for example, pg_basebackup need to be\ncompiled with zstd support in order to request zstd compression on the\nserver side? If the server knows about the brand new\njustin-magic-sauce compression algorithm, maybe the client should just\nbe able to request it and, when given various .jms files by the\nserver, shrug its shoulders and accept them for what they are. That\ndoesn't work if -Fp is involved, or similar, but it should work fine\nfor simple cases if we set things up right.\n\n> Maybe there'd be an option parser for this in common/ (I think that might\n> require having new data structure there too, maybe one for each compression\n> method, or maybe a union{} to handles them all). 
Most of the ~100 lines to\n> support wal_compression='zstd:N' are to parse out the N.\n\nYes, it's actually a very simple feature now that we've got the rest\nof the infrastructure set up correctly for it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 13:21:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "Thanks for the patch, Dipesh.\nI had a look at the patch and also tried to take the backup. I have\nfollowing suggestions and observations:\n\nI get following error at my end:\n\n$ pg_basebackup -D /tmp/zstd_bk -Ft -Xfetch --compress=server-zstd:7@4\npg_basebackup: error: could not initiate base backup: ERROR: could not\ncompress data: Unsupported parameter\npg_basebackup: removing data directory \"/tmp/zstd_bk\"\n\nThis is mostly because I have the zstd library version v1.4.4, which\ndoes not have default support for parallel workers. Maybe we should\nhave a better error, something that is hinting that the parallelism is\nnot supported by the particular build.\n\nThe regression for pg_verifybackup test 008_untar.pl also fails with a\nsimilar error. Here, I think we should have some logic in regression to\nskip the test if the parameter is not supported?\n\n+ if (ZSTD_isError(ret))\n\n+ elog(ERROR,\n\n+ \"could not compress data: %s\",\n\n+ ZSTD_getErrorName(ret));\n\nI think all of this can go on one line, but anyhow we have to improve\nthe error message here.\n\nAlso, just a thought, for the versions where parallelism is not\nsupported, should we instead just throw a warning and fall back to\nnon-parallel behavior?\n\nRegards,\nJeevan Ladhe\n\nOn Mon, 14 Mar 2022 at 21:41, Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n\n> Hi,\n>\n> I tried to implement support for parallel ZSTD compression. The\n> library provides an option (ZSTD_c_nbWorkers) to specify the\n> number of compression workers. 
The number of parallel\n> workers can be set as part of compression parameter and if this\n> option is specified then the library performs parallel compression\n> based on the specified number of workers.\n>\n> User can specify the number of parallel worker as part of\n> --compress option by appending an integer value after at sign (@).\n> (-Z, --compress=[{client|server}-]{gzip|lz4|zstd}[:LEVEL][@WORKERS])\n>\n> Please find the attached patch v1 with the above changes.\n>\n> Note: ZSTD library version 1.5.x supports parallel compression\n> by default and if the library version is lower than 1.5.x then\n> parallel compression is enabled only the source is compiled with build\n> macro ZSTD_MULTITHREAD. If the linked library version doesn't\n> support parallel compression then setting the value of parameter\n> ZSTD_c_nbWorkers to a value other than 0 will be no-op and\n> returns an error.\n>\n> Thanks,\n> Dipesh\n>", "msg_date": "Tue, 15 Mar 2022 16:03:05 +0530", "msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Tue, Mar 15, 2022 at 6:33 AM Jeevan Ladhe <jeevanladhe.os@gmail.com> wrote:\n> I get following error at my end:\n>\n> $ pg_basebackup -D /tmp/zstd_bk -Ft -Xfetch --compress=server-zstd:7@4\n> pg_basebackup: error: could not initiate base backup: ERROR: could not compress data: Unsupported parameter\n> pg_basebackup: removing data directory \"/tmp/zstd_bk\"\n>\n> This is mostly because I have the zstd library version v1.4.4, which\n> does not have default support for parallel workers. Maybe we should\n> have a better error, something that is hinting that the parallelism is\n> not supported by the particular build.\n\nI'm not averse to trying to improve that error message, but honestly\nI'd consider that to be good enough already to be acceptable. We could\nthink about trying to add an errhint() telling you that the problem\nmay be with your libzstd build.\n\n> The regression for pg_verifybackup test 008_untar.pl also fails with a\n> similar error. Here, I think we should have some logic in regression to\n> skip the test if the parameter is not supported?\n\nOr at least to have the test not fail.\n\n> Also, just a thought, for the versions where parallelism is not\n> supported, should we instead just throw a warning and fall back to\n> non-parallel behavior?\n\nI don't think so. 
I think it's better for the user to get an error and\nthen change their mind and request something we can do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Mar 2022 13:50:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "Should zstd's negative compression levels be supported here ?\n\nHere's a POC patch which is enough to play with it.\n\n$ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --compress=zstd |wc -c\n12305659\n$ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --compress=zstd:1 |wc -c\n13827521\n$ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --compress=zstd:0 |wc -c\n12304018\n$ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --compress=zstd:-1 |wc -c\n16443893\n$ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --compress=zstd:-2 |wc -c\n17349563\n$ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --compress=zstd:-4 |wc -c\n19452631\n$ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --compress=zstd:-7 |wc -c\n21871505\n\nAlso, with a partial regression DB, this crashes when writing to stdout.\n\n$ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --compress=lz4 |wc -c\npg_basebackup: bbstreamer_lz4.c:172: bbstreamer_lz4_compressor_content: Assertion `mystreamer->base.bbs_buffer.maxlen >= out_bound' failed.\n24117248\n\n#4 0x000055555555e8b4 in bbstreamer_lz4_compressor_content (streamer=0x5555555a5260, member=0x7fffffffc760, \n data=0x7ffff3068010 \"{ \\\"PostgreSQL-Backup-Manifest-Version\\\": 1,\\n\\\"Files\\\": [\\n{ \\\"Path\\\": \\\"backup_label\\\", \\\"Size\\\": 227, \\\"Last-Modified\\\": \\\"2022-03-16 02:29:11 GMT\\\", 
\\\"Checksum-Algorithm\\\": \\\"CRC32C\\\", \\\"Checksum\\\": \\\"46f69d99\\\" },\\n{ \\\"Pa\"..., len=401072, context=BBSTREAMER_MEMBER_CONTENTS) at bbstreamer_lz4.c:172\n mystreamer = 0x5555555a5260\n next_in = 0x7ffff3068010 \"{ \\\"PostgreSQL-Backup-Manifest-Version\\\": 1,\\n\\\"Files\\\": [\\n{ \\\"Path\\\": \\\"backup_label\\\", \\\"Size\\\": 227, \\\"Last-Modified\\\": \\\"2022-03-16 02:29:11 GMT\\\", \\\"Checksum-Algorithm\\\": \\\"CRC32C\\\", \\\"Checksum\\\": \\\"46f69d99\\\" },\\n{ \\\"Pa\"...\n ...\n\n(gdb) p mystreamer->base.bbs_buffer.maxlen\n$1 = 524288\n(gdb) p (int) LZ4F_compressBound(len, &mystreamer->prefs)\n$4 = 524300\n\nThis is with: liblz4-1:amd64 1.9.2-2ubuntu0.20.04.1", "msg_date": "Wed, 16 Mar 2022 10:12:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd negative compression)" }, { "msg_contents": "On Mon, Mar 14, 2022 at 1:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> There's some appeal to that, but one downside is that it means that\n> the client can't be used to fetch data that is compressed in a way\n> that the server knows about and the client doesn't. I don't think\n> that's great. Why should, for example, pg_basebackup need to be\n> compiled with zstd support in order to request zstd compression on the\n> server side? If the server knows about the brand new\n> justin-magic-sauce compression algorithm, maybe the client should just\n> be able to request it and, when given various .jms files by the\n> server, shrug its shoulders and accept them for what they are. That\n> doesn't work if -Fp is involved, or similar, but it should work fine\n> for simple cases if we set things up right.\n\nConcretely, I propose the attached patch for v15. 
It renames the\nnewly-added COMPRESSION_LEVEL option to COMPRESSION_DETAIL, introduces\na flexible syntax for options along the lines you proposed, and\nadjusts things so that a client that doesn't support a particular type\nof compression can still request that type of compression from the\nserver.\n\nI think it's important to do this for v15 so that we don't end up with\nbackward-compatibility problems down the road.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 17 Mar 2022 11:50:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml\nindex 9178c779ba..00c593f1af 100644\n--- a/doc/src/sgml/protocol.sgml\n+++ b/doc/src/sgml/protocol.sgml\n@@ -2731,14 +2731,24 @@ The commands accepted in replication mode are:\n+ <para>\n+ For <literal>gzip</literal> the compression level should be an\n\ngzip comma\n\n+++ b/src/backend/replication/basebackup.c\n@@ -18,6 +18,7 @@\n \n #include \"access/xlog_internal.h\"\t/* for pg_start/stop_backup */\n #include \"common/file_perm.h\"\n+#include \"common/backup_compression.h\"\n\nalphabetical\n\n- errmsg(\"unrecognized compression algorithm: \\\"%s\\\"\",\n+ errmsg(\"unrecognized compression algorithm \\\"%s\\\"\",\n\nMost other places seem to say \"compression method\". 
So I'd suggest to change\nthat here, and in doc/src/sgml/ref/pg_basebackup.sgml.\n\n-\tif (o_compression_level && !o_compression)\n+\tif (o_compression_detail && !o_compression)\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n \t\t\t\t errmsg(\"compression level requires compression\")));\n\ns/level/detail/\n\n /*\n+ * Basic parsing of a value specified for -Z/--compress.\n+ *\n+ * We're not concerned here with understanding exactly what behavior the\n+ * user wants, but we do need to know whether the user is requesting client\n+ * or server side compression or leaving it unspecified, and we need to\n+ * separate the name of the compression algorithm from the detail string.\n+ *\n+ * For instance, if the user writes --compress client-lz4:6, we want to\n+ * separate that into (a) client-side compression, (b) algorithm \"lz4\",\n+ * and (c) detail \"6\". Note, however, that all the client/server prefix is\n+ * optional, and so is the detail. The algorithm name is required, unless\n+ * the whole string is an integer, in which case we assume \"gzip\" as the\n+ * algorithm and use the integer as the detail.\n..\n */\n static void\n+parse_compress_options(char *option, char **algorithm, char **detail,\n+\t\t\t\t\t CompressionLocation *locationres)\n\nIt'd be great if this were re-usable for wal_compression, which I hope in pg16 will\nsupport at least level=N. And eventually pg_dump. But those clients shouldn't\naccept a client/server prefix. Maybe the way to handle that is for those tools\nto check locationres and reject it if it was specified.\n\n+ * We're not concerned with validation at this stage, so if the user writes\n+ * --compress client-turkey:sandwhich, the requested algorithm is \"turkey\"\n+ * and the detail string is \"sandwhich\". 
We'll sort out whether that's legal\n\nsp: sandwich\n\n+\t\tWalCompressionMethod\twal_compress_method;\n\nThis is confusingly similar to src/include/access/xlog.h:WalCompression.\nI think someone else mentioned this before ?\n\n+ * A compression specification specifies the parameters that should be used\n+ * when * performing compression with a specific algorithm. The simplest\n\nstar\n\n+/*\n+ * Get the human-readable name corresponding to a particular compression\n+ * algorithm.\n+ */\n+char *\n+get_bc_algorithm_name(bc_algorithm algorithm)\n\nshould be const ?\n\n+\t/* As a special case, the specification can be a bare integer. */\n+\tbare_level = strtol(specification, &bare_level_endp, 10);\n\nShould this call expect_integer_value()?\nSee below.\n\n+\t\t\tresult->parse_error =\n+\t\t\t\tpstrdup(\"found empty string where a compression option was expected\");\n\nNeeds to be localized with _() ?\nAlso, document that it's pstrdup'd.\n\n+/*\n+ * Parse 'value' as an integer and return the result.\n+ *\n+ * If parsing fails, set result->parse_error to an appropriate message\n+ * and return -1.\n+ */\n+static int\n+expect_integer_value(char *keyword, char *value, bc_specification *result)\n\n-1 isn't great, since it's also an integer, and, also a valid compression level\nfor zstd (did you see my message about that?). 
Maybe INT_MIN is ok.\n\n+{\n+\tint\t\tivalue;\n+\tchar *ivalue_endp;\n+\n+\tivalue = strtol(value, &ivalue_endp, 10);\n\nShould this also set/check errno ?\nAnd check if value != ivalue_endp ?\nSee strtol(3)\n\n+char *\n+validate_bc_specification(bc_specification *spec)\n...\n+\t/*\n+\t * If a compression level was specified, check that the algorithm expects\n+\t * a compression level and that the level is within the legal range for\n+\t * the algorithm.\n\nIt would be nice if this could be shared with wal_compression and pg_dump.\nWe shouldn't need multiple places with structures giving the algorithms and\nrange of compression levels.\n\n+\tunsigned\toptions;\t\t/* OR of BACKUP_COMPRESSION_OPTION constants */\n\nShould be \"unsigned int\" or \"bits32\" ?\n\nThe server crashes if I send an unknown option - you should hit that in the\nregression tests.\n\n$ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --no-manifest --compress=server-lz4:a |wc -c\nTRAP: FailedAssertion(\"pointer != NULL\", File: \"../../../../src/include/utils/memutils.h\", Line: 123, PID: 8627)\npostgres: walsender pryzbyj [local] BASE_BACKUP(ExceptionalCondition+0xa0)[0x560b45d7b64b]\npostgres: walsender pryzbyj [local] BASE_BACKUP(pfree+0x5d)[0x560b45dad1ea]\npostgres: walsender pryzbyj [local] BASE_BACKUP(parse_bc_specification+0x154)[0x560b45dc5d4f]\npostgres: walsender pryzbyj [local] BASE_BACKUP(+0x43d56c)[0x560b45bc556c]\npostgres: walsender pryzbyj [local] BASE_BACKUP(SendBaseBackup+0x2d)[0x560b45bc85ca]\npostgres: walsender pryzbyj [local] BASE_BACKUP(exec_replication_command+0x3a2)[0x560b45bdddb2]\npostgres: walsender pryzbyj [local] BASE_BACKUP(PostgresMain+0x6b2)[0x560b45c39131]\npostgres: walsender pryzbyj [local] BASE_BACKUP(+0x40530e)[0x560b45b8d30e]\npostgres: walsender pryzbyj [local] BASE_BACKUP(+0x408572)[0x560b45b90572]\npostgres: walsender pryzbyj [local] BASE_BACKUP(+0x4087b9)[0x560b45b907b9]\npostgres: walsender pryzbyj [local] 
BASE_BACKUP(PostmasterMain+0x1135)[0x560b45b91d9b]\npostgres: walsender pryzbyj [local] BASE_BACKUP(main+0x229)[0x560b45ad0f78]\n\nThis is interpreted like client-gzip-1; should multiple specifications of\ncompress be prohibited ?\n\n| src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --no-manifest --compress=server-lz4 --compress=1\n\n\n", "msg_date": "Thu, 17 Mar 2022 14:41:30 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "Thanks for the review!\n\nI'll address most of these comments later, but quickly for right now...\n\nOn Thu, Mar 17, 2022 at 3:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> It'd be great if this were re-usable for wal_compression, which I hope in pg16 will\n> support at least level=N. And eventually pg_dump. But those clients shouldn't\n> accept a client/server prefix. Maybe the way to handle that is for those tools\n> to check locationres and reject it if it was specified.\n> [...]\n> This is confusingly similar to src/include/access/xlog.h:WalCompression.\n> I think someone else mentioned this before ?\n\nA couple of people before me have had delusions of grandeur in this\narea. We have the WalCompression enum, which has values of the form\nCOMPRESSION_*, instead of WAL_COMPRESSION_*, as if the WAL were going\nto be the only thing that ever got compressed. And pg_dump.h also has\na CompressionAlgorithm enum, with values like COMPR_ALG_*, which isn't\ngreat naming either. Clearly there's some cleanup needed here: if we\ncan use the same enum for multiple systems, then it can have a name\nimplying that it's the only game in town, but otherwise both the enum\nname and the corresponding value need to use a suitable prefix. I\nthink that's a job for another patch, probably post-v15. For now I\nplan to do the right thing with the new names I'm adding, and leave\nthe existing names alone. 
That can be changed in the future, if and\nwhen it seems sensible.\n\nAs I said elsewhere, I think the WAL compression stuff is badly\ndesigned and should probably be rewritten completely, maybe to reuse\nthe bbstreamer stuff. In that case, WalCompressionMethod would\nprobably go away entirely, making the naming confusion moot, and\npicking up zstd and lz4 compression support for free. If that doesn't\nhappen, we can probably find some way to at least make them share an\nenum, but I think that's too hairy to try to clean up right now with\nfeature freeze pending.\n\n> The server crashes if I send an unknown option - you should hit that in the\n> regression tests.\n>\n> $ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --no-manifest --compress=server-lz4:a |wc -c\n> TRAP: FailedAssertion(\"pointer != NULL\", File: \"../../../../src/include/utils/memutils.h\", Line: 123, PID: 8627)\n> postgres: walsender pryzbyj [local] BASE_BACKUP(ExceptionalCondition+0xa0)[0x560b45d7b64b]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(pfree+0x5d)[0x560b45dad1ea]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(parse_bc_specification+0x154)[0x560b45dc5d4f]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(+0x43d56c)[0x560b45bc556c]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(SendBaseBackup+0x2d)[0x560b45bc85ca]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(exec_replication_command+0x3a2)[0x560b45bdddb2]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(PostgresMain+0x6b2)[0x560b45c39131]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(+0x40530e)[0x560b45b8d30e]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(+0x408572)[0x560b45b90572]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(+0x4087b9)[0x560b45b907b9]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(PostmasterMain+0x1135)[0x560b45b91d9b]\n> postgres: walsender pryzbyj [local] BASE_BACKUP(main+0x229)[0x560b45ad0f78]\n\nThat's odd - I thought I had tested that 
case. Will double-check.\n\n> This is interpreted like client-gzip-1; should multiple specifications of\n> compress be prohibited ?\n>\n> | src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --no-manifest --compress=server-lz4 --compress=1\n\nThey're not now and haven't been in the past. I think the last one\nshould just win (as it apparently does, here). We do that in some\nplaces and throw an error in others and I'm not sure if we have a 100%\nconsistent rule for it, but flipping one location between one behavior\nand the other isn't going to make things more consistent overall.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Mar 2022 16:29:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Thu, Mar 17, 2022 at 3:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> gzip comma\n\nI think it's fine the way it's written. If we made that change, then\nwe'd have a comma for gzip and not for the other two algorithms. Also,\nI'm just moving that sentence, so any change that there is to be made\nhere is a job for some other patch.\n\n> alphabetical\n\nFixed.\n\n> - errmsg(\"unrecognized compression algorithm: \\\"%s\\\"\",\n> + errmsg(\"unrecognized compression algorithm \\\"%s\\\"\",\n>\n> Most other places seem to say \"compression method\". So I'd suggest to change\n> that here, and in doc/src/sgml/ref/pg_basebackup.sgml.\n\nI'm not sure that's really better, and I don't think this patch is\nintroducing an altogether novel usage. 
I think I would probably try to\nstandardize on algorithm rather than method if I were standardizing\nthe whole source tree, but I think we can leave that discussion for\nanother time.\n\n> - if (o_compression_level && !o_compression)\n> + if (o_compression_detail && !o_compression)\n> ereport(ERROR,\n> (errcode(ERRCODE_SYNTAX_ERROR),\n> errmsg(\"compression level requires compression\")));\n>\n> s/level/detail/\n\nFixed.\n\n\n> It'd be great if this were re-usable for wal_compression, which I hope in pg16 will\n> support at least level=N. And eventually pg_dump. But those clients shouldn't\n> accept a client/server prefix. Maybe the way to handle that is for those tools\n> to check locationres and reject it if it was specified.\n\nOne thing I forgot to mention in my previous response is that I think\nthe parsing code is actually well set up for this the way I have it.\nserver- and client- gets parsed off in a different place than we\ninterpret the rest, which fits well with your observation that other\ncases wouldn't have a client or server prefix.\n\n> sp: sandwich\n\nFixed.\n\n> star\n\nFixed.\n\n> should be const ?\n\nOK.\n\n>\n> + /* As a special case, the specification can be a bare integer. */\n> + bare_level = strtol(specification, &bare_level_endp, 10);\n>\n> Should this call expect_integer_value()?\n> See below.\n\nI don't think that would be useful. We have no keyword to pass for the\nerror message, nor would we use the error message if one got\nconstructed.\n\n> + result->parse_error =\n> + pstrdup(\"found empty string where a compression option was expected\");\n>\n> Needs to be localized with _() ?\n> Also, document that it's pstrdup'd.\n\nDid the latter. The former would need to be fixed in a bunch of places\nand while I'm happy to accept an expert opinion on exactly what needs\nto be done here, I don't want to try to do it and do it wrong. 
Better\nto let someone with good knowledge of the subject matter patch it up\nlater than do a crummy job now.\n\n> -1 isn't great, since it's also an integer, and, also a valid compression level\n> for zstd (did you see my message about that?). Maybe INT_MIN is ok.\n\nIt really doesn't matter. Could just return 42. The client shouldn't\nuse the value if there's an error.\n\n> +{\n> + int ivalue;\n> + char *ivalue_endp;\n> +\n> + ivalue = strtol(value, &ivalue_endp, 10);\n>\n> Should this also set/check errno ?\n> And check if value != ivalue_endp ?\n> See strtol(3)\n\nEven after reading the man page for strtol, it's not clear to me that\nthis is needed. That page represents checking *endptr != '\\0' as\nsufficient to tell whether an error occurred. Maybe it wouldn't catch\nan out of range value, but in practice all of the algorithms we\nsupport now and any we support in the future are going to catch\nsomething clamped to LONG_MIN or LONG_MAX as out of range and display\nthe correct error message. What's your specific thinking here?\n\n> + unsigned options; /* OR of BACKUP_COMPRESSION_OPTION constants */\n>\n> Should be \"unsigned int\" or \"bits32\" ?\n\nI do not see why either of those would be better.\n\n> The server crashes if I send an unknown option - you should hit that in the\n> regression tests.\n\nTurns out I was testing this on the client side but not the server\nside. Fixed and added more tests.\n\nv2 attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Sun, 20 Mar 2022 15:05:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n>> Should this also set/check errno ?\n>> And check if value != ivalue_endp ?\n>> See strtol(3)\n\n> Even after reading the man page for strtol, it's not clear to me that\n> this is needed. 
That page represents checking *endptr != '\\0' as\n> sufficient to tell whether an error occurred.\n\nI'm not sure whose man page you looked at, but the POSIX standard [1]\nhas a pretty clear opinion about this:\n\n Since 0, {LONG_MIN} or {LLONG_MIN}, and {LONG_MAX} or {LLONG_MAX} are\n returned on error and are also valid returns on success, an\n application wishing to check for error situations should set errno to\n 0, then call strtol() or strtoll(), then check errno.\n\nChecking *endptr != '\\0' is for detecting whether there is trailing\ngarbage after the number; which may be an error case or not as you\nchoose, but it's a different matter.\n\n\t\t\tregards, tom lane\n\n[1] https://pubs.opengroup.org/onlinepubs/9699919799/\n\n\n", "msg_date": "Sun, 20 Mar 2022 15:11:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Sun, Mar 20, 2022 at 03:05:28PM -0400, Robert Haas wrote:\n> On Thu, Mar 17, 2022 at 3:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > - errmsg(\"unrecognized compression algorithm: \\\"%s\\\"\",\n> > + errmsg(\"unrecognized compression algorithm \\\"%s\\\"\",\n> >\n> > Most other places seem to say \"compression method\". So I'd suggest to change\n> > that here, and in doc/src/sgml/ref/pg_basebackup.sgml.\n> \n> I'm not sure that's really better, and I don't think this patch is\n> introducing an altogether novel usage. I think I would probably try to\n> standardize on algorithm rather than method if I were standardizing\n> the whole source tree, but I think we can leave that discussion for\n> another time.\n\nThe user-facing docs are already standardized using \"compression method\", with\n2 exceptions, of which one is contrib/ and the other is what I'm suggesting to\nmake consistent here.\n\n$ git grep 'compression algorithm' doc\ndoc/src/sgml/pgcrypto.sgml: Which compression algorithm to use. 
Only available if\ndoc/src/sgml/ref/pg_basebackup.sgml: compression algorithm is selected, or if server-side compression\n\n> > + result->parse_error =\n> > + pstrdup(\"found empty string where a compression option was expected\");\n> >\n> > Needs to be localized with _() ?\n> > Also, document that it's pstrdup'd.\n> \n> Did the latter. The former would need to be fixed in a bunch of places\n> and while I'm happy to accept an expert opinion on exactly what needs\n> to be done here, I don't want to try to do it and do it wrong. Better\n> to let someone with good knowledge of the subject matter patch it up\n> later than do a crummy job now.\n\nI believe it just needs _(\"foo\")\nSee git grep '= _('\n\nI mentioned another issue off-list:\npg_basebackup.c:2741:10: warning: suggest parentheses around assignment used as truth value [-Wparentheses]\n 2741 | Assert(compressloc = COMPRESS_LOCATION_SERVER);\n | ^~~~~~~~~~~\npg_basebackup.c:2741:3: note: in expansion of macro ‘Assert’\n 2741 | Assert(compressloc = COMPRESS_LOCATION_SERVER);\n\nThis crashes the server using your v2 patch:\n\nsrc/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --no-manifest --compress=server-zstd:level, |wc -c\n\nI wonder whether the syntax should really use both \":\" and \",\".\nMaybe \":\" isn't needed at all.\n\nThis patch also needs to update the other user-facing docs.\n\ntypo: contain a an\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 20 Mar 2022 14:40:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Sun, Mar 20, 2022 at 3:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Even after reading the man page for strtol, it's not clear to me that\n> > this is needed. 
That page represents checking *endptr != '\\0' as\n> > sufficient to tell whether an error occurred.\n>\n> I'm not sure whose man page you looked at, but the POSIX standard [1]\n> has a pretty clear opinion about this:\n>\n> Since 0, {LONG_MIN} or {LLONG_MIN}, and {LONG_MAX} or {LLONG_MAX} are\n> returned on error and are also valid returns on success, an\n> application wishing to check for error situations should set errno to\n> 0, then call strtol() or strtoll(), then check errno.\n>\n> Checking *endptr != '\\0' is for detecting whether there is trailing\n> garbage after the number; which may be an error case or not as you\n> choose, but it's a different matter.\n\nI think I'm guilty of verbal inexactitude here but not bad coding.\nChecking for *endptr != '\\0', as I did, is not sufficient to detect\n\"whether an error occurred,\" as I alleged. But, in the part of my\nresponse you didn't quote, I believe I made it clear that I only need\nto detect garbage, not out-of-range values. And I think *endptr !=\n'\\0' will do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 20 Mar 2022 21:24:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think I'm guilty of verbal inexactitude here but not bad coding.\n> Checking for *endptr != '\\0', as I did, is not sufficient to detect\n> \"whether an error occurred,\" as I alleged. But, in the part of my\n> response you didn't quote, I believe I made it clear that I only need\n> to detect garbage, not out-of-range values. And I think *endptr !=\n> '\\0' will do that.\n\nHmm ... 
do you consider an empty string to be valid input?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Mar 2022 21:32:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Sun, Mar 20, 2022 at 3:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> The user-facing docs are already standardized using \"compression method\", with\n> 2 exceptions, of which one is contrib/ and the other is what I'm suggesting to\n> make consistent here.\n>\n> $ git grep 'compression algorithm' doc\n> doc/src/sgml/pgcrypto.sgml: Which compression algorithm to use. Only available if\n> doc/src/sgml/ref/pg_basebackup.sgml: compression algorithm is selected, or if server-side compression\n\nWell, if you just count the number of occurrences of each string in\nthe documentation, sure. But all of the ones that are talking about a\ncompression method seem to have to do with configurable TOAST\ncompression, and the fact that the documentation for that feature is\nmore extensive than for the pre-existing feature that refers to a\ncompression algorithm does not, at least in my view, turn it into a\nproject standard from which no deviation is permitted.\n\n> > Did the latter. The former would need to be fixed in a bunch of places\n> > and while I'm happy to accept an expert opinion on exactly what needs\n> > to be done here, I don't want to try to do it and do it wrong. Better\n> > to let someone with good knowledge of the subject matter patch it up\n> > later than do a crummy job now.\n>\n> I believe it just needs _(\"foo\")\n> See git grep '= _('\n\nHmm. 
Maybe.\n\n> I mentioned another issue off-list:\n> pg_basebackup.c:2741:10: warning: suggest parentheses around assignment used as truth value [-Wparentheses]\n> 2741 | Assert(compressloc = COMPRESS_LOCATION_SERVER);\n> | ^~~~~~~~~~~\n> pg_basebackup.c:2741:3: note: in expansion of macro ‘Assert’\n> 2741 | Assert(compressloc = COMPRESS_LOCATION_SERVER);\n>\n> This crashes the server using your v2 patch:\n>\n> src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --no-manifest --compress=server-zstd:level, |wc -c\n\nWell that's unfortunate. Will fix.\n\n> I wonder whether the syntax should really use both \":\" and \",\".\n> Maybe \":\" isn't needed at all.\n\nI don't think we should treat the compression method name in the same\nway as a compression algorithm option.\n\n> This patch also needs to update the other user-facing docs.\n\nWhich ones exactly?\n\n> typo: contain a an\n\nOK, will fix.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 20 Mar 2022 21:38:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Sun, Mar 20, 2022 at 9:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I think I'm guilty of verbal inexactitude here but not bad coding.\n> > Checking for *endptr != '\\0', as I did, is not sufficient to detect\n> > \"whether an error occurred,\" as I alleged. But, in the part of my\n> > response you didn't quote, I believe I made it clear that I only need\n> > to detect garbage, not out-of-range values. And I think *endptr !=\n> > '\\0' will do that.\n>\n> Hmm ... 
do you consider an empty string to be valid input?\n\nNo, and I thought I had checked properly for that condition before\nreaching the point in the code where I call strtol(), but it turns out\nI have not, which I guess is what Justin has been trying to tell me\nfor a few emails now.\n\nI'll send an updated patch tomorrow after looking this all over more carefully.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 20 Mar 2022 22:03:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Sun, Mar 20, 2022 at 09:38:44PM -0400, Robert Haas wrote:\n> > This patch also needs to update the other user-facing docs.\n> \n> Which ones exactly?\n\nI mean pg_basebackup -Z\n\n-Z level\n-Z [{client|server}-]method[:level]\n--compress=level\n--compress=[{client|server}-]method[:level]\n\n\n", "msg_date": "Mon, 21 Mar 2022 08:18:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 21, 2022 at 9:18 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sun, Mar 20, 2022 at 09:38:44PM -0400, Robert Haas wrote:\n> > > This patch also needs to update the other user-facing docs.\n> >\n> > Which ones exactly?\n>\n> I mean pg_basebackup -Z\n>\n> -Z level\n> -Z [{client|server}-]method[:level]\n> --compress=level\n> --compress=[{client|server}-]method[:level]\n\nAh, right. Thanks.\n\nHere's v3. I have updated that section of the documentation. I also\nwent and added a bunch more test cases for validation of compression\ndetail strings, many inspired by your examples, and fixed all the bugs\nthat I found in the process. I think the crashes you complained about\nare now fixed, but please let me know if I have missed any. I also\nadded _() calls as you suggested. 
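To make the strtol() ground rules from upthread concrete — clear errno before the call because LONG_MIN/LONG_MAX are valid returns as well as error indicators, reject an empty string explicitly, and treat a non-'\0' *endptr as trailing garbage — here is a minimal standalone sketch. parse_int_param is an illustrative helper for this discussion, not code from the patch:

```c
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Parse a base-10 integer option value.  Rejects empty input, trailing
 * garbage, and out-of-range values, per the POSIX strtol() contract.
 */
static bool
parse_int_param(const char *s, int *result)
{
	char	   *endptr;
	long		val;

	/* an empty string is not valid input */
	if (*s == '\0')
		return false;

	/* errno must be cleared: LONG_MIN/LONG_MAX are also valid returns */
	errno = 0;
	val = strtol(s, &endptr, 10);
	if (errno != 0)
		return false;			/* ERANGE: out of long's range */

	/* anything left over after the number is trailing garbage */
	if (*endptr != '\0')
		return false;

	if (val < INT_MIN || val > INT_MAX)
		return false;

	*result = (int) val;
	return true;
}
```

Note that on LP64 platforms the INT_MIN/INT_MAX clamp does real work (long is wider than int), while on ILP32 it is a harmless no-op; inputs such as "level," from the crash report above fail the trailing-garbage check rather than reaching strtol's range handling.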
I searched for the \"contain a an\"\ntypo that you mentioned but was not able to find it. Can you give me a\nmore specific pointer?\n\nI looked a little bit more at the compression method vs. compression\nalgorithm thing. I agree that there is some inconsistency in\nterminology here, but I'm still not sure that we are well-served by\ntrying to make it totally uniform, especially if we pick the word\n\"method\" as the standard rather than \"algorithm\". In my opinion,\n\"method\" is less specific than \"algorithm\". If someone asks me to\nchoose a compression algorithm, I know that I should give an answer\nlike \"lz4\" or \"zstd\". If they ask me to pick a compression method, I'm\nnot quite sure whether they want that kind of answer or whether they\nwant something more detailed, like \"use lz4 with compression level 3\nand a 1MB block size\". After all, that is (at least according to my\nunderstanding of how English works) a perfectly valid answer to the\nquestion \"what method should I use to compress this data?\" -- but not\nto the question \"what algorithm should I use to compress this data?\".\nThe latter can ONLY be properly answered by saying something like\n\"lz4\". And I think that's really the root of my hesitation to make the\nkinds of changes you want here. If it's just a question of specifying\na compression algorithm and a level, I don't think using the name\n\"method\" for the algorithm is going to be too bad. But as we enrich\nthe system with multiple compression algorithms each of which may have\nmultiple and different parameters, I think the whole thing becomes\nmurkier and the need for precision in language goes up.\n\nNow that is of course an arguable position and you're welcome to\ndisagree with it, but I think that's part of why I'm hesitating.\nAnother part of it, at least for me, is that complete uniformity is\nnot always a positive. 
I suppose all of us have had the experience at\nsome point of reading a manual that says something like \"to activate\nthe boil water function, press and release the 'boil water' button\"\nand rolled our eyes at how useless it was. It's important to me that\nwe don't fall into that trap. We clearly don't want to go ballistic\nand have random inconsistencies in language for no reason, but at the\nsame time, it's not useful to tell people that METHOD should be\nreplaced with a compression method and LEVEL with a compression level.\nI mean, if you end up saying something like that interspersed with\nnon-obvious information, that is OK, and I don't want to overstate the\npoint I'm trying to make. But it seems to me that if there's a little\nvariation in phrasing and we end up saying that METHOD means the\ncompression algorithm or that ALGORITHM means the compression method\nor whatever, that can actually make things more clear. Here again it's\ndebatable: how much variation in phraseology is helpful, and at what\npoint does it just start to seem inconsistent? Well, everyone may have\ntheir own opinion.\n\nI'm not trying to pretend that this patch (or the existing code base)\ngets this all right. But I do think that, to the extent that we have a\nconsidered position on what to do here, we can make that change later,\nperhaps even after getting some user feedback on what does and does\nnot make sense to other people. And I also think that what we end up\ndoing here may well end up being more nuanced than a blanket\nsearch-and-replace. I'm not saying we couldn't make a blanket\nsearch-and-replace. 
I just don't see it as necessarily creating value,\nor being all that closely connected to the goal of this patch, which\nis to quickly clean up a forward-compatibility risk before we hit\nfeature freeze.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 21 Mar 2022 12:57:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 21, 2022 at 12:57:36PM -0400, Robert Haas wrote:\n> > typo: contain a an\n> I searched for the \"contain a an\" typo that you mentioned but was not able to\n> find it. Can you give me a more specific pointer?\n\nHere:\n\n+ * during parsing, and will otherwise contain a an appropriate error message.\n\n> I looked a little bit more at the compression method vs. compression\n> algorithm thing. I agree that there is some inconsistency in\n> terminology here, but I'm still not sure that we are well-served by\n> trying to make it totally uniform, especially if we pick the word\n> \"method\" as the standard rather than \"algorithm\". In my opinion,\n> \"method\" is less specific than \"algorithm\". If someone asks me to\n> choose a compression algorithm, I know that I should give an answer\n> like \"lz4\" or \"zstd\". If they ask me to pick a compression method, I'm\n> not quite sure whether they want that kind of answer or whether they\n> want something more detailed, like \"use lz4 with compression level 3\n> and a 1MB block size\". After all, that is (at least according to my\n> understanding of how English works) a perfectly valid answer to the\n> question \"what method should I use to compress this data?\" -- but not\n> to the question \"what algorithm should I use to compress this data?\".\n> The latter can ONLY be properly answered by saying something like\n> \"lz4\". 
And I think that's really the root of my hesitation to make the\n> kinds of changes you want here.\n\nI think \"algorithm\" could be much more nuanced than \"lz4\", but I also think\nwe've spent more than enough time on it now :)\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 21 Mar 2022 13:22:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 21, 2022 at 2:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> + * during parsing, and will otherwise contain a an appropriate error message.\n\nOK, thanks. v4 attached.\n\n> I think \"algorithm\" could be much more nuanced than \"lz4\", but I also think\n> we've spent more than enough time on it now :)\n\nOh dear. But yes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 21 Mar 2022 14:25:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n\n> On Mon, Mar 21, 2022 at 2:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> + * during parsing, and will otherwise contain a an appropriate error message.\n>\n> OK, thanks. v4 attached.\n\nI haven't read the whole patch, but I noticed an omission in the\ndocumentation changes:\n\n> diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml\n> index 9178c779ba..00c593f1af 100644\n> --- a/doc/src/sgml/protocol.sgml\n> +++ b/doc/src/sgml/protocol.sgml\n> @@ -2731,14 +2731,24 @@ The commands accepted in replication mode are:\n> <varlistentry>\n> - <term><literal>COMPRESSION_LEVEL</literal> <replaceable>level</replaceable></term>\n> + <term><literal>COMPRESSION_DETAIL</literal> <replaceable>detail</replaceable></term>\n> <listitem>\n> <para>\n> Specifies the compression level to be used.\n\nThis is no longer the accurate. 
How about something like like \"Specifies\ndetails of the chosen compression method\"?\n\n- ilmari\n\n\n", "msg_date": "Mon, 21 Mar 2022 18:41:33 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 21, 2022 at 2:41 PM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> This is no longer the accurate. How about something like like \"Specifies\n> details of the chosen compression method\"?\n\nGood catch. v5 attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Mar 2022 11:37:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Tue, Mar 22, 2022 at 11:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Mar 21, 2022 at 2:41 PM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n> > This is no longer the accurate. How about something like like \"Specifies\n> > details of the chosen compression method\"?\n>\n> Good catch. v5 attached.\n\nAnd committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 09:19:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "[ Changing subject line in the hopes of attracting more eyeballs. ]\n\nOn Mon, Mar 14, 2022 at 12:11 PM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> I tried to implement support for parallel ZSTD compression.\n\nHere's a new patch for this. 
It's more of a rewrite than an update,\nhonestly; commit ffd53659c46a54a6978bcb8c4424c1e157a2c0f1 necessitated\ntotally different options handling, but I also redid the test cases,\nthe documentation, and the error message.\n\nFor those who may not have been following along, here's an executive\nsummary: libzstd offers an option for parallel compression. It's\nintended to be transparent: you just say you want it, and the library\ntakes care of it for you. Since we have the ability to do backup\ncompression on either the client or the server side, we can expose\nthis option in both locations. That would be cool, because it would\nallow for really fast backup compression with a good compression\nratio. It would also mean that we would be, or really libzstd would\nbe, spawning threads inside the PostgreSQL backend. Short of cats and\ndogs living together, it's hard to think of anything more terrifying,\nbecause the PostgreSQL backend is very much not thread-safe. However,\na lot of the things we usually worry about when people make noises\nabout using threads in the backend don't apply here, because the\nthreads are hidden away behind libzstd interfaces and can't execute\nany PostgreSQL code. Therefore, I think it might be safe to just ...\nturn this on. One reason I think that is that this whole approach was\nrecommended to me by Andres ... but that's not to say that there\ncouldn't be problems. I worry a bit that the mere presence of threads\ncould in some way mess things up, but I don't know what the mechanism\nfor that would be, and I don't want to postpone shipping useful\nfeatures based on nebulous fears.\n\nIn my ideal world, I'd like to push this into v15. 
I've done a lot of\nwork to improve the backup code in this release, and this is actually\na very small change yet one that potentially enables the project to\nget a lot more value out of the work that has already been committed.\nThat said, I also don't want to break the world, so if you have an\nidea what this would break, please tell me.\n\nFor those curious as to how this affects performance and backup size,\nI loaded up the UK land registry database. That creates a 3769MB\ndatabase. Then I backed it up using client-side compression and\nserver-side compression using the various different algorithms that\nare supported in the master branch, plus parallel zstd.\n\nno compression: 3.7GB, 9 seconds\ngzip: 1.5GB, 140 seconds with server-side, 141 seconds with client-side\nlz4: 2.0GB, 13 seconds with server-side, 12 seconds with client-side\n\nFor both parallel and non-parallel zstd compression, I see differences\nbetween the compressed size depending on where the compression is\ndone. I don't know whether this is an expected behavior of the zstd\nlibrary or a bug. Both files uncompress OK and pass pg_verifybackup,\nbut that doesn't mean we're not, for example, selecting different\ncompression levels where we shouldn't be. I'll try to figure out\nwhat's going on here.\n\nzstd, client-side: 1.7GB, 17 seconds\nzstd, server-side: 1.3GB, 25 seconds\nparallel zstd, 4 workers, client-side: 1.7GB, 7.5 seconds\nparallel zstd, 4 workers, server-side: 1.3GB, 7.2 seconds\n\nNotice that compressing the backup with parallel zstd is actually\nfaster than taking an uncompressed backup, even though this test is\nall being run on the same machine. That's kind of crazy to me: the\nparallel compression is so fast that we save more time on I/O than we\nspend compressing. 
This assumes of course that you have plenty of CPU\nresources and limited I/O resources, which won't be true for everyone,\nbut it's not an unusual situation.\n\nI think the documentation changes in this patch might not be quite up\nto scratch. I think there's a brewing problem here: as we add more\ncompression options, whether or not that happens in this release, and\nregardless of what specific options we add, the way things are\nstructured right now, we're going to end up either duplicating a bunch\nof stuff between the pg_basebackup documentation and the BASE_BACKUP\ndocumentation, or else one of those places is going to end up lacking\ninformation that someone reading it might like to have. I'm not\nexactly sure what to do about this, though.\n\nThis patch contains a trivial adjustment to\nPostgreSQL::Test::Cluster::run_log to make it return a useful value\ninstead of not. I think that should be pulled out and committed\nindependently regardless of what happens to this patch overall, and\npossibly back-patched.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 23 Mar 2022 16:34:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "multithreaded zstd backup compression for client and server" }, { "msg_contents": "Hi,\n\nOn 2022-03-23 16:34:04 -0400, Robert Haas wrote:\n> Therefore, I think it might be safe to just ... turn this on. One reason I\n> think that is that this whole approach was recommended to me by Andres ...\n\nI didn't do a super careful analysis of the issues... But I do think it's\npretty much the one case where it \"should\" be safe.\n\nThe most likely source of problem would errors thrown while zstd threads are\nalive. Should make sure that that can't happen.\n\n\nWhat is the lifetime of the threads zstd spawns? Are they tied to a single\ncompression call? A single ZSTD_createCCtx()? 
If the latter, how bulletproof\nis our code ensuring that we don't leak such contexts?\n\nIf they're short-lived, are we compressing large enough batches to not waste a\nlot of time starting/stopping threads?\n\n\n> but that's not to say that there couldn't be problems. I worry a bit that\n> the mere presence of threads could in some way mess things up, but I don't\n> know what the mechanism for that would be, and I don't want to postpone\n> shipping useful features based on nebulous fears.\n\nOne thing that'd be good to tests for is cancelling in-progress server-side\ncompression. And perhaps a few assertions that ensure that we don't escape\nwith some threads still running. That'd have to be platform dependent, but I\ndon't see a problem with that in this case.\n\n\n\n> For both parallel and non-parallel zstd compression, I see differences\n> between the compressed size depending on where the compression is\n> done. I don't know whether this is an expected behavior of the zstd\n> library or a bug. Both files uncompress OK and pass pg_verifybackup,\n> but that doesn't mean we're not, for example, selecting different\n> compression levels where we shouldn't be. 
I'll try to figure out\n> what's going on here.\n>\n> zstd, client-side: 1.7GB, 17 seconds\n> zstd, server-side: 1.3GB, 25 seconds\n> parallel zstd, 4 workers, client-side: 1.7GB, 7.5 seconds\n> parallel zstd, 4 workers, server-side: 1.3GB, 7.2 seconds\n\nWhat causes this fairly massive client-side/server-side size difference?\n\n\n\n> +\t/*\n> +\t * We check for failure here because (1) older versions of the library\n> +\t * do not support ZSTD_c_nbWorkers and (2) the library might want to\n> +\t * reject unreasonable values (though in practice it does not seem to do\n> +\t * so).\n> +\t */\n> +\tret = ZSTD_CCtx_setParameter(streamer->cctx, ZSTD_c_nbWorkers,\n> +\t\t\t\t\t\t\t\t compress->workers);\n> +\tif (ZSTD_isError(ret))\n> +\t{\n> +\t\tpg_log_error(\"could not set compression worker count to %d: %s\",\n> +\t\t\t\t\t compress->workers, ZSTD_getErrorName(ret));\n> +\t\texit(1);\n> +\t}\n\nWill this cause test failures on systems with older zstd?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Mar 2022 14:14:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "+ * We check for failure here because (1) older versions of the library\n+ * do not support ZSTD_c_nbWorkers and (2) the library might want to\n+ * reject an unreasonable values (though in practice it does not seem to do\n+ * so).\n+ */\n+ ret = ZSTD_CCtx_setParameter(mysink->cctx, ZSTD_c_nbWorkers,\n+ mysink->workers);\n+ if (ZSTD_isError(ret))\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"could not set compression worker count to %d: %s\",\n+ mysink->workers, ZSTD_getErrorName(ret)));\n\nAlso because the library may not be compiled with threading. 
A few days ago, I\ntried to rebase the original \"parallel workers\" patch over the COMPRESS DETAIL\npatch but then couldn't test it, even after trying various versions of the zstd\npackage and trying to compile it locally. I'll try again soon...\n\nI think you should also test the return value when setting the compress level.\nNot only because it's generally a good idea, but also because I suggested to\nsupport negative compression levels. Which weren't allowed before v1.3.4, and\nthen the range is only defined since 1.3.6 (ZSTD_minCLevel). At some point,\nthe range may have been -7..22 but now it's -131072..22.\n\nlib/compress/zstd_compress.c:int ZSTD_minCLevel(void) { return (int)-ZSTD_TARGETLENGTH_MAX; }\nlib/zstd.h:#define ZSTD_TARGETLENGTH_MAX ZSTD_BLOCKSIZE_MAX\nlib/zstd.h:#define ZSTD_BLOCKSIZE_MAX (1<<ZSTD_BLOCKSIZELOG_MAX)\nlib/zstd.h:#define ZSTD_BLOCKSIZELOG_MAX 17\n; -1<<17\n -131072", "msg_date": "Wed, 23 Mar 2022 16:52:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Wed, Mar 23, 2022 at 5:14 PM Andres Freund <andres@anarazel.de> wrote:\n> The most likely source of problem would errors thrown while zstd threads are\n> alive. Should make sure that that can't happen.\n>\n> What is the lifetime of the threads zstd spawns? Are they tied to a single\n> compression call? A single ZSTD_createCCtx()? If the latter, how bulletproof\n> is our code ensuring that we don't leak such contexts?\n\nI haven't found any real documentation explaining how libzstd manages\nits threads. I am assuming that it is tied to the ZSTD_CCtx, but I\ndon't know. I guess I could try to figure it out from the source code.\nAnyway, what we have now is a PG_TRY()/PG_CATCH() block around the\ncode that uses the basink which will cause bbsink_zstd_cleanup() to\nget called in the event of an error. 
That will do ZSTD_freeCCtx().\n\nIt's probably also worth mentioning here that even if, contrary to\nexpectations, the compression threads hang around to the end of time\nand chill, in practice nobody is likely to run BASE_BACKUP and then\nkeep the connection open for a long time afterward. So it probably\nwouldn't really affect resource utilization in real-world scenarios\neven if the threads never exited, as long as they didn't, you know,\nbusy-loop in the background. And I assume the actual library behavior\ncan't be nearly that bad. This is a pretty mainstream piece of\nsoftware.\n\n> If they're short-lived, are we compressing large enough batches to not waste a\n> lot of time starting/stopping threads?\n\nWell, we're using a single ZSTD_CCtx for an entire base backup. Again,\nI haven't found documentation explaining with libzstd is actually\ndoing, but it's hard to see how we could make the batch any bigger\nthan that. The context gets reset for each new tablespace, which may\nor may not do anything to the compression threads.\n\n> > but that's not to say that there couldn't be problems. I worry a bit that\n> > the mere presence of threads could in some way mess things up, but I don't\n> > know what the mechanism for that would be, and I don't want to postpone\n> > shipping useful features based on nebulous fears.\n>\n> One thing that'd be good to tests for is cancelling in-progress server-side\n> compression. And perhaps a few assertions that ensure that we don't escape\n> with some threads still running. That'd have to be platform dependent, but I\n> don't see a problem with that in this case.\n\nMore specific suggestions, please?\n\n> > For both parallel and non-parallel zstd compression, I see differences\n> > between the compressed size depending on where the compression is\n> > done. I don't know whether this is an expected behavior of the zstd\n> > library or a bug. 
Both files uncompress OK and pass pg_verifybackup,\n> > but that doesn't mean we're not, for example, selecting different\n> > compression levels where we shouldn't be. I'll try to figure out\n> > what's going on here.\n> >\n> > zstd, client-side: 1.7GB, 17 seconds\n> > zstd, server-side: 1.3GB, 25 seconds\n> > parallel zstd, 4 workers, client-side: 1.7GB, 7.5 seconds\n> > parallel zstd, 4 workers, server-side: 1.3GB, 7.2 seconds\n>\n> What causes this fairly massive client-side/server-side size difference?\n\nYou seem not to have read what I wrote about this exact point in the\ntext which you quoted.\n\n> Will this cause test failures on systems with older zstd?\n\nI put a bunch of logic in the test case to try to avoid that, so\nhopefully not, but if it does, we can adjust the logic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 18:31:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Wed, Mar 23, 2022 at 5:52 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Also because the library may not be compiled with threading. A few days ago, I\n> tried to rebase the original \"parallel workers\" patch over the COMPRESS DETAIL\n> patch but then couldn't test it, even after trying various versions of the zstd\n> package and trying to compile it locally. I'll try again soon...\n\nAh. Right, I can update the comment to mention that.\n\n> I think you should also test the return value when setting the compress level.\n> Not only because it's generally a good idea, but also because I suggested to\n> support negative compression levels. Which weren't allowed before v1.3.4, and\n> then the range is only defined since 1.3.6 (ZSTD_minCLevel). At some point,\n> the range may have been -7..22 but now it's -131072..22.\n\nYeah, I was thinking that might be a good change. 
It would require\nadjusting some other code though, because right now only compression\nlevels 1..22 are accepted anyhow.\n\n> lib/compress/zstd_compress.c:int ZSTD_minCLevel(void) { return (int)-ZSTD_TARGETLENGTH_MAX; }\n> lib/zstd.h:#define ZSTD_TARGETLENGTH_MAX ZSTD_BLOCKSIZE_MAX\n> lib/zstd.h:#define ZSTD_BLOCKSIZE_MAX (1<<ZSTD_BLOCKSIZELOG_MAX)\n> lib/zstd.h:#define ZSTD_BLOCKSIZELOG_MAX 17\n> ; -1<<17\n> -131072\n\nSo does that, like, compress the value by making it way bigger? :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 18:57:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Wed, Mar 23, 2022 at 04:34:04PM -0400, Robert Haas wrote:\n> be, spawning threads inside the PostgreSQL backend. Short of cats and\n> dogs living together, it's hard to think of anything more terrifying,\n> because the PostgreSQL backend is very much not thread-safe. However,\n> a lot of the things we usually worry about when people make noises\n> about using threads in the backend don't apply here, because the\n> threads are hidden away behind libzstd interfaces and can't execute\n> any PostgreSQL code. Therefore, I think it might be safe to just ...\n> turn this on. One reason I think that is that this whole approach was\n> recommended to me by Andres ... but that's not to say that there\n> couldn't be problems. 
I worry a bit that the mere presence of threads\n> could in some way mess things up, but I don't know what the mechanism\n> for that would be, and I don't want to postpone shipping useful\n> features based on nebulous fears.\n\nNote that the PGDG .RPMs and .DEBs are already linked with pthread, via\nlibxml => liblzma.\n\n$ ldd /usr/pgsql-14/bin/postgres |grep xm\n libxml2.so.2 => /lib64/libxml2.so.2 (0x00007faab984e000)\n$ objdump -p /lib64/libxml2.so.2 |grep NEED\n NEEDED libdl.so.2\n NEEDED libz.so.1\n NEEDED liblzma.so.5\n NEEDED libm.so.6\n NEEDED libc.so.6\n VERNEED 0x0000000000019218\n VERNEEDNUM 0x0000000000000005\n$ objdump -p /lib64/liblzma.so.5 |grep NEED\n NEEDED libpthread.so.0\n\n\n\nDid you try this on windows at all ? It's probably no surprise that zstd\nimplements threading differently there.\n\n\n", "msg_date": "Wed, 23 Mar 2022 18:07:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "Hi,\n\nOn 2022-03-23 18:31:12 -0400, Robert Haas wrote:\n> On Wed, Mar 23, 2022 at 5:14 PM Andres Freund <andres@anarazel.de> wrote:\n> > The most likely source of problem would errors thrown while zstd threads are\n> > alive. Should make sure that that can't happen.\n> >\n> > What is the lifetime of the threads zstd spawns? Are they tied to a single\n> > compression call? A single ZSTD_createCCtx()? If the latter, how bulletproof\n> > is our code ensuring that we don't leak such contexts?\n> \n> I haven't found any real documentation explaining how libzstd manages\n> its threads. I am assuming that it is tied to the ZSTD_CCtx, but I\n> don't know. 
I guess I could try to figure it out from the source code.\n\nI found this the following section in the manual [1]:\n\n ZSTD_c_nbWorkers=400, /* Select how many threads will be spawned to compress in parallel.\n * When nbWorkers >= 1, triggers asynchronous mode when invoking ZSTD_compressStream*() :\n * ZSTD_compressStream*() consumes input and flush output if possible, but immediately gives back control to caller,\n * while compression is performed in parallel, within worker thread(s).\n * (note : a strong exception to this rule is when first invocation of ZSTD_compressStream2() sets ZSTD_e_end :\n * in which case, ZSTD_compressStream2() delegates to ZSTD_compress2(), which is always a blocking call).\n * More workers improve speed, but also increase memory usage.\n * Default value is `0`, aka \"single-threaded mode\" : no worker is spawned,\n * compression is performed inside Caller's thread, and all invocations are blocking */\n\n\"ZSTD_compressStream*() consumes input ... immediately gives back control\"\npretty much confirms that.\n\n\nDo we care about zstd's memory usage here? I think it's OK to mostly ignore\nwork_mem/maintenance_work_mem here, but I could also see limiting concurrency\nso that estimated memory usage would fit into work_mem/maintenance_work_mem.\n\n\n\n> It's probably also worth mentioning here that even if, contrary to\n> expectations, the compression threads hang around to the end of time\n> and chill, in practice nobody is likely to run BASE_BACKUP and then\n> keep the connection open for a long time afterward. So it probably\n> wouldn't really affect resource utilization in real-world scenarios\n> even if the threads never exited, as long as they didn't, you know,\n> busy-loop in the background. And I assume the actual library behavior\n> can't be nearly that bad. 
This is a pretty mainstream piece of\n> software.\n\nI'm not really worried about resource utilization, more about the existence of\nthreads moving us into undefined behaviour territory or such. I don't think\nthat's possible, but it's IIRC UB to fork() while threads are present and do\npretty much *anything* other than immediately exec*().\n\n\n> > > but that's not to say that there couldn't be problems. I worry a bit that\n> > > the mere presence of threads could in some way mess things up, but I don't\n> > > know what the mechanism for that would be, and I don't want to postpone\n> > > shipping useful features based on nebulous fears.\n> >\n> > One thing that'd be good to test for is cancelling in-progress server-side\n> > compression. And perhaps a few assertions that ensure that we don't escape\n> > with some threads still running. That'd have to be platform dependent, but I\n> > don't see a problem with that in this case.\n> \n> More specific suggestions, please?\n\nI was thinking of doing something like calling pthread_is_threaded_np() before\nand after the zstd section and erroring out if they differ. But I forgot that\nthat's a mac-ism.\n\n\n> > > For both parallel and non-parallel zstd compression, I see differences\n> > > between the compressed size depending on where the compression is\n> > > done. I don't know whether this is an expected behavior of the zstd\n> > > library or a bug. Both files uncompress OK and pass pg_verifybackup,\n> > > but that doesn't mean we're not, for example, selecting different\n> > > compression levels where we shouldn't be. 
I'll try to figure out\n> > > what's going on here.\n> > >\n> > > zstd, client-side: 1.7GB, 17 seconds\n> > > zstd, server-side: 1.3GB, 25 seconds\n> > > parallel zstd, 4 workers, client-side: 1.7GB, 7.5 seconds\n> > > parallel zstd, 4 workers, server-side: 1.3GB, 7.2 seconds\n> >\n> > What causes this fairly massive client-side/server-side size difference?\n> \n> You seem not to have read what I wrote about this exact point in the\n> text which you quoted.\n\nSomehow not...\n\nPerhaps it's related to the amounts of memory fed to ZSTD_compressStream2() in\none invocation? I recall that there's some differences between basebackup\nclient / serverside around buffer sizes - but that's before all the recent-ish\nchanges...\n\nGreetings,\n\nAndres Freund\n\n[1] http://facebook.github.io/zstd/zstd_manual.html\n\n\n", "msg_date": "Wed, 23 Mar 2022 16:31:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On 2022-03-23 18:07:01 -0500, Justin Pryzby wrote:\n> Did you try this on windows at all ?\n\nReally should get zstd installed in the windows cf environment...\n\n\n> It's probably no surprise that zstd implements threading differently there.\n\nWorth noting that we have a few of our own threads running on windows already\n- so we're guaranteed to build against the threaded standard libraries etc\nalready.\n\n\n", "msg_date": "Wed, 23 Mar 2022 16:36:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Wed, Mar 23, 2022 at 7:07 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Did you try this on windows at all ? It's probably no surprise that zstd\n> implements threading differently there.\n\nI did not. 
I haven't had a properly functioning Windows development\nenvironment in about a decade.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 09:00:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "Hi Robert,\n\nI haven't reviewed the meat of the patch in detail, but I noticed\nsomething in the tests:\n\nRobert Haas <robertmhaas@gmail.com> writes:\n> diff --git a/src/bin/pg_verifybackup/t/009_extract.pl b/src/bin/pg_verifybackup/t/009_extract.pl\n> index 9f9cc7540b..e17e7cad51 100644\n> --- a/src/bin/pg_verifybackup/t/009_extract.pl\n> +++ b/src/bin/pg_verifybackup/t/009_extract.pl\n[…]\n> +\t\tif ($backup_stdout ne '')\n> +\t\t{\n> +\t\t\tprint \"# standard output was:\\n$backup_stdout\";\n> +\t\t}\n> +\t\tif ($backup_stderr ne '')\n> +\t\t{\n> +\t\t\tprint \"# standard error was:\\n$backup_stderr\";\n> +\t\t}\n[…]\n> diff --git a/src/bin/pg_verifybackup/t/010_client_untar.pl b/src/bin/pg_verifybackup/t/010_client_untar.pl\n> index 487e30e826..5f6a4b9963 100644\n> --- a/src/bin/pg_verifybackup/t/010_client_untar.pl\n> +++ b/src/bin/pg_verifybackup/t/010_client_untar.pl\n[…]\n> +\t\tif ($backup_stdout ne '')\n> +\t\t{\n> +\t\t\tprint \"# standard output was:\\n$backup_stdout\";\n> +\t\t}\n> +\t\tif ($backup_stderr ne '')\n> +\t\t{\n> +\t\t\tprint \"# standard error was:\\n$backup_stderr\";\n> +\t\t}\n\nPer the TAP protocol, every line of non-test-result output should be\nprefixed by \"# \". 
The note() function does this for you, see\nhttps://metacpan.org/pod/Test::More#Diagnostics for details.\n\n- ilmari\n\n\n", "msg_date": "Thu, 24 Mar 2022 13:19:31 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Wed, Mar 23, 2022 at 7:31 PM Andres Freund <andres@anarazel.de> wrote:\n> I found this the following section in the manual [1]:\n>\n> ZSTD_c_nbWorkers=400, /* Select how many threads will be spawned to compress in parallel.\n> * When nbWorkers >= 1, triggers asynchronous mode when invoking ZSTD_compressStream*() :\n> * ZSTD_compressStream*() consumes input and flush output if possible, but immediately gives back control to caller,\n> * while compression is performed in parallel, within worker thread(s).\n> * (note : a strong exception to this rule is when first invocation of ZSTD_compressStream2() sets ZSTD_e_end :\n> * in which case, ZSTD_compressStream2() delegates to ZSTD_compress2(), which is always a blocking call).\n> * More workers improve speed, but also increase memory usage.\n> * Default value is `0`, aka \"single-threaded mode\" : no worker is spawned,\n> * compression is performed inside Caller's thread, and all invocations are blocking */\n>\n> \"ZSTD_compressStream*() consumes input ... immediately gives back control\"\n> pretty much confirms that.\n\nI saw that too, but I didn't consider it conclusive. It would be nice\nif their documentation had a bit more detail on what's really\nhappening.\n\n> Do we care about zstd's memory usage here? 
I think it's OK to mostly ignore\n> work_mem/maintenance_work_mem here, but I could also see limiting concurrency\n> so that estimated memory usage would fit into work_mem/maintenance_work_mem.\n\nI think it's possible that we want to do nothing and possible that we\nwant to do something, but I think it's very unlikely that the thing we\nwant to do is related to maintenance_work_mem. Say we soft-cap the\ncompression level to the one which we think will fit within\nmaintanence_work_mem. I think the most likely outcome is that people\nwill not get the compression level they request and be confused about\nwhy that has happened. It also seems possible that we'll be wrong\nabout how much memory will be used - say, because somebody changes the\nlibrary behavior in a new release - and will limit it to the wrong\nlevel. If we're going to do anything here, I think it should be to\nlimit based on the compression level itself and not based how much\nmemory we think that level will use.\n\nBut that leaves the question of whether we should even try to impose\nsome kind of limit, and there I'm not sure. It feels like it might be\noverengineered, because we're only talking about users who have\nreplication privileges, and if those accounts are subverted there are\nbig problems anyway. I think if we imposed a governance system here it\nwould get very little use. On the other hand, I think that the higher\nzstd compression levels of 20+ can actually use a ton of memory, so we\nmight want to limit access to those somehow. Apparently on the command\nline you have to say --ultra -- not sure if there's a corresponding\nAPI call or if that's a guard that's built specifically into the CLI.\n\n> Perhaps it's related to the amounts of memory fed to ZSTD_compressStream2() in\n> one invocation? 
I recall that there's some differences between basebackup\n> client / serverside around buffer sizes - but that's before all the recent-ish\n> changes...\n\nThat thought occurred to me too but I haven't investigated yet.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 09:39:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Wed, Mar 23, 2022 at 5:52 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I think you should also test the return value when setting the compress level.\n> Not only because it's generally a good idea, but also because I suggested to\n> support negative compression levels. Which weren't allowed before v1.3.4, and\n> then the range is only defined since 1.3.6 (ZSTD_minCLevel). At some point,\n> the range may have been -7..22 but now it's -131072..22.\n\nHi,\n\nThe attached patch fixes a few goofs around backup compression. It\nadds a check that setting the compression level succeeds, although it\ndoes not allow the broader range of compression levels Justin notes\nabove. That can be done separately, I guess, if we want to do it. It\nalso fixes the problem that client and server-side zstd compression\ndon't actually compress equally well; that turned out to be a bug in\nthe handling of compression options. 
Finally it adds an exit call to\nan unlikely failure case so that we would, if that case should occur,\nprint a message and exit, rather than the current behavior of printing\na message and then dereferencing a null pointer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 24 Mar 2022 17:56:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "fixing a few backup compression goofs" }, { "msg_contents": "Hi,\n\nThe changes look good to me.\n\nThanks,\nDipesh", "msg_date": "Fri, 25 Mar 2022 18:53:26 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing a few backup compression goofs" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> [ v5-0001-Replace-BASE_BACKUP-COMPRESSION_LEVEL-option-with.patch ]\n\nCoverity has a nitpick about this:\n\n/srv/coverity/git/pgsql-git/postgresql/src/common/backup_compression.c: 194 in parse_bc_specification()\n193 \t\t/* Advance to next entry and loop around. */\n>>> CID 1503251: Null pointer dereferences (REVERSE_INULL)\n>>> Null-checking "vend" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.\n194 \t\tspecification = vend == NULL ? kwend + 1 : vend + 1;\n195 \t}\n196 }\n\nNot sure if you should remove this null-check or add some other ones,\nbut I think you ought to do one or the other.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 27 Mar 2022 13:47:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Wed, Mar 23, 2022 at 06:57:04PM -0400, Robert Haas wrote:\n> On Wed, Mar 23, 2022 at 5:52 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Also because the library may not be compiled with threading. 
A few days ago, I\n> > tried to rebase the original \"parallel workers\" patch over the COMPRESS DETAIL\n> > patch but then couldn't test it, even after trying various versions of the zstd\n> > package and trying to compile it locally. I'll try again soon...\n> \n> Ah. Right, I can update the comment to mention that.\n\nActually, I suggest to remove those comments:\n| \"We check for failure here because...\"\n\nThat should be the rule rather than the exception, so shouldn't require\njustifying why one might checks the return value of library and system calls.\n\nIn bbsink_zstd_new(), I think you need to check to see if workers were\nrequested (same as the issue you found with \"level\"). If someone builds\nagainst a version of zstd which doesn't support some parameter, you'll\ncurrently call SetParameter with that flag anyway, with a default value.\nThat's not currently breaking anything for me (even though workers=N doesn't\nwork) but I think it's fragile and could break, maybe when compiled against an\nold zstd, or with future options. SetParameter should only be called when the\nuser requested to set the parameter. I handled that for workers in 003, but\ndidn't touch \"level\", which is probably fine, but maybe should change for\nconsistency.\n\nsrc/backend/replication/basebackup_zstd.c: elog(ERROR, \"could not set zstd compression level to %d: %s\",\nsrc/bin/pg_basebackup/bbstreamer_gzip.c: pg_log_error(\"could not set compression level %d: %s\",\nsrc/bin/pg_basebackup/bbstreamer_zstd.c: pg_log_error(\"could not set compression level to: %d: %s\",\n\nI'm not sure why these messages sometimes mention the current compression\nmethod and sometimes don't. I suggest that they shouldn't - errcontext will\nhave the algorithm, and the user already specified it anyway. It'd allow the\ncompiler to merge strings.\n\nHere's a patch for zstd --long mode. (I don't actually use pg_basebackup, but\nI will want to use long mode with pg_dump). 
The \"strategy\" params may also be\ninteresting, but I haven't played with it. rsyncable is certainly interesting,\nbut currently an experimental, nonpublic interface - and a good example of why\nto not call SetParameter for params which the user didn't specify: PGDG might\neventually compile postgres against a zstd which supports rsyncable flag. And\nsomeone might install somewhere which doesn't support rsyncable, but the server\nwould try to call SetParameter(rsyncable, 0), and the rsyncable ID number\nwould've changed, so zstd would probably reject it, and basebackup would be\nunusable...\n\n$ time src/bin/pg_basebackup/pg_basebackup -h /tmp -Ft -D- --wal-method=none --no-manifest -Z zstd:long=1 --checkpoint fast |wc -c\n4625935\nreal 0m1,334s\n\n$ time src/bin/pg_basebackup/pg_basebackup -h /tmp -Ft -D- --wal-method=none --no-manifest -Z zstd:long=0 --checkpoint fast |wc -c\n8426516\nreal 0m0,880s", "msg_date": "Sun, 27 Mar 2022 15:50:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Fri, Mar 25, 2022 at 9:23 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> The changes look good to me.\n\nThanks. Committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Mar 2022 12:28:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing a few backup compression goofs" }, { "msg_contents": "On Thu, Mar 24, 2022 at 9:19 AM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> Per the TAP protocol, every line of non-test-result output should be\n> prefixed by \"# \". The note() function does this for you, see\n> https://metacpan.org/pod/Test::More#Diagnostics for details.\n\nTrue, but that also means it shows up in the actual failure message,\nwhich seems too verbose. 
By just using 'print', it ends up in the log\nfile if it's needed, but not anywhere else. Maybe there's a better way\nto do this, but I don't think using note() is what I want.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Mar 2022 12:37:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n\n> On Thu, Mar 24, 2022 at 9:19 AM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n>> Per the TAP protocol, every line of non-test-result output should be\n>> prefixed by \"# \". The note() function does this for you, see\n>> https://metacpan.org/pod/Test::More#Diagnostics for details.\n>\n> True, but that also means it shows up in the actual failure message,\n> which seems too verbose. By just using 'print', it ends up in the log\n> file if it's needed, but not anywhere else. Maybe there's a better way\n> to do this, but I don't think using note() is what I want.\n\nThat is the difference between note() and diag(): note() prints to\nstdout so is not visible under a non-verbose prove run, while diag()\nprints to stderr so it's always visible.\n\n- ilmari\n\n\n", "msg_date": "Mon, 28 Mar 2022 17:52:23 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Mon, Mar 28, 2022 at 12:52 PM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> > True, but that also means it shows up in the actual failure message,\n> > which seems too verbose. By just using 'print', it ends up in the log\n> > file if it's needed, but not anywhere else. 
Maybe there's a better way\n> > to do this, but I don't think using note() is what I want.\n>\n> That is the difference between note() and diag(): note() prints to\n> stdout so is not visible under a non-verbose prove run, while diag()\n> prints to stderr so it's always visible.\n\nOK, but print doesn't do either of those things. The output only shows\nup in the log file, even with --verbose. Here's an example of what the\nlog file looks like:\n\n# Running: pg_verifybackup -n -m\n/Users/rhaas/pgsql/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/server-backup/backup_manifest\n-e /Users/rhaas/pgsql/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/extracted-backup\nbackup successfully verified\nok 6 - verify backup, compression gzip\n\nAs you can see, there is a line here that does not begin with #. That\nline is the standard output of a command that was run by the test\nscript.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Mar 2022 12:55:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Sun, Mar 27, 2022 at 4:50 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Actually, I suggest to remove those comments:\n> | \"We check for failure here because...\"\n>\n> That should be the rule rather than the exception, so shouldn't require\n> justifying why one might checks the return value of library and system calls.\n\nI went for modifying the comment rather than removing it. 
I agree with\nyou that checking for failure doesn't really require justification,\nbut I think that in a case like this it is useful to explain what we\nknow about why it might fail.\n\n> In bbsink_zstd_new(), I think you need to check to see if workers were\n> requested (same as the issue you found with "level").\n\nFixed.\n\n> src/backend/replication/basebackup_zstd.c: elog(ERROR, "could not set zstd compression level to %d: %s",\n> src/bin/pg_basebackup/bbstreamer_gzip.c: pg_log_error("could not set compression level %d: %s",\n> src/bin/pg_basebackup/bbstreamer_zstd.c: pg_log_error("could not set compression level to: %d: %s",\n>\n> I'm not sure why these messages sometimes mention the current compression\n> method and sometimes don't. I suggest that they shouldn't - errcontext will\n> have the algorithm, and the user already specified it anyway. It'd allow the\n> compiler to merge strings.\n\nI don't think that errcontext() helps here. On the client side, it\ndoesn't exist. On the server side, it's not in use. I do see\nSTATEMENT: <whatever> in the server log when a replication command\nthrows a server-side error, which is similar, but pg_basebackup\ndoesn't display that STATEMENT line. I don't really know how to\nbalance the legitimate desire for fewer messages against the\nalso-legitimate desire for clarity about where things are failing. I'm\nslightly inclined to think that including the algorithm name is\nbetter, because options are in the end algorithm-specific, but it's\ncertainly debatable. 
I would be interested in hearing other\nopinions...\n\nHere's an updated and rebased version of my patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Mar 2022 12:57:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Mon, Mar 28, 2022 at 12:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Here's an updated and rebased version of my patch.\n\nWell, that only updated the comment on the client side. Let's try again.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Mar 2022 13:32:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Sun, Mar 27, 2022 at 1:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Coverity has a nitpick about this:\n>\n> /srv/coverity/git/pgsql-git/postgresql/src/common/backup_compression.c: 194 in parse_bc_specification()\n> 193 /* Advance to next entry and loop around. */\n> >>> CID 1503251: Null pointer dereferences (REVERSE_INULL)\n> >>> Null-checking \"vend\" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.\n> 194 specification = vend == NULL ? kwend + 1 : vend + 1;\n> 195 }\n> 196 }\n>\n> Not sure if you should remove this null-check or add some other ones,\n> but I think you ought to do one or the other.\n\nYes, I think this is buggy. I think there's only a theoretical bug\nright now, because the only keyword we have is \"level\" and that\nrequires a value. But if I add an example keyword that does not\nrequire an associated value (as demonstrated in the attached patch)\nand do something like pg_basebackup -cfast -D whatever --compress\nlz4:example, then the present code will dereference \"vend\" even though\nit's NULL, which is not good. 
The attached patch also shows how I\nthink that should be fixed.\n\nAs I hope is apparent, the first hunk of this patch is not for commit,\nand the second hunk is for commit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Mar 2022 15:50:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Mar 27, 2022 at 1:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Not sure if you should remove this null-check or add some other ones,\n>> but I think you ought to do one or the other.\n\n> As I hope is apparent, the first hunk of this patch is not for commit,\n> and the second hunk is for commit.\n\nLooks plausible to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Mar 2022 16:38:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 28, 2022 at 03:50:50PM -0400, Robert Haas wrote:\n> On Sun, Mar 27, 2022 at 1:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Coverity has a nitpick about this:\n> >\n> > /srv/coverity/git/pgsql-git/postgresql/src/common/backup_compression.c: 194 in parse_bc_specification()\n> > 193 /* Advance to next entry and loop around. */\n> > >>> CID 1503251: Null pointer dereferences (REVERSE_INULL)\n> > >>> Null-checking \"vend\" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.\n> > 194 specification = vend == NULL ? kwend + 1 : vend + 1;\n> > 195 }\n> > 196 }\n> >\n> > Not sure if you should remove this null-check or add some other ones,\n> > but I think you ought to do one or the other.\n> \n> Yes, I think this is buggy. I think there's only a theoretical bug\n> right now, because the only keyword we have is \"level\" and that\n> requires a value. 
But if I add an example keyword that does not\n> require an associated value (as demonstrated in the attached patch)\n> and do something like pg_basebackup -cfast -D whatever --compress\n> lz4:example, then the present code will dereference \"vend\" even though\n> it's NULL, which is not good. The attached patch also shows how I\n> think that should be fixed.\n> \n> As I hope is apparent, the first hunk of this patch is not for commit,\n> and the second hunk is for commit.\n\nConfirmed that it's a real issue with my patch for zstd long match mode. But\nyou need to specify another option after the value-less flag option for it to\ncrash.\n\nI suggest to write it differently, as in 0002.\n\nThis also fixes some rebase-induced errors with my previous patches, and adds\nexpect_boolean().", "msg_date": "Mon, 28 Mar 2022 15:53:29 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 28, 2022 at 4:53 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I suggest to write it differently, as in 0002.\n\nThat doesn't seem better to me. What's the argument for it?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Mar 2022 17:39:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 28, 2022 at 05:39:31PM -0400, Robert Haas wrote:\n> On Mon, Mar 28, 2022 at 4:53 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I suggest to write it differently, as in 0002.\n> \n> That doesn't seem better to me. What's the argument for it?\n\nI find this much easier to understand:\n\n /* If we got an error or have reached the end of the string, stop. 
*/ \n- if (result->parse_error != NULL || *kwend == '\\0' || *vend == '\\0') \n+ if (result->parse_error != NULL) \n+ break; \n+ if (*kwend == '\\0') \n+ break; \n+ if (vend != NULL && *vend == '\\0') \n break; \n\nthan\n\n /* If we got an error or have reached the end of the string, stop. */\n- if (result->parse_error != NULL || *kwend == '\\0' || *vend == '\\0')\n+ if (result->parse_error != NULL ||\n+ (vend == NULL ? *kwend == '\\0' : *vend == '\\0'))\n\nAlso, why wouldn't *kwend be checked in any case ?\n\n\n", "msg_date": "Mon, 28 Mar 2022 19:07:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Also, why wouldn't *kwend be checked in any case ?\n\nI suspect Robert wrote it that way intentionally --- but if so,\nI agree it could do with more than zero commentary.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Mar 2022 20:11:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Mon, Mar 28, 2022 at 8:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I suspect Robert wrote it that way intentionally --- but if so,\n> I agree it could do with more than zero commentary.\n\nWell, the point is, we stop advancing kwend when we get to the end of\nthe keyword, and *vend when we get to the end of the value. If there's\na value, the end of the keyword can't have been the end of the string,\nbut the end of the value might have been. 
If there's no value, the end\nof the keyword could be the end of the string.\n\nMaybe if I just put that last sentence into the comment it's clear enough?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 08:51:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n\n> This patch contains a trivial adjustment to\n> PostgreSQL::Test::Cluster::run_log to make it return a useful value\n> instead of not. I think that should be pulled out and committed\n> independently regardless of what happens to this patch overall, and\n> possibly back-patched.\n\nrun_log() is far from the only such method in PostgreSQL::Test::Cluster.\nHere's a patch that gives the same treatment to all the methods that\njust pass through to the corresponding PostgreSQL::Test::Utils function.\n\nAlso attached is a fix for a typo in the _get_env doc comment that I noticed\nwhile auditing the return values.\n\n- ilmari", "msg_date": "Wed, 30 Mar 2022 13:00:17 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n\n> On Mon, Mar 28, 2022 at 12:52 PM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n>> > True, but that also means it shows up in the actual failure message,\n>> > which seems too verbose. By just using 'print', it ends up in the log\n>> > file if it's needed, but not anywhere else. 
Maybe there's a better way\n>> > to do this, but I don't think using note() is what I want.\n>>\n>> That is the difference between note() and diag(): note() prints to\n>> stdout so is not visible under a non-verbose prove run, while diag()\n>> prints to stderr so it's always visible.\n>\n> OK, but print doesn't do either of those things. The output only shows\n> up in the log file, even with --verbose. Here's an example of what the\n> log file looks like:\n>\n> # Running: pg_verifybackup -n -m\n> /Users/rhaas/pgsql/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/server-backup/backup_manifest\n> -e /Users/rhaas/pgsql/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/extracted-backup\n> backup successfully verified\n> ok 6 - verify backup, compression gzip\n>\n> As you can see, there is a line here that does not begin with #. That\n> line is the standard output of a command that was run by the test\n> script.\n\nOh, that must be some non-standard output handling that our test setup\ndoes. Plain `prove` shows everything on stdout and stderr in verbose\nmode, and only stderr in non-verbose mode:\n\n$ cat verbosity.t\nuse strict;\nuse warnings;\n\nuse Test::More;\n\npass \"pass\";\n\ndiag \"diag\";\nnote \"note\";\nprint \"print\\n\";\nsystem qw(echo system);\n\ndone_testing;\n\n$ prove verbosity.t\nverbosity.t .. 1/? # diag\nverbosity.t .. 
ok\nAll tests successful.\nFiles=1, Tests=1, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.04 cusr 0.01 csys = 0.07 CPU)\nResult: PASS\n\n$ prove -v verbosity.t\nverbosity.t ..\nok 1 - pass\n# diag\n# note\nprint\nsystem\n1..1\nok\nAll tests successful.\nFiles=1, Tests=1, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.06 cusr 0.00 csys = 0.08 CPU)\nResult: PASS\n\n- ilmari\n\n\n", "msg_date": "Wed, 30 Mar 2022 13:06:37 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "\nOn 3/30/22 08:06, Dagfinn Ilmari Mannsåker wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>\n>> On Mon, Mar 28, 2022 at 12:52 PM Dagfinn Ilmari Mannsåker\n>> <ilmari@ilmari.org> wrote:\n>>>> True, but that also means it shows up in the actual failure message,\n>>>> which seems too verbose. By just using 'print', it ends up in the log\n>>>> file if it's needed, but not anywhere else. Maybe there's a better way\n>>>> to do this, but I don't think using note() is what I want.\n>>> That is the difference between note() and diag(): note() prints to\n>>> stdout so is not visible under a non-verbose prove run, while diag()\n>>> prints to stderr so it's always visible.\n>> OK, but print doesn't do either of those things. The output only shows\n>> up in the log file, even with --verbose. Here's an example of what the\n>> log file looks like:\n>>\n>> # Running: pg_verifybackup -n -m\n>> /Users/rhaas/pgsql/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/server-backup/backup_manifest\n>> -e /Users/rhaas/pgsql/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/extracted-backup\n>> backup successfully verified\n>> ok 6 - verify backup, compression gzip\n>>\n>> As you can see, there is a line here that does not begin with #. 
That\n>> line is the standard output of a command that was run by the test\n>> script.\n> Oh, that must be some non-standard output handling that our test setup\n> does. Plain `prove` shows everything on stdout and stderr in verbose\n> mode, and only stderr in non-vebose mode:\n>\n\n\nYes, PostgreSQL::Test::Utils hijacks STDOUT and STDERR (see the INIT\nblock).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 30 Mar 2022 08:55:30 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Wed, Mar 30, 2022 at 8:00 AM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > This patch contains a trivial adjustment to\n> > PostgreSQL::Test::Cluster::run_log to make it return a useful value\n> > instead of not. I think that should be pulled out and committed\n> > independently regardless of what happens to this patch overall, and\n> > possibly back-patched.\n>\n> run_log() is far from the only such method in PostgreSQL::Test::Cluster.\n> Here's a patch that gives the same treatment to all the methods that\n> just pass through to the corresponding PostgreSQL::Test::Utils function.\n>\n> Also attached is a fix a typo in the _get_env doc comment that I noticed\n> while auditing the return values.\n\nI suggest posting these patches on a new thread with a subject line\nthat matches what the patches do, and adding it to the next\nCommitFest. 
It seems like a reasonable thing to do on first glance,\nbut I wouldn't want to commit it without going through and figuring\nout whether there's any risk of anything breaking, and it doesn't seem\nlike there's a strong need to do it in v15 rather than v16.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Mar 2022 11:44:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n\n> On Wed, Mar 30, 2022 at 8:00 AM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>> > This patch contains a trivial adjustment to\n>> > PostgreSQL::Test::Cluster::run_log to make it return a useful value\n>> > instead of not. I think that should be pulled out and committed\n>> > independently regardless of what happens to this patch overall, and\n>> > possibly back-patched.\n>>\n>> run_log() is far from the only such method in PostgreSQL::Test::Cluster.\n>> Here's a patch that gives the same treatment to all the methods that\n>> just pass through to the corresponding PostgreSQL::Test::Utils function.\n>>\n>> Also attached is a fix a typo in the _get_env doc comment that I noticed\n>> while auditing the return values.\n>\n> I suggest posting these patches on a new thread with a subject line\n> that matches what the patches do, and adding it to the next\n> CommitFest.\n\nWill do.\n\n> It seems like a reasonable thing to do on first glance, but I wouldn't\n> want to commit it without going through and figuring out whether\n> there's any risk of anything breaking, and it doesn't seem like\n> there's a strong need to do it in v15 rather than v16.\n\nGiven that the methods don't currently have a useful return value (undef\nor the empty list, depending on context), I don't expect anything to be\nrelying on it (and it passed check-world 
with --enable-tap-tests and all\nthe --with-foo flags I could easily get to work), but I can grep the\ncode as well to be extra sure.\n\n- ilmari\n\n\n", "msg_date": "Wed, 30 Mar 2022 18:00:42 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Tue, Mar 29, 2022 at 8:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Mar 28, 2022 at 8:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I suspect Robert wrote it that way intentionally --- but if so,\n> > I agree it could do with more than zero commentary.\n>\n> Well, the point is, we stop advancing kwend when we get to the end of\n> the keyword, and *vend when we get to the end of the value. If there's\n> a value, the end of the keyword can't have been the end of the string,\n> but the end of the value might have been. If there's no value, the end\n> of the keyword could be the end of the string.\n>\n> Maybe if I just put that last sentence into the comment it's clear enough?\n\nDone that way, since I thought it was better to fix the bug than wait\nfor more feedback on the wording. We can still adjust the wording, or\nthe coding, if it's not clear enough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Mar 2022 16:00:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n>> Maybe if I just put that last sentence into the comment it's clear enough?\n\n> Done that way, since I thought it was better to fix the bug than wait\n> for more feedback on the wording. 
We can still adjust the wording, or\n> the coding, if it's not clear enough.\n\nFWIW, I thought that explanation was fine, but I was deferring to\nJustin who was the one who thought things were unclear.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Mar 2022 16:14:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "On Wed, Mar 30, 2022 at 04:14:47PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> >> Maybe if I just put that last sentence into the comment it's clear enough?\n> \n> > Done that way, since I thought it was better to fix the bug than wait\n> > for more feedback on the wording. We can still adjust the wording, or\n> > the coding, if it's not clear enough.\n> \n> FWIW, I thought that explanation was fine, but I was deferring to\n> Justin who was the one who thought things were unclear.\n\nI still think it's unnecessarily confusing to nest \"if\" and \"?:\" conditionals\nin one statement, instead of 2 or 3 separate \"if\"s, or \"||\"s.\nBut it's also not worth fussing over any more.\n\n\n", "msg_date": "Wed, 30 Mar 2022 15:27:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c (zstd workers)" }, { "msg_contents": "\nOn 3/30/22 08:00, Dagfinn Ilmari Mannsåker wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>\n>> This patch contains a trivial adjustment to\n>> PostgreSQL::Test::Cluster::run_log to make it return a useful value\n>> instead of not. 
I think that should be pulled out and committed\n>> independently regardless of what happens to this patch overall, and\n>> possibly back-patched.\n> run_log() is far from the only such method in PostgreSQL::Test::Cluster.\n> Here's a patch that gives the same treatment to all the methods that\n> just pass through to the corresponding PostgreSQL::Test::Utils function.\n>\n> Also attached is a fix a typo in the _get_env doc comment that I noticed\n> while auditing the return values.\n>\n\nNone of these routines in Utils.pm returns a useful value (unlike\nrun_log()). Typically we don't return the value of Test::More routines.\nSo -1 on patch 1. I will fix the typo.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 31 Mar 2022 08:21:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multithreaded zstd backup compression for client and server" }, { "msg_contents": "On Thu, Mar 23, 2023 at 2:50 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> In rem: commit 3500ccc3,\n>\n> for X in ` grep -E '^[^*]+event_name = \"'\n> src/backend/utils/activity/wait_event.c |\n> sed 's/^.* = \"//;s/\";$//;/unknown/d' `\n> do\n> if ! 
git grep \"$X\" doc/src/sgml/monitoring.sgml > /dev/null\n> then\n> echo \"$X is not documented\"\n> fi\n> done\n>\n> BaseBackupSync is not documented\n> BaseBackupWrite is not documented\n\n[Resending with trimmed CC: list, because the mailing list told me to\ndue to a blocked account, sorry if you already got the above.]\n\n\n", "msg_date": "Thu, 23 Mar 2023 15:08:34 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Mar 22, 2023 at 10:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > BaseBackupSync is not documented\n> > BaseBackupWrite is not documented\n>\n> [Resending with trimmed CC: list, because the mailing list told me to\n> due to a blocked account, sorry if you already got the above.]\n\nBummer. I'll write a patch to fix that tomorrow, unless somebody beats me to it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Mar 2023 16:11:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Thu, Mar 23, 2023 at 4:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Mar 22, 2023 at 10:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > BaseBackupSync is not documented\n> > > BaseBackupWrite is not documented\n> >\n> > [Resending with trimmed CC: list, because the mailing list told me to\n> > due to a blocked account, sorry if you already got the above.]\n>\n> Bummer. 
I'll write a patch to fix that tomorrow, unless somebody beats me to it.\n\nHere's a patch for that, and a patch to add the missing error check\nPeter noticed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Mar 2023 10:46:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Fri, Mar 24, 2023 at 10:46:37AM -0400, Robert Haas wrote:\n> On Thu, Mar 23, 2023 at 4:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, Mar 22, 2023 at 10:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > > BaseBackupSync is not documented\n> > > > BaseBackupWrite is not documented\n> > >\n> > > [Resending with trimmed CC: list, because the mailing list told me to\n> > > due to a blocked account, sorry if you already got the above.]\n> >\n> > Bummer. I'll write a patch to fix that tomorrow, unless somebody beats me to it.\n> \n> Here's a patch for that, and a patch to add the missing error check\n> Peter noticed.\n\nI think these maybe got forgotten ?\n\n\n", "msg_date": "Wed, 12 Apr 2023 09:57:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: refactoring basebackup.c" }, { "msg_contents": "On Wed, Apr 12, 2023 at 10:57 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I think these maybe got forgotten ?\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Apr 2023 11:55:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: refactoring basebackup.c" } ]
[ { "msg_contents": "Hi!\n\nI noticed that commit 3eb77eba5a changed the logic in \nProcessSyncRequests() (formerly mdsync()) so that if you have fsync=off, \nthe entries are not removed from the pendingOps hash table. I don't \nthink that was intended.\n\nI propose the attached patch to move the test for enableFsync so that \nthe entries are removed from pendingOps again. It looks larger than it \nreally is because it re-indents the block of code that is now inside the \n\"if (enableFsync)\" condition.\n\n- Heikki", "msg_date": "Sat, 9 May 2020 00:21:18 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "pendingOps table is not cleared with fsync=off" }, { "msg_contents": "On Sat, May 9, 2020 at 9:21 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I noticed that commit 3eb77eba5a changed the logic in\n> ProcessSyncRequests() (formerly mdsync()) so that if you have fsync=off,\n> the entries are not removed from the pendingOps hash table. I don't\n> think that was intended.\n\nPerhaps we got confused about what the comment \"... so that changing\nfsync on the fly behaves sensibly\" means. Fsyncing everything you\nmissed when you turn it back on after a period running with it off\ndoes sound a bit like behaviour that someone might want or expect,\nthough it probably isn't really enough to guarantee durability,\nbecause requests queued here aren't the only fsyncs you missed while\nyou had it off, among other problems. Unfortunately, if you try that\non an assertion build, you get a failure anyway, so that probably\nwasn't deliberate:\n\nTRAP: FailedAssertion(\"(CycleCtr) (entry->cycle_ctr + 1) ==\nsync_cycle_ctr\", File: \"sync.c\", Line: 335)\n\n> I propose the attached patch to move the test for enableFsync so that\n> the entries are removed from pendingOps again. 
It looks larger than it\n> really is because it re-indents the block of code that is now inside the\n> \"if (enableFsync)\" condition.\n\nYeah, I found that git diff/show -w made it easier to understand that\nchange. LGTM, though I'd be tempted to use \"goto skip\" instead of\nincurring that much indentation but up to you ...\n\n\n", "msg_date": "Sat, 9 May 2020 11:53:13 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pendingOps table is not cleared with fsync=off" }, { "msg_contents": "On 09/05/2020 02:53, Thomas Munro wrote:\n> On Sat, May 9, 2020 at 9:21 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> I noticed that commit 3eb77eba5a changed the logic in\n>> ProcessSyncRequests() (formerly mdsync()) so that if you have fsync=off,\n>> the entries are not removed from the pendingOps hash table. I don't\n>> think that was intended.\n> \n> Perhaps we got confused about what the comment \"... so that changing\n> fsync on the fly behaves sensibly\" means. Fsyncing everything you\n> missed when you turn it back on after a period running with it off\n> does sound a bit like behaviour that someone might want or expect,\n> though it probably isn't really enough to guarantee durability,\n> because requests queued here aren't the only fsyncs you missed while\n> you had it off, among other problems.\n\nYeah, you need to run \"sync\" after turning fsync=on to be safe, there's \nno way around it.\n\n> Unfortunately, if you try that on an assertion build, you get a\n> failure anyway, so that probably wasn't deliberate:\n> \n> TRAP: FailedAssertion(\"(CycleCtr) (entry->cycle_ctr + 1) ==\n> sync_cycle_ctr\", File: \"sync.c\", Line: 335)\n\nAh, I didn't notice that.\n\n>> I propose the attached patch to move the test for enableFsync so that\n>> the entries are removed from pendingOps again. 
It looks larger than it\n>> really is because it re-indents the block of code that is now inside the\n>> \"if (enableFsync)\" condition.\n> \n> Yeah, I found that git diff/show -w made it easier to understand that\n> change. LGTM, though I'd be tempted to use \"goto skip\" instead of\n> incurring that much indentation but up to you ...\n\nI considered a goto too, but I found it confusing. If we need any more \nnesting here in the future, I think extracting the inner parts into a \nfunction would be good.\n\nCommitted, thanks!\n\n- Heikki\n\n\n", "msg_date": "Thu, 14 May 2020 08:41:26 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: pendingOps table is not cleared with fsync=off" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 09/05/2020 02:53, Thomas Munro wrote:\n>> On Sat, May 9, 2020 at 9:21 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> I noticed that commit 3eb77eba5a changed the logic in\n>>> ProcessSyncRequests() (formerly mdsync()) so that if you have fsync=off,\n>>> the entries are not removed from the pendingOps hash table. I don't\n>>> think that was intended.\n\nI'm looking at this commit in connection with writing release notes\nfor next week's releases. 
Am I right in thinking that this bug leads\nto indefinite bloat of the pendingOps hash table when fsync is off?\nIf so, that seems much more worth documenting than the assertion\nfailure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Aug 2020 11:42:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pendingOps table is not cleared with fsync=off" }, { "msg_contents": "On 06/08/2020 18:42, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> On 09/05/2020 02:53, Thomas Munro wrote:\n>>> On Sat, May 9, 2020 at 9:21 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>>> I noticed that commit 3eb77eba5a changed the logic in\n>>>> ProcessSyncRequests() (formerly mdsync()) so that if you have fsync=off,\n>>>> the entries are not removed from the pendingOps hash table. I don't\n>>>> think that was intended.\n> \n> I'm looking at this commit in connection with writing release notes\n> for next week's releases. Am I right in thinking that this bug leads\n> to indefinite bloat of the pendingOps hash table when fsync is off?\n> If so, that seems much more worth documenting than the assertion\n> failure.\n\nThat's correct.\n\n- Heikki\n\n\n", "msg_date": "Thu, 6 Aug 2020 20:35:54 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: pendingOps table is not cleared with fsync=off" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 06/08/2020 18:42, Tom Lane wrote:\n>> I'm looking at this commit in connection with writing release notes\n>> for next week's releases. 
Am I right in thinking that this bug leads\n>> to indefinite bloat of the pendingOps hash table when fsync is off?\n>> If so, that seems much more worth documenting than the assertion\n>> failure.\n\n> That's correct.\n\nOK, thanks for confirming.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Aug 2020 14:10:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pendingOps table is not cleared with fsync=off" }, { "msg_contents": "On Sat, May 09, 2020 at 11:53:13AM +1200, Thomas Munro wrote:\n\n> On Sat, May 9, 2020 at 9:21 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > I noticed that commit 3eb77eba5a changed the logic in\n> > ProcessSyncRequests() (formerly mdsync()) so that if you have fsync=off,\n> > the entries are not removed from the pendingOps hash table. I don't\n> > think that was intended.\n> \n> Perhaps we got confused about what the comment \"... so that changing\n> fsync on the fly behaves sensibly\" means. Fsyncing everything you\n> missed when you turn it back on after a period running with it off\n> does sound a bit like behaviour that someone might want or expect,\n> though it probably isn't really enough to guarantee durability,\n> because requests queued here aren't the only fsyncs you missed while\n> you had it off, among other problems.\n\nGood catch. Question is, are the users aware of the requirement to do a \nmanual fsync if they flip the fsync GUC off and then on? Should we do \nthis on their behalf to make a good faith attempt to ensure things are \nflushed properly via an assign hook?\n\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n", "msg_date": "Mon, 10 Aug 2020 09:35:56 -0700", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: pendingOps table is not cleared with fsync=off" }, { "msg_contents": "Shawn Debnath <sdn@amazon.com> writes:\n> Good catch. 
Question is, are the users aware of the requirement to do a \n> manual fsync if they flip the fsync GUC off and then on? Should we do \n> this on their behalf to make a good faith attempt to ensure things are \n> flushed properly via an assign hook?\n\nNo. Or at least, expecting that you can do that from an assign hook\nis impossibly wrong-headed. GUC assign hooks can't have failure modes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Aug 2020 14:50:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pendingOps table is not cleared with fsync=off" }, { "msg_contents": "On Mon, Aug 10, 2020 at 02:50:26PM -0400, Tom Lane wrote:\n>\n> Shawn Debnath <sdn@amazon.com> writes:\n> > Good catch. Question is, are the users aware of the requirement to do a\n> > manual fsync if they flip the fsync GUC off and then on? Should we do\n> > this on their behalf to make a good faith attempt to ensure things are\n> > flushed properly via an assign hook?\n> \n> No. Or at least, expecting that you can do that from an assign hook\n> is impossibly wrong-headed. GUC assign hooks can't have failure modes.\n\nOkay agree, will remind myself to drink more coffee next time.\n\nIf we think a fsync should be performed in this case, assign hook\ncould set a value to indicate parameter was reset via SIGHUP. Next call\nto ProcessSyncRequests() could check for this, do a fsync prior to\nabsorbing the newly submitted sync requests, and reset the flag.\nfsync_pgdata() comes to mind to be inclusive.\n\nIf folks are not inclined to do the fsync, the change is good as is.\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n", "msg_date": "Mon, 10 Aug 2020 14:02:35 -0700", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: pendingOps table is not cleared with fsync=off" } ]
[ { "msg_contents": "Hi,\n\nISTM that it's our coding style that we use\n\nsomething\nmy_paramless_func(void)\n{\n...\n}\n\ndefinitions rather than omitting the (void), which makes the function\nlook like an old-style function declaration. I somewhat regularly notice\nsuch omissions during review, and fix them.\n\nSince gcc has a warning detecting such definition, I think we ought to\nautomatically add it when available?\n\nThe attached patch makes configure do so, and also fixes a handful of\nuses that crept in.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 9 May 2020 10:48:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Add -Wold-style-definition to CFLAGS?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Since gcc has a warning detecting such definition, I think we ought to\n> automatically add it when available?\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 May 2020 14:15:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add -Wold-style-definition to CFLAGS?" }, { "msg_contents": "Hi,\n\nOn 2020-05-09 14:15:01 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Since gcc has a warning detecting such definition, I think we ought to\n> > automatically add it when available?\n> \n> +1\n\nAny opinion about waiting for branching or not?\n\n- Andres\n\n\n", "msg_date": "Sat, 9 May 2020 11:43:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Add -Wold-style-definition to CFLAGS?" 
}, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-05-09 14:15:01 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Since gcc has a warning detecting such definition, I think we ought to\n>>> automatically add it when available?\n\n>> +1\n\n> Any opinion about waiting for branching or not?\n\nI'd be OK with pushing it now, but I dunno about other people.\n\nIf we do want to push this sort of thing now, the nearby changes\nto enable fallthrough warnings should go in too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 May 2020 19:11:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add -Wold-style-definition to CFLAGS?" }, { "msg_contents": "On Sat, May 09, 2020 at 07:11:56PM -0400, Tom Lane wrote:\n> I'd be OK with pushing it now, but I dunno about other people.\n\nSounds like a good idea to me to apply this part now.\n\n> If we do want to push this sort of thing now, the nearby changes\n> to enable fallthrough warnings should go in too.\n\nIf we do that, merging this second part before beta1 is out looks like\na good compromise to me.\n--\nMichael", "msg_date": "Sun, 10 May 2020 10:37:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add -Wold-style-definition to CFLAGS?" }, { "msg_contents": "On 2020-May-09, Tom Lane wrote:\n\n> If we do want to push this sort of thing now, the nearby changes\n> to enable fallthrough warnings should go in too.\n\nI'll get that sorted out tomorrow.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 10 May 2020 23:45:52 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add -Wold-style-definition to CFLAGS?" 
}, { "msg_contents": "Hi,\n\nOn 2020-05-09 19:11:56 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-05-09 14:15:01 -0400, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> Since gcc has a warning detecting such definition, I think we ought to\n> >>> automatically add it when available?\n> \n> >> +1\n> \n> > Any opinion about waiting for branching or not?\n> \n> I'd be OK with pushing it now, but I dunno about other people.\n\nI did run into a small bit of trouble doing so. Those seem to make it a\nmistake to target 13.\n\nUnfortunately it turns out that our CFLAG configure tests don't reliably\nwork with -Wold-style-definition. The problem is that the generated\nprogram contains 'int main() {...}', which obviously is an old-style\ndefinition. Which then causes a warning, which in turn causes the cflag\ntests to fail because we run them with ac_c_werror_flag=yes.\n\nThere's a pretty easy local fix, which is that we can just use\nAC_LANG_SOURCE() instead of AC_LANG_PROGRAM()\nPGAC_PROG_VARCC_VARFLAGS_OPT(). There seems to be little reason not to\ndo so.\n\nBut that still leaves us with a lot of unnecessary subsequent warnings\nfor other tests in config.log. They don't cause problems afaics, as\nac_c_werror_flag=yes isn't widely used, but it's still more noisy than\nI'd like. And the likelihood of silent failures down the line seems\nhigher than I'd like.\n\nUpstream autoconf has fixed this in 2014 (1717921a), but since they've\nnot bothered to release since then...\n\nThe easiest way that I can see to deal with that is to simply redefine\nthe relevant autoconf macro. For me that solves the vast majority of\nthese bleats in config.log. That's not particularly pretty, but we have\nprecedent for it... 
Since it's just 16 lines, I think we can live with\nthat?\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Jun 2020 22:47:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Add -Wold-style-definition to CFLAGS?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Unfortunately it turns out that our CFLAG configure tests don't reliably\n> work with -Wold-style-definition. The problem is that the generated\n> program contains 'int main() {...}', which obviously is an old-style\n> definition. Which then causes a warning, which in turn causes the cflag\n> tests to fail because we run them with ac_c_werror_flag=yes.\n\nUgh. I suspect main() might not be the only problem, either.\n\n> Upstream autoconf has fixed this in 2014 (1717921a), but since they've\n> not bothered to release since then...\n\nI wonder if there's any way to light a fire under them.\n\n> The easiest way that I can see to deal with that is to simply redefine\n> the relevant autoconf macro. For me that solves the vast majority of\n> these bleats in config.log. That's not particularly pretty, but we have\n> precedent for it... Since it's just 16 lines, I think we can live with\n> that?\n\nI don't really think that -Wold-style-definition is worth that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jun 2020 02:03:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add -Wold-style-definition to CFLAGS?" }, { "msg_contents": "Hi,\n\nOn 2020-06-09 02:03:09 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Unfortunately it turns out that our CFLAG configure tests don't reliably\n> > work with -Wold-style-definition. The problem is that the generated\n> > program contains 'int main() {...}', which obviously is an old-style\n> > definition. 
Which then causes a warning, which in turn causes the cflag\n> > tests to fail because we run them with ac_c_werror_flag=yes.\n> \n> Ugh. I suspect main() might not be the only problem, either.\n\nThere's a few more, but they're far far less noisy. And I don't think\nany change the results after the fix, because they don't check for\noutput on stderr (based on grepping through configure).\n\n\n> > Upstream autoconf has fixed this in 2014 (1717921a), but since they've\n> > not bothered to release since then...\n> \n> I wonder if there's any way to light a fire under them.\n\nThere's been talk about working towards a release a few months back:\nhttps://lists.gnu.org/archive/html/autoconf/2020-03/msg00003.html\n\n\n> > The easiest way that I can see to deal with that is to simply redefine\n> > the relevant autoconf macro. For me that solves the vast majority of\n> > these bleats in config.log. That's not particularly pretty, but we have\n> > precedent for it... Since it's just 16 lines, I think we can live with\n> > that?\n> \n> I don't really think that -Wold-style-definition is worth that.\n\nWell, we don't need it for that, strictly speaking. Just using\nAC_LANG_SOURCE is enough...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Jun 2020 23:13:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Add -Wold-style-definition to CFLAGS?" } ]
[ { "msg_contents": "When I want t to convert json array into postgres array, I do:\n\nwith t(j) as(\n> select '{\"my_arr\":[3,1,2]}'::json\n> )\n> SELECT ARRAY(SELECT json_array_elements_text(j->'my_arr')) from t\n\n\nIt works like a charm and I never noticed any problem, but I'm asking here\njust to make sure, order of elements will be preserved always?\nIs that guaranteed in above example, or not?\n\nThanks.\n\nWhen I want t to convert json array into postgres array, I do:with t(j) as(    select '{\"my_arr\":[3,1,2]}'::json)SELECT ARRAY(SELECT json_array_elements_text(j->'my_arr')) from tIt works like a charm and I never noticed any problem, but I'm asking here just to make sure,  order of elements will be preserved always? Is that guaranteed in above example, or not?Thanks.", "msg_date": "Sun, 10 May 2020 16:21:35 +0400", "msg_from": "otar shavadze <oshavadze@gmail.com>", "msg_from_op": true, "msg_subject": "Cast json array to postgres array and preserve order of elements" }, { "msg_contents": "\nOn 5/10/20 8:21 AM, otar shavadze wrote:\n> When I want t to convert json array into postgres array, I do:\n>\n> with t(j) as(\n>     select '{\"my_arr\":[3,1,2]}'::json\n> )\n> SELECT ARRAY(SELECT json_array_elements_text(j->'my_arr')) from t\n>\n>\n> It works like a charm and I never noticed any problem, but I'm asking\n> here just to make sure,  order of elements will be preserved always? \n> Is that guaranteed in above example, or not?\n>\n>\n\n\nyes. 
The order is significant and the elements are produced in array order.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 10 May 2020 11:07:58 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Cast json array to postgres array and preserve order of elements" }, { "msg_contents": "Great, thanks very much Andrew!\n\nOn Sun, May 10, 2020 at 7:08 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 5/10/20 8:21 AM, otar shavadze wrote:\n> > When I want t to convert json array into postgres array, I do:\n> >\n> > with t(j) as(\n> > select '{\"my_arr\":[3,1,2]}'::json\n> > )\n> > SELECT ARRAY(SELECT json_array_elements_text(j->'my_arr')) from t\n> >\n> >\n> > It works like a charm and I never noticed any problem, but I'm asking\n> > here just to make sure, order of elements will be preserved always?\n> > Is that guaranteed in above example, or not?\n> >\n> >\n>\n>\n> yes. The order is significant and the elements are produced in array order.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan https://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\nGreat, thanks very much Andrew!   On Sun, May 10, 2020 at 7:08 PM Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\nOn 5/10/20 8:21 AM, otar shavadze wrote:\n> When I want t to convert json array into postgres array, I do:\n>\n>     with t(j) as(\n>         select '{\"my_arr\":[3,1,2]}'::json\n>     )\n>     SELECT ARRAY(SELECT json_array_elements_text(j->'my_arr')) from t\n>\n>\n> It works like a charm and I never noticed any problem, but I'm asking\n> here just to make sure,  order of elements will be preserved always? \n> Is that guaranteed in above example, or not?\n>\n>\n\n\nyes. 
The order is significant and the elements are produced in array order.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan                https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 10 May 2020 19:56:47 +0400", "msg_from": "otar shavadze <oshavadze@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Cast json array to postgres array and preserve order of elements" } ]
[ { "msg_contents": "Hello hackers,\n\nI've found that gcov coverage data miss some information when a postgres\nnode stopped in 'immediate' mode.\nFor example, on the master branch:\nmake coverage-clean; time make check -C src/test/recovery/; make\ncoverage-html\ngenerates a coverage report with 106193 lines/6318 functions for me\n(`make check` takes 1m34s).\nBut with the attached simple patch I get a coverage report with 106540\nlines/6332 functions (and `make check` takes 2m5s).\n(IMO, the slowdown of the test is significant.)\n\nSo if we want to make the coverage reports more precise, I see the three\nways:\n1. Change the stop mode in teardown_node to fast (probably only when\nconfigured with --enable-coverage);\n2. Explicitly stop nodes in TAP tests (where it's important) -- seems\ntoo tedious and troublesome;\n3. Explicitly call __gcov_flush in SIGQUIT handler (quickdie)?\n\nBest regards,\nAlexander", "msg_date": "Sun, 10 May 2020 19:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "gcov coverage data not full with immediate stop" }, { "msg_contents": "(Strangely, I was just thinking about these branches of mine as I\nclosed my week last Friday...)\n\nOn 2020-May-10, Alexander Lakhin wrote:\n\n> So if we want to make the coverage reports more precise, I see the three\n> ways:\n> 1. Change the stop mode in teardown_node to fast (probably only when\n> configured with --enable-coverage);\n> 2. Explicitly stop nodes in TAP tests (where it's important) -- seems\n> too tedious and troublesome;\n> 3. Explicitly call __gcov_flush in SIGQUIT handler (quickdie)?\n\nI tried your idea 3 a long time ago and my experiments didn't show an\nincrease in coverage [1]. But I like this idea the best, and maybe I\ndid something wrong. 
Attached is the patch I had (on top of\nfc115d0f9fc6), but I don't know if it still applies.\n\n(The second attachment is another branch I had on this, I don't remember\nwhy; that one was on top of 438e51987dcc. The curious thing is that I\ndidn't add the __gcov_flush to quickdie in this one. Maybe what we need\nis a mix of both.)\n\nI think we should definitely get this fixed for pg13 ...\n\n[1] https://postgr.es/m/20190531170503.GA24057@alvherre.pgsql\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 10 May 2020 23:42:43 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-May-10, Alexander Lakhin wrote:\n>> 3. Explicitly call __gcov_flush in SIGQUIT handler (quickdie)?\n\n> I tried your idea 3 a long time ago and my experiments didn't show an\n> increase in coverage [1]. But I like this idea the best, and maybe I\n> did something wrong. Attached is the patch I had (on top of\n> fc115d0f9fc6), but I don't know if it still applies.\n\nPutting ill-defined, not-controlled-by-us work into a quickdie signal\nhandler sounds like a really bad idea to me. Maybe it's all right,\nsince presumably it would only appear in specialized test builds; but\neven so, how much could you trust the results?\n\n> I think we should definitely get this fixed for pg13 ...\n\n-1 for shoving in such a thing so late in the cycle.
We've survived\nwithout it for years, we can do so for a few months more.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 00:56:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "Hello Alvaro,\n11.05.2020 06:42, Alvaro Herrera wrote:\n> (Strangely, I was just thinking about these branches of mine as I\n> closed my week last Friday...)\n>\n> On 2020-May-10, Alexander Lakhin wrote:\n>\n>> So if we want to make the coverage reports more precise, I see the three\n>> ways:\n>> 1. Change the stop mode in teardown_node to fast (probably only when\n>> configured with --enable-coverage);\n>> 2. Explicitly stop nodes in TAP tests (where it's important) -- seems\n>> too tedious and troublesome;\n>> 3. Explicitly call __gcov_flush in SIGQUIT handler (quickdie)?\n> I tried your idea 3 a long time ago and my experiments didn't show an\n> increase in coverage [1]. But I like this idea the best, and maybe I\n> did something wrong. Attached is the patch I had (on top of\n> fc115d0f9fc6), but I don't know if it still applies.\nThanks for the reference to that discussion and your patch.\nAs I see the issue with that patch is that quickdie() is not the only\nSIGQUIT handler. 
When a backend is interrupted with SIGQUIT, it's\nexiting in SignalHandlerForCrashExit().\nIn fact if I only add __gcov_flush() in SignalHandlerForCrashExit(), it\nraises test coverage for `make check -C src/test/recovery/` from\n106198 lines/6319 functions\nto\n106420 lines/6328 functions\n\nIt's not yet clear to me what happens when __gcov_flush() called inside\n__gcov_flush().\nThe test coverage changes to:\n108432 lines/5417 functions\n(number of function calls decreased)\nAnd for example in coverage/src/backend/utils/cache/catcache.c.gcov.html\nI see\n    147           8 : int2eqfast(Datum a, Datum b)\n...\n    153           0 : int2hashfast(Datum datum)\nbut without __gcov_flush in quickdie() we have:\n    147       78038 : int2eqfast(Datum a, Datum b)\n...\n    153      255470 : int2hashfast(Datum datum)\nSo it needs more investigation.\n\nBut I can confirm that calling __gcov_flush() in\nSignalHandlerForCrashExit() really improves a code coverage report.\nI tried to develop a test to elevate a coverage for gist:\nhttps://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html\n(Please look at the attached test if it could be interesting.)\nand came to this issue with a coverage.
I tried to play with\nGCOV_PREFIX, but without luck.\nYesterday I found the more recent discussion:\nhttps://www.postgresql.org/message-id/flat/44ecae53-9861-71b7-1d43-4658acc52519%402ndquadrant.com#d02e2e61212831fbceadf290637913a0\n(where probably the same problem came out).\n\nFinally I've managed to get an expected coverage when I performed\n$node_standby->stop() (but __gcov_flush() in SignalHandlerForCrashExit()\nhelps too).\n\nBest regards,\nAlexander", "msg_date": "Mon, 11 May 2020 12:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "On Mon, May 11, 2020 at 2:30 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n\n>\n> But I can confirm that calling __gcov_flush() in SignalHandlerForCrashExit() really improves a code coverage report.\n>\n> Finally I've managed to get an expected coverage when I performed $node_standby->stop() (but __gcov_flush() in SignalHandlerForCrashExit() helps too).\n\nWhat happens if a coverage tool other than gcov is used? From that\nperspective, it's better to perform a clean shutdown in the TAP tests\ninstead of immediate if that's possible.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 11 May 2020 17:56:33 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "On Mon, May 11, 2020 at 12:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think we should definitely get this fixed for pg13 ...\n>\n> -1 for shoving in such a thing so late in the cycle. We've survived\n> without it for years, we can do so for a few months more.\n\nI agree, but also, we should start thinking about when to branch. I,\ntoo, have patches that aren't critical enough to justify pushing them\npost-freeze, but which are still good improvements that I'd like to\nget into the tree. 
I'm queueing them right now to avoid the risk of\ndestabilizing things, but that generates more work, for me and for\nother people, if their patches force me to rebase or the other way\naround. I know there's always a concern with removing the focus on\nrelease N too soon, but the open issues list is 3 items long right\nnow, and 2 of those look like preexisting issues, not new problems in\nv13. Meanwhile, we have 20+ active committers.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 15:20:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I agree, but also, we should start thinking about when to branch. I,\n> too, have patches that aren't critical enough to justify pushing them\n> post-freeze, but which are still good improvements that I'd like to\n> get into the tree. I'm queueing them right now to avoid the risk of\n> destabilizing things, but that generates more work, for me and for\n> other people, if their patches force me to rebase or the other way\n> around. I know there's always a concern with removing the focus on\n> release N too soon, but the open issues list is 3 items long right\n> now, and 2 of those look like preexisting issues, not new problems in\n> v13. Meanwhile, we have 20+ active committers.\n\nYeah. Traditionally we've waited till the start of the next commitfest\n(which I'm assuming is July 1, for lack of an Ottawa dev meeting to decide\ndifferently). But it seems like things are slow enough that perhaps\nwe could branch earlier, like June 1, and give the committers a chance\nto deal with some of their own stuff before starting the CF.\n\nThis is the wrong thread to be debating that in, though. 
Also I wonder\nif this is really RMT turf?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 16:04:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "On Mon, May 11, 2020 at 4:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This is the wrong thread to be debating that in, though.\n\nTrue.\n\n> Also I wonder if this is really RMT turf?\n\nI think it is, but the RMT is permitted -- even encouraged -- to\nconsider the views of non-RMT members when making its decision.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 16:11:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "On Mon, May 11, 2020 at 1:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah. Traditionally we've waited till the start of the next commitfest\n> (which I'm assuming is July 1, for lack of an Ottawa dev meeting to decide\n> differently). But it seems like things are slow enough that perhaps\n> we could branch earlier, like June 1, and give the committers a chance\n> to deal with some of their own stuff before starting the CF.\n\nThe RMT discussed this question informally yesterday. The consensus is\nthat we should wait and see what the early feedback from Beta 1 is\nbefore making a final decision. 
An earlier June 1 branch date is an\nidea that certainly has some merit, but we'd like to put off making a\nfinal decision on that for at least another week, and possibly as long\nas two weeks.\n\nCan that easily be accommodated?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 12 May 2020 10:06:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, May 11, 2020 at 1:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah. Traditionally we've waited till the start of the next commitfest\n>> (which I'm assuming is July 1, for lack of an Ottawa dev meeting to decide\n>> differently). But it seems like things are slow enough that perhaps\n>> we could branch earlier, like June 1, and give the committers a chance\n>> to deal with some of their own stuff before starting the CF.\n\n> The RMT discussed this question informally yesterday. The consensus is\n> that we should wait and see what the early feedback from Beta 1 is\n> before making a final decision. An earlier June 1 branch date is an\n> idea that certainly has some merit, but we'd like to put off making a\n> final decision on that for at least another week, and possibly as long\n> as two weeks.\n\n> Can that easily be accommodated?\n\nThere's no real lead time needed AFAICS: when we are ready to branch,\nwe can just do it. So sure, let's wait till the end of May to decide.\nIf things look bad then, we could reconsider again mid-June.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 13:10:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "On Mon, May 11, 2020 at 05:56:33PM +0530, Ashutosh Bapat wrote:\n> What happens if a coverage tool other than gcov is used? 
From that\n> perspective, it's better to perform a clean shutdown in the TAP tests\n> instead of immediate if that's possible.\n\nNope, as that's the fastest path we have to shut down any remaining\nnodes at the end of a test per the END{} block at the end of\nPostgresNode.pm, and I would rather keep it this way because people\ntend to like keeping around a lot of clusters alive at the end of any\nnew test added and shutdown checkpoints are not free either even if\nfsync is enforced to off in the tests.\n\nI think that a solution turning around __gcov_flush() could be the\nbest deal we have, as discussed last year in the thread Álvaro quoted\nupthread, and I would vote for waiting until v14 opens for business\nbefore merging something we consider worth it.\n--\nMichael", "msg_date": "Wed, 13 May 2020 16:43:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" }, { "msg_contents": "On Tue, May 12, 2020 at 10:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Can that easily be accommodated?\n>\n> There's no real lead time needed AFAICS: when we are ready to branch,\n> we can just do it. So sure, let's wait till the end of May to decide.\n> If things look bad then, we could reconsider again mid-June.\n\nGreat. Let's review it at the end of May, before actually branching.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 May 2020 15:59:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: gcov coverage data not full with immediate stop" } ]
[ { "msg_contents": "Hi\n\nI try to use procedures in Orafce package, and I did some easy performance\ntests. I found some hard problems:\n\n1. test case\n\ncreate or replace procedure p1(inout r int, inout v int) as $$\nbegin v := random() * r; end\n$$ language plpgsql;\n\nThis command requires\n\ndo $$\ndeclare r int default 100; x int;\nbegin\n for i in 1..300000 loop\n call p1(r, x);\n end loop;\nend;\n$$;\n\nabout 2.2GB RAM and 10 sec.\n\nWhen I rewrite same to functions then\n\ncreate or replace function p1func2(inout r int, inout v int) as $$\nbegin v := random() * r; end\n$$ language plpgsql;\n\ndo $$\ndeclare r int default 100; x int; re record;\nbegin\n for i in 1..300000 loop\n re := p1func2(r, x);\n end loop;\nend;\n$$;\n\nThen execution is about 1 sec, and memory requirements are +/- zero.\n\nMinimally it looks so CALL statements has a memory issue.\n\nRegards\n\nPavel\n\nHiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.When I rewrite same to functions thencreate or replace function p1func2(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;do $$declare r int default 100; x int; re record;begin  for i in 1..300000 loop     re := p1func2(r, x);  end loop;end;$$;Then execution is about 1 sec, and memory requirements are +/- zero.Minimally it looks so CALL statements has a memory issue.RegardsPavel", "msg_date": "Sun, 10 May 2020 22:20:53 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Hi\n\nne 10. 5. 
2020 v 22:20 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I try to use procedures in Orafce package, and I did some easy performance\n> tests. I found some hard problems:\n>\n> 1. test case\n>\n> create or replace procedure p1(inout r int, inout v int) as $$\n> begin v := random() * r; end\n> $$ language plpgsql;\n>\n> This command requires\n>\n> do $$\n> declare r int default 100; x int;\n> begin\n> for i in 1..300000 loop\n> call p1(r, x);\n> end loop;\n> end;\n> $$;\n>\n> about 2.2GB RAM and 10 sec.\n>\n> When I rewrite same to functions then\n>\n> create or replace function p1func2(inout r int, inout v int) as $$\n> begin v := random() * r; end\n> $$ language plpgsql;\n>\n> do $$\n> declare r int default 100; x int; re record;\n> begin\n> for i in 1..300000 loop\n> re := p1func2(r, x);\n> end loop;\n> end;\n> $$;\n>\n> Then execution is about 1 sec, and memory requirements are +/- zero.\n>\n> Minimally it looks so CALL statements has a memory issue.\n>\n\nThe problem is in plpgsql implementation of CALL statement\n\nIn non atomic case - case of using procedures from DO block, the\nexpression plan is not cached, and plan is generating any time. This is\nreason why it is slow.\n\nUnfortunately, generated plans are not released until SPI_finish. 
Attached\npatch fixed this issue.\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>", "msg_date": "Mon, 11 May 2020 07:25:20 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "On Sun, May 10, 2020 at 10:20:53PM +0200, Pavel Stehule wrote:\n> When I rewrite same to functions then\n>\n> create or replace function p1func2(inout r int, inout v int) as $$\n> begin v := random() * r; end\n> $$ language plpgsql;\n>\n> Then execution is about 1 sec, and memory requirements are +/- zero.\n>\n> Minimally it looks so CALL statements has a memory issue.\n\nBehavior not limited to plpgsql. A plain SQL function shows the same\nleak patterns:\ncreate or replace procedure p1_sql(in r int, in v int)\n as $$ SELECT r + v; $$ language sql;\n And I cannot get valgrind to complain about lost references, so this\n looks like some missing memory context handling.\n\nAlso, I actually don't quite get why the context created by\nCreateExprContext() cannot be freed before the procedure returns. A\nshort test shows no problems in calling FreeExprContext() at the end\nof ExecuteCallStmt(), but that does not address everything. Perhaps a\nlack of tests with pass-by-reference expressions and procedures?\n\nPeter?\n--\nMichael", "msg_date": "Mon, 11 May 2020 15:07:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory\n against calling function" }, { "msg_contents": "po 11. 5. 2020 v 7:25 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> ne 10. 5. 2020 v 22:20 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> I try to use procedures in Orafce package, and I did some easy\n>> performance tests. I found some hard problems:\n>>\n>> 1. 
test case\n>>\n>> create or replace procedure p1(inout r int, inout v int) as $$\n>> begin v := random() * r; end\n>> $$ language plpgsql;\n>>\n>> This command requires\n>>\n>> do $$\n>> declare r int default 100; x int;\n>> begin\n>> for i in 1..300000 loop\n>> call p1(r, x);\n>> end loop;\n>> end;\n>> $$;\n>>\n>> about 2.2GB RAM and 10 sec.\n>>\n>> When I rewrite same to functions then\n>>\n>> create or replace function p1func2(inout r int, inout v int) as $$\n>> begin v := random() * r; end\n>> $$ language plpgsql;\n>>\n>> do $$\n>> declare r int default 100; x int; re record;\n>> begin\n>> for i in 1..300000 loop\n>> re := p1func2(r, x);\n>> end loop;\n>> end;\n>> $$;\n>>\n>> Then execution is about 1 sec, and memory requirements are +/- zero.\n>>\n>> Minimally it looks so CALL statements has a memory issue.\n>>\n>\n> The problem is in plpgsql implementation of CALL statement\n>\n> In non atomic case - case of using procedures from DO block, the\n> expression plan is not cached, and plan is generating any time. This is\n> reason why it is slow.\n>\n> Unfortunately, generated plans are not released until SPI_finish. Attached\n> patch fixed this issue.\n>\n\nBut now, recursive calling doesn't work :-(. So this patch is not enough\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n\npo 11. 5. 2020 v 7:25 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:Hine 10. 5. 2020 v 22:20 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:HiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. 
test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.When I rewrite same to functions thencreate or replace function p1func2(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;do $$declare r int default 100; x int; re record;begin  for i in 1..300000 loop     re := p1func2(r, x);  end loop;end;$$;Then execution is about 1 sec, and memory requirements are +/- zero.Minimally it looks so CALL statements has a memory issue.The problem is in plpgsql implementation of CALL statementIn non atomic case -  case of using procedures from DO block, the expression plan is not cached, and plan is generating any time. This is reason why it is slow.Unfortunately, generated plans are not released until SPI_finish. Attached patch fixed this issue.But now, recursive calling doesn't work :-(. So this patch is not enough RegardsPavelRegardsPavel", "msg_date": "Mon, 11 May 2020 08:07:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Hi\n\n\n> The problem is in plpgsql implementation of CALL statement\n>>\n>> In non atomic case - case of using procedures from DO block, the\n>> expression plan is not cached, and plan is generating any time. This is\n>> reason why it is slow.\n>>\n>> Unfortunately, generated plans are not released until SPI_finish.\n>> Attached patch fixed this issue.\n>>\n>\n> But now, recursive calling doesn't work :-(. 
So this patch is not enough\n>\n\nAttached patch is working - all tests passed\n\nIt doesn't solve performance, and doesn't solve all memory problems, but\nsignificantly reduce memory requirements from 5007 bytes to 439 bytes per\none CALL\n\nRegards\n\nPavel\n\n\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>", "msg_date": "Fri, 15 May 2020 20:36:20 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <pavel.stehule@gmail.com>\nescreveu:\n\n> Hi\n>\n> I try to use procedures in Orafce package, and I did some easy performance\n> tests. I found some hard problems:\n>\n> 1. test case\n>\n> create or replace procedure p1(inout r int, inout v int) as $$\n> begin v := random() * r; end\n> $$ language plpgsql;\n>\n> This command requires\n>\n> do $$\n> declare r int default 100; x int;\n> begin\n> for i in 1..300000 loop\n> call p1(r, x);\n> end loop;\n> end;\n> $$;\n>\n> about 2.2GB RAM and 10 sec.\n>\nI am having a consistent result of 3 secs, with a modified version\n(exec_stmt_call) of your patch.\nBut my notebook is (Core 5, 8GB and SSD), could it be a difference in the\ntesting hardware?\n\nregards,\nRanier Vilela\n\nEm dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <pavel.stehule@gmail.com> escreveu:HiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. 
test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.I am having a consistent result of 3 secs, with a modified version (exec_stmt_call) of your patch.But my notebook is (Core 5, 8GB and SSD), could it be a difference in the testing hardware?regards,Ranier Vilela", "msg_date": "Fri, 15 May 2020 19:33:45 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:\n\n> Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <\n> pavel.stehule@gmail.com> escreveu:\n>\n>> Hi\n>>\n>> I try to use procedures in Orafce package, and I did some easy\n>> performance tests. I found some hard problems:\n>>\n>> 1. test case\n>>\n>> create or replace procedure p1(inout r int, inout v int) as $$\n>> begin v := random() * r; end\n>> $$ language plpgsql;\n>>\n>> This command requires\n>>\n>> do $$\n>> declare r int default 100; x int;\n>> begin\n>> for i in 1..300000 loop\n>> call p1(r, x);\n>> end loop;\n>> end;\n>> $$;\n>>\n>> about 2.2GB RAM and 10 sec.\n>>\n> I am having a consistent result of 3 secs, with a modified version\n> (exec_stmt_call) of your patch.\n> But my notebook is (Core 5, 8GB and SSD), could it be a difference in the\n> testing hardware?\n>\n\nMy notebook is old T520, and more I have a configured Postgres with\n--enable-cassert option.\n\nregards\n\nPavel\n\n\n> regards,\n> Ranier Vilela\n>\n\nso 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em dom., 10 de mai. 
de 2020 às 17:21, Pavel Stehule <pavel.stehule@gmail.com> escreveu:HiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.I am having a consistent result of 3 secs, with a modified version (exec_stmt_call) of your patch.But my notebook is (Core 5, 8GB and SSD), could it be a difference in the testing hardware?My notebook is old T520, and more I have a configured Postgres with --enable-cassert option. regardsPavel regards,Ranier Vilela", "msg_date": "Sat, 16 May 2020 05:06:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <pavel.stehule@gmail.com>\nescreveu:\n\n>\n>\n> so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n> napsal:\n>\n>> Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <\n>> pavel.stehule@gmail.com> escreveu:\n>>\n>>> Hi\n>>>\n>>> I try to use procedures in Orafce package, and I did some easy\n>>> performance tests. I found some hard problems:\n>>>\n>>> 1. 
test case\n>>>\n>>> create or replace procedure p1(inout r int, inout v int) as $$\n>>> begin v := random() * r; end\n>>> $$ language plpgsql;\n>>>\n>>> This command requires\n>>>\n>>> do $$\n>>> declare r int default 100; x int;\n>>> begin\n>>> for i in 1..300000 loop\n>>> call p1(r, x);\n>>> end loop;\n>>> end;\n>>> $$;\n>>>\n>>> about 2.2GB RAM and 10 sec.\n>>>\n>> I am having a consistent result of 3 secs, with a modified version\n>> (exec_stmt_call) of your patch.\n>> But my notebook is (Core 5, 8GB and SSD), could it be a difference in the\n>> testing hardware?\n>>\n>\n> My notebook is old T520, and more I have a configured Postgres with\n> --enable-cassert option.\n>\nThe hardware is definitely making a difference, but if you have time and\ndon't mind testing it,\nI can send you a patch, not that the modifications are a big deal, but\nmaybe they'll help.\n\nregards,\nRanier Vilela\n\nEm sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <pavel.stehule@gmail.com> escreveu:HiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.I am having a consistent result of 3 secs, with a modified version (exec_stmt_call) of your patch.But my notebook is (Core 5, 8GB and SSD), could it be a difference in the testing hardware?My notebook is old T520, and more I have a configured Postgres with --enable-cassert option. 
The hardware is definitely making a difference, but if you have time and don't mind testing it, I can send you a patch, not that the modifications are a big deal, but maybe they'll help.regards,Ranier Vilela", "msg_date": "Sat, 16 May 2020 00:54:47 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "so 16. 5. 2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:\n\n> Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <\n> pavel.stehule@gmail.com> escreveu:\n>\n>>\n>>\n>> so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>> napsal:\n>>\n>>> Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <\n>>> pavel.stehule@gmail.com> escreveu:\n>>>\n>>>> Hi\n>>>>\n>>>> I try to use procedures in Orafce package, and I did some easy\n>>>> performance tests. I found some hard problems:\n>>>>\n>>>> 1. test case\n>>>>\n>>>> create or replace procedure p1(inout r int, inout v int) as $$\n>>>> begin v := random() * r; end\n>>>> $$ language plpgsql;\n>>>>\n>>>> This command requires\n>>>>\n>>>> do $$\n>>>> declare r int default 100; x int;\n>>>> begin\n>>>> for i in 1..300000 loop\n>>>> call p1(r, x);\n>>>> end loop;\n>>>> end;\n>>>> $$;\n>>>>\n>>>> about 2.2GB RAM and 10 sec.\n>>>>\n>>> I am having a consistent result of 3 secs, with a modified version\n>>> (exec_stmt_call) of your patch.\n>>> But my notebook is (Core 5, 8GB and SSD), could it be a difference in\n>>> the testing hardware?\n>>>\n>>\n>> My notebook is old T520, and more I have a configured Postgres with\n>> --enable-cassert option.\n>>\n> The hardware is definitely making a difference, but if you have time and\n> don't mind testing it,\n> I can send you a patch, not that the modifications are a big deal, but\n> maybe they'll help.\n>\n\nsend me a patch, please\n\nPavel\n\n\n>\n> regards,\n> Ranier Vilela\n>\n\nso 16. 5. 
2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <pavel.stehule@gmail.com> escreveu:HiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.I am having a consistent result of 3 secs, with a modified version (exec_stmt_call) of your patch.But my notebook is (Core 5, 8GB and SSD), could it be a difference in the testing hardware?My notebook is old T520, and more I have a configured Postgres with --enable-cassert option. The hardware is definitely making a difference, but if you have time and don't mind testing it, I can send you a patch, not that the modifications are a big deal, but maybe they'll help.send me a patch, pleasePavel regards,Ranier Vilela", "msg_date": "Sat, 16 May 2020 06:10:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Em sáb., 16 de mai. de 2020 às 01:10, Pavel Stehule <pavel.stehule@gmail.com>\nescreveu:\n\n>\n>\n> so 16. 5. 2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n> napsal:\n>\n>> Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <\n>> pavel.stehule@gmail.com> escreveu:\n>>\n>>>\n>>>\n>>> so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>>> napsal:\n>>>\n>>>> Em dom., 10 de mai. 
de 2020 às 17:21, Pavel Stehule <\n>>>> pavel.stehule@gmail.com> escreveu:\n>>>>\n>>>>> Hi\n>>>>>\n>>>>> I try to use procedures in Orafce package, and I did some easy\n>>>>> performance tests. I found some hard problems:\n>>>>>\n>>>>> 1. test case\n>>>>>\n>>>>> create or replace procedure p1(inout r int, inout v int) as $$\n>>>>> begin v := random() * r; end\n>>>>> $$ language plpgsql;\n>>>>>\n>>>>> This command requires\n>>>>>\n>>>>> do $$\n>>>>> declare r int default 100; x int;\n>>>>> begin\n>>>>> for i in 1..300000 loop\n>>>>> call p1(r, x);\n>>>>> end loop;\n>>>>> end;\n>>>>> $$;\n>>>>>\n>>>>> about 2.2GB RAM and 10 sec.\n>>>>>\n>>>> I am having a consistent result of 3 secs, with a modified version\n>>>> (exec_stmt_call) of your patch.\n>>>> But my notebook is (Core 5, 8GB and SSD), could it be a difference in\n>>>> the testing hardware?\n>>>>\n>>>\n>>> My notebook is old T520, and more I have a configured Postgres with\n>>> --enable-cassert option.\n>>>\n>> The hardware is definitely making a difference, but if you have time and\n>> don't mind testing it,\n>> I can send you a patch, not that the modifications are a big deal, but\n>> maybe they'll help.\n>>\n> With more testing, I found that latency increases response time.\nWith 3 (secs) the test is with localhost.\nWith 6 (secs) the test is with tcp (local, not between pcs).\n\nAnyway, I would like to know if we have the number of parameters\npreviously, why use List instead of Arrays?\nIt would not be faster to create plpgsql variables.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 16 May 2020 08:39:18 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "so 16. 5. 2020 v 13:40 odesílatel Ranier Vilela <ranier.vf@gmail.com>\nnapsal:\n\n> Em sáb., 16 de mai. de 2020 às 01:10, Pavel Stehule <\n> pavel.stehule@gmail.com> escreveu:\n>\n>>\n>>\n>> so 16. 5. 
2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>> napsal:\n>>\n>>> Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <\n>>> pavel.stehule@gmail.com> escreveu:\n>>>\n>>>>\n>>>>\n>>>> so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>>>> napsal:\n>>>>\n>>>>> Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <\n>>>>> pavel.stehule@gmail.com> escreveu:\n>>>>>\n>>>>>> Hi\n>>>>>>\n>>>>>> I try to use procedures in Orafce package, and I did some easy\n>>>>>> performance tests. I found some hard problems:\n>>>>>>\n>>>>>> 1. test case\n>>>>>>\n>>>>>> create or replace procedure p1(inout r int, inout v int) as $$\n>>>>>> begin v := random() * r; end\n>>>>>> $$ language plpgsql;\n>>>>>>\n>>>>>> This command requires\n>>>>>>\n>>>>>> do $$\n>>>>>> declare r int default 100; x int;\n>>>>>> begin\n>>>>>> for i in 1..300000 loop\n>>>>>> call p1(r, x);\n>>>>>> end loop;\n>>>>>> end;\n>>>>>> $$;\n>>>>>>\n>>>>>> about 2.2GB RAM and 10 sec.\n>>>>>>\n>>>>> I am having a consistent result of 3 secs, with a modified version\n>>>>> (exec_stmt_call) of your patch.\n>>>>> But my notebook is (Core 5, 8GB and SSD), could it be a difference in\n>>>>> the testing hardware?\n>>>>>\n>>>>\n>>>> My notebook is old T520, and more I have a configured Postgres with\n>>>> --enable-cassert option.\n>>>>\n>>> The hardware is definitely making a difference, but if you have time and\n>>> don't mind testing it,\n>>> I can send you a patch, not that the modifications are a big deal, but\n>>> maybe they'll help.\n>>>\n>> With more testing, I found that latency increases response time.\n> With 3 (secs) the test is with localhost.\n> With 6 (secs) the test is with tcp (local, not between pcs).\n>\n> Anyway, I would like to know if we have the number of parameters\n> previously, why use List instead of Arrays?\n> It would not be faster to create plpgsql variables.\n>\n\nWhy you check SPI_processed?\n\n+ if (SPI_processed == 1)\n+ {\n+ if (!stmt->target)\n+ elog(ERROR, \"DO 
statement returned a row, query \\\"%s\\\"\", expr->query);\n+ }\n+ else if (SPI_processed > 1)\n+ elog(ERROR, \"Procedure call returned more than one row, query \\\"%s\\\"\",\nexpr->query);\n\n\nCALL cannot to return rows, so these checks has not sense\n\n\n\n> regards,\n> Ranier Vilela\n>\n\nso 16. 5. 2020 v 13:40 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 01:10, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <pavel.stehule@gmail.com> escreveu:HiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.I am having a consistent result of 3 secs, with a modified version (exec_stmt_call) of your patch.But my notebook is (Core 5, 8GB and SSD), could it be a difference in the testing hardware?My notebook is old T520, and more I have a configured Postgres with --enable-cassert option. 
The hardware is definitely making a difference, but if you have time and don't mind testing it, I can send you a patch, not that the modifications are a big deal, but maybe they'll help.With more testing, I found that latency increases response time.With 3 (secs) the test is with localhost.With 6 (secs) the test is with tcp (local, not between pcs).Anyway, I would like to know if we have the number of parameters previously, why use List instead of Arrays?It would not be faster to create plpgsql variables.Why you check SPI_processed?+\tif (SPI_processed == 1)+\t{+\t\tif (!stmt->target)+\t\t\telog(ERROR, \"DO statement returned a row, query \\\"%s\\\"\", expr->query);+\t}+\telse if (SPI_processed > 1)+\t\telog(ERROR, \"Procedure call returned more than one row, query \\\"%s\\\"\", expr->query);CALL cannot to return rows, so these checks has not sense regards,Ranier Vilela", "msg_date": "Sat, 16 May 2020 14:34:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Em sáb., 16 de mai. de 2020 às 09:35, Pavel Stehule <pavel.stehule@gmail.com>\nescreveu:\n\n>\n>\n> so 16. 5. 2020 v 13:40 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n> napsal:\n>\n>> Em sáb., 16 de mai. de 2020 às 01:10, Pavel Stehule <\n>> pavel.stehule@gmail.com> escreveu:\n>>\n>>>\n>>>\n>>> so 16. 5. 2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>>> napsal:\n>>>\n>>>> Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <\n>>>> pavel.stehule@gmail.com> escreveu:\n>>>>\n>>>>>\n>>>>>\n>>>>> so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>>>>> napsal:\n>>>>>\n>>>>>> Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <\n>>>>>> pavel.stehule@gmail.com> escreveu:\n>>>>>>\n>>>>>>> Hi\n>>>>>>>\n>>>>>>> I try to use procedures in Orafce package, and I did some easy\n>>>>>>> performance tests. 
I found some hard problems:\n>>>>>>>\n>>>>>>> 1. test case\n>>>>>>>\n>>>>>>> create or replace procedure p1(inout r int, inout v int) as $$\n>>>>>>> begin v := random() * r; end\n>>>>>>> $$ language plpgsql;\n>>>>>>>\n>>>>>>> This command requires\n>>>>>>>\n>>>>>>> do $$\n>>>>>>> declare r int default 100; x int;\n>>>>>>> begin\n>>>>>>> for i in 1..300000 loop\n>>>>>>> call p1(r, x);\n>>>>>>> end loop;\n>>>>>>> end;\n>>>>>>> $$;\n>>>>>>>\n>>>>>>> about 2.2GB RAM and 10 sec.\n>>>>>>>\n>>>>>> I am having a consistent result of 3 secs, with a modified version\n>>>>>> (exec_stmt_call) of your patch.\n>>>>>> But my notebook is (Core 5, 8GB and SSD), could it be a difference in\n>>>>>> the testing hardware?\n>>>>>>\n>>>>>\n>>>>> My notebook is old T520, and more I have a configured Postgres with\n>>>>> --enable-cassert option.\n>>>>>\n>>>> The hardware is definitely making a difference, but if you have time\n>>>> and don't mind testing it,\n>>>> I can send you a patch, not that the modifications are a big deal, but\n>>>> maybe they'll help.\n>>>>\n>>> With more testing, I found that latency increases response time.\n>> With 3 (secs) the test is with localhost.\n>> With 6 (secs) the test is with tcp (local, not between pcs).\n>>\n>> Anyway, I would like to know if we have the number of parameters\n>> previously, why use List instead of Arrays?\n>> It would not be faster to create plpgsql variables.\n>>\n>\n> Why you check SPI_processed?\n>\n> + if (SPI_processed == 1)\n> + {\n> + if (!stmt->target)\n> + elog(ERROR, \"DO statement returned a row, query \\\"%s\\\"\", expr->query);\n> + }\n> + else if (SPI_processed > 1)\n> + elog(ERROR, \"Procedure call returned more than one row, query \\\"%s\\\"\",\n> expr->query);\n>\n>\n> CALL cannot to return rows, so these checks has not sense\n>\nLooking at the original file, this already done, from line 2351,\nI just put all the tests together to, if applicable, get out quickly.\n\nregards,\nRanier Vilela\n\nEm sáb., 16 de mai. 
de 2020 às 09:35, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 13:40 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 01:10, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <pavel.stehule@gmail.com> escreveu:HiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.I am having a consistent result of 3 secs, with a modified version (exec_stmt_call) of your patch.But my notebook is (Core 5, 8GB and SSD), could it be a difference in the testing hardware?My notebook is old T520, and more I have a configured Postgres with --enable-cassert option. 
The hardware is definitely making a difference, but if you have time and don't mind testing it, I can send you a patch, not that the modifications are a big deal, but maybe they'll help.With more testing, I found that latency increases response time.With 3 (secs) the test is with localhost.With 6 (secs) the test is with tcp (local, not between pcs).Anyway, I would like to know if we have the number of parameters previously, why use List instead of Arrays?It would not be faster to create plpgsql variables.Why you check SPI_processed?+\tif (SPI_processed == 1)+\t{+\t\tif (!stmt->target)+\t\t\telog(ERROR, \"DO statement returned a row, query \\\"%s\\\"\", expr->query);+\t}+\telse if (SPI_processed > 1)+\t\telog(ERROR, \"Procedure call returned more than one row, query \\\"%s\\\"\", expr->query);CALL cannot to return rows, so these checks has not senseLooking at the original file, this already done, from line 2351, I just put all the tests together to, if applicable, get out quickly. regards,Ranier Vilela", "msg_date": "Sat, 16 May 2020 10:23:09 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "so 16. 5. 2020 v 15:24 odesílatel Ranier Vilela <ranier.vf@gmail.com>\nnapsal:\n\n> Em sáb., 16 de mai. de 2020 às 09:35, Pavel Stehule <\n> pavel.stehule@gmail.com> escreveu:\n>\n>>\n>>\n>> so 16. 5. 2020 v 13:40 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>> napsal:\n>>\n>>> Em sáb., 16 de mai. de 2020 às 01:10, Pavel Stehule <\n>>> pavel.stehule@gmail.com> escreveu:\n>>>\n>>>>\n>>>>\n>>>> so 16. 5. 2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>>>> napsal:\n>>>>\n>>>>> Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <\n>>>>> pavel.stehule@gmail.com> escreveu:\n>>>>>\n>>>>>>\n>>>>>>\n>>>>>> so 16. 5. 
2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>>>>>> napsal:\n>>>>>>\n>>>>>>> Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <\n>>>>>>> pavel.stehule@gmail.com> escreveu:\n>>>>>>>\n>>>>>>>> Hi\n>>>>>>>>\n>>>>>>>> I try to use procedures in Orafce package, and I did some easy\n>>>>>>>> performance tests. I found some hard problems:\n>>>>>>>>\n>>>>>>>> 1. test case\n>>>>>>>>\n>>>>>>>> create or replace procedure p1(inout r int, inout v int) as $$\n>>>>>>>> begin v := random() * r; end\n>>>>>>>> $$ language plpgsql;\n>>>>>>>>\n>>>>>>>> This command requires\n>>>>>>>>\n>>>>>>>> do $$\n>>>>>>>> declare r int default 100; x int;\n>>>>>>>> begin\n>>>>>>>> for i in 1..300000 loop\n>>>>>>>> call p1(r, x);\n>>>>>>>> end loop;\n>>>>>>>> end;\n>>>>>>>> $$;\n>>>>>>>>\n>>>>>>>> about 2.2GB RAM and 10 sec.\n>>>>>>>>\n>>>>>>> I am having a consistent result of 3 secs, with a modified version\n>>>>>>> (exec_stmt_call) of your patch.\n>>>>>>> But my notebook is (Core 5, 8GB and SSD), could it be a difference\n>>>>>>> in the testing hardware?\n>>>>>>>\n>>>>>>\n>>>>>> My notebook is old T520, and more I have a configured Postgres with\n>>>>>> --enable-cassert option.\n>>>>>>\n>>>>> The hardware is definitely making a difference, but if you have time\n>>>>> and don't mind testing it,\n>>>>> I can send you a patch, not that the modifications are a big deal, but\n>>>>> maybe they'll help.\n>>>>>\n>>>> With more testing, I found that latency increases response time.\n>>> With 3 (secs) the test is with localhost.\n>>> With 6 (secs) the test is with tcp (local, not between pcs).\n>>>\n>>> Anyway, I would like to know if we have the number of parameters\n>>> previously, why use List instead of Arrays?\n>>> It would not be faster to create plpgsql variables.\n>>>\n>>\n>> Why you check SPI_processed?\n>>\n>> + if (SPI_processed == 1)\n>> + {\n>> + if (!stmt->target)\n>> + elog(ERROR, \"DO statement returned a row, query \\\"%s\\\"\", expr->query);\n>> + }\n>> + else if 
(SPI_processed > 1)\n>> + elog(ERROR, \"Procedure call returned more than one row, query \\\"%s\\\"\",\n>> expr->query);\n>>\n>>\n>> CALL cannot to return rows, so these checks has not sense\n>>\n> Looking at the original file, this already done, from line 2351,\n> I just put all the tests together to, if applicable, get out quickly.\n>\n\nIt's little bit messy. Is not good to mix bugfix and refactoring things\ntogether\n\n\n\n> regards,\n> Ranier Vilela\n>\n\nso 16. 5. 2020 v 15:24 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 09:35, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 13:40 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 01:10, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <pavel.stehule@gmail.com> escreveu:HiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.I am having a consistent result of 3 secs, with a modified version (exec_stmt_call) of your patch.But my notebook is (Core 5, 8GB and SSD), could it be a difference in the testing hardware?My notebook is old T520, and more I have a configured Postgres with --enable-cassert option. 
The hardware is definitely making a difference, but if you have time and don't mind testing it, I can send you a patch, not that the modifications are a big deal, but maybe they'll help.With more testing, I found that latency increases response time.With 3 (secs) the test is with localhost.With 6 (secs) the test is with tcp (local, not between pcs).Anyway, I would like to know if we have the number of parameters previously, why use List instead of Arrays?It would not be faster to create plpgsql variables.Why you check SPI_processed?+\tif (SPI_processed == 1)+\t{+\t\tif (!stmt->target)+\t\t\telog(ERROR, \"DO statement returned a row, query \\\"%s\\\"\", expr->query);+\t}+\telse if (SPI_processed > 1)+\t\telog(ERROR, \"Procedure call returned more than one row, query \\\"%s\\\"\", expr->query);CALL cannot to return rows, so these checks has not senseLooking at the original file, this already done, from line 2351, I just put all the tests together to, if applicable, get out quickly.It's little bit messy. Is not good to mix bugfix and refactoring things together  regards,Ranier Vilela", "msg_date": "Sat, 16 May 2020 16:06:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Em sáb., 16 de mai. de 2020 às 11:07, Pavel Stehule <pavel.stehule@gmail.com>\nescreveu:\n\n>\n>\n> so 16. 5. 2020 v 15:24 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n> napsal:\n>\n>> Em sáb., 16 de mai. de 2020 às 09:35, Pavel Stehule <\n>> pavel.stehule@gmail.com> escreveu:\n>>\n>>>\n>>>\n>>> so 16. 5. 2020 v 13:40 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>>> napsal:\n>>>\n>>>> Em sáb., 16 de mai. de 2020 às 01:10, Pavel Stehule <\n>>>> pavel.stehule@gmail.com> escreveu:\n>>>>\n>>>>>\n>>>>>\n>>>>> so 16. 5. 2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>>>>> napsal:\n>>>>>\n>>>>>> Em sáb., 16 de mai. 
de 2020 às 00:07, Pavel Stehule <\n>>>>>> pavel.stehule@gmail.com> escreveu:\n>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com>\n>>>>>>> napsal:\n>>>>>>>\n>>>>>>>> Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <\n>>>>>>>> pavel.stehule@gmail.com> escreveu:\n>>>>>>>>\n>>>>>>>>> Hi\n>>>>>>>>>\n>>>>>>>>> I try to use procedures in Orafce package, and I did some easy\n>>>>>>>>> performance tests. I found some hard problems:\n>>>>>>>>>\n>>>>>>>>> 1. test case\n>>>>>>>>>\n>>>>>>>>> create or replace procedure p1(inout r int, inout v int) as $$\n>>>>>>>>> begin v := random() * r; end\n>>>>>>>>> $$ language plpgsql;\n>>>>>>>>>\n>>>>>>>>> This command requires\n>>>>>>>>>\n>>>>>>>>> do $$\n>>>>>>>>> declare r int default 100; x int;\n>>>>>>>>> begin\n>>>>>>>>> for i in 1..300000 loop\n>>>>>>>>> call p1(r, x);\n>>>>>>>>> end loop;\n>>>>>>>>> end;\n>>>>>>>>> $$;\n>>>>>>>>>\n>>>>>>>>> about 2.2GB RAM and 10 sec.\n>>>>>>>>>\n>>>>>>>> I am having a consistent result of 3 secs, with a modified version\n>>>>>>>> (exec_stmt_call) of your patch.\n>>>>>>>> But my notebook is (Core 5, 8GB and SSD), could it be a difference\n>>>>>>>> in the testing hardware?\n>>>>>>>>\n>>>>>>>\n>>>>>>> My notebook is old T520, and more I have a configured Postgres with\n>>>>>>> --enable-cassert option.\n>>>>>>>\n>>>>>> The hardware is definitely making a difference, but if you have time\n>>>>>> and don't mind testing it,\n>>>>>> I can send you a patch, not that the modifications are a big deal,\n>>>>>> but maybe they'll help.\n>>>>>>\n>>>>> With more testing, I found that latency increases response time.\n>>>> With 3 (secs) the test is with localhost.\n>>>> With 6 (secs) the test is with tcp (local, not between pcs).\n>>>>\n>>>> Anyway, I would like to know if we have the number of parameters\n>>>> previously, why use List instead of Arrays?\n>>>> It would not be faster to create plpgsql variables.\n>>>>\n>>>\n>>> Why you check 
SPI_processed?\n>>>\n>>> + if (SPI_processed == 1)\n>>> + {\n>>> + if (!stmt->target)\n>>> + elog(ERROR, \"DO statement returned a row, query \\\"%s\\\"\", expr->query);\n>>> + }\n>>> + else if (SPI_processed > 1)\n>>> + elog(ERROR, \"Procedure call returned more than one row, query \\\"%s\\\"\",\n>>> expr->query);\n>>>\n>>>\n>>> CALL cannot to return rows, so these checks has not sense\n>>>\n>> Looking at the original file, this already done, from line 2351,\n>> I just put all the tests together to, if applicable, get out quickly.\n>>\n>\n> It's little bit messy. Is not good to mix bugfix and refactoring things\n> together\n>\nOk, I can understand that.\n\nregards,\nRanier Vilela\n\nEm sáb., 16 de mai. de 2020 às 11:07, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 15:24 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 09:35, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 13:40 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 01:10, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 5:55 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em sáb., 16 de mai. de 2020 às 00:07, Pavel Stehule <pavel.stehule@gmail.com> escreveu:so 16. 5. 2020 v 0:34 odesílatel Ranier Vilela <ranier.vf@gmail.com> napsal:Em dom., 10 de mai. de 2020 às 17:21, Pavel Stehule <pavel.stehule@gmail.com> escreveu:HiI try to use procedures in Orafce package, and I did some easy performance tests. I found some hard problems:1. 
test casecreate or replace procedure p1(inout r int, inout v int) as $$ begin v := random() * r; end $$ language plpgsql;This command requiresdo $$declare r int default 100; x int;begin  for i in 1..300000 loop     call p1(r, x);  end loop;end;$$;about 2.2GB RAM and 10 sec.I am having a consistent result of 3 secs, with a modified version (exec_stmt_call) of your patch.But my notebook is (Core 5, 8GB and SSD), could it be a difference in the testing hardware?My notebook is old T520, and more I have a configured Postgres with --enable-cassert option. The hardware is definitely making a difference, but if you have time and don't mind testing it, I can send you a patch, not that the modifications are a big deal, but maybe they'll help.With more testing, I found that latency increases response time.With 3 (secs) the test is with localhost.With 6 (secs) the test is with tcp (local, not between pcs).Anyway, I would like to know if we have the number of parameters previously, why use List instead of Arrays?It would not be faster to create plpgsql variables.Why you check SPI_processed?+\tif (SPI_processed == 1)+\t{+\t\tif (!stmt->target)+\t\t\telog(ERROR, \"DO statement returned a row, query \\\"%s\\\"\", expr->query);+\t}+\telse if (SPI_processed > 1)+\t\telog(ERROR, \"Procedure call returned more than one row, query \\\"%s\\\"\", expr->query);CALL cannot to return rows, so these checks has not senseLooking at the original file, this already done, from line 2351, I just put all the tests together to, if applicable, get out quickly.It's little bit messy. 
Is not good to mix bugfix and refactoring things togetherOk, I can understand that.regards,Ranier Vilela", "msg_date": "Sat, 16 May 2020 11:20:32 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "On Sat, 16 May 2020 at 00:07, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n>>>\n>>> The problem is in plpgsql implementation of CALL statement\n>>>\n>>> In non atomic case - case of using procedures from DO block, the expression plan is not cached, and plan is generating any time. This is reason why it is slow.\n>>>\n>>> Unfortunately, generated plans are not released until SPI_finish. Attached patch fixed this issue.\n>>\n>>\n>> But now, recursive calling doesn't work :-(. So this patch is not enough\n>\n>\n> Attached patch is working - all tests passed\n\nCould you show an example testcase that tests this recursive scenario,\nwith which your earlier patch fails the test, and this v2 patch passes\nit ? I am trying to understand the recursive scenario and the re-use\nof expr->plan.\n\n>\n> It doesn't solve performance, and doesn't solve all memory problems, but significantly reduce memory requirements from 5007 bytes to 439 bytes per one CALL\n\nSo now this patch's intention is to reduce memory consumption, and it\ndoesn't target slowness improvement, right ?\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n", "msg_date": "Wed, 10 Jun 2020 15:22:25 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "st 10. 6. 
2020 v 12:26 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>
napsal:

> On Sat, 16 May 2020 at 00:07, Pavel Stehule <pavel.stehule@gmail.com>
> wrote:
> >
> > Hi
> >
> >>>
> >>> The problem is in plpgsql implementation of CALL statement
> >>>
> >>> In non atomic case - case of using procedures from DO block, the
> expression plan is not cached, and plan is generating any time. This is
> reason why it is slow.
> >>>
> >>> Unfortunately, generated plans are not released until SPI_finish.
> Attached patch fixed this issue.
> >>
> >>
> >> But now, recursive calling doesn't work :-(. So this patch is not enough
> >
> >
> > Attached patch is working - all tests passed
>
> Could you show an example testcase that tests this recursive scenario,
> with which your earlier patch fails the test, and this v2 patch passes
> it ? I am trying to understand the recursive scenario and the re-use
> of expr->plan.
>

it hangs on plpgsql tests. So you can apply first version of patch

and "make check"


> >
> > It doesn't solve performance, and doesn't solve all memory problems, but
> significantly reduce memory requirements from 5007 bytes to 439 bytes per
> one CALL
>
> So now this patch's intention is to reduce memory consumption, and it
> doesn't target slowness improvement, right ?
>

yes. There is a problem with planning every execution when the procedure
was called from not top context.



> --
> Thanks,
> -Amit Khandekar
> Huawei Technologies
>
", "msg_date": "Wed, 10 Jun 2020 13:42:11 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "On Wed, 10 Jun 2020 at 17:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> st 10. 6. 2020 v 12:26 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com> napsal:
>> Could you show an example testcase that tests this recursive scenario,
>> with which your earlier patch fails the test, and this v2 patch passes
>> it ? I am trying to understand the recursive scenario and the re-use
>> of expr->plan.
>
>
> it hangs on plpgsql tests. So you can apply first version of patch
>
> and "make check"

I could not reproduce the make check hang with the v1 patch. But I
could see a crash with the below testcase. 
So I understand the purpose\nof the plan_owner variable that you introduced in v2.\n\nConsider this recursive test :\n\ncreate or replace procedure p1(in r int) as $$\nbegin\n RAISE INFO 'r : % ', r;\n if r < 3 then\n call p1(r+1);\n end if;\nend\n$$ language plpgsql;\n\ndo $$\ndeclare r int default 1;\nbegin\n call p1(r);\nend;\n$$;\n\nIn p1() with r=2, when the stmt \"call p1(r+1)\" is being executed,\nconsider this code of exec_stmt_call() with your v2 patch applied:\nif (expr->plan && !expr->plan->saved)\n{\n if (plan_owner)\n SPI_freeplan(expr->plan);\n expr->plan = NULL;\n}\n\nHere, plan_owner is false. So SPI_freeplan() will not be called, and\nexpr->plan is set to NULL. Now I have observed that the stmt pointer\nand expr pointer is shared between the p1() execution at this r=2\nlevel and the p1() execution at r=1 level. So after the above code is\nexecuted at r=2, when the upper level (r=1) exec_stmt_call() lands to\nthe same above code snippet, it gets the same expr pointer, but it's\nexpr->plan is already set to NULL without being freed. From this\nlogic, it looks like the plan won't get freed whenever the expr/stmt\npointers are shared across recursive levels, since expr->plan is set\nto NULL at the lowermost level ? Basically, the handle to the plan is\nlost so no one at the upper recursion level can explicitly free it\nusing SPI_freeplan(), right ? This looks the same as the main issue\nwhere the plan does not get freed for non-recursive calls. I haven't\ngot a chance to check if we can develop a testcase for this, similar\nto your testcase where the memory keeps on increasing.\n\n-Amit\n\n\n", "msg_date": "Wed, 17 Jun 2020 11:22:04 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "st 17. 6. 
2020 v 7:52 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\nnapsal:\n\n> On Wed, 10 Jun 2020 at 17:12, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > st 10. 6. 2020 v 12:26 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\n> napsal:\n> >> Could you show an example testcase that tests this recursive scenario,\n> >> with which your earlier patch fails the test, and this v2 patch passes\n> >> it ? I am trying to understand the recursive scenario and the re-use\n> >> of expr->plan.\n> >\n> >\n> > it hangs on plpgsql tests. So you can apply first version of patch\n> >\n> > and \"make check\"\n>\n> I could not reproduce the make check hang with the v1 patch. But I\n> could see a crash with the below testcase. So I understand the purpose\n> of the plan_owner variable that you introduced in v2.\n>\n> Consider this recursive test :\n>\n> create or replace procedure p1(in r int) as $$\n> begin\n> RAISE INFO 'r : % ', r;\n> if r < 3 then\n> call p1(r+1);\n> end if;\n> end\n> $$ language plpgsql;\n>\n> do $$\n> declare r int default 1;\n> begin\n> call p1(r);\n> end;\n> $$;\n>\n> In p1() with r=2, when the stmt \"call p1(r+1)\" is being executed,\n> consider this code of exec_stmt_call() with your v2 patch applied:\n> if (expr->plan && !expr->plan->saved)\n> {\n> if (plan_owner)\n> SPI_freeplan(expr->plan);\n> expr->plan = NULL;\n> }\n>\n> Here, plan_owner is false. So SPI_freeplan() will not be called, and\n> expr->plan is set to NULL. Now I have observed that the stmt pointer\n> and expr pointer is shared between the p1() execution at this r=2\n> level and the p1() execution at r=1 level. So after the above code is\n> executed at r=2, when the upper level (r=1) exec_stmt_call() lands to\n> the same above code snippet, it gets the same expr pointer, but it's\n> expr->plan is already set to NULL without being freed. 
From this\n> logic, it looks like the plan won't get freed whenever the expr/stmt\n> pointers are shared across recursive levels, since expr->plan is set\n> to NULL at the lowermost level ? Basically, the handle to the plan is\n> lost so no one at the upper recursion level can explicitly free it\n> using SPI_freeplan(), right ? This looks the same as the main issue\n> where the plan does not get freed for non-recursive calls. I haven't\n> got a chance to check if we can develop a testcase for this, similar\n> to your testcase where the memory keeps on increasing.\n>\n\nThis is a good consideration.\n\nI am sending updated patch\n\nPavel\n\n\n\n> -Amit\n>", "msg_date": "Wed, 17 Jun 2020 10:23:44 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "On Wed, 17 Jun 2020 at 13:54, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> st 17. 6. 2020 v 7:52 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com> napsal:\n>>\n>> On Wed, 10 Jun 2020 at 17:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> > st 10. 6. 2020 v 12:26 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com> napsal:\n>> >> Could you show an example testcase that tests this recursive scenario,\n>> >> with which your earlier patch fails the test, and this v2 patch passes\n>> >> it ? I am trying to understand the recursive scenario and the re-use\n>> >> of expr->plan.\n>> >\n>> >\n>> > it hangs on plpgsql tests. So you can apply first version of patch\n>> >\n>> > and \"make check\"\n>>\n>> I could not reproduce the make check hang with the v1 patch. But I\n>> could see a crash with the below testcase. 
So I understand the purpose\n>> of the plan_owner variable that you introduced in v2.\n>>\n>> Consider this recursive test :\n>>\n>> create or replace procedure p1(in r int) as $$\n>> begin\n>> RAISE INFO 'r : % ', r;\n>> if r < 3 then\n>> call p1(r+1);\n>> end if;\n>> end\n>> $$ language plpgsql;\n>>\n>> do $$\n>> declare r int default 1;\n>> begin\n>> call p1(r);\n>> end;\n>> $$;\n>>\n>> In p1() with r=2, when the stmt \"call p1(r+1)\" is being executed,\n>> consider this code of exec_stmt_call() with your v2 patch applied:\n>> if (expr->plan && !expr->plan->saved)\n>> {\n>> if (plan_owner)\n>> SPI_freeplan(expr->plan);\n>> expr->plan = NULL;\n>> }\n>>\n>> Here, plan_owner is false. So SPI_freeplan() will not be called, and\n>> expr->plan is set to NULL. Now I have observed that the stmt pointer\n>> and expr pointer is shared between the p1() execution at this r=2\n>> level and the p1() execution at r=1 level. So after the above code is\n>> executed at r=2, when the upper level (r=1) exec_stmt_call() lands to\n>> the same above code snippet, it gets the same expr pointer, but it's\n>> expr->plan is already set to NULL without being freed. From this\n>> logic, it looks like the plan won't get freed whenever the expr/stmt\n>> pointers are shared across recursive levels, since expr->plan is set\n>> to NULL at the lowermost level ? Basically, the handle to the plan is\n>> lost so no one at the upper recursion level can explicitly free it\n>> using SPI_freeplan(), right ? This looks the same as the main issue\n>> where the plan does not get freed for non-recursive calls. I haven't\n>> got a chance to check if we can develop a testcase for this, similar\n>> to your testcase where the memory keeps on increasing.\n>\n>\n> This is a good consideration.\n>\n> I am sending updated patch\n\nChecked the latest patch. Looks like using a local plan rather than\nexpr->plan pointer for doing the checks does seem to resolve the issue\nI raised. 
That made me think of another scenario :\n\nNow we are checking for plan value and then null'ifying the expr->plan\nvalue. What if expr->plan is different from plan ? Is it possible ? I\nwas thinking of such scenarios. But couldn't find one. As long as a\nplan is always created with saved=true for all levels, or with\nsaved=false for all levels, we are ok. If we can have a mix of saved\nand unsaved plans at different recursion levels, then expr->plan can\nbe different from the outer local plan because then the expr->plan\nwill not be set to NULL in the inner level, while the outer level may\nhave created its own plan. But I think a mix of saved and unsaved\nplans are not possible. If you agree, then I think we should at least\nhave an assert that looks like :\n\n if (plan && !plan->saved)\n {\n if (plan_owner)\n SPI_freeplan(plan);\n\n /* If expr->plan is present, it must be the same plan that we\nallocated */\n Assert ( !expr->plan || plan == expr->plan) );\n\n expr->plan = NULL;\n }\n\nOther than this, I have no other issues. I understand that we have to\ndo this special handling only for this exec_stmt_call() because it is\nonly here that we call exec_prepare_plan() with keep_plan = false, so\ndoing special handling for freeing the plan seems to make sense.\n\n\n", "msg_date": "Thu, 9 Jul 2020 11:57:34 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "čt 9. 7. 2020 v 8:28 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\nnapsal:\n\n> On Wed, 17 Jun 2020 at 13:54, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > st 17. 6. 2020 v 7:52 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\n> napsal:\n> >>\n> >> On Wed, 10 Jun 2020 at 17:12, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> > st 10. 6. 
2020 v 12:26 odesílatel Amit Khandekar <\n> amitdkhan.pg@gmail.com> napsal:\n> >> >> Could you show an example testcase that tests this recursive\n> scenario,\n> >> >> with which your earlier patch fails the test, and this v2 patch\n> passes\n> >> >> it ? I am trying to understand the recursive scenario and the re-use\n> >> >> of expr->plan.\n> >> >\n> >> >\n> >> > it hangs on plpgsql tests. So you can apply first version of patch\n> >> >\n> >> > and \"make check\"\n> >>\n> >> I could not reproduce the make check hang with the v1 patch. But I\n> >> could see a crash with the below testcase. So I understand the purpose\n> >> of the plan_owner variable that you introduced in v2.\n> >>\n> >> Consider this recursive test :\n> >>\n> >> create or replace procedure p1(in r int) as $$\n> >> begin\n> >> RAISE INFO 'r : % ', r;\n> >> if r < 3 then\n> >> call p1(r+1);\n> >> end if;\n> >> end\n> >> $$ language plpgsql;\n> >>\n> >> do $$\n> >> declare r int default 1;\n> >> begin\n> >> call p1(r);\n> >> end;\n> >> $$;\n> >>\n> >> In p1() with r=2, when the stmt \"call p1(r+1)\" is being executed,\n> >> consider this code of exec_stmt_call() with your v2 patch applied:\n> >> if (expr->plan && !expr->plan->saved)\n> >> {\n> >> if (plan_owner)\n> >> SPI_freeplan(expr->plan);\n> >> expr->plan = NULL;\n> >> }\n> >>\n> >> Here, plan_owner is false. So SPI_freeplan() will not be called, and\n> >> expr->plan is set to NULL. Now I have observed that the stmt pointer\n> >> and expr pointer is shared between the p1() execution at this r=2\n> >> level and the p1() execution at r=1 level. So after the above code is\n> >> executed at r=2, when the upper level (r=1) exec_stmt_call() lands to\n> >> the same above code snippet, it gets the same expr pointer, but it's\n> >> expr->plan is already set to NULL without being freed. 
From this\n> >> logic, it looks like the plan won't get freed whenever the expr/stmt\n> >> pointers are shared across recursive levels, since expr->plan is set\n> >> to NULL at the lowermost level ? Basically, the handle to the plan is\n> >> lost so no one at the upper recursion level can explicitly free it\n> >> using SPI_freeplan(), right ? This looks the same as the main issue\n> >> where the plan does not get freed for non-recursive calls. I haven't\n> >> got a chance to check if we can develop a testcase for this, similar\n> >> to your testcase where the memory keeps on increasing.\n> >\n> >\n> > This is a good consideration.\n> >\n> > I am sending updated patch\n>\n> Checked the latest patch. Looks like using a local plan rather than\n> expr->plan pointer for doing the checks does seem to resolve the issue\n> I raised. That made me think of another scenario :\n>\n> Now we are checking for plan value and then null'ifying the expr->plan\n> value. What if expr->plan is different from plan ? Is it possible ? I\n> was thinking of such scenarios. But couldn't find one. As long as a\n> plan is always created with saved=true for all levels, or with\n> saved=false for all levels, we are ok. If we can have a mix of saved\n> and unsaved plans at different recursion levels, then expr->plan can\n> be different from the outer local plan because then the expr->plan\n> will not be set to NULL in the inner level, while the outer level may\n> have created its own plan. But I think a mix of saved and unsaved\n> plans are not possible. If you agree, then I think we should at least\n> have an assert that looks like :\n>\n> if (plan && !plan->saved)\n> {\n> if (plan_owner)\n> SPI_freeplan(plan);\n>\n> /* If expr->plan is present, it must be the same plan that we\n> allocated */\n> Assert ( !expr->plan || plan == expr->plan) );\n>\n> expr->plan = NULL;\n> }\n>\n> Other than this, I have no other issues. 
I understand that we have to\n> do this special handling only for this exec_stmt_call() because it is\n> only here that we call exec_prepare_plan() with keep_plan = false, so\n> doing special handling for freeing the plan seems to make sense.\n>\n\nattached patch with assert.\n\nall regress tests passed. I think this short patch can be applied on older\nreleases as bugfix.\n\nThis weekend I'll try to check different strategy - try to save a plan and\nrelease it at the end of the transaction.\n\nRegards\n\nPavel", "msg_date": "Sat, 11 Jul 2020 07:38:21 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "so 11. 7. 2020 v 7:38 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> čt 9. 7. 2020 v 8:28 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\n> napsal:\n>\n>> On Wed, 17 Jun 2020 at 13:54, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> >\n>> >\n>> >\n>> > st 17. 6. 2020 v 7:52 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\n>> napsal:\n>> >>\n>> >> On Wed, 10 Jun 2020 at 17:12, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> >> > st 10. 6. 2020 v 12:26 odesílatel Amit Khandekar <\n>> amitdkhan.pg@gmail.com> napsal:\n>> >> >> Could you show an example testcase that tests this recursive\n>> scenario,\n>> >> >> with which your earlier patch fails the test, and this v2 patch\n>> passes\n>> >> >> it ? I am trying to understand the recursive scenario and the re-use\n>> >> >> of expr->plan.\n>> >> >\n>> >> >\n>> >> > it hangs on plpgsql tests. So you can apply first version of patch\n>> >> >\n>> >> > and \"make check\"\n>> >>\n>> >> I could not reproduce the make check hang with the v1 patch. But I\n>> >> could see a crash with the below testcase. 
So I understand the purpose\n>> >> of the plan_owner variable that you introduced in v2.\n>> >>\n>> >> Consider this recursive test :\n>> >>\n>> >> create or replace procedure p1(in r int) as $$\n>> >> begin\n>> >> RAISE INFO 'r : % ', r;\n>> >> if r < 3 then\n>> >> call p1(r+1);\n>> >> end if;\n>> >> end\n>> >> $$ language plpgsql;\n>> >>\n>> >> do $$\n>> >> declare r int default 1;\n>> >> begin\n>> >> call p1(r);\n>> >> end;\n>> >> $$;\n>> >>\n>> >> In p1() with r=2, when the stmt \"call p1(r+1)\" is being executed,\n>> >> consider this code of exec_stmt_call() with your v2 patch applied:\n>> >> if (expr->plan && !expr->plan->saved)\n>> >> {\n>> >> if (plan_owner)\n>> >> SPI_freeplan(expr->plan);\n>> >> expr->plan = NULL;\n>> >> }\n>> >>\n>> >> Here, plan_owner is false. So SPI_freeplan() will not be called, and\n>> >> expr->plan is set to NULL. Now I have observed that the stmt pointer\n>> >> and expr pointer is shared between the p1() execution at this r=2\n>> >> level and the p1() execution at r=1 level. So after the above code is\n>> >> executed at r=2, when the upper level (r=1) exec_stmt_call() lands to\n>> >> the same above code snippet, it gets the same expr pointer, but it's\n>> >> expr->plan is already set to NULL without being freed. From this\n>> >> logic, it looks like the plan won't get freed whenever the expr/stmt\n>> >> pointers are shared across recursive levels, since expr->plan is set\n>> >> to NULL at the lowermost level ? Basically, the handle to the plan is\n>> >> lost so no one at the upper recursion level can explicitly free it\n>> >> using SPI_freeplan(), right ? This looks the same as the main issue\n>> >> where the plan does not get freed for non-recursive calls. I haven't\n>> >> got a chance to check if we can develop a testcase for this, similar\n>> >> to your testcase where the memory keeps on increasing.\n>> >\n>> >\n>> > This is a good consideration.\n>> >\n>> > I am sending updated patch\n>>\n>> Checked the latest patch. 
Looks like using a local plan rather than
>> expr->plan pointer for doing the checks does seem to resolve the issue
>> I raised. That made me think of another scenario :
>>
>> Now we are checking for plan value and then null'ifying the expr->plan
>> value. What if expr->plan is different from plan ? Is it possible ? I
>> was thinking of such scenarios. But couldn't find one. As long as a
>> plan is always created with saved=true for all levels, or with
>> saved=false for all levels, we are ok. If we can have a mix of saved
>> and unsaved plans at different recursion levels, then expr->plan can
>> be different from the outer local plan because then the expr->plan
>> will not be set to NULL in the inner level, while the outer level may
>> have created its own plan. But I think a mix of saved and unsaved
>> plans are not possible. If you agree, then I think we should at least
>> have an assert that looks like :
>>
>>     if (plan && !plan->saved)
>>     {
>>         if (plan_owner)
>>             SPI_freeplan(plan);
>>
>>         /* If expr->plan  is present, it must be the same plan that we
>> allocated */
>>        Assert ( !expr->plan || plan == expr->plan) );
>>
>>         expr->plan = NULL;
>>     }
>>
>> Other than this, I have no other issues. I understand that we have to
>> do this special handling only for this exec_stmt_call() because it is
>> only here that we call exec_prepare_plan() with keep_plan = false, so
>> doing special handling for freeing the plan seems to make sense.
>>
>
> attached patch with assert.
>
> all regress tests passed. I think this short patch can be applied on older
> releases as bugfix.
>
> This weekend I'll try to check different strategy - try to save a plan and
> release it at the end of the transaction.
>

I check it, and this state of patch is good enough for this moment. Another
fix needs more invasive changes to handling plan cache.

Regards

Pavel


> Regards
>
> Pavel
>
", "msg_date": "Sun, 12 Jul 2020 15:18:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Hi

I am sending another patch that tries to allow CachedPlans for CALL
statements. I think this patch is very accurate, but it is not nice,
because it is smudging very precious reference counting for CachedPlans.

Current issue:
==========

I found a problem with repeated CALL statements from DO command. 
For every\nexecution of a CALL statement a plan is created that is released at the\ntime of the end of DO block.\n\ncreate or replace procedure insert_into_foo(i int)\nas $$\nbegin\n insert into foo values(i, i || 'Ahoj');\n if i % 1000 = 0 then raise notice '%', i;\n --commit;\n end if;\nend;\n$$ language plpgsql;\n\nand DO\n\ndo $$\nbegin\n for i in 1..500000\n loop\n call insert_into_foo(i);\n end loop;\nend\n$$;\n\nRequires about 2.5GB RAM (execution time is 18 sec). The problem is \"long\ntransaction\" with 500M iteration of CALL statement.\n\nIf I try to remove a comment before COMMIT - then I get 500 transactions.\nBut still it needs 2.5GB memory.\n\nThe reason for this behaviour is disabling plan cache for CALL statements\nexecuted in atomic mode.\n\nSo I wrote patch 1, that releases the not saved plan immediately. This\npatch is very simple, and fixes memory issues. It is a little bit faster\n(14 sec), and Postgres consumes about 200KB.\n\nPatch 1 is simple, clean, nice but execution of CALL statements is slow due\nrepeated planning.\n\nI tried to fix this issue another way - by little bit different work with\nplan cache reference counting. Current design expects only statements\nwrapped inside transactions. It is not designed for new possibilities in\nCALL statements, when more transactions can be finished inside one\nstatement. Principally - cached plans should not be reused in different\ntransactions (simple expressions are an exception). So if we try to use\ncached plans for CALL statements, there is no clean responsibility who has\nto close a cached plan. It can be SPI (when execution stays in the same\ntransaction), or resource owner (when transaction is finished inside\nexecution of SPI).\n\nThe Peter wrote a comment about it\n\n<--><--><-->/*\n<--><--><--> * Don't save the plan if not in atomic context. Otherwise,\n<--><--><--> * transaction ends would cause errors about plancache leaks.\n<--><--><--> *\n\nThis comment is not fully accurate. 
If we try to save the plan, then
an execution that finishes a transaction inside ends with a segfault. The
cached plan is released at transaction end (by the resource owner) and the
related memory context is released. But this structure is accessed again the
next time. There is only a warning about an unclosed plan cache (it may
depend on other things).

I wrote a patch 2 that marks CALL statement related plans as "fragile". In
this case the plan is cached every time. There is a special mark "fragile"
that blocks immediate release of the related memory context, and it blocks
warnings and errors, because for this case we expect the plan cache to be
closed by the resource owner or by an SPI statement.

It reduces CPU and memory overhead well - execution time (in one big
transaction) is only 8 sec - and memory overhead is +/- 0.

Patch 2 is not too clear, nor too readable, although I think it is more
correct. It better fixes SPI behaviour against the new state - the
possibility to commit or rollback inside procedures (inside an SPI call).

All regress tests passed.

Regards

Pavel", "msg_date": "Thu, 16 Jul 2020 21:08:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "On Thu, Jul 16, 2020 at 09:08:09PM +0200, Pavel Stehule wrote:
> I am sending another patch that tries to allow CachedPlans for CALL
> statements. I think this patch is very accurate, but it is not nice,
> because it is smudging very precious reference counting for CachedPlans.

Amit, you are registered as a reviewer of this patch for two months
now. Are you planning to look at it more? 
If you are not planning to\ndo so, that's fine, but it may be better to remove your name as\nreviewer then.\n--\nMichael", "msg_date": "Thu, 17 Sep 2020 14:36:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory\n against calling function" }, { "msg_contents": "On Thu, 17 Sep 2020 at 11:07, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 16, 2020 at 09:08:09PM +0200, Pavel Stehule wrote:\n> > I am sending another patch that tries to allow CachedPlans for CALL\n> > statements. I think this patch is very accurate, but it is not nice,\n> > because it is smudging very precious reference counting for CachedPlans.\n>\n> Amit, you are registered as a reviewer of this patch for two months\n> now. Are you planning to look at it more? If you are not planning to\n> do so, that's fine, but it may be better to remove your name as\n> reviewer then.\n\nThanks Michael for reminding. I *had* actually planned to do some more\nreview. But I think I might end up not getting bandwidth for this one\nduring this month. So I have removed my name. But I have kept my name\nas reviewer for bitmaps and correlation :\n\"https://commitfest.postgresql.org/29/2310/ since I do plan to do some\nreview on that one.\n\nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n", "msg_date": "Thu, 17 Sep 2020 14:35:44 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I am sending another patch that tries to allow CachedPlans for CALL\n> statements. I think this patch is very accurate, but it is not nice,\n> because it is smudging very precious reference counting for CachedPlans.\n\nI spent some time testing this. 
Although the #1 patch gets rid of\nthe major memory leak of cached plans, the original test case still\nshows a pretty substantial leak across repeated executions of a CALL.\nThe reason is that the stanza for rebuilding stmt->target also gets\nexecuted each time through, and that leaks not only the relatively\nsmall PLpgSQL_row datum but also a bunch of catalog lookup cruft\ncreated on the way to building the datum. Basically this code forgot\nthat plpgsql's outer execution layer can't assume that it's running\nin a short-lived context.\n\nI attach a revised #1 that takes care of that problem, and also\ncleans up what seems to me to be pretty sloppy thinking in both\nthe original code and Pavel's #1 patch: we should be restoring\nthe previous value of expr->plan, not cavalierly assuming that\nit was necessarily NULL. I didn't care for looking at the plan's\n\"saved\" field to decide what was happening, either. We really\nshould have a local flag variable clearly defining which behavior\nit is that we're implementing.\n\nWith this patch, I see zero memory bloat on Pavel's original example,\neven with a much larger repeat count.\n\nI don't like much of anything about plpgsql-stmt_call-fix-2.patch.\nIt feels confused and poorly documented, possibly because \"fragile\"\nis not a very clear term for whatever property it is you're trying to\nattribute to plans. But in any case, I think it's fixing the problem\nin the wrong place. I think the right way to fix it probably is to\nmanage a CALL's saved plan the same as every other plpgsql plan,\nbut arrange for the transient refcount on that plan to be held by a\nResourceOwner that is not a child of any transaction resowner, but\nrather belongs to the procedure's execution and will be released on\nthe way out of the procedure.\n\nIn any case, I doubt we'd risk back-patching either the #2 patch\nor any other approach to avoiding the repeat planning. 
We need a\nback-patchable fix that at least tamps down the memory bloat,\nand this seems like it'd do.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 27 Sep 2020 21:04:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "po 28. 9. 2020 v 3:04 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I am sending another patch that tries to allow CachedPlans for CALL\n> > statements. I think this patch is very accurate, but it is not nice,\n> > because it is smudging very precious reference counting for CachedPlans.\n>\n> I spent some time testing this. Although the #1 patch gets rid of\n> the major memory leak of cached plans, the original test case still\n> shows a pretty substantial leak across repeated executions of a CALL.\n> The reason is that the stanza for rebuilding stmt->target also gets\n> executed each time through, and that leaks not only the relatively\n> small PLpgSQL_row datum but also a bunch of catalog lookup cruft\n> created on the way to building the datum. Basically this code forgot\n> that plpgsql's outer execution layer can't assume that it's running\n> in a short-lived context.\n>\n> I attach a revised #1 that takes care of that problem, and also\n> cleans up what seems to me to be pretty sloppy thinking in both\n> the original code and Pavel's #1 patch: we should be restoring\n> the previous value of expr->plan, not cavalierly assuming that\n> it was necessarily NULL. I didn't care for looking at the plan's\n> \"saved\" field to decide what was happening, either. 
We really\n> should have a local flag variable clearly defining which behavior\n> it is that we're implementing.\n>\n> With this patch, I see zero memory bloat on Pavel's original example,\n> even with a much larger repeat count.\n>\n> I don't like much of anything about plpgsql-stmt_call-fix-2.patch.\n> It feels confused and poorly documented, possibly because \"fragile\"\n> is not a very clear term for whatever property it is you're trying to\n> attribute to plans. But in any case, I think it's fixing the problem\n> in the wrong place. I think the right way to fix it probably is to\n> manage a CALL's saved plan the same as every other plpgsql plan,\n> but arrange for the transient refcount on that plan to be held by a\n> ResourceOwner that is not a child of any transaction resowner, but\n> rather belongs to the procedure's execution and will be released on\n> the way out of the procedure.\n>\n> In any case, I doubt we'd risk back-patching either the #2 patch\n> or any other approach to avoiding the repeat planning. We need a\n> back-patchable fix that at least tamps down the memory bloat,\n> and this seems like it'd do.\n>\n\nI agree with these conclusions. I'll try to look if I can do #2 patch\nbetter for pg14. Probably it can fix more issues related to CALL statement,\nand I agree so this should not be backapatched.\n\nIt can be great to use CALL without memory leaks (and it can be better (in\nfuture) if the performance of CALL statements should be good).\n\nThank you for enhancing and fixing this patch\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n", "msg_date": "Mon, 28 Sep 2020 11:14:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I agree with these conclusions. I'll try to look if I can do #2 patch\n> better for pg14. Probably it can fix more issues related to CALL statement,\n> and I agree so this should not be backapatched.\n\nI've pushed this and marked the CF entry committed. Please start a\nnew thread and new CF entry whenever you have a more ambitious patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Sep 2020 11:20:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" }, { "msg_contents": "út 29. 9. 2020 v 17:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I agree with these conclusions. I'll try to look if I can do #2 patch\n> > better for pg14. Probably it can fix more issues related to CALL\n> statement,\n> > and I agree so this should not be backapatched.\n>\n> I've pushed this and marked the CF entry committed. Please start a\n> new thread and new CF entry whenever you have a more ambitious patch.\n>\n\nThank you\n\nPavel\n\n\n> regards, tom lane\n>\n", "msg_date": "Tue, 29 Sep 2020 18:39:02 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: calling procedures is slow and consumes extra much memory against\n calling function" } ]
[ { "msg_contents": "Hi,\n\nWhile looking at an old wal2json issue, I stumbled on a scenario that a\ntable\nwith a deferred primary key is not updatable in logical replication. AFAICS\nit\nhas been like that since the beginning of logical decoding and seems to be\nan\noversight while designing logical decoding. I don't envision a problem with\na\ndeferred primary key in an after commit scenario. Am I missing something?\n\nJust in case, I'm attaching a patch to fix it and also add a test to cover\nthis\nscenario.\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 10 May 2020 18:10:40 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "deferred primary key and logical replication" }, { "msg_contents": "The patch no longer applies, because of additions in the test source. Otherwise, I have tested the patch and confirmed that updates and deletes on tables with deferred primary keys work with logical replication.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Fri, 24 Jul 2020 08:15:58 +0000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: deferred primary key and logical replication" }, { "msg_contents": "On Fri, 24 Jul 2020 at 05:16, Ajin Cherian <itsajin@gmail.com> wrote:\n\n> The patch no longer applies, because of additions in the test source.\n> Otherwise, I have tested the patch and confirmed that updates and deletes\n> on tables with deferred primary keys work with logical replication.\n>\n> The new status of this patch is: Waiting on Author\n>\n\nThanks for testing. 
I attached a rebased patch.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 27 Jul 2020 18:26:13 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: deferred primary key and logical replication" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nPatch applies cleanly. Tested that update/delete of tables with deferred primary keys now work with logical replication. Code/comments look fine.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 03 Aug 2020 09:46:35 +0000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: deferred primary key and logical replication" }, { "msg_contents": "On Mon, May 11, 2020 at 2:41 AM Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> While looking at an old wal2json issue, I stumbled on a scenario that a table\n> with a deferred primary key is not updatable in logical replication. 
AFAICS it\n> has been like that since the beginning of logical decoding and seems to be an\n> oversight while designing logical decoding.\n>\n\nI am not sure if it is an oversight because we document that the index\nmust be non-deferrable, see \"USING INDEX records the old values of the\ncolumns covered by the named index, which must be unique, not partial,\nnot deferrable, and include only columns marked NOT NULL.\" in docs\n[1].\n\nNow sure this constraint is when we use USING INDEX for REPLICA\nIDENTITY but why it has to be different for PRIMARY KEY especially\nwhen UNIQUE constraint will have similar behavior and the same is\ndocumented?\n\n\n[1] - https://www.postgresql.org/docs/devel/sql-altertable.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Oct 2020 17:05:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: deferred primary key and logical replication" }, { "msg_contents": "On Mon, 5 Oct 2020 at 08:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, May 11, 2020 at 2:41 AM Euler Taveira\n> <euler.taveira@2ndquadrant.com> wrote:\n> >\n> > Hi,\n> >\n> > While looking at an old wal2json issue, I stumbled on a scenario that a\n> table\n> > with a deferred primary key is not updatable in logical replication.\n> AFAICS it\n> > has been like that since the beginning of logical decoding and seems to\n> be an\n> > oversight while designing logical decoding.\n> >\n>\n> I am not sure if it is an oversight because we document that the index\n> must be non-deferrable, see \"USING INDEX records the old values of the\n> columns covered by the named index, which must be unique, not partial,\n> not deferrable, and include only columns marked NOT NULL.\" in docs\n> [1].\n>\n>\nInspecting this patch again, I forgot to consider\nthat RelationGetIndexList()\nis called by other backend modules. Since logical decoding deals with\nfinished\ntransactions, it is ok to use a deferrable primary key. 
However, this patch\nis\nprobably wrong because it does not consider the other modules.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 25 Oct 2020 13:09:27 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: deferred primary key and logical replication" }, { "msg_contents": "On Sun, Oct 25, 2020 at 9:39 PM Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> On Mon, 5 Oct 2020 at 08:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Mon, May 11, 2020 at 2:41 AM Euler Taveira\n>> <euler.taveira@2ndquadrant.com> wrote:\n>> >\n>> > Hi,\n>> >\n>> > While looking at an old wal2json issue, I stumbled on a scenario that a table\n>> > with a deferred primary key is not updatable in logical replication. AFAICS it\n>> > has been like that since the beginning of logical decoding and seems to be an\n>> > oversight while designing logical decoding.\n>> >\n>>\n>> I am not sure if it is an oversight because we document that the index\n>> must be non-deferrable, see \"USING INDEX records the old values of the\n>> columns covered by the named index, which must be unique, not partial,\n>> not deferrable, and include only columns marked NOT NULL.\" in docs\n>> [1].\n>>\n>\n> Inspecting this patch again, I forgot to consider that RelationGetIndexList()\n> is called by other backend modules.
Since logical decoding deals with finished\n> transactions, it is ok to use a deferrable primary key.\n>\n\nBut starting PG-14, we do support logical decoding of in-progress\ntransactions as well.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Oct 2020 16:16:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: deferred primary key and logical replication" }, { "msg_contents": "On 27.10.2020 13:46, Amit Kapila wrote:\n> On Sun, Oct 25, 2020 at 9:39 PM Euler Taveira\n> <euler.taveira@2ndquadrant.com> wrote:\n>> On Mon, 5 Oct 2020 at 08:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>> On Mon, May 11, 2020 at 2:41 AM Euler Taveira\n>>> <euler.taveira@2ndquadrant.com> wrote:\n>>>> Hi,\n>>>>\n>>>> While looking at an old wal2json issue, I stumbled on a scenario that a table\n>>>> with a deferred primary key is not updatable in logical replication. AFAICS it\n>>>> has been like that since the beginning of logical decoding and seems to be an\n>>>> oversight while designing logical decoding.\n>>>>\n>>> I am not sure if it is an oversight because we document that the index\n>>> must be non-deferrable, see \"USING INDEX records the old values of the\n>>> columns covered by the named index, which must be unique, not partial,\n>>> not deferrable, and include only columns marked NOT NULL.\" in docs\n>>> [1].\n>>>\n>> Inspecting this patch again, I forgot to consider that RelationGetIndexList()\n>> is called by other backend modules. 
Since logical decoding deals with finished\n>> transactions, it is ok to use a deferrable primary key.\n>>\n> But starting PG-14, we do support logical decoding of in-progress\n> transactions as well.\n>\n>\nCommitfest entry status update.\nAs far as I see, this patch needs some further work, so I move it to \n\"Waiting on author\".\nEuler, are you going to continue working on it?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 24 Nov 2020 00:34:56 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: deferred primary key and logical replication" }, { "msg_contents": "On Tue, Nov 24, 2020 at 3:04 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n>\n> On 27.10.2020 13:46, Amit Kapila wrote:\n> > On Sun, Oct 25, 2020 at 9:39 PM Euler Taveira\n> > <euler.taveira@2ndquadrant.com> wrote:\n> >> On Mon, 5 Oct 2020 at 08:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>> On Mon, May 11, 2020 at 2:41 AM Euler Taveira\n> >>> <euler.taveira@2ndquadrant.com> wrote:\n> >>>> Hi,\n> >>>>\n> >>>> While looking at an old wal2json issue, I stumbled on a scenario that a table\n> >>>> with a deferred primary key is not updatable in logical replication. AFAICS it\n> >>>> has been like that since the beginning of logical decoding and seems to be an\n> >>>> oversight while designing logical decoding.\n> >>>>\n> >>> I am not sure if it is an oversight because we document that the index\n> >>> must be non-deferrable, see \"USING INDEX records the old values of the\n> >>> columns covered by the named index, which must be unique, not partial,\n> >>> not deferrable, and include only columns marked NOT NULL.\" in docs\n> >>> [1].\n> >>>\n> >> Inspecting this patch again, I forgot to consider that RelationGetIndexList()\n> >> is called by other backend modules. 
Since logical decoding deals with finished\n> >> transactions, it is ok to use a deferrable primary key.\n> >>\n> > But starting PG-14, we do support logical decoding of in-progress\n> > transactions as well.\n> >\n> >\n> Commitfest entry status update.\n> As far as I see, this patch needs some further work, so I move it to\n> \"Waiting on author\".\n>\n\nI think this should be marked as \"Returned with Feedback\" as there is\nno response to the feedback for a long time and also it is not very\nclear if this possible.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 24 Nov 2020 07:12:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: deferred primary key and logical replication" } ]
[ { "msg_contents": "Hello.\n\nI happened to notice a wrong function name in the comment of\nXLogReadDetermineTimeline.\n\n * The caller must also make sure it doesn't read past the current replay\n * position (using GetWalRcvWriteRecPtr) if executing in recovery, so it\n\nThe comment is mentioning \"replay position\" and the callers are\nactually using GetXLogReplayRecPtr to check TLI and target LSN. The\ncomment was written in that way when the function is introduced by\n1148e22a82. The attached fixes that.\n\nThe function GetWalRcvWriteRecPtr is not called from anywhere in core\nbut I don't think we need to bother removing it since it is a public\nfunction.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 11 May 2020 10:16:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "A comment fix" }, { "msg_contents": "On Mon, May 11, 2020 at 10:16:19AM +0900, Kyotaro Horiguchi wrote:\n> The comment is mentioning \"replay position\" and the callers are\n> actually using GetXLogReplayRecPtr to check TLI and target LSN. The\n> comment was written in that way when the function is introduced by\n> 1148e22a82. The attached fixes that.\n\nLooks right to me, so will fix if there are no objections.\nread_local_xlog_page() uses the replay location when in recovery.\n\n> The function GetWalRcvWriteRecPtr is not called from anywhere in core\n> but I don't think we need to bother removing it since it is a public\n> function.\n\nYes, I don't think that's removable (just look at the log message of\nd140f2f3), and the function is dead simple so that's not really going\nto break even if this is dead in-core now. 
Worth noting some future\nWAL prefetch stuff may actually use it.\n--\nMichael", "msg_date": "Mon, 11 May 2020 14:22:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A comment fix" }, { "msg_contents": "On Mon, May 11, 2020 at 02:22:36PM +0900, Michael Paquier wrote:\n> Looks right to me, so will fix if there are no objections.\n> read_local_xlog_page() uses the replay location when in recovery.\n\nDone this part now.\n--\nMichael", "msg_date": "Tue, 12 May 2020 14:45:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A comment fix" }, { "msg_contents": "At Tue, 12 May 2020 14:45:22 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, May 11, 2020 at 02:22:36PM +0900, Michael Paquier wrote:\n> > Looks right to me, so will fix if there are no objections.\n> > read_local_xlog_page() uses the replay location when in recovery.\n> \n> Done this part now.\n\nThanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 12 May 2020 15:07:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A comment fix" } ]