[ { "msg_contents": "\n\n\n> On Jun 14, 2022, at 19:06, Thomas Munro <thomas.munro@gmail.com> wrote:\n> One difference would be the effect if ICU ever ships a minor library\n> version update that changes the reported collversion.\n\nIf I’m reading it correctly, ICU would not change collation in major versions, as an explicit matter of policy around DUCET stability and versioning.\n\nhttps://unicode.org/reports/tr10/#Stable_DUCET\n\n\n> With some system of symlinks to make it all work with defaults for\n> those who don't care, a libc could have\n> /usr/share/locale/en_US@CLDR34.UTF-8 etc so you could\n> setlocale(LC_COLLATE, \"en_US@CLDR34\"), or something. I suppose they\n> don't want to promise to be able to interpret the old data in future\n> releases, and, as you say, sometimes the changes are in C code, due to\n> bugs or algorithm changes, not the data.\n\nIf I understand correctly, files in /usr/share/locale aren’t enough because those only have the tailoring rules, and the core algorithm and data (before applying locale-specific tweaks) also change between versions. I’m pretty sure glibc works similarly to UCA in this regard (albeit based on ISO 14651 and not CLDR), and the Unicode link above is a good illustration of the default collation rules that underlie the locale-specific tweaks.\n\n-Jeremy\n\nSent from my TI-83", "msg_date": "Tue, 14 Jun 2022 20:40:22 -0400", "msg_from": "Jeremy Schneider <schneider@ardentperf.com>", "msg_from_op": true, "msg_subject": "Re: Collation version tracking for macOS" } ]
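An aside on the thread above: sort order is a property of the active collation data, not of the strings themselves, which is why a change in the collation tables (ICU/CLDR, or glibc's ISO 14651 data) can silently invalidate an existing index. A minimal illustration in plain Python (only the "C" locale is used, since POSIX guarantees it is always available):

```python
import locale
from functools import cmp_to_key

# Under the "C" locale, collation is raw byte order, so all uppercase
# letters sort before all lowercase ones. A CLDR/DUCET-based locale such
# as en_US.UTF-8 would instead interleave cases ("apple" < "Banana"), so
# the *same* strings sort in a different order -- the core problem the
# thread discusses when collation data changes underneath an index.
locale.setlocale(locale.LC_COLLATE, "C")
words = ["apple", "Banana", "cherry"]
print(sorted(words, key=cmp_to_key(locale.strcoll)))  # ['Banana', 'apple', 'cherry']
```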
[ { "msg_contents": "Dear Postgres,\n\nRecently I have been working on grouping sets and found that the last param numGroups of create_groupingsets_path is not used.\nIn create_groupingsets_path we use rollup->numGroups to do cost_agg, so I removed the param numGroups from\ncreate_groupingsets_path.\n\nI generated a diff.patch, which is sent as an attachment.\nI really hope this can be committed to Postgres. Thank you a lot!\n\nBest wishes to you!\nzxuejing", "msg_date": "Wed, 15 Jun 2022 03:33:04 +0000", "msg_from": "XueJing Zhao <zxuejing@vmware.com>", "msg_from_op": true, "msg_subject": "Remove useless param for create_groupingsets_path" }, { "msg_contents": "On Wed, Jun 15, 2022 at 11:33 AM XueJing Zhao <zxuejing@vmware.com> wrote:\n\n> Recently I have been working on grouping sets and found that the last param numGroups of\n> create_groupingsets_path is not used.\n>\n> In create_groupingsets_path we use rollup->numGroups to do cost_agg.\n>\n\nYes indeed. The param 'numGroups' was used originally when create_groupingsets_path() was first\nintroduced, and then all its references\ninside that function were removed and replaced with the numGroups inside\nRollupData in b5635948.\n\n\n> I generated a diff.patch, which is sent as an attachment.\n>\n\nBTW, the patch looks weird to me in that it seems to operate in the inverse\ndirection, i.e. it's adding the param 'numGroups', not removing it.\n\nThanks\nRichard", "msg_date": "Wed, 15 Jun 2022 12:12:00 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless param for create_groupingsets_path" },
{ "msg_contents": "Hi, Richard\nYou are right, the patch was incorrect, and I have generated a patch once more. It is sent as an attachment named new.patch, please check, thanks!\n\nBest regards!\nZxuejing\n\nFrom: Richard Guo <guofenglinux@gmail.com>\nDate: 2022-06-15 12:12\nTo: XueJing Zhao <zxuejing@vmware.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Remove useless param for create_groupingsets_path\n\n\nOn Wed, Jun 15, 2022 at 11:33 AM XueJing Zhao <zxuejing@vmware.com<mailto:zxuejing@vmware.com>> wrote:\nRecently I have been working on grouping sets and found that the last param numGroups of create_groupingsets_path is not used.\nIn create_groupingsets_path we use rollup->numGroups to do cost_agg.\n\nYes indeed. The param 'numGroups' was used originally when create_groupingsets_path() was first\nintroduced, and then all its references\ninside that function were removed and replaced with the numGroups inside\nRollupData in b5635948.\n\nI generated a diff.patch, which is sent as an attachment.\n\nBTW, the patch looks weird to me in that it seems to operate in the inverse\ndirection, i.e. it's adding the param 'numGroups', not removing it.\n\nThanks\nRichard\n\n________________________________", "msg_date": "Wed, 15 Jun 2022 06:03:59 +0000", "msg_from": "XueJing Zhao <zxuejing@vmware.com>", "msg_from_op": true, "msg_subject": "Reply: Remove useless param for create_groupingsets_path" },
{ "msg_contents": "XueJing Zhao <zxuejing@vmware.com> writes:\n> You are right, the patch was incorrect, and I have generated a patch once more. It is sent as an attachment named new.patch, please check, thanks!\n\nLGTM. Pushed, thanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Jul 2022 18:40:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Reply: Remove useless param for create_groupingsets_path" } ]
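The cleanup in the thread above follows a common refactoring pattern: a function parameter becomes dead once the same value travels inside a struct argument. A hypothetical Python sketch (the names mirror, but are not, the real PostgreSQL C signatures; the actual change is in create_groupingsets_path()):

```python
from dataclasses import dataclass

@dataclass
class Rollup:
    # mirrors the idea of RollupData->numGroups carrying the group estimate
    num_groups: float

def create_path_old(rollups, num_groups):
    # 'num_groups' is never read: costing uses each rollup's own estimate,
    # just as create_groupingsets_path() uses rollup->numGroups
    return sum(r.num_groups for r in rollups)

def create_path_new(rollups):
    # identical behavior with the dead parameter removed
    return sum(r.num_groups for r in rollups)

rollups = [Rollup(10.0), Rollup(2.5)]
assert create_path_old(rollups, 999.0) == create_path_new(rollups) == 12.5
```

The assertion shows the removed parameter had no effect on the result, which is exactly why the deletion is safe.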
[ { "msg_contents": "Hi all!\n\nThe motivation behind this is that incrementally building up a GiST \nindex for certain input data can create a terrible tree structure.\nFurthermore, exclusion constraints are commonly implemented using GiST \nindices and for that use case, data is mostly orderable.\n\nSorting the data before inserting it into the index results in a much \nbetter index structure, leading to significant performance improvements.\n\nTesting was done using the following setup, with about 50 million rows:\n\n CREATE EXTENSION btree_gist;\n CREATE TABLE t (id uuid, block_range int4range);\n CREATE INDEX ON t USING GIST (id, block_range);\n COPY t FROM '..' DELIMITER ',' CSV HEADER;\n\nusing\n\n SELECT * FROM t WHERE id = '..' AND block_range && '..'\n\nas the test query, using an unpatched instance and one with the patch applied.\n\nSome stats for fetching 10,000 random rows using the query above,\n100 iterations to get good averages.\n\nThe benchmarking was done on an unpatched instance compiled using the \nexact same options as with the patch applied.\n[ Results are noted in an unpatched -> patched fashion. ]\n\nThe first set of results is after the initial CREATE TABLE, CREATE INDEX \nand a COPY to the table, thereby incrementally building the index.\n\nShared Hit Blocks (average): 110.97 -> 78.58\nShared Read Blocks (average): 58.90 -> 47.42\nExecution Time (average): 1.10 -> 0.83 ms\nI/O Read Time (average): 0.19 -> 0.15 ms\n\nAfter a REINDEX on the table, the results improve even more:\n\nShared Hit Blocks (average): 84.24 -> 8.54\nShared Read Blocks (average): 49.89 -> 0.74\nExecution Time (average): 0.84 -> 0.065 ms\nI/O Read Time (average): 0.16 -> 0.004 ms\n\nAdditionally, the time a REINDEX takes also improves significantly:\n\n 672407.584 ms (11:12.408) -> 130670.232 ms (02:10.670)\n\nMost of the sortsupport for btree_gist was implemented by re-using \nalready existing infrastructure. 
For the few remaining types (bit, bool, \ncash, enum, interval, macaddress8 and time) I manually implemented them \ndirectly in btree_gist.\nIt might make sense to move them into the backend for uniformity, but I \nwanted to get other opinions on that first.\n\n`make check-world` reports no regressions.\n\nAttached below, besides the patch, are also two scripts for benchmarking.\n\n`bench-gist.py` to benchmark the actual patch, example usage of this \nwould be e.g. `./bench-gist.py -o results.csv public.table`. This \nexpects a local instance with no authentication and default `postgres` \nuser. The port can be set using the `--port` option.\n\n`plot.py` prints average values (as used above) and creates boxplots for \neach statistic from the result files produced with `bench-gist.py`. \nDepends on matplotlib and pandas.\n\nAdditionally, if needed, the sample dataset used to benchmark this is \navailable to independently verify the results [1].\n\nThanks,\nChristoph Heiss\n\n---\n\n[1] https://drive.google.com/file/d/1SKRiUYd78_zl7CeD8pLDoggzCCh0wj39", "msg_date": "Wed, 15 Jun 2022 12:45:07 +0200", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "[PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Hi Christoph!\n\n> On 15 Jun 2022, at 15:45, Christoph Heiss <christoph.heiss@cybertec.at> wrote:\n> \n> By sorting the data before inserting it into the index results in a much better index structure, leading to significant performance improvements.\n\nHere's my version of the very similar idea [0]. 
It lacks range types support.\nOn a quick glance your version lacks support for abbreviated sort, so I think benchmarks can be pushed even further :)\nLet's merge our efforts and create a combined patch?\n\nPlease, create a new entry for the patch on Commitfest.\n\nThank you!\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/37/2824/\n\n", "msg_date": "Wed, 15 Jun 2022 19:39:34 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "On Wed, Jun 15, 2022 at 3:45 AM Christoph Heiss\n<christoph.heiss@cybertec.at> wrote:\n> `make check-world` reports no regressions.\n\ncfbot is reporting a crash in contrib/btree_gist:\n\n https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/38/3686\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 5 Jul 2022 11:13:41 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3686/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 11:01:00 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Hi!\n\nSorry for the long delay.\n\nThis fixes the crashes; now all tests run fine w/ and w/o debug \n
That shouldn't even have slipped through as such.\n\nNotable changes from v1:\n- gbt_enum_sortsupport() now passes on fcinfo->flinfo\n enum_cmp_internal() needs a place to cache the typcache entry.\n- inet sortsupport now uses network_cmp() directly\n\nThanks,\nChristoph Heiss", "msg_date": "Wed, 31 Aug 2022 21:15:40 +0200", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Hi,\n\nOn 2022-08-31 21:15:40 +0200, Christoph Heiss wrote:\n> Notable changes from v1:\n> - gbt_enum_sortsupport() now passes on fcinfo->flinfo\n> enum_cmp_internal() needs a place to cache the typcache entry.\n> - inet sortsupport now uses network_cmp() directly\n\nUpdated the patch to add the minimal change for meson compat.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 2 Oct 2022 00:23:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "On 2022-10-02 00:23:32 -0700, Andres Freund wrote:\n> Updated the patch to add the minimal change for meson compat.\n\nNow I made the same mistake of not adding the change... Clearly I need to stop\nfor tonight. 
Either way, here's the hopefully correct change.", "msg_date": "Sun, 2 Oct 2022 00:29:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Hi,\n\nNo deep code review yet, but CF is approaching its end and i didn't\nhave time to look at this earlier :/ \n\nBelow are some things i've tested so far.\n\nAm Mittwoch, dem 15.06.2022 um 12:45 +0200 schrieb Christoph Heiss:\n\n\n> Testing was done using following setup, with about 50 million rows:\n> \n>     CREATE EXTENSION btree_gist;\n>     CREATE TABLE t (id uuid, block_range int4range);\n>     CREATE INDEX ON before USING GIST (id, block_range);\n>     COPY t FROM '..' DELIMITER ',' CSV HEADER;\n> \n> using\n> \n>     SELECT * FROM t WHERE id = '..' AND block_range && '..'\n> \n> as test query, using a unpatched instance and one with the patch\n> applied.\n> \n> Some stats for fetching 10,000 random rows using the query above,\n> 100 iterations to get good averages.\n> \n\nHere are my results with repeating this:\n\nHEAD:\n-- token index (buffering=auto)\nCREATE INDEX Time: 700213,110 ms (11:40,213)\n\nHEAD patched:\n\n-- token index (buffering=auto)\nCREATE INDEX Time: 136229,400 ms (02:16,229)\n\nSo index creation speed on the test set (table filled with the tokens\nand then creating the index afterwards) gets a lot of speedup with this\npatch and default buffering strategy.\n\n> The benchmarking was done on a unpatched instance compiled using the \n> > exact same options as with the patch applied.\n> > [ Results are noted in a unpatched -> patched fashion. 
]\n> > \n> > First set of results are after the initial CREATE TABLE, CREATE\n> INDEX\n> > and a COPY to the table, thereby incrementally building the index.\n> > \n> > Shared Hit Blocks (average): 110.97 -> 78.58\n> > Shared Read Blocks (average): 58.90 -> 47.42\n> > Execution Time (average): 1.10 -> 0.83 ms\n> > I/O Read Time (average): 0.19 -> 0.15 ms\n\nI've changed this a little and did the following:\n\n CREATE EXTENSION btree_gist;\n CREATE TABLE t (id uuid, block_range int4range);\n COPY t FROM '..' DELIMITER ',' CSV HEADER;\n CREATE INDEX ON t USING GIST (id, block_range);\n\nSo creating the index _after_ having loaded the tokens.\nMy configuration was:\n\nshared_buffers = 4G\nmax_wal_size = 6G\neffective_cache_size = 4g # (default, index fits)\nmaintenance_work_mem = 1G\n\n\nHere are my numbers from the attached benchmark script \n\nHEAD -> HEAD patched:\n\nShared Hit Blocks (avg) : 76.81 -> 9.17\nShared Read Blocks (avg): 0.43 -> 0.11\nExecution Time (avg) : 0.40 -> 0.05\nIO Read Time (avg) : 0.001 -> 0.0007\n\nSo with these settings i see an improvement with the provided test set.\nSince this patch adds sortsupport for all other existing opclasses, i\nthought to give it a try with another test set. 
What i did was to adapt\nthe benchmark script (see attached) to use the \"pgbench_accounts\" table,\nwhich i changed to have a btree_gist\nindex on column \"aid\" instead of the primary key.\n\nI let pgbench fill its tables with scale = 1000, dropped the primary\nkey, and created the btree_gist index on \"aid\" with the default buffering strategy:\n\npgbench -s 1000 -i bernd\n\nALTER TABLE pgbench_accounts DROP CONSTRAINT pgbench_accounts_pkey ;\nCREATE INDEX ON pgbench_accounts USING gist(aid);\n\nRan the benchmark script bench-gist-pgbench_accounts.py:\n\nThe numbers are:\n\nHEAD -> HEAD patched\n\nShared Hit Blocks (avg) : 4.85 -> 8.75\nShared Read Blocks (avg): 0.14 -> 0.17\nExecution Time (avg) : 0.01 -> 0.05\nIO Read Time (avg) : 0.0003 -> 0.0009\n\nSo numbers got worse here. This shows up as a much worse outcome when using pgbench\nagainst that modified table.\n\nRunning\n\npgbench -s 1000 -c 16 -j 16 -S -Mprepared -T 300 \n\non my workstation at least 3 times gives me the following numbers:\n\nHEAD:\n\ntps = 215338.784398 (without initial connection time)\ntps = 212826.513727 (without initial connection time)\ntps = 212102.857891 (without initial connection time)\n\nHEAD patched:\n\ntps = 126487.796716 (without initial connection time)\ntps = 125076.391528 (without initial connection time)\ntps = 124538.946388 (without initial connection time)\n\n\nSo this doesn't look good. While this patch gets a real improvement for\nthe provided tokens, it makes performance for at least int4 on this\ntest worse. 
Though the picture changes again if you build the index\nbuffered:\n\ntps = 198409.248911 (without initial connection time)\ntps = 194431.827394 (without initial connection time)\ntps = 195657.532281 (without initial connection time)\n\nwhich is again close to current HEAD (i have no idea why it is even\n*that* slower, since \"buffered=on\" shouldn't employ sortsupport, no?).\nOf course, built time for the index in this case is much slower again:\n\n-- pgbench_accounts index (buffered)\nCREATE INDEX Time: 900912,924 ms (15:00,913)\n\nSo while providing a huge improvement on index creation speed it's\nsometimes still required to carefully check the index quality.\n\n[...]\n\n\n> Most of the sortsupport for btree_gist was implemented by re-using \n> already existing infrastructure. For the few remaining types (bit,\n> bool, \n> cash, enum, interval, macaddress8 and time) I manually implemented\n> them \n> directly in btree_gist.\n> It might make sense to move them into the backend for uniformity, but\n> I \n> wanted to get other opinions on that first.\n\nHmm i'd say we leave them in the contrib module until they are required\nsomewhere else, too or make a separate patch for them? 
Do we have plans\nto have such requirement in the backend already?\n\nAttached is a rebased patch against current HEAD.\n\n\nThanks\n\n\tBernd", "msg_date": "Wed, 30 Nov 2022 18:25:25 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "On Thu, Dec 1, 2022 at 1:27 AM Bernd Helmle <mailings@oopsware.de> wrote:\n>\n> Hi,\n>\n> No deep code review yet, but CF is approaching its end and i didn't\n> have time to look at this earlier :/\n>\n> Below are some things i've tested so far.\n>\n> Am Mittwoch, dem 15.06.2022 um 12:45 +0200 schrieb Christoph Heiss:\n>\n>\n> > Testing was done using following setup, with about 50 million rows:\n> >\n> > CREATE EXTENSION btree_gist;\n> > CREATE TABLE t (id uuid, block_range int4range);\n> > CREATE INDEX ON before USING GIST (id, block_range);\n> > COPY t FROM '..' DELIMITER ',' CSV HEADER;\n> >\n> > using\n> >\n> > SELECT * FROM t WHERE id = '..' AND block_range && '..'\n> >\n> > as test query, using a unpatched instance and one with the patch\n> > applied.\n> >\n> > Some stats for fetching 10,000 random rows using the query above,\n> > 100 iterations to get good averages.\n> >\n>\n> Here are my results with repeating this:\n>\n> HEAD:\n> -- token index (buffering=auto)\n> CREATE INDEX Time: 700213,110 ms (11:40,213)\n>\n> HEAD patched:\n>\n> -- token index (buffering=auto)\n> CREATE INDEX Time: 136229,400 ms (02:16,229)\n>\n> So index creation speed on the test set (table filled with the tokens\n> and then creating the index afterwards) gets a lot of speedup with this\n> patch and default buffering strategy.\n>\n> > The benchmarking was done on a unpatched instance compiled using the\n> > > exact same options as with the patch applied.\n> > > [ Results are noted in a unpatched -> patched fashion. 
]\n> > >\n> > > First set of results are after the initial CREATE TABLE, CREATE\n> > INDEX\n> > > and a COPY to the table, thereby incrementally building the index.\n> > >\n> > > Shared Hit Blocks (average): 110.97 -> 78.58\n> > > Shared Read Blocks (average): 58.90 -> 47.42\n> > > Execution Time (average): 1.10 -> 0.83 ms\n> > > I/O Read Time (average): 0.19 -> 0.15 ms\n>\n> I've changed this a little and did the following:\n>\n> CREATE EXTENSION btree_gist;\n> CREATE TABLE t (id uuid, block_range int4range);\n> COPY t FROM '..' DELIMITER ',' CSV HEADER;\n> CREATE INDEX ON before USING GIST (id, block_range);\n>\n> So creating the index _after_ having loaded the tokens.\n> My configuration was:\n>\n> shared_buffers = 4G\n> max_wal_size = 6G\n> effective_cache_size = 4g # (default, index fits)\n> maintenance_work_mem = 1G\n>\n>\n> Here are my numbers from the attached benchmark script\n>\n> HEAD -> HEAD patched:\n>\n> Shared Hit Blocks (avg) : 76.81 -> 9.17\n> Shared Read Blocks (avg): 0.43 -> 0.11\n> Execution Time (avg) : 0.40 -> 0.05\n> IO Read Time (avg) : 0.001 -> 0.0007\n>\n> So with these settings i see an improvement with the provided test set.\n> Since this patches adds sortsupport for all other existing opclasses, i\n> thought to give it a try with another test set. 
What i did was to adapt\n> the benchmark script (see attached) to use the \"pgbench_accounts\" table\n> which i changed to instead using the primary key to have a btree_gist\n> index on column \"aid\".\n>\n> I let pgbench fill its tables with scale = 1000, dropped the primary\n> key, create the btree_gist on \"aid\" with default buffering strategy:\n>\n> pgbench -s 1000 -i bernd\n>\n> ALTER TABLE pgbench_accounts DROP CONSTRAINT pgbench_accounts_pkey ;\n> CREATE INDEX ON pgbench_accounts USING gist(aid);\n>\n> Ran the benchmark script bench-gist-pgbench_accounts.py:\n>\n\n \\d pgbench_accounts\n Table \"public.pgbench_accounts\"\n Column | Type | Collation | Nullable | Default\n----------+---------------+-----------+----------+---------\n aid | integer | | not null |\n bid | integer | | |\n abalance | integer | | |\n filler | character(84) | | |\nIndexes:\n \"pgbench_accounts_pkey\" PRIMARY KEY, btree (aid)\n\nyou do `CREATE INDEX ON pgbench_accounts USING gist(aid);`\nbut the original patch didn't change contrib/btree_gist/btree_int4.c\nSo I doubt your benchmark is related to the original patch.\nor maybe I missed something.\n\nalso per doc:\n`\nsortsupport\nReturns a comparator function to sort data in a way that preserves\nlocality. It is used by CREATE INDEX and REINDEX commands. 
The quality\nof the created index depends on how well the sort order determined by\nthe comparator function preserves locality of the inputs.\n`\nfrom the doc, add sortsupport function will only influence index build time?\n\n+/*\n+ * GiST sortsupport comparator for ranges.\n+ *\n+ * Operates solely on the lower bounds of the ranges, comparing them using\n+ * range_cmp_bounds().\n+ * Empty ranges are sorted before non-empty ones.\n+ */\n+static int\n+range_gist_cmp(Datum a, Datum b, SortSupport ssup)\n+{\n+ RangeType *range_a = DatumGetRangeTypeP(a);\n+ RangeType *range_b = DatumGetRangeTypeP(b);\n+ TypeCacheEntry *typcache = ssup->ssup_extra;\n+ RangeBound lower1,\n+ lower2;\n+ RangeBound upper1,\n+ upper2;\n+ bool empty1,\n+ empty2;\n+ int result;\n+\n+ if (typcache == NULL) {\n+ Assert(RangeTypeGetOid(range_a) == RangeTypeGetOid(range_b));\n+ typcache = lookup_type_cache(RangeTypeGetOid(range_a), TYPECACHE_RANGE_INFO);\n+\n+ /*\n+ * Cache the range info between calls to avoid having to call\n+ * lookup_type_cache() for each comparison.\n+ */\n+ ssup->ssup_extra = typcache;\n+ }\n+\n+ range_deserialize(typcache, range_a, &lower1, &upper1, &empty1);\n+ range_deserialize(typcache, range_b, &lower2, &upper2, &empty2);\n+\n+ /* For b-tree use, empty ranges sort before all else */\n+ if (empty1 && empty2)\n+ result = 0;\n+ else if (empty1)\n+ result = -1;\n+ else if (empty2)\n+ result = 1;\n+ else\n+ result = range_cmp_bounds(typcache, &lower1, &lower2);\n+\n+ if ((Datum) range_a != a)\n+ pfree(range_a);\n+\n+ if ((Datum) range_b != b)\n+ pfree(range_b);\n+\n+ return result;\n+}\n\nper https://www.postgresql.org/docs/current/gist-extensibility.html\nQUOTE:\nAll the GiST support methods are normally called in short-lived memory\ncontexts; that is, CurrentMemoryContext will get reset after each\ntuple is processed. It is therefore not very important to worry about\npfree'ing everything you palloc. 
However, in some cases it's useful\nfor a support method to\nENDOF_QUOTE\n\nso removing the following part should be OK.\n+ if ((Datum) range_a != a)\n+ pfree(range_a);\n+\n+ if ((Datum) range_b != b)\n+ pfree(range_b);\n\nComparison solely on the lower bounds looks strange to me:\nif the lower bounds are the same, the upper bounds should be compared, so that\nthe range_gist_cmp function is consistent with the function range_compare.\nSo make the following change:\n\n+ else\n+ result = range_cmp_bounds(typcache, &lower1, &lower2);\nto\n`\nelse\n{\nresult = range_cmp_bounds(typcache, &lower1, &lower2);\nif (result == 0)\nresult = range_cmp_bounds(typcache, &upper1, &upper2);\n}\n`\n\nShould the functions in contrib/btree_gist/btree_gist--1.7--1.8.sql be declared\nas strict? (I am not sure)\nOther than that, the whole patch looks good.\n\n\n", "msg_date": "Wed, 10 Jan 2024 08:00:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "On Wed, Jan 10, 2024 at 8:00 AM jian he <jian.universality@gmail.com> wrote:\n>\n> `\n> from the doc, add sortsupport function will only influence index build time?\n>\n> +/*\n> + * GiST sortsupport comparator for ranges.\n> + *\n> + * Operates solely on the lower bounds of the ranges, comparing them using\n> + * range_cmp_bounds().\n> + * Empty ranges are sorted before non-empty ones.\n> + */\n> +static int\n> +range_gist_cmp(Datum a, Datum b, SortSupport ssup)\n> +{\n> + RangeType *range_a = DatumGetRangeTypeP(a);\n> + RangeType *range_b = DatumGetRangeTypeP(b);\n> + TypeCacheEntry *typcache = ssup->ssup_extra;\n> + RangeBound lower1,\n> + lower2;\n> + RangeBound upper1,\n> + upper2;\n> + bool empty1,\n> + empty2;\n> + int result;\n> +\n> + if (typcache == NULL) {\n> + Assert(RangeTypeGetOid(range_a) == RangeTypeGetOid(range_b));\n> + typcache = lookup_type_cache(RangeTypeGetOid(range_a), TYPECACHE_RANGE_INFO);\n> +\n> + /*\n> + * Cache the range info 
between calls to avoid having to call\n> + * lookup_type_cache() for each comparison.\n> + */\n> + ssup->ssup_extra = typcache;\n> + }\n> +\n> + range_deserialize(typcache, range_a, &lower1, &upper1, &empty1);\n> + range_deserialize(typcache, range_b, &lower2, &upper2, &empty2);\n> +\n> + /* For b-tree use, empty ranges sort before all else */\n> + if (empty1 && empty2)\n> + result = 0;\n> + else if (empty1)\n> + result = -1;\n> + else if (empty2)\n> + result = 1;\n> + else\n> + result = range_cmp_bounds(typcache, &lower1, &lower2);\n> +\n> + if ((Datum) range_a != a)\n> + pfree(range_a);\n> +\n> + if ((Datum) range_b != b)\n> + pfree(range_b);\n> +\n> + return result;\n> +}\n>\n> per https://www.postgresql.org/docs/current/gist-extensibility.html\n> QUOTE:\n> All the GiST support methods are normally called in short-lived memory\n> contexts; that is, CurrentMemoryContext will get reset after each\n> tuple is processed. It is therefore not very important to worry about\n> pfree'ing everything you palloc. However, in some cases it's useful\n> for a support method to\n> ENDOF_QUOTE\n>\n> so removing the following part should be OK.\n> + if ((Datum) range_a != a)\n> + pfree(range_a);\n> +\n> + if ((Datum) range_b != b)\n> + pfree(range_b);\n>\n> comparison solely on the lower bounds looks strange to me.\n> if lower bound is the same, then compare upper bound, so the\n> range_gist_cmp function is consistent with function range_compare.\n> so following change:\n>\n> + else\n> + result = range_cmp_bounds(typcache, &lower1, &lower2);\n> to\n> `\n> else\n> {\n> result = range_cmp_bounds(typcache, &lower1, &lower2);\n> if (result == 0)\n> result = range_cmp_bounds(typcache, &upper1, &upper2);\n> }\n> `\n>\n> does contrib/btree_gist/btree_gist--1.7--1.8.sql function be declared\n> as strict ? 
(I am not sure)\n> other than that, the whole patch looks good.\n\nThe original author's email address (christoph.heiss@cybertec.at) bounces\n(Address not found),\nso I don't include it.\n\nI split the original author's patch into two:\n1. Add GiST sortsupport function for all the btree-gist module data\ntypes except the anyrange data type (which is actually not in this\nmodule)\n2. Add GiST sortsupport function for the anyrange data type.\n\nWhat changed compared to the original patch:\n1. The original patch missed some operator classes for the data\ntypes in the btree-gist module, so I added them.\nSortsupport functions are now added for all the following data types in btree-gist:\n\nint2,int4,int8,float4,float8,numeric\ntimestamp with time zone,\ntimestamp without time zone, time with time zone, time without time zone, date\ninterval, oid, money, char\nvarchar, text, bytea, bit, varbit\nmacaddr, macaddr8, inet, cidr, uuid, bool, enum\n\n2. range_gist_cmp: the gist range sortsupport function. It looks like\nrange_cmp, but the range typcache is cached,\nso we don't need to repeatedly call lookup_type_cache.\nRefactor: as mentioned above, if the range lower bound is the same\nthen compare the upper bound.\nI also refactored the comment.\n\nWhat I am confused about:\nIn fmgr.h\n\n/*\n * Support for cleaning up detoasted copies of inputs. This must only\n * be used for pass-by-ref datatypes, and normally would only be used\n * for toastable types. 
If the given pointer is different from the\n * original argument, assume it's a palloc'd detoasted copy, and pfree it.\n * NOTE: most functions on toastable types do not have to worry about this,\n * but we currently require that support functions for indexes not leak\n * memory.\n */\n#define PG_FREE_IF_COPY(ptr,n) \\\ndo { \\\nif ((Pointer) (ptr) != PG_GETARG_POINTER(n)) \\\npfree(ptr); \\\n} while (0)\n\nbut the doc (https://www.postgresql.org/docs/current/gist-extensibility.html)\n says:\nAll the GiST support methods are normally called in short-lived memory\ncontexts; that is, CurrentMemoryContext will get reset after each\ntuple is processed. It is therefore not very important to worry about\npfree'ing everything you palloc.\nENDOF_QUOTE\n\nso I am not sure in range_gist_cmp, we need the following:\n`\nif ((Datum) range_a != a)\npfree(range_a);\nif ((Datum) range_b != b)\npfree(range_b);\n`", "msg_date": "Wed, 10 Jan 2024 22:18:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "\n\n> On 10 Jan 2024, at 19:18, jian he <jian.universality@gmail.com> wrote:\n> \n> what I am confused:\n> In fmgr.h\n> \n> /*\n> * Support for cleaning up detoasted copies of inputs. This must only\n> * be used for pass-by-ref datatypes, and normally would only be used\n> * for toastable types. 
If the given pointer is different from the\n> * original argument, assume it's a palloc'd detoasted copy, and pfree it.\n> * NOTE: most functions on toastable types do not have to worry about this,\n> * but we currently require that support functions for indexes not leak\n> * memory.\n> */\n> #define PG_FREE_IF_COPY(ptr,n) \\\n> do { \\\n> if ((Pointer) (ptr) != PG_GETARG_POINTER(n)) \\\n> pfree(ptr); \\\n> } while (0)\n> \n> but the doc (https://www.postgresql.org/docs/current/gist-extensibility.html)\n> says:\n> All the GiST support methods are normally called in short-lived memory\n> contexts; that is, CurrentMemoryContext will get reset after each\n> tuple is processed. It is therefore not very important to worry about\n> pfree'ing everything you palloc.\n> ENDOF_QUOTE\n> \n> so I am not sure in range_gist_cmp, we need the following:\n> `\n> if ((Datum) range_a != a)\n> pfree(range_a);\n> if ((Datum) range_b != b)\n> pfree(range_b);\n> `\n\nI think GiST sortsupport comments are more relevant, so there's no need for this pfree()s.\n\nAlso, please check other thread, maybe you will find some useful code there [0,1]. It was committed[2] once, but reverted. Please make sure that corrections made there are taken into account in your patch.\n\nThanks for working on this!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/31/2824/\n[1] https://www.postgresql.org/message-id/flat/285041639646332%40sas1-bf93f9015d57.qloud-c.yandex.net#0e5b4ed57d861d38a3d836c9ec09c0c5\n[2] https://github.com/postgres/postgres/commit/9f984ba6d23dc6eecebf479ab1d3f2e550a4e9be\n\n\n\n", "msg_date": "Wed, 10 Jan 2024 20:13:12 +0500", "msg_from": "\"Andrey M. 
Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Am Mittwoch, dem 10.01.2024 um 08:00 +0800 schrieb jian he:\n\n\n> you do  `CREATE INDEX ON pgbench_accounts USING gist(aid);`\n> but the original patch didn't change contrib/btree_gist/btree_int4.c\n> So I doubt your benchmark is related to the original patch.\n> or maybe I missed something.\n> \n\nThe patch originally does this:\n\n+ALTER OPERATOR FAMILY gist_int4_ops USING gist ADD\n+ FUNCTION 11 (int4, int4) btint4sortsupport (internal) ;\n\nThis adds sortsupport function to int4 as well. We reuse existing\nbtint4sortsupport() function, so no need to change btree_int4.c.\n\n> also per doc:\n> `\n> sortsupport\n> Returns a comparator function to sort data in a way that preserves\n> locality. It is used by CREATE INDEX and REINDEX commands. The\n> quality\n> of the created index depends on how well the sort order determined by\n> the comparator function preserves locality of the inputs.\n> `\n> from the doc, add sortsupport function will only influence index\n> build time?\n> \n\nThats the point of this patch. Though it influences the index quality\nin a way which seems to cause the measured performance regression\nupthread.\n\n> \n> per https://www.postgresql.org/docs/current/gist-extensibility.html\n> QUOTE:\n> All the GiST support methods are normally called in short-lived\n> memory\n> contexts; that is, CurrentMemoryContext will get reset after each\n> tuple is processed. It is therefore not very important to worry about\n> pfree'ing everything you palloc. However, in some cases it's useful\n> for a support method to\n> ENDOF_QUOTE\n> \n> so removing the following part should be OK.\n> + if ((Datum) range_a != a)\n> + pfree(range_a);\n> +\n> + if ((Datum) range_b != b)\n> + pfree(range_b);\n> \n\nProbably, i think we get a different range objects in case of\ndetoasting in this case. 
\n\n> comparison solely on the lower bounds looks strange to me.\n> if lower bound is the same, then compare upper bound, so the\n> range_gist_cmp function is consistent with function range_compare.\n> so following change:\n> \n> + else\n> + result = range_cmp_bounds(typcache, &lower1, &lower2);\n> to\n> `\n> else\n> {\n> result = range_cmp_bounds(typcache, &lower1, &lower2);\n> if (result == 0)\n> result = range_cmp_bounds(typcache, &upper1, &upper2);\n> }\n> `\n> \n> does contrib/btree_gist/btree_gist--1.7--1.8.sql function be declared\n> as strict ? (I am not sure)\n> other than that, the whole patch looks good.\n> \n> \n\nThat's something surely to consider.\n\n\n\n", "msg_date": "Wed, 10 Jan 2024 17:35:37 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Am Mittwoch, dem 10.01.2024 um 20:13 +0500 schrieb Andrey M. Borodin:\n> I think GiST sortsupport comments are more relevant, so there's no\n> need for this pfree()s.\n> \n> Also, please check other thread, maybe you will find some useful code\n> there [0,1]. It was committed[2] once, but reverted. Please make sure\n> that corrections made there are taken into account in your patch.\n> \n\nAt least, i believe we have the same problem described here (many\nthanks Andrey for the links, i wasn't aware about this discussion):\n\nhttps://www.postgresql.org/message-id/98b34b51-a6db-acc4-1bcf-a29caf69bbc7%40iki.fi\n\n> Thanks for working on this!\n\nAbsolutely. This patch needs input ... 
\n\n\n\n\n", "msg_date": "Wed, 10 Jan 2024 18:04:32 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "On Wed, 10 Jan 2024 at 19:49, jian he <jian.universality@gmail.com> wrote:\n>\n> On Wed, Jan 10, 2024 at 8:00 AM jian he <jian.universality@gmail.com> wrote:\n> >\n> > `\n> > from the doc, add sortsupport function will only influence index build time?\n> >\n> > +/*\n> > + * GiST sortsupport comparator for ranges.\n> > + *\n> > + * Operates solely on the lower bounds of the ranges, comparing them using\n> > + * range_cmp_bounds().\n> > + * Empty ranges are sorted before non-empty ones.\n> > + */\n> > +static int\n> > +range_gist_cmp(Datum a, Datum b, SortSupport ssup)\n> > +{\n> > + RangeType *range_a = DatumGetRangeTypeP(a);\n> > + RangeType *range_b = DatumGetRangeTypeP(b);\n> > + TypeCacheEntry *typcache = ssup->ssup_extra;\n> > + RangeBound lower1,\n> > + lower2;\n> > + RangeBound upper1,\n> > + upper2;\n> > + bool empty1,\n> > + empty2;\n> > + int result;\n> > +\n> > + if (typcache == NULL) {\n> > + Assert(RangeTypeGetOid(range_a) == RangeTypeGetOid(range_b));\n> > + typcache = lookup_type_cache(RangeTypeGetOid(range_a), TYPECACHE_RANGE_INFO);\n> > +\n> > + /*\n> > + * Cache the range info between calls to avoid having to call\n> > + * lookup_type_cache() for each comparison.\n> > + */\n> > + ssup->ssup_extra = typcache;\n> > + }\n> > +\n> > + range_deserialize(typcache, range_a, &lower1, &upper1, &empty1);\n> > + range_deserialize(typcache, range_b, &lower2, &upper2, &empty2);\n> > +\n> > + /* For b-tree use, empty ranges sort before all else */\n> > + if (empty1 && empty2)\n> > + result = 0;\n> > + else if (empty1)\n> > + result = -1;\n> > + else if (empty2)\n> > + result = 1;\n> > + else\n> > + result = range_cmp_bounds(typcache, &lower1, &lower2);\n> > +\n> > + if ((Datum) range_a != a)\n> > + pfree(range_a);\n> > +\n> > + if 
((Datum) range_b != b)\n> > + pfree(range_b);\n> > +\n> > + return result;\n> > +}\n> >\n> > per https://www.postgresql.org/docs/current/gist-extensibility.html\n> > QUOTE:\n> > All the GiST support methods are normally called in short-lived memory\n> > contexts; that is, CurrentMemoryContext will get reset after each\n> > tuple is processed. It is therefore not very important to worry about\n> > pfree'ing everything you palloc. However, in some cases it's useful\n> > for a support method to\n> > ENDOF_QUOTE\n> >\n> > so removing the following part should be OK.\n> > + if ((Datum) range_a != a)\n> > + pfree(range_a);\n> > +\n> > + if ((Datum) range_b != b)\n> > + pfree(range_b);\n> >\n> > comparison solely on the lower bounds looks strange to me.\n> > if lower bound is the same, then compare upper bound, so the\n> > range_gist_cmp function is consistent with function range_compare.\n> > so following change:\n> >\n> > + else\n> > + result = range_cmp_bounds(typcache, &lower1, &lower2);\n> > to\n> > `\n> > else\n> > {\n> > result = range_cmp_bounds(typcache, &lower1, &lower2);\n> > if (result == 0)\n> > result = range_cmp_bounds(typcache, &upper1, &upper2);\n> > }\n> > `\n> >\n> > does contrib/btree_gist/btree_gist--1.7--1.8.sql function be declared\n> > as strict ? (I am not sure)\n> > other than that, the whole patch looks good.\n>\n> the original author email address (christoph.heiss@cybertec.at)\n> Address not found.\n> so I don't include it.\n>\n> I split the original author's patch into 2.\n> 1. Add GiST sortsupport function for all the btree-gist module data\n> types except anyrange data type (which actually does not in this\n> module)\n> 2. 
Add GiST sortsupport function for anyrange data type.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n7014c9a4bba2d1b67d60687afb5b2091c1d07f73 ===\n=== applying patch\n./v5-0001-Add-GIST-sortsupport-function-for-all-the-btree-g.patch\npatching file contrib/btree_gist/Makefile\nHunk #1 FAILED at 33.\n1 out of 1 hunk FAILED -- saving rejects to file contrib/btree_gist/Makefile.rej\n...\nThe next patch would create the file\ncontrib/btree_gist/btree_gist--1.7--1.8.sql,\nwhich already exists! Applying it anyway.\npatching file contrib/btree_gist/btree_gist--1.7--1.8.sql\nHunk #1 FAILED at 1.\n1 out of 1 hunk FAILED -- saving rejects to file\ncontrib/btree_gist/btree_gist--1.7--1.8.sql.rej\npatching file contrib/btree_gist/btree_gist.control\nHunk #1 FAILED at 1.\n1 out of 1 hunk FAILED -- saving rejects to file\ncontrib/btree_gist/btree_gist.control.rej\n...\npatching file contrib/btree_gist/meson.build\nHunk #1 FAILED at 50.\n1 out of 1 hunk FAILED -- saving rejects to file\ncontrib/btree_gist/meson.build.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_3686.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 26 Jan 2024 18:31:12 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Am Freitag, dem 26.01.2024 um 18:31 +0530 schrieb vignesh C:\n> CFBot shows that the patch does not apply anymore as in [1]:\n> === Applying patches on top of PostgreSQL commit ID\n\nI've started working on it and planning to submit a polished patch for\nthe upcoming CF.\n\n\n\n\n", "msg_date": "Fri, 26 Jan 2024 19:22:28 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Am Mittwoch, dem 10.01.2024 um 22:18 +0800 schrieb jian he:\n> 
\n> I split the original author's patch into 2.\n> 1. Add GiST sortsupport function for all the btree-gist module data\n> types except anyrange data type (which actually does not in this\n> module)\n> 2. Add GiST sortsupport function for anyrange data type.\n> \n\nPlease find attached a new version of this patch set with the following\nchanges/adjustments:\n\n- Rebased to current master\n- Heavily reworked *_cmp() functions to properly\ndecode GPT_VARKEY and GBT_KEY input.\n\nFor some datatypes the btree comparison functions were reused and the\ninput arguments not properly handled. This patch adds dedicated\nbtree_gist sortsupport comparison methods for all datatypes.\n\nThere was another patch from Andrey Borodin (thanks again for the hint)\nand a deeper review done by Heikki in [1]. I've incorporated Heikkis\nfindings in this patch, too.\n\n[...]\n\nI've also updated the btree_gist documentation to reflect the default\nsorted built strategy this patch introduces now.\n\n\nAdditionally i did some benchmarks again on this new version on the\npatch. Still, index build speed improvement is quite impressive on the\ndataset originally provided by Christoph Heiss (since its not available\nanymore i've uploaded it here [2] again):\n\nHEAD\n(Index was built with default buffering setting)\n---------------------\nREINDEX (s) 4809\nCREATE INDEX (s) 4920\n\nbtree_gist sortsupport\n----------------------\nREINDEX (s)\t 573\nCREATE INDEX (s) 578\n\nI created another pgbench based custom script to measure the single\ncore speed of the lookup query of the bench-gist.py script. 
This looks\nlike this:\n\ninit.sql\n--------\nBEGIN;\n\nDROP TABLE IF EXISTS test_dataset;\nCREATE TABLE test_dataset(keyid integer not null, id text not null,\nblock_range int4range);\nCREATE TEMP SEQUENCE testset_seq;\nINSERT INTO test_dataset SELECT nextval('testset_seq'), id, block_range\nFROM test ORDER BY random() LIMIT 10000;\nCREATE UNIQUE INDEX ON test_dataset(keyid);\n\nCOMMIT;\n\nbench.pgbench\n-------------\n\n\\set keyid random(1, 10000)\nSELECT id, block_range FROM test_dataset WHERE keyid = :keyid \\gset\nSELECT id, block_range FROM test WHERE id = ':id' AND block_range &&\n':block_range';\n\nRun with:\n\nfor i in `seq 1 3`; do psql -qXf init.sql && pgbench -n -r -c 1 -T\n60 -f bench.pgbench; done\n\nWith this I get the following (on prewarmed index and table):\n\nHEAD \n-------------------------------------\npgbench single core tps=248,67\n\nbtree_gist sortsupport\n----------------------------\npgbench single core tps=1830,33\n\nThis is an average over 3 runs each (complete results attached). So\nthis looks really impressive and I hope I didn't do something entirely\nwrong (still learning about this GiST stuff).\n\n> what I am confused:\n> In fmgr.h\n> \n> /*\n>  * Support for cleaning up detoasted copies of inputs.  This must\n> only\n>  * be used for pass-by-ref datatypes, and normally would only be used\n>  * for toastable types.  
If the given pointer is different from the\n>  * original argument, assume it's a palloc'd detoasted copy, and\n> pfree it.\n>  * NOTE: most functions on toastable types do not have to worry about\n> this,\n>  * but we currently require that support functions for indexes not\n> leak\n>  * memory.\n>  */\n> #define PG_FREE_IF_COPY(ptr,n) \\\n> do { \\\n> if ((Pointer) (ptr) != PG_GETARG_POINTER(n)) \\\n> pfree(ptr); \\\n> } while (0)\n> \n> but the doc\n> (https://www.postgresql.org/docs/current/gist-extensibility.html)\n>  says:\n> All the GiST support methods are normally called in short-lived\n> memory\n> contexts; that is, CurrentMemoryContext will get reset after each\n> tuple is processed. It is therefore not very important to worry about\n> pfree'ing everything you palloc.\n> ENDOF_QUOTE\n> \n> so I am not sure in range_gist_cmp, we need the following:\n> `\n> if ((Datum) range_a != a)\n> pfree(range_a);\n> if ((Datum) range_b != b)\n> pfree(range_b);\n> `\n\nTurns out this is not true for sortsupport: the comparison function is\ncalled for each tuple during sorting, which will leak the detoasted\n(and probably copied datum) in the sort memory context. See the same\nfor e.g. 
numeric and text, which i needed to change to pass the key\nvalues correctly to the text_cmp() or numeric_cmp() function in these \ncases.\n\nI've adapted the PG_FREE_IF_COPY() macro for these functions and\nintroduced GBT_FREE_IF_COPY() in btree_utils_var.h, since the former\nrelies on fcinfo.\n\nI'll add the patch again to the upcoming CF for another review round.\n\n[1]\nhttps://www.postgresql.org/message-id/c0846e34-8b3a-e1bf-c88e-021eb241a481%40iki.fi\n\n[2] https://drive.google.com/file/d/1CPNFGR53-FUto1zjXPMM2Yrn0GaGfGFz/view?usp=drive_link", "msg_date": "Thu, 08 Feb 2024 19:14:05 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "On Fri, Feb 9, 2024 at 2:14 AM Bernd Helmle <mailings@oopsware.de> wrote:\n>\n> Am Mittwoch, dem 10.01.2024 um 22:18 +0800 schrieb jian he:\n> >\n> > I split the original author's patch into 2.\n> > 1. Add GiST sortsupport function for all the btree-gist module data\n> > types except anyrange data type (which actually does not in this\n> > module)\n> > 2. Add GiST sortsupport function for anyrange data type.\n> >\n>\n> > what I am confused:\n> > In fmgr.h\n> >\n> > /*\n> > * Support for cleaning up detoasted copies of inputs. This must\n> > only\n> > * be used for pass-by-ref datatypes, and normally would only be used\n> > * for toastable types. 
If the given pointer is different from the\n> > * original argument, assume it's a palloc'd detoasted copy, and\n> > pfree it.\n> > * NOTE: most functions on toastable types do not have to worry about\n> > this,\n> > * but we currently require that support functions for indexes not\n> > leak\n> > * memory.\n> > */\n> > #define PG_FREE_IF_COPY(ptr,n) \\\n> > do { \\\n> > if ((Pointer) (ptr) != PG_GETARG_POINTER(n)) \\\n> > pfree(ptr); \\\n> > } while (0)\n> >\n> > but the doc\n> > (https://www.postgresql.org/docs/current/gist-extensibility.html)\n> > says:\n> > All the GiST support methods are normally called in short-lived\n> > memory\n> > contexts; that is, CurrentMemoryContext will get reset after each\n> > tuple is processed. It is therefore not very important to worry about\n> > pfree'ing everything you palloc.\n> > ENDOF_QUOTE\n> >\n> > so I am not sure in range_gist_cmp, we need the following:\n> > `\n> > if ((Datum) range_a != a)\n> > pfree(range_a);\n> > if ((Datum) range_b != b)\n> > pfree(range_b);\n> > `\n>\n> Turns out this is not true for sortsupport: the comparison function is\n> called for each tuple during sorting, which will leak the detoasted\n> (and probably copied datum) in the sort memory context. See the same\n> for e.g. numeric and text, which i needed to change to pass the key\n> values correctly to the text_cmp() or numeric_cmp() function in these\n> cases.\n>\n\n+ <para>\n+ Per default <filename>btree_gist</filename> builts\n<acronym>GiST</acronym> indexe with\n+ <function>sortsupport</function> in <firstterm>sorted</firstterm>\nmode. This usually results in a\n+ much better index quality and smaller index sizes by much faster\nindex built speed. 
It is still\n+ possible to revert to buffered built strategy by using the\n<literal>buffering</literal> parameter\n+ when creating the index.\n+ </para>\n+\nI believe `built` |`builts` should be `build`.\nAlso\nmaybe we can simply copy some texts from\nhttps://www.postgresql.org/docs/current/gist-implementation.html.\nhow about the following:\n <para>\n The sorted method is only available if each of the opclasses used by the\n index provides a <function>sortsupport</function> function, as described\n in <xref linkend=\"gist-extensibility\"/>. If they do, this method is\n usually the best, so it is used by default.\n It is still possible to change to a buffered build strategy by using\nthe <literal>buffering</literal> parameter\n to the CREATE INDEX command.\n </para>\n\nyou've changed contrib/btree_gist/meson.build, seems we also need to\nchange contrib/btree_gist/Makefile\n\ngist_point_sortsupport have `if (ssup->abbreviate)`, does\nrange_gist_sortsupport also this part?\nI think the `if(ssup->abbreviate)` part is optional?\nCan we add some comments on it?\n\n\n", "msg_date": "Mon, 12 Feb 2024 21:00:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Am Montag, dem 12.02.2024 um 21:00 +0800 schrieb jian he:\n> + <para>\n> +  Per default <filename>btree_gist</filename> builts\n> <acronym>GiST</acronym> indexe with\n> +  <function>sortsupport</function> in <firstterm>sorted</firstterm>\n> mode. This usually results in a\n> +  much better index quality and smaller index sizes by much faster\n> index built speed. 
It is still\n> +  possible to revert to buffered built strategy by using the\n> <literal>buffering</literal> parameter\n> +  when creating the index.\n> + </para>\n> +\n> I believe `built` |`builts` should be `build`.\n\nRight, Fixed.\n\n> Also\n> maybe we can simply copy some texts from\n> https://www.postgresql.org/docs/current/gist-implementation.html.\n> how about the following:\n>   <para>\n>    The sorted method is only available if each of the opclasses used\n> by the\n>    index provides a <function>sortsupport</function> function, as\n> described\n>    in <xref linkend=\"gist-extensibility\"/>.  If they do, this method\n> is\n>    usually the best, so it is used by default.\n>   It is still possible to change to a buffered build strategy by\n> using\n> the <literal>buffering</literal> parameter\n>   to the CREATE INDEX command.\n>   </para>\n\nHmm not sure what you are trying to achieve with this? The opclasses in\nbtree_gist provides sortsupport, but by reading the above i would get\nthe impression they're still optional.\n\n> \n> you've changed contrib/btree_gist/meson.build, seems we also need to\n> change contrib/btree_gist/Makefile\n> \n\nOh, good catch. I'm so focused on meson already that i totally forgot\nthe good old Makefile. Fixed.\n\n> gist_point_sortsupport have `if (ssup->abbreviate)`,  does\n> range_gist_sortsupport also this part?\n> I think the `if(ssup->abbreviate)` part is optional?\n> Can we add some comments on it?\n\nI've thought about abbreviated keys support but put that aside for\nlater. 
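For context on what abbreviated keys would buy, here is a small self-contained sketch of the general idea; it is unrelated to the actual GiST/SortSupport API, and all names are invented. The sort first compares a cheap fixed-size prefix and pays for the authoritative full comparison only when the prefixes tie:

```c
#include <assert.h>
#include <string.h>

/* Build a 4-byte, zero-padded "abbreviated key" for a string. */
static void
make_abbrev(char *dst, const char *s)
{
	size_t		i;
	size_t		len = strlen(s);

	for (i = 0; i < 4; i++)
		dst[i] = (i < len) ? s[i] : '\0';
}

/* Compare the cheap prefixes first; fall back to the full (and
 * authoritative) comparison only on a tie. */
static int
abbrev_cmp(const char *a, const char *b)
{
	char		ka[4],
				kb[4];
	int			r;

	make_abbrev(ka, a);
	make_abbrev(kb, b);
	r = memcmp(ka, kb, 4);
	if (r != 0)
		return r;				/* cheap decision on the prefix */
	return strcmp(a, b);		/* tie: full comparison */
}
```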
I wanted to focus on general sortsupport first before getting my\nhands on it and so postponed it for another round.\n\nIf we agree that this patch needs support for abbreviated keys now, I\ncan certainly work on it.\n\nThanks for your review,\n\n\tBernd", "msg_date": "Tue, 13 Feb 2024 12:03:10 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Hi,\n\nHere is a rebased version of the patch. Since I don't have anything to\nadd at the moment and the numbers look promising, I've\nmarked this patch \"Ready For Committer\" to move things along.\n\n\nThanks,\n\n\tBernd", "msg_date": "Fri, 22 Mar 2024 14:20:50 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "\n\n> On 22 Mar 2024, at 18:20, Bernd Helmle <mailings@oopsware.de> wrote:\n> \n> Here is a rebased version of the patch.\n\nFWIW it would be nice to at least port the tests from the commit that I referenced upthread.\nNowadays we have injection points, so these tests can be much more stable.\n\nSorry for bringing this up so late.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 22 Mar 2024 18:27:49 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" }, { "msg_contents": "Hi Andrey,\n\nAm Freitag, dem 22.03.2024 um 18:27 +0500 schrieb Andrey M. Borodin:\n> > Here is a rebased version of the patch.\n> \n> FWIW it would be nice to at least port the tests from the commit that I\n> referenced upthread.\n> Nowadays we have injection points, so these tests can be much more\n> stable.\n\nAlright, that's a reasonable point. I will look into this. 
Did you see\nother important things missing?\n\nChanged status back to \"Waiting On Author\".\n\nThanks,\n\n\tBernd\n\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 15:20:20 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add sortsupport for range types and btree_gist" } ]
[ { "msg_contents": "Adding a plan_id to pg_stat_activity allows users\r\nto determine if a plan for a particular statement\r\nhas changed and if the new plan is performing better\r\nor worse for a particular statement.\r\n\r\nThere are several ways the plan_id in pg_stat_activity\r\ncan be used:\r\n\r\n1. In extensions that expose the plan text.\r\nThis will allow users to map a plan_id\r\nfrom pg_stat_activity to the plan text.\r\n\r\n2. In EXPLAIN output, including auto_explain.\r\n\r\n3. In statement logging.\r\n\r\nComputing the plan_id can be done using the same\r\nroutines for query jumbling, except plan nodes\r\nwill be jumbled. This approach was inspired by\r\nwork done in the extension pg_stat_plans,\r\nhttps://github.com/2ndQuadrant/pg_stat_plans/\r\n\r\nAttached is a POC patch that computes the plan_id\r\nand presents the top-level plan_id in pg_stat_activity.\r\n\r\nThe patch still has work left:\r\n- Perhaps Moving the plan jumbler outside of queryjumble.c?\r\n- In the POC, the compute_query_id GUC determines if a\r\n plan_id is to be computed. 
Should this be a separate GUC?\r\n\r\n\r\n-- Below is the output of sampling pg_stat_activity\r\n-- with a pgbench workload running The patch\r\n-- introduces the plan_id column.\r\n\r\nselect count(*),\r\n query,\r\n query_id,\r\n plan_id\r\nfrom pg_stat_activity\r\nwhere state='active'\r\nand plan_id is not null and query_id is not null\r\ngroup by query, query_id, plan_id\r\norder by 1 desc limit 1;\r\n\r\n-[ RECORD 1 ]--------------------------------------------------------------------------------------------------------\r\ncount | 1\r\nquery | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (7, 8, 242150, -1471, CURRENT_TIMESTAMP);\r\nquery_id | 4535829358544711074\r\nplan_id | -4913142083940981109\r\n\r\n\r\n-- Also, a new view called pg_stat_statements_plan which\r\n-- Includes all the same columns as pg_stat_statements, but\r\n-- with statistics shown per plan.\r\n\r\npostgres=# select substr(query, 1, 10) as query, queryid, planid, calls from pg_stat_statements_plan where queryid = 4535829358544711074;\r\n-[ RECORD 1 ]-----------------\r\nquery | INSERT INT\r\nqueryid | 4535829358544711074\r\nplanid | -4913142083940981109\r\ncalls | 4274428\r\n\r\n-- the existing pg_stat_statements table\r\n-- shows stats aggregated on\r\n-- the queryid level. 
This is current behavior.\r\n\r\npostgres=# select substr(query, 1, 10) as query, queryid, calls from pg_stat_statements where queryid = 4535829358544711074;\r\n-[ RECORD 1 ]----------------\r\nquery | INSERT INT\r\nqueryid | 4535829358544711074\r\ncalls | 4377142\r\n\r\n-- The “%Q” log_line_prefix flag will also include the planid as part of the output\r\n-- the format will be \"query_id/plan_id\"\r\n\r\n\r\n-- An example of using auto_explain with the ‘%Q” flag in log_line_prefix.\r\n2022-06-14 17:08:10.485 CDT [76955] [4912312221998332774/-2294484545013135901] LOG: duration: 0.144 ms plan:\r\n Query Text: UPDATE pgbench_tellers SET tbalance = tbalance + -1952 WHERE tid = 32;\r\n Update on public.pgbench_tellers (cost=0.27..8.29 rows=0 width=0)\r\n -> Index Scan using pgbench_tellers_pkey on public.pgbench_tellers (cost=0.27..8.29 rows=1 width=10)\r\n Output: (tbalance + '-1952'::integer), ctid\r\n Index Cond: (pgbench_tellers.tid = 32)\r\n\r\n\r\n-- the output for EXPLAIN VERBOSE also shows a plan id.\r\n\r\npostgres=# explain verbose select 1;\r\n QUERY PLAN\r\n------------------------------------------\r\nResult (cost=0.00..0.01 rows=1 width=4)\r\n Output: 1\r\nQuery Identifier: -2698492627503961632\r\nPlan Identifier: -7861780579971713347\r\n(4 rows)\r\n\r\n\r\nThanks,\r\n\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Wed, 15 Jun 2022 18:45:38 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "[PROPOSAL] Detecting plan changes with plan_id in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 15, 2022 at 06:45:38PM +0000, Imseih (AWS), Sami wrote:\n> Adding a plan_id to pg_stat_activity allows users\n> to determine if a plan for a particular statement\n> has changed and if the new plan is performing better\n> or worse for a particular statement.\n> [...]\n> Attached is a POC patch that computes the plan_id\n> and presents the top-level plan_id in pg_stat_activity.\n\nAFAICS you're proposing 
to add an identifier for a specific plan, but no way to\nknow what that plan was? How are users supposed to use the information if they\nknow something changed but don't know what changed exactly?\n\n> - In the POC, the compute_query_id GUC determines if a\n> plan_id is to be computed. Should this be a separate GUC?\n\nProbably, as computing it will likely be quite expensive. Some benchmark on\nvarious workloads would be needed here.\n\nI only had a quick look at the patch, but I see that you have some code to\navoid storing the query text multiple times with different planid. How does it\nwork exactly, and does it ensure that the query text is only removed once the\nlast entry that uses it is removed? It seems that you identify a specific\nquery text by queryid, but that seems wrong as collision can (easily?) happen\nin different databases. The real identifier of a query text should be (dbid,\nqueryid).\n\nNote that this problem already exists, as the query texts are now stored per\n(userid, dbid, queryid, istoplevel). 
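To illustrate the keying concern, here is a tiny invented C model of a query-text registry; the real store is an external file addressed by offsets, but the point stands: entries must be identified by (dbid, queryid), because the same queryid can show up in two different databases:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy query-text registry (invented, not pg_stat_statements code):
 * entries are identified by (dbid, queryid), since a bare queryid can
 * collide across databases. */
typedef struct
{
	unsigned int dbid;
	unsigned long long queryid;
	const char *text;
} qtext_entry;

static qtext_entry registry[8];
static int	registry_len = 0;

static void
remember(unsigned int dbid, unsigned long long queryid, const char *text)
{
	registry[registry_len].dbid = dbid;
	registry[registry_len].queryid = queryid;
	registry[registry_len].text = text;
	registry_len++;
}

static const char *
lookup(unsigned int dbid, unsigned long long queryid)
{
	int			i;

	for (i = 0; i < registry_len; i++)
		if (registry[i].dbid == dbid && registry[i].queryid == queryid)
			return registry[i].text;
	return NULL;
}
```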
Maybe this part could be split in a\ndifferent commit as it could already be useful without a planid.\n\n\n", "msg_date": "Thu, 16 Jun 2022 13:19:38 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" }, { "msg_contents": "On 15/6/2022 21:45, Imseih (AWS), Sami wrote:\n> Adding a plan_id to pg_stat_activity allows users\n> to determine if a plan for a particular statement\n> has changed and if the new plan is performing better\n> or worse for a particular statement.\n> There are several ways the plan_id in pg_stat_activity\nIn general, your idea is quite useful.\nBut, as discussed earlier [1] extensions would implement many useful \nthings if they could add into a plan some custom data.\nMaybe implement your feature with some private list of nodes in plan \nstructure instead of single-purpose plan_id field?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/e0de3423-4bba-1e69-c55a-f76bf18dbd74%40postgrespro.ru\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 16 Jun 2022 08:48:34 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" }, { "msg_contents": "> AFAICS you're proposing to add an identifier for a specific plan, but no way to\r\n> know what that plan was? 
How are users supposed to use the information if they\r\n> know something changed but don't know what changed exactly?\r\n\r\nI see this as a start to do more useful things with plans.\r\n\r\nThe patch right out of the gate exposes the plan_id in EXPLAIN output \r\nand auto_explain.\r\nThis will also be useful for extensions that will provide the plan text.\r\nIt is also conceivable that pg_stat_statements can provide an option\r\nto store the plan text?\r\n\r\n > - In the POC, the compute_query_id GUC determines if a\r\n > plan_id is to be computed. Should this be a separate GUC?\r\n\r\n> Probably, as computing it will likely be quite expensive. Some benchmark on\r\n> various workloads would be needed here.\r\n\r\nYes, more benchmarks will be needed here with more complex plans. I have\r\nonly benchmarked with pgbench at this point. \r\nHowever, separating this into its own GUC is what I am leaning towards as well, \r\nand I will update the patch.\r\n\r\n> I only had a quick look at the patch, but I see that you have some code to\r\n> avoid storing the query text multiple times with different planid. How does it\r\n> work exactly, and does it ensure that the query text is only removed once the\r\n> last entry that uses it is removed? It seems that you identify a specific\r\n> query text by queryid, but that seems wrong as collision can (easily?) happen\r\n> in different databases. The real identifier of a query text should be (dbid,\r\n> queryid).\r\n\r\nThe idea is to look up the offsets and length of the text in the external file by queryid\r\nonly. Therefore we can store similar query text for multiple pgss_hash entries\r\nonly once. \r\n\r\nWhen an entry is not found in pgss_hash, the new qtext_hash is consulted to \r\nsee if it has information about the offsets/length for that queryid. If found in\r\nqtext_hash, the new pgss_hash entry is created with the offsets found. 
\r\n\r\nIf not found in qtext_hash, the query text will be (normalized) and stored in \r\nthe external file. Then, a new entry will be created in qtext_hash and \r\nan entry in pgss_hash.\r\n\r\nOf course we need to also handle the gc_qtext cleanups, entry_reset, startup\r\nand shutdown scenarios. The patch does this, but I will go back and do more\r\ntesting.\r\n\r\n> Note that this problem already exists, as the query texts are now stored per\r\n> (userid, dbid, queryid, istoplevel). Maybe this part could be split in a\r\n> different commit as it could already be useful without a planid.\r\n\r\nGood point. I will separate this patch.\r\n\r\nRegards, \r\n\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n\r\n\r\n", "msg_date": "Thu, 16 Jun 2022 21:32:26 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" }, { "msg_contents": "> Good point. I will separate this patch.\r\n\r\nI separated the pg_stat_statements patch. The patch\r\nintroduces a secondary hash that tracks locations of\r\na query (by queryid) in the external file. The hash\r\nremains in lockstep with the pgss_hash using a\r\nsynchronization routine.
For the default\r\npg_stat_statements.max = 5000, this hash requires 2 MB\r\nof additional shared memory.\r\n\r\nMy testing does not show any regression for workloads\r\nin which statements are not issued by multiple users/databases.\r\n\r\nHowever, it shows good improvement, 10-15%, when there\r\nare similar statements that are issued by multiple \r\nusers/databases/tracking levels.\r\n\r\nBesides this improvement, this will open up the opportunity\r\nto also track plan_id's as discussed earlier in the thread.\r\n\r\nThanks for the feedback.\r\n\r\nRegards, \r\n\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Tue, 21 Jun 2022 20:04:01 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 21, 2022 at 08:04:01PM +0000, Imseih (AWS), Sami wrote:\n>\n> I separated the pg_stat_statements patch. The patch\n> introduces a secondary hash that tracks locations of\n> a query (by queryid) in the external file.\n\nI still think that's wrong.\n\n> The hash\n> remains in lockstep with the pgss_hash using a\n> synchronization routine.\n\nCan you describe how it's kept in sync, and how it makes sure that the property\nis maintained over restart / gc? I don't see any change in the code for the\n2nd part so I don't see how it could work (and after a quick test it indeed\ndoesn't).\n\nI also don't see any change in the heuristics for need_gc_qtext(), isn't that\ngoing to lead to too frequent gc?\n\n> My testing does not show any regression for workloads\n> in which statements are not issued by multiple users/databases.\n>\n> However, it shows good improvement, 10-15%, when there\n> are similar statements that are issued by multiple\n> users/databases/tracking levels.\n\n\"no regression\" and \"10-15% improvement\" on what?\n\nCan you share more details on the benchmarks you did?
Did you also run\nbenchmarks on workloads that induce entry eviction, with and without need for\ngc? Eviction in pgss is already very expensive, and making things worse just\nto save a bit of disk space doesn't seem like a good compromise.\n\n\n", "msg_date": "Wed, 22 Jun 2022 12:38:10 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" }, { "msg_contents": "> Can you describe how it's kept in sync, and how it makes sure that the property\r\n> is maintained over restart / gc? I don't see any change in the code for the\r\n> 2nd part so I don't see how it could work (and after a quick test it indeed\r\n> doesn't).\r\n\r\nThere is a routine called qtext_hash_sync which removes all entries from \r\nthe qtext_hash and reloads it with all the query ids from pgss_hash. \r\n\r\nThis routine is called during:\r\n\r\n1. gc_qtexts()\r\n2. entry_reset()\r\n3. entry_dealloc(), although this can be moved to the end of entry_alloc() instead.\r\n4. pgss_shmem_startup()\r\n\r\nAt all the points where it's called, an exclusive lock is held. This allows for syncing all\r\nthe present queryid's in pgss_hash with qtext_hash.\r\n\r\n> 2nd part so I don't see how it could work (and after a quick test it indeed\r\n> doesn't).\r\n\r\nCan you tell me what test you used to determine it is not in sync?\r\n\r\n\r\n> Can you share more details on the benchmarks you did? Did you also run\r\n> benchmarks on workloads that induce entry eviction, with and without need for\r\n> gc? Eviction in pgss is already very expensive, and making things worse just\r\n> to save a bit of disk space doesn't seem like a good compromise.\r\n\r\nSorry this was poorly explained by me. I went back and did some benchmarks. Attached is\r\nthe script and results. But here is a summary:\r\nOn an EC2 r5.2xlarge. The benchmark I performed is:\r\n1. create 10k tables\r\n2. create 5 users\r\n3. 
run a pgbench script that performs per transaction a select on \r\na randomly chosen table for each of the 5 users.\r\n4. 2 variants of the test executed. One variant is with the default pg_stat_statements.max = 5000\r\nand one test with a larger pg_stat_statements.max = 10000. \r\n\r\nSo 10-15% is not accurate. I originally tested on a less powered machine. For this\r\nbenchmark I see a 6% increase in TPS (732k vs 683k) when a larger \r\npg_stat_statements.max is used and there are fewer gc/deallocations. \r\nBoth tests show a drop in gc/deallocations and a net increase\r\nin TPS. Less GC makes sense since the external file has less duplicate SQLs.\r\n\r\n\r\n\r\n##################################\r\n## pg_stat_statements.max = 15000\r\n##################################\r\n\r\n## with patch\r\n\r\ntransaction type: /tmp/wl.sql\r\nscaling factor: 1\r\nquery mode: simple\r\nnumber of clients: 20\r\nnumber of threads: 1\r\nmaximum number of tries: 1\r\nduration: 360 s\r\nnumber of transactions actually processed: 732604\r\nnumber of failed transactions: 0 (0.000%)\r\nlatency average = 9.828 ms\r\ninitial connection time = 33.349 ms\r\ntps = 2035.051541 (without initial connection time)\r\n[ec2-user@ip- pg_stat_statements]$\r\n(1 row)\r\n\r\n42 gc_qtext calls\r\n3473 deallocations\r\n\r\n## no patch\r\n\r\ntransaction type: /tmp/wl.sql\r\nscaling factor: 1\r\nquery mode: simple\r\nnumber of clients: 20\r\nnumber of threads: 1\r\nmaximum number of tries: 1\r\nduration: 360 s\r\nnumber of transactions actually processed: 683434\r\nnumber of failed transactions: 0 (0.000%)\r\nlatency average = 10.535 ms\r\ninitial connection time = 32.788 ms\r\ntps = 1898.452025 (without initial connection time)\r\n\r\n154 garbage collections\r\n3239 deallocations\r\n\r\n##################################\r\n## pg_stat_statements.max = 5000\r\n##################################\r\n\r\n\r\n## with patch\r\n\r\ntransaction type: /tmp/wl.sql\r\nscaling factor: 1\r\nquery mode: 
simple\r\nnumber of clients: 20\r\nnumber of threads: 1\r\nmaximum number of tries: 1\r\nduration: 360 s\r\nnumber of transactions actually processed: 673135\r\nnumber of failed transactions: 0 (0.000%)\r\nlatency average = 10.696 ms\r\ninitial connection time = 32.908 ms\r\ntps = 1869.829843 (without initial connection time)\r\n\r\n400 garbage collections\r\n12501 deallocations\r\n\r\n## no patch\r\n\r\ntransaction type: /tmp/wl.sql\r\nscaling factor: 1\r\nquery mode: simple\r\nnumber of clients: 20\r\nnumber of threads: 1\r\nmaximum number of tries: 1\r\nduration: 360 s\r\nnumber of transactions actually processed: 656160\r\nnumber of failed transactions: 0 (0.000%)\r\nlatency average = 10.973 ms\r\ninitial connection time = 33.275 ms\r\ntps = 1822.678069 (without initial connection time)\r\n\r\n580 garbage collections\r\n12180 deallocations\r\n\r\nThanks\r\n\r\nSami\r\nAmazon Web Services", "msg_date": "Wed, 22 Jun 2022 23:05:54 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" }, { "msg_contents": "On Wed, Jun 22, 2022 at 11:05:54PM +0000, Imseih (AWS), Sami wrote:\n> > Can you describe how it's kept in sync, and how it makes sure that the property\n> > is maintained over restart / gc? I don't see any change in the code for the\n> > 2nd part so I don't see how it could work (and after a quick test it indeed\n> > doesn't).\n>\n> There is a routine called qtext_hash_sync which removed all entries from\n> the qtext_hash and reloads it will all the query ids from pgss_hash.\n> [...]\n> All the points when it's called, an exclusive lock is held.this allows or syncing all\n> The present queryid's in pgss_hash with qtext_hash.\n\nSo your approach is to let the current gc / file loading behavior happen as\nbefore and construct your mapping hash using the resulting query text / offset\ninfo. 
That can't work.\n\n> > 2nd part so I don't see how it could work (and after a quick test it indeed\n> > doesn't).\n>\n> Can you tell me what test you used to determine it is not in sync?\n\nWhat test did you use to determine it is in sync? Have you checked how the gc/\nfile loading actually work?\n\nIn my case I just checked the size of the query text file after running some\nscript that makes sure that there are the same few queries ran by multiple\ndifferent roles, then:\n\nSize of $PGDATA/pg_stat_tmp/pgss_query_texts.stat: 559B\npg_ctl restart\nSize of $PGDATA/pg_stat_tmp/pgss_query_texts.stat: 2383B\n\n> > Can you share more details on the benchmarks you did? Did you also run\n> > benchmark on workloads that induce entry eviction, with and without need for\n> > gc? Eviction in pgss is already very expensive, and making things worse just\n> > to save a bit of disk space doesn't seem like a good compromise.\n>\n> Sorry this was poorly explained by me. I went back and did some benchmarks. Attached is\n> The script and results. But here is a summary:\n> On a EC2 r5.2xlarge. The benchmark I performed is:\n> 1. create 10k tables\n> 2. create 5 users\n> 3. run a pgbench script that performs per transaction a select on\n> A randomly chosen table for each of the 5 users.\n> 4. 2 variants of the test executed . 1 variant is with the default pg_stat_statements.max = 5000\n> and one test with a larger pg_stat_statements.max = 10000.\n\nBut you wrote:\n\n> ##################################\n> ## pg_stat_statements.max = 15000\n> ##################################\n\nSo which one is it?\n\n>\n> So 10-15% is not accurate. I originally tested on a less powered machine. For this\n> Benchmark I see a 6% increase in TPS (732k vs 683k) when we have a larger sized\n> pg_stat_statements.max is used and less gc/deallocations.\n> Both tests show a drop in gc/deallocations and a net increase\n> In tps. 
Less GC makes sense since the external file has less duplicate SQLs.\n\nOn the other hand you're rebuilding the new query_offset hashtable every time\nthere's an entry eviction, which seems quite expensive.\n\nAlso, as I mentioned you didn't change any of the heuristics for\nneed_gc_qtexts(). So if the query texts are indeed deduplicated, doesn't it\nmean that gc will artificially\nbe called less often? The wanted target of \"50% bloat\" will become \"50%\nbloat assuming no deduplication is done\" and the average query text file size\nwill stay the same whether the query texts are deduplicated or not.\n\nI'm wondering if the improvements you see are due to the patch or simply due to\nartificially calling gc less often? What are the results if instead of using\nvanilla pg_stat_statements you patch it to perform roughly the same number of\ngc as your version does?\n\nAlso your benchmark workload is very friendly with your feature, what are the\nresults with other workloads? Having the results with query texts that aren't\nartificially long would be interesting for instance, after fixing the problems\nmentioned previously.\n\nAlso, you said that if you run that benchmark with a single user you don't see\nany regression. I don't see how rebuilding an extra hashtable in\nentry_dealloc(), so when holding an exclusive lwlock, while not saving any\nother work elsewhere could be free?\n\nLooking at the script, you have:\necho \"log_min_messages=debug1\" >> $PGDATA/postgresql.conf; \\\n\nIs that really necessary? Couldn't you upgrade the gc message to a higher\nlevel for your benchmark needs, or expose some new counter in\npg_stat_statements_info maybe?
Have you done the benchmark using a debug build\nor normal build?\n\n\n", "msg_date": "Thu, 23 Jun 2022 11:12:06 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" }, { "msg_contents": "Shouldn't the patch status be set to \"Waiting on Author\"?\n\n(I was curious if this is a patch that I can review.)\n\nJulien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Wed, Jun 22, 2022 at 11:05:54PM +0000, Imseih (AWS), Sami wrote:\n> > > Can you describe how it's kept in sync, and how it makes sure that the property\n> > > is maintained over restart / gc? I don't see any change in the code for the\n> > > 2nd part so I don't see how it could work (and after a quick test it indeed\n> > > doesn't).\n> >\n> > There is a routine called qtext_hash_sync which removed all entries from\n> > the qtext_hash and reloads it will all the query ids from pgss_hash.\n> > [...]\n> > All the points when it's called, an exclusive lock is held.this allows or syncing all\n> > The present queryid's in pgss_hash with qtext_hash.\n> \n> So your approach is to let the current gc / file loading behavior happen as\n> before and construct your mapping hash using the resulting query text / offset\n> info. That can't work.\n> \n> > > 2nd part so I don't see how it could work (and after a quick test it indeed\n> > > doesn't).\n> >\n> > Can you tell me what test you used to determine it is not in sync?\n> \n> What test did you use to determine it is in sync? 
Have you checked how the gc/\n> file loading actually work?\n> \n> In my case I just checked the size of the query text file after running some\n> script that makes sure that there are the same few queries ran by multiple\n> different roles, then:\n> \n> Size of $PGDATA/pg_stat_tmp/pgss_query_texts.stat: 559B\n> pg_ctl restart\n> Size of $PGDATA/pg_stat_tmp/pgss_query_texts.stat: 2383B\n> \n> > > Can you share more details on the benchmarks you did? Did you also run\n> > > benchmark on workloads that induce entry eviction, with and without need for\n> > > gc? Eviction in pgss is already very expensive, and making things worse just\n> > > to save a bit of disk space doesn't seem like a good compromise.\n> >\n> > Sorry this was poorly explained by me. I went back and did some benchmarks. Attached is\n> > The script and results. But here is a summary:\n> > On a EC2 r5.2xlarge. The benchmark I performed is:\n> > 1. create 10k tables\n> > 2. create 5 users\n> > 3. run a pgbench script that performs per transaction a select on\n> > A randomly chosen table for each of the 5 users.\n> > 4. 2 variants of the test executed . 1 variant is with the default pg_stat_statements.max = 5000\n> > and one test with a larger pg_stat_statements.max = 10000.\n> \n> But you wrote:\n> \n> > ##################################\n> > ## pg_stat_statements.max = 15000\n> > ##################################\n> \n> So which one is it?\n> \n> >\n> > So 10-15% is not accurate. I originally tested on a less powered machine. For this\n> > Benchmark I see a 6% increase in TPS (732k vs 683k) when we have a larger sized\n> > pg_stat_statements.max is used and less gc/deallocations.\n> > Both tests show a drop in gc/deallocations and a net increase\n> > In tps. 
Less GC makes sense since the external file has less duplicate SQLs.\n> \n> On the other hand you're rebuilding the new query_offset hashtable every time\n> there's an entry eviction, which seems quite expensive.\n> \n> Also, as I mentioned you didn't change any of the heuristic for\n> need_gc_qtexts(). So if the query texts are indeed deduplicated, doesn't it\n> mean that gc will artificially\n> be called less often? The wanted target of \"50% bloat\" will become \"50%\n> bloat assuming no deduplication is done\" and the average query text file size\n> will stay the same whether the query texts are deduplicated or not.\n> \n> I'm wondering the improvements you see due to the patch or simply due to\n> artificially calling gc less often? What are the results if instead of using\n> vanilla pg_stat_statements you patch it to perform roughly the same number of\n> gc as your version does?\n> \n> Also your benchmark workload is very friendly with your feature, what are the\n> results with other workloads? Having the results with query texts that aren't\n> artificially long would be interesting for instance, after fixing the problems\n> mentioned previously.\n> \n> Also, you said that if you run that benchmark with a single user you don't see\n> any regression. I don't see how rebuilding an extra hashtable in\n> entry_dealloc(), so when holding an exclusive lwlock, while not saving any\n> other work elsewhere could be free?\n> \n> Looking at the script, you have:\n> echo \"log_min_messages=debug1\" >> $PGDATA/postgresql.conf; \\\n> \n> Is that really necessary? Couldn't you upgrade the gc message to a higher\n> level for your benchmark need, or expose some new counter in\n> pg_stat_statements_info maybe? 
Have you done the benchmark using a debug build\n> or normal build?\n> \n> \n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Thu, 14 Jul 2022 08:51:24 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Thu, Jul 14, 2022 at 08:51:24AM +0200, Antonin Houska wrote:\n> Shouldn't the patch status be set to \"Waiting on Author\"?\n>\n> (I was curious if this is a patch that I can review.)\n\nAh indeed, I just updated the CF entry!\n\n\n", "msg_date": "Thu, 14 Jul 2022 15:13:10 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3700/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 11:11:48 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Detecting plan changes with plan_id in\n pg_stat_activity" } ]
[ { "msg_contents": "Hackers,\n\nWhile developing various Table Access Methods, I have wanted a callback for determining if CLUSTER (and VACUUM FULL) should be run against a table backed by a given TAM. The current API contains a callback for doing the guts of the cluster, but by that time, it's a bit too late to cleanly back out. For single relation cluster commands, raising an error from that callback is probably not too bad. For multi-relation cluster commands, that aborts the clustering of other yet to be processed relations, which doesn't seem acceptable. I tried fixing this with a ProcessUtility_hook, but that fires before the multi-relation cluster command has compiled the list of relations to cluster, meaning the ProcessUtility_hook doesn't have access to the necessary information. (It can be hacked to compile the list of relations itself, but that duplicates both code and effort, with the usual risks that the code will get out of sync.)\n\nFor my personal development, I have declared a new hook, bool (*relation_supports_cluster) (Relation rel). It gets called from commands/cluster.c in both the single-relation and multi-relation code paths, with warning or debug log messages output for relations that decline to be clustered, respectively.\n\nBefore posting a patch, do people think this sounds useful? Would you like the hook function signature to differ in some way? Is a simple \"yes this relation may be clustered\" vs. 
\"no this relation may not be clustered\" interface overly simplistic?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 17:21:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "Hi,\n\nOn 2022-06-15 17:21:56 -0700, Mark Dilger wrote:\n> While developing various Table Access Methods, I have wanted a callback for\n> determining if CLUSTER (and VACUUM FULL) should be run against a table\n> backed by a given TAM. The current API contains a callback for doing the\n> guts of the cluster, but by that time, it's a bit too late to cleanly back\n> out. For single relation cluster commands, raising an error from that\n> callback is probably not too bad. For multi-relation cluster commands, that\n> aborts the clustering of other yet to be processed relations, which doesn't\n> seem acceptable.\n\nWhy not? What else do you want to do in that case? Silently ignoring\nnon-clusterable tables doesn't seem right either. What's the use-case for\nswallowing the error?\n\n\n> I tried fixing this with a ProcessUtility_hook, but that\n> fires before the multi-relation cluster command has compiled the list of\n> relations to cluster, meaning the ProcessUtility_hook doesn't have access to\n> the necessary information. (It can be hacked to compile the list of\n> relations itself, but that duplicates both code and effort, with the usual\n> risks that the code will get out of sync.)\n> \n> For my personal development, I have declared a new hook, bool\n> (*relation_supports_cluster) (Relation rel). 
It gets called from\n> commands/cluster.c in both the single-relation and multi-relation code\n> paths, with warning or debug log messages output for relations that decline\n> to be clustered, respectively.\n\nDo you actually need to dynamically decide whether CLUSTER is supported?\nOtherwise we could just make the existing cluster callback optional and error\nout if a table is clustered that doesn't have the callback.\n\n\n> Before posting a patch, do people think this sounds useful? Would you like\n> the hook function signature to differ in some way? Is a simple \"yes this\n> relation may be clustered\" vs. \"no this relation may not be clustered\"\n> interface overly simplistic?\n\nIt seems overly complicated, if anything ;)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Jun 2022 18:01:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "\n\n> On Jun 15, 2022, at 6:01 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2022-06-15 17:21:56 -0700, Mark Dilger wrote:\n>> While developing various Table Access Methods, I have wanted a callback for\n>> determining if CLUSTER (and VACUUM FULL) should be run against a table\n>> backed by a given TAM. The current API contains a callback for doing the\n>> guts of the cluster, but by that time, it's a bit too late to cleanly back\n>> out. For single relation cluster commands, raising an error from that\n>> callback is probably not too bad. For multi-relation cluster commands, that\n>> aborts the clustering of other yet to be processed relations, which doesn't\n>> seem acceptable.\n> \n> Why not? What else do you want to do in that case? Silently ignoring\n> non-clusterable tables doesn't seem right either. 
What's the use-case for\n> swallowing the error?\n\nImagine you develop a TAM for which the concept of \"clustering\" doesn't have any defined meaning. Perhaps you've arranged the data in a way that has no similarity to heap, and now somebody runs a CLUSTER command (with no arguments.) It's reasonable that they want all their heap tables to get the usual cluster_rel treatment, and that the existence of a table using your exotic TAM shouldn't interfere with that. Then what? You are forced to copy all the data from your OldHeap (badly named) to the NewHeap (also badly named), or to raise an error. That doesn't seem ok.\n\n>> I tried fixing this with a ProcessUtility_hook, but that\n>> fires before the multi-relation cluster command has compiled the list of\n>> relations to cluster, meaning the ProcessUtility_hook doesn't have access to\n>> the necessary information. (It can be hacked to compile the list of\n>> relations itself, but that duplicates both code and effort, with the usual\n>> risks that the code will get out of sync.)\n>> \n>> For my personal development, I have declared a new hook, bool\n>> (*relation_supports_cluster) (Relation rel). It gets called from\n>> commands/cluster.c in both the single-relation and multi-relation code\n>> paths, with warning or debug log messages output for relations that decline\n>> to be clustered, respectively.\n> \n> Do you actually need to dynamically decide whether CLUSTER is supported?\n> Otherwise we could just make the existing cluster callback optional and error\n> out if a table is clustered that doesn't have the callback.\n\nSame as above, I don't know why erroring would be the right thing to do. As a comparison, consider that we don't attempt to cluster a partitioned table, but rather just silently skip it. 
Imagine if, when we introduced the concept of partitioned tables, we made unqualified CLUSTER commands always fail when they encountered a partitioned table.\n\n>> Before posting a patch, do people think this sounds useful? Would you like\n>> the hook function signature to differ in some way? Is a simple \"yes this\n>> relation may be clustered\" vs. \"no this relation may not be clustered\"\n>> interface overly simplistic?\n> \n> It seems overly complicated, if anything ;)\n\nFor the TAMs I've developed thus far, I don't need the (Relation rel) parameter, and could just have easily used (void). But that seems to fence in what other TAM authors could do in future.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 18:24:45 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "Hi,\n\nOn 2022-06-15 18:24:45 -0700, Mark Dilger wrote:\n> > On Jun 15, 2022, at 6:01 PM, Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-06-15 17:21:56 -0700, Mark Dilger wrote:\n> >> While developing various Table Access Methods, I have wanted a callback for\n> >> determining if CLUSTER (and VACUUM FULL) should be run against a table\n> >> backed by a given TAM. The current API contains a callback for doing the\n> >> guts of the cluster, but by that time, it's a bit too late to cleanly back\n> >> out. For single relation cluster commands, raising an error from that\n> >> callback is probably not too bad. For multi-relation cluster commands, that\n> >> aborts the clustering of other yet to be processed relations, which doesn't\n> >> seem acceptable.\n> > \n> > Why not? What else do you want to do in that case? Silently ignoring\n> > non-clusterable tables doesn't seem right either. 
What's the use-case for\n> > swallowing the error?\n> \n> Imagine you develop a TAM for which the concept of \"clustering\" doesn't have\n> any defined meaning. Perhaps you've arranged the data in a way that has no\n> similarity to heap, and now somebody runs a CLUSTER command (with no\n> arguments.)\n\nI think nothing would happen in this case - only pre-clustered tables get\nclustered in an argumentless CLUSTER. What am I missing?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Jun 2022 18:55:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "\n\n> On Jun 15, 2022, at 6:55 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> I think nothing would happen in this case - only pre-clustered tables get\n> clustered in an argumentless CLUSTER. What am I missing?\n\nThe \"VACUUM FULL\" synonym of \"CLUSTER\" doesn't depend on whether the target is pre-clustered, and both will run against the table if the user has run an ALTER TABLE..CLUSTER ON. Now, we could try to catch that latter command with a utility hook, but since the VACUUM FULL is still problematic, it seems cleaner to just add the callback I am proposing.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 19:07:50 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "Hi,\n\nOn 2022-06-15 19:07:50 -0700, Mark Dilger wrote:\n> > On Jun 15, 2022, at 6:55 PM, Andres Freund <andres@anarazel.de> wrote:\n> > \n> > I think nothing would happen in this case - only pre-clustered tables get\n> > clustered in an argumentless CLUSTER. 
What am I missing?\n\nThe \"VACUUM FULL\" synonym of \"CLUSTER\" doesn't depend on whether the target is pre-clustered, and both will run against the table if the user has run an ALTER TABLE..CLUSTER ON. Now, we could try to catch that latter command with a utility hook, but since the VACUUM FULL is still problematic, it seems cleaner to just add the callback I am proposing.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 19:07:50 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "Hi,\n\nOn 2022-06-15 19:07:50 -0700, Mark Dilger wrote:\n> > On Jun 15, 2022, at 6:55 PM, Andres Freund <andres@anarazel.de> wrote:\n> > \n> > I think nothing would happen in this case - only pre-clustered tables get\n> > clustered in an argumentless CLUSTER. What am I missing?\n> \n> The \"VACUUM FULL\" synonym of \"CLUSTER\" doesn't depend on whether the target\n> is pre-clustered\n\nVACUUM FULL isn't a synonym of CLUSTER. While a good bit of the implementation\nis shared, VACUUM FULL doesn't order the table contents. I see no reason why\nan AM shouldn't support VACUUM FULL?\n\n\n> , and both will run against the table if the user has run an ALTER\n> TABLE..CLUSTER ON.\n\nIf a user does that for a table that doesn't support clustering, well, I don't\nsee what's gained by not erroring out.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Jun 2022 19:14:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "\n\n> On Jun 15, 2022, at 7:14 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2022-06-15 19:07:50 -0700, Mark Dilger wrote:\n>>> On Jun 15, 2022, at 6:55 PM, Andres Freund <andres@anarazel.de> wrote:\n>>> \n>>> I think nothing would happen in this case - only pre-clustered tables get\n>>> clustered in an argumentless CLUSTER. What am I missing?\n>> \n>> The \"VACUUM FULL\" synonym of \"CLUSTER\" doesn't depend on whether the target\n>> is pre-clustered\n> \n> VACUUM FULL isn't a synonym of CLUSTER. While a good bit of the implementation\n> is shared, VACUUM FULL doesn't order the table contents. I see no reason why\n> an AM shouldn't support VACUUM FULL?\n\nIt's effectively a synonym which determines whether the \"bool use_sort\" parameter of the table AM's relation_copy_for_cluster will be set. Heap-AM plays along and sorts or not based on that. But it's up to the TAM what it wants to do with that boolean, if in fact it does anything at all based on that.
A TAM could decide to do the exact opposite of what Heap-AM does and instead sort on VACUUM FULL but not sort on CLUSTER, or perhaps perform a randomized shuffle, or <insert your weird behavior here>. From the point-of-view of a TAM implementor, VACUUM FULL and CLUSTER are synonyms. Or am I missing something?\n\n>> , and both will run against the table if the user has run an ALTER\n>> TABLE..CLUSTER ON.\n> \n> If a user does that for a table that doesn't support clustering, well, I don't\n> see what's gained by not erroring out.\n\nPerhaps they want to give the TAM information about which index to use for sorting, on those occasions when the TAM's logic dictates that sorting is appropriate, but not in response to a cluster command.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 19:21:42 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "\n\n> On Jun 15, 2022, at 7:21 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n>> If a user does that for a table that doesn't support clustering, well, I don't\n>> see what's gained by not erroring out.\n> \n> Perhaps they want to give the TAM information about which index to use for sorting, on those occasions when the TAM's logic dictates that sorting is appropriate, but not in response to a cluster command.\n\nI should admit that this is a bit hack-ish, but TAM authors haven't been left a lot of options here. Index AMs allow for custom storage parameters, but Table AMs don't, so getting information to the TAM about how to behave takes more than a little sleight of hand.
Simon's proposal from a while back (don't have the link just now) to allow TAMs to define custom storage parameters would go some distance here.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 19:24:59 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "Hi,\n\nOn 2022-06-15 19:21:42 -0700, Mark Dilger wrote:\n> > On Jun 15, 2022, at 7:14 PM, Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-06-15 19:07:50 -0700, Mark Dilger wrote:\n> >>> On Jun 15, 2022, at 6:55 PM, Andres Freund <andres@anarazel.de> wrote:\n> >>> \n> >>> I think nothing would happen in this case - only pre-clustered tables get\n> >>> clustered in an argumentless CLUSTER. What am I missing?\n> >> \n> >> The \"VACUUM FULL\" synonym of \"CLUSTER\" doesn't depend on whether the target\n> >> is pre-clustered\n> > \n> > VACUUM FULL isn't a synonym of CLUSTER. While a good bit of the implementation\n> > is shared, VACUUM FULL doesn't order the table contents. I see now reason why\n> > an AM shouldn't support VACUUM FULL?\n> \n> It's effectively a synonym which determines whether the \"bool use_sort\"\n> parameter of the table AM's relation_copy_for_cluster will be set. Heap-AM\n> plays along and sorts or not based on that.\n\nHardly a synonym if it behaves differently?\n\n\n> But it's up to the TAM what it wants to do with that boolean, if in fact it\n> does anything at all based on that. A TAM could decide to do the exact\n> opposite of what Heap-AM does and instead sort on VACUUM FULL but not sort\n> on CLUSTER, or perhaps perform a randomized shuffle, or <insert your weird\n> behavior here>.\n\nThat's bogus. Yes, an AM can do stupid stuff in a callback. 
But so what,\nthat's possible with all extension APIs.\n\n\n\n> From the point-of-view of a TAM implementor, VACUUM FULL and CLUSTER are\n> synonyms. Or am I missing something?\n\nThe callback gets passed use_sort. So just implement it use_sort = false and\nerror out if use_sort = true?\n\n\n> >> , and both will run against the table if the user has run an ALTER\n> >> TABLE..CLUSTER ON.\n> > \n> > If a user does that for a table that doesn't support clustering, well, I don't\n> > see what's gained by not erroring out.\n> \n> Perhaps they want to give the TAM information about which index to use for\n> sorting, on those occasions when the TAM's logic dictates that sorting is\n> appropriate, but not in response to a cluster command.\n\nI have little sympathy to randomly misusing catalog contents and then\ncomplaining that those catalog contents have an effect.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Jun 2022 19:30:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "\n\n> On Jun 15, 2022, at 7:30 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n>> It's effectively a synonym which determines whether the \"bool use_sort\"\n>> parameter of the table AM's relation_copy_for_cluster will be set. Heap-AM\n>> plays along and sorts or not based on that.\n> \n> Hardly a synonym if it behaves differently?\n\nI don't think this point is really worth arguing. We don't have to call it a synonym, and the rest of the discussion remains the same.\n\n>> But it's up to the TAM what it wants to do with that boolean, if in fact it\n>> does anything at all based on that. A TAM could decide to do the exact\n>> opposite of what Heap-AM does and instead sort on VACUUM FULL but not sort\n>> on CLUSTER, or perhaps perform a randomized shuffle, or <insert your weird\n>> behavior here>.\n> \n> That's bogus. 
Yes, an AM can do stupid stuff in a callback. But so what,\n> that's possible with all extension APIs.\n\nI don't think it's \"stupid stuff\". A major motivation, perhaps the only useful motivation, for implementing a TAM is to get a non-trivial performance boost (relative to heap) for some target workload, almost certainly at the expense of worse performance for another workload. To optimize any particular workload sufficiently to make it worth the bother, you've pretty much got to do something meaningfully different than what heap does.\n\n\n>> From the point-of-view of a TAM implementor, VACUUM FULL and CLUSTER are\n>> synonyms. Or am I missing something?\n> \n> The callback gets passed use_sort. So just implement it use_sort = false and\n> error out if use_sort = true?\n\nI'm not going to say that your idea is unreasonable for a TAM that you might choose to implement, but I don't see why that should be required of all TAMs anybody might ever implement.\n\nThe callback that gets use_sort is called from copy_table_data(). By that time, it's too late to avoid the\n\n /*\n * Open the relations we need.\n */\n NewHeap = table_open(OIDNewHeap, AccessExclusiveLock);\n OldHeap = table_open(OIDOldHeap, AccessExclusiveLock);\n\ncode that has already happened in cluster.c's copy_table_data() function, and unless I raise an error, after returning from my TAM's callback, the cluster code will replace the old table with the new one. I'm left no choices but to copy my data over, lose my data, or abort the command. None of those are OK options for me.\n\nI'm open to different solutions. If a simple callback like relation_supports_cluster(Relation rel) is too simplistic, and more parameters with more context information is wanted, then fine, let's do that. 
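To make the shape of that idea concrete, here is a minimal, self-contained sketch of such a gate. None of this is the actual PostgreSQL table AM API; the routine struct, the relation_supports_compaction callback, and cluster_should_rewrite are names invented purely for illustration:

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical sketch, NOT the real PostgreSQL TableAmRoutine: a routine
 * table with an optional capability callback.  A NULL callback means the
 * AM has no opinion and participates in CLUSTER/VACUUM FULL like heap.
 */
typedef struct FakeTableAmRoutine
{
    const char *name;
    bool        (*relation_supports_compaction) (void);
} FakeTableAmRoutine;

/* An AM that keeps itself fully compacted gains nothing from a rewrite. */
static bool
always_compacted_opt_out(void)
{
    return false;
}

static const FakeTableAmRoutine fake_heap = {"heap", NULL};
static const FakeTableAmRoutine fake_sorted = {"always_sorted", always_compacted_opt_out};

/*
 * The check cluster_rel() could make before copy_table_data() opens any
 * relations: proceed with the rewrite only if the AM does not opt out.
 */
static bool
cluster_should_rewrite(const FakeTableAmRoutine *am)
{
    if (am->relation_supports_compaction == NULL)
        return true;            /* default: behave like heap */
    return am->relation_supports_compaction();
}
```

The point of the sketch is only that the gate runs before any relation is opened or any data copied, so an opted-out table could be skipped (or could raise its own error) instead of being forced into a useless copy.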
But I can't really complete my work with the interface as it stands now.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 20:10:30 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "Hi,\n\nOn 2022-06-15 20:10:30 -0700, Mark Dilger wrote:\n> > On Jun 15, 2022, at 7:30 PM, Andres Freund <andres@anarazel.de> wrote:\n> >> But it's up to the TAM what it wants to do with that boolean, if in fact it\n> >> does anything at all based on that. A TAM could decide to do the exact\n> >> opposite of what Heap-AM does and instead sort on VACUUM FULL but not sort\n> >> on CLUSTER, or perhaps perform a randomized shuffle, or <insert your weird\n> >> behavior here>.\n> > \n> > That's bogus. Yes, an AM can do stupid stuff in a callback. But so what,\n> > that's possible with all extension APIs.\n> \n> I don't think it's \"stupid stuff\". A major motivation, perhaps the only\n> useful motivation, for implementing a TAM is to get a non-trivial\n> performance boost (relative to heap) for some target workload, almost\n> certainly at the expense of worse performance for another workload. To\n> optimize any particular workload sufficiently to make it worth the bother,\n> you've pretty much got to do something meaningfully different than what heap\n> does.\n\nSure. I just don't see what that has to do with doing something widely\ndiffering in VACUUM FULL. Which AM can't support that? I guess there's some\nwhere implementing the full visibility semantics isn't feasible, but that's\nafaics OK.\n\n\n> >> From the point-of-view of a TAM implementor, VACUUM FULL and CLUSTER are\n> >> synonyms. Or am I missing something?\n> > \n> > The callback gets passed use_sort. 
So just implement it use_sort = false and\n> > error out if use_sort = true?\n> \n> I'm not going to say that your idea is unreasonable for a TAM that you might\n> choose to implement, but I don't see why that should be required of all TAMs\n> anybody might ever implement.\n\n> The callback that gets use_sort is called from copy_table_data(). By that time, it's too late to avoid the\n> \n> /*\n> * Open the relations we need.\n> */\n> NewHeap = table_open(OIDNewHeap, AccessExclusiveLock);\n> OldHeap = table_open(OIDOldHeap, AccessExclusiveLock);\n> \n> code that has already happened in cluster.c's copy_table_data() function,\n> and unless I raise an error, after returning from my TAM's callback, the\n> cluster code will replace the old table with the new one. I'm left no\n> choices but to copy my data over, loose my data, or abort the command. None\n> of those are OK options for me.\n\nI think you need to do a bit more explaining of what you're actually trying to\nachieve here. You're just saying \"I don't want to\", which doesn't really help\nme to understand the set of useful options.\n\n\n> I'm open to different solutions. If a simple callback like\n> relation_supports_cluster(Relation rel) is too simplistic, and more\n> parameters with more context information is wanted, then fine, let's do\n> that.\n\nFWIW, I want to go *simpler* if anything, not more complicated. I.e. 
make the\nrelation_copy_for_cluster optional.\n\nI still think it's a terrible idea to silently ignore tables in CLUSTER or\nVACUUM FULL.\n\n\n> But I can't really complete my work with the interface as it stands\n> now.\n\nSince you've not described that work to a meaningful degree...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Jun 2022 20:18:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "On Wed, Jun 15, 2022 at 8:18 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > If a simple callback like\n> > relation_supports_cluster(Relation rel) is too simplistic\n>\n\nSeems like it should be called:\nrelation_supports_compaction[_by_removal_of_interspersed_dead_tuples]\n\nBasically, if the user tells the table to make itself smaller on disk by\nremoving dead tuples, should we support the case where the Table AM says:\n\"Sorry, I cannot do that\"?\n\nIf yes, then naming the table explicitly should elicit an error. Having\nthe table chosen implicitly should provoke a warning. For ALTER TABLE\nCLUSTER there should be an error: which makes the implicit CLUSTER command\na non-factor.\n\nHowever, given that should the table structure change it is imperative that\nthe Table AM be capable of producing a new physical relation with the\ncorrect data, which will have been compacted as a side-effect, it seems\nlike, explicit or implicit, expecting any Table AM to do that when faced\nwith Vacuum Full is reasonable. 
Which leaves deciding how to allow a table\nwith a given TAM to prevent itself from being added to the CLUSTER roster.\nAnd decide whether an opt-out feature for implicit VACUUM FULL is something\nwe should offer as well.\n\nI'm doubtful that a TAM that is pluggable into the MVCC and WAL\narchitecture of PostgreSQL could avoid this basic contract between the\nsystem and its users.\n\nDavid J.", "msg_date": "Wed, 15 Jun 2022 20:50:31 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "\n\n> On Jun 15, 2022, at 8:18 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2022-06-15 20:10:30 -0700, Mark Dilger wrote:\n>>> On Jun 15, 2022, at 7:30 PM, Andres Freund <andres@anarazel.de> wrote:\n>>>> But it's up to the TAM what it wants to do with that boolean, if in fact it\n>>>> does anything at all based on that. A TAM could decide to do the exact\n>>>> opposite of what Heap-AM does and instead sort on VACUUM FULL but not sort\n>>>> on CLUSTER, or perhaps perform a randomized shuffle, or <insert your weird\n>>>> behavior here>.\n>>> \n>>> That's bogus. Yes, an AM can do stupid stuff in a callback. But so what,\n>>> that's possible with all extension APIs.\n>> \n>> I don't think it's \"stupid stuff\". A major motivation, perhaps the only\n>> useful motivation, for implementing a TAM is to get a non-trivial\n>> performance boost (relative to heap) for some target workload, almost\n>> certainly at the expense of worse performance for another workload. To\n>> optimize any particular workload sufficiently to make it worth the bother,\n>> you've pretty much got to do something meaningfully different than what heap\n>> does.\n> \n> Sure. I just don't see what that has to do with doing something widely\n> differing in VACUUM FULL. Which AM can't support that? I guess there's some\n> where implementing the full visibility semantics isn't feasible, but that's\n> afaics OK.\n\nThe problem isn't that the TAM can't do it. Any TAM can probably copy its contents verbatim from the OldHeap to the NewHeap. But it's really stupid to have to do that if you're not going to change anything along the way, and I think if you divorce your thinking from how heap-AM works sufficiently, you might start to see how other TAMs might have nothing useful to do at this step. 
So there's a strong motivation to not be forced into a \"move all the data uselessly\" step.\n\nI don't really want to discuss the proprietary details of any TAMs I'm developing, so I'll use some made up examples. Imagine you have decided to trade off the need to vacuum against the cost of storing 64bit xids. That doesn't mean that compaction isn't maybe still a good thing, but you don't need to vacuum for anti-wraparound purposes anymore. Now imagine you've also decided to trade off insert speed for select speed, and you sort the entire table on every insert, and to keep indexes happy, you keep a \"externally visible TID\" to \"internal actual location\" mapping in a separate fork. Let's say you also don't support UPDATE or DELETE at all. Those immediately draw an error, \"not implemented by this tam\".\n\nAt this point, you have a fully sorted and completely compacted table at all times. It's simply an invariant of the TAM. So, now what exactly is VACUUM FULL or CLUSTER supposed to do? Seems like the answer is \"diddly squat\", and yet you seem to propose requiring the TAM to do it. I don't like that.\n\n>>>> From the point-of-view of a TAM implementor, VACUUM FULL and CLUSTER are\n>>>> synonyms. Or am I missing something?\n>>> \n>>> The callback gets passed use_sort. So just implement it use_sort = false and\n>>> error out if use_sort = true?\n>> \n>> I'm not going to say that your idea is unreasonable for a TAM that you might\n>> choose to implement, but I don't see why that should be required of all TAMs\n>> anybody might ever implement.\n> \n>> The callback that gets use_sort is called from copy_table_data(). 
By that time, it's too late to avoid the\n>> \n>> /*\n>> * Open the relations we need.\n>> */\n>> NewHeap = table_open(OIDNewHeap, AccessExclusiveLock);\n>> OldHeap = table_open(OIDOldHeap, AccessExclusiveLock);\n>> \n>> code that has already happened in cluster.c's copy_table_data() function,\n>> and unless I raise an error, after returning from my TAM's callback, the\n>> cluster code will replace the old table with the new one. I'm left no\n>> choices but to copy my data over, loose my data, or abort the command. None\n>> of those are OK options for me.\n> \n> I think you need to do a bit more explaining of what you're actually trying to\n> achieve here. You're just saying \"I don't want to\", which doesn't really help\n> me to understand the set of useful options.\n\nI'm trying to opt out of cluster/vacfull.\n\n>> I'm open to different solutions. If a simple callback like\n>> relation_supports_cluster(Relation rel) is too simplistic, and more\n>> parameters with more context information is wanted, then fine, let's do\n>> that.\n> \n> FWIW, I want to go *simpler* if anything, not more complicated. I.e. make the\n> relation_copy_for_cluster optional.\n> \n> I still think it's a terrible idea to silently ignore tables in CLUSTER or\n> VACUUM FULL.\n\nI'm not entirely against you on that, but it makes me cringe that we impose design decisions like that on any and all future TAMs. It seems better to me to let the TAM author decide to emit an error, warning, notice, or whatever, as they see fit.\n\n>> But I can't really complete my work with the interface as it stands\n>> now.\n> \n> Since you've not described that work to a meaningful degree...\n\nI don't think I should have to do so. It's like saying, \"I think I should have freedom of speech\", and you say, \"well, I'm not sure about that; tell me what you want to say, and I'll decide if I'm going to let you say it\". That's not freedom. 
I think TAM authors should have broad discretion over anything that the core system doesn't have a compelling interest in controlling. You've not yet said why a TAM should be prohibited from opting out of cluster/vacfull.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 22:23:36 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "\n\n> On Jun 15, 2022, at 8:50 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> On Wed, Jun 15, 2022 at 8:18 PM Andres Freund <andres@anarazel.de> wrote:\n> > If a simple callback like\n> > relation_supports_cluster(Relation rel) is too simplistic\n> \n> Seems like it should be called: relation_supports_compaction[_by_removal_of_interspersed_dead_tuples]\n\nOk.\n\n> Basically, if the user tells the table to make itself smaller on disk by removing dead tuples, should we support the case where the Table AM says: \"Sorry, I cannot do that\"?\n\nI submit that's the only sane thing to do if the table AM already guarantees that the table will always be fully compacted. There is no justification for forcing the table contents to be copied without benefit.\n\n> If yes, then naming the table explicitly should elicit an error. Having the table chosen implicitly should provoke a warning. For ALTER TABLE CLUSTER there should be an error: which makes the implicit CLUSTER command a non-factor.\n\nI'm basically fine with how you would design the TAM, but I'm going to argue again that the core project should not dictate these decisions. The TAM's relation_supports_compaction() function can return true/false, or raise an error. If raising an error is the right action, the TAM can do that. 
If the core code makes that decision, the TAM can't override, and that paints TAM authors into a corner.\n\n> However, given that should the table structure change it is imperative that the Table AM be capable of producing a new physical relation with the correct data, which will have been compacted as a side-effect, it seems like, explicit or implicit, expecting any Table AM to do that when faced with Vacuum Full is reasonable. Which leaves deciding how to allow a table with a given TAM to prevent itself from being added to the CLUSTER roster. And decide whether an opt-out feature for implicit VACUUM FULL is something we should offer as well.\n> \n> I'm doubtful that a TAM that is pluggable into the MVCC and WAL architecture of PostgreSQL could avoid this basic contract between the system and its users.\n\nHow about a TAM that implements a write-once, read-many logic. You get one multi-insert, and forever after you can't modify it (other than to drop the table, or perhaps to truncate it). That's a completely made-up-on-the-spot example, but it's not entirely without merit. You could avoid a lot of locking overhead when using such a table, since you'd know a priori that nobody else is modifying it. It could also be implemented with a smaller tuple header, since a lot of the header bytes in heap tuples are dedicated to tracking updates. You wouldn't need a per-row inserting transaction-Id either, since you could just store one per table, knowing that all the rows were inserted in the same transaction.\n\nIn what sense does this made-up TAM conflict with mvcc and wal? 
It doesn't have all the features of heap, but that's not the same thing as violating mvcc or breaking wal.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 23:23:09 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "Hi,\n\nOn 2022-06-15 22:23:36 -0700, Mark Dilger wrote:\n> I'm not entirely against you on that, but it makes me cringe that we impose\n> design decisions like that on any and all future TAMs. It seems better to\n> me to let the TAM author decide to emit an error, warning, notice, or\n> whatever, as they see fit.\n\nThe tradeoff is that that pushes down complexity and makes the overall system\nharder to understand. I'm not saying that there's no possible use for such\ncallbacks / configurability, I'm just not convinced it's worth the cost.\n\n\n> >> But I can't really complete my work with the interface as it stands\n> >> now.\n> > \n> > Since you've not described that work to a meaningful degree...\n> \n> I don't think I should have to do so. It's like saying, \"I think I should\n> have freedom of speech\", and you say, \"well, I'm not sure about that; tell\n> me what you want to say, and I'll decide if I'm going to let you say it\".'\n> That's not freedom. I think TAM authors should have broad discretion over\n> anything that the core system doesn't have a compelling interest in\n> controlling.\n\nThat's insultingly ridiculous. You can say, do whatever you want, but that\ndoesn't mean I have to be convinced by it (i.e. +1 adding an API) - that'd be\ncompelled speech, to go with your image...\n\nIt's utterly normal to be asked what the use case for a new API is when\nproposing one.\n\n\n> You've not yet said why a TAM should be prohibited from opting\n> out of cluster/vacfull.\n\nAPI / behavioural complexity. 
If we make every nook and cranny configurable,\nwe'll have an ever harder to use / administer system (from a user's POV) and\nhave difficulty understanding the state of the system when writing patches\n(from a core PG developer's POV). It might be the right thing in this case -\nhence me asking for what the motivation is.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Jun 2022 00:27:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "On Wed, Jun 15, 2022 at 11:23 PM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n> > On Jun 15, 2022, at 8:50 PM, David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n> >\n> > On Wed, Jun 15, 2022 at 8:18 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > If a simple callback like\n> > > relation_supports_cluster(Relation rel) is too simplistic\n> >\n> > Seems like it should be called:\n> relation_supports_compaction[_by_removal_of_interspersed_dead_tuples]\n>\n> Ok.\n>\n> > Basically, if the user tells the table to make itself smaller on disk by\n> removing dead tuples, should we support the case where the Table AM says:\n> \"Sorry, I cannot do that\"?\n>\n> I submit that's the only sane thing to do if the table AM already\n> guarantees that the table will always be fully compacted. There is no\n> justification for forcing the table contents to be copied without benefit.\n>\n\nI accept that this is a valid outcome that should be accommodated for.\n\n>\n> > If yes, then naming the table explicitly should elicit an error. Having\n> the table chosen implicitly should provoke a warning. For ALTER TABLE\n> CLUSTER there should be an error: which makes the implicit CLUSTER command\n> a non-factor.\n>\n> I'm basically fine with how you would design the TAM, but I'm going to\n> argue again that the core project should not dictate these decisions. 
The\n> TAM's relation_supports_compaction() function can return true/false, or\n> raise an error. If raising an error is the right action, the TAM can do\n> that.\n\n\n\n> If the core code makes that decision, the TAM can't override, and that\n> paints TAM authors into a corner.\n\n\nThe core code has to decide what the template pattern code looks like,\nincluding what things it will provide and what it requires extensions to\nprovide. To a large extent, providing a consistent end-user experience is\nthe template's, and thus core code's, job.\n\n\n> How about a TAM that implements a write-once, read-many logic. You get\n> one multi-insert, and forever after you can't modify it (other than to drop\n> the table, or perhaps to truncate it).\n\n\nSo now the AM wants to ignore ALTER TABLE, INSERT, and DELETE commands.\n\n That's a completely made-up-on-the-spot example, but it's not entirely\n> without merit.\n>\n> In what sense does this made-up TAM conflict with mvcc and wal? It\n> doesn't have all the features of heap, but that's not the same thing as\n> violating mvcc or breaking wal.\n>\n>\nI am nowhere near informed enough to speak to the implementation details\nhere, and my imagination is probably lacking too, but I'll accept that the\ncurrent system does indeed make assumptions in the template design that are\nnow being seen as incorrect in light of new algorithms.\n\nBut you are basically proposing a reworking of the existing system into one\nthat makes pretty much any SQL Command something that a TAM can treat as\nbeing an optional request by the user; whereas today the system presumes\nthat the implementations will respond to these commands. And to make this\nchange without any core code having such a need. Or even a working\nextension that can be incorporated during development. 
And, as per the\nabove, all of this requires coming to some kind of agreement on the desired\nuser experience (I don't immediately accept the \"let the AM decide\" option).\n\nAnyway, that was mostly my attempt at Devil's Advocate.\nI was going to originally post that the template simply inspect whether the\nnew physical relation file, after the copy was requested, had a non-zero\nsize, and if so finish performing the swap the way we do today, otherwise\nbasically abort (or otherwise perform the minimal amount of catalog\nchanges) so the existing relation file continues to be pointed at.\nSomething to consider with a smaller API footprint than a gatekeeper hook.\n\nI think that all boils down to - it seems preferable to simply continue\nassuming all these commands are accepted, but figure out whether a \"no-op\"\nis a valid outcome and, if so, ensure there is a way to identify that no-op\nmeaningfully. While hopefully designing the surrounding code so that\nunnecessary work is not performed in front of a no-op. This seems\npreferable to spreading hooks throughout the code that basically ask \"do\nyou handle this SQL command?\". The specifics of the existing code may\ndictate otherwise.\n\nDavid J.\n\nOn Wed, Jun 15, 2022 at 11:23 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> On Jun 15, 2022, at 8:50 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> On Wed, Jun 15, 2022 at 8:18 PM Andres Freund <andres@anarazel.de> wrote:\n> > If a simple callback like\n> > relation_supports_cluster(Relation rel) is too simplistic\n> \n> Seems like it should be called: relation_supports_compaction[_by_removal_of_interspersed_dead_tuples]\n\nOk.\n\n> Basically, if the user tells the table to make itself smaller on disk by removing dead tuples, should we support the case where the Table AM says: \"Sorry, I cannot do that\"?\n\nI submit that's the only sane thing to do if the table AM already guarantees that the table will always be fully compacted.  
There is no justification for forcing the table contents to be copied without benefit.I accept that this is a valid outcome that should be accommodated for.\n\n> If yes, then naming the table explicitly should elicit an error.  Having the table chosen implicitly should provoke a warning.  For ALTER TABLE CLUSTER there should be an error: which makes the implicit CLUSTER command a non-factor.\n\nI'm basically fine with how you would design the TAM, but I'm going to argue again that the core project should not dictate these decisions.  The TAM's relation_supports_compaction() function can return true/false, or raise an error.  If raising an error is the right action, the TAM can do that.   If the core code makes that decision, the TAM can't override, and that paints TAM authors into a corner.The core code has to decide what the template pattern code looks like, including what things it will provide and what it requires extensions to provide.  To a large extent, providing a consistent end-user experience is the template's, and thus core code's, job. \nHow about a TAM that implements a write-once, read-many logic.  You get one multi-insert, and forever after you can't modify it (other than to drop the table, or perhaps to truncate it).So now the AM wants to ignore ALTER TABLE, INSERT, and DELETE commands.  That's a completely made-up-on-the-spot example, but it's not entirely without merit.\n\nIn what sense does this made-up TAM conflict with mvcc and wal?  
It doesn't have all the features of heap, but that's not the same thing as violating mvcc or breaking wal.I am nowhere near informed enough to speak to the implementation details here, and my imagination is probably lacking too, but I'll accept that the current system does indeed make assumptions in the template design that are now being seen as incorrect in light of new algorithms.But you are basically proposing a reworking of the existing system into one that makes pretty much any SQL Command something that a TAM can treat as being an optional request by the user; whereas today the system presumes that the implementations will respond to these commands.  And to make this change without any core code having such a need. Or even a working extension that can be incorporated during development.  And, as per the above, all of this requires coming to some kind of agreement on the desired user experience (I don't immediately accept the \"let the AM decide\" option).Anyway, that was mostly my attempt at Devil's Advocate.I was going to originally post that the template simply inspect whether the new physical relation file, after the copy was requested, had a non-zero size, and if so finish performing the swap the way we do today, otherwise basically abort (or otherwise perform the minimal amount of catalog changes) so the existing relation file continues to be pointed at.  Something to consider with a smaller API footprint than a gatekeeper hook.I think that all boils down to - it seems preferable to simply continue assuming all these commands are accepted, but figure out whether a \"no-op\" is a valid outcome and, if so, ensure there is a way to identify that no-op meaningfully.  While hopefully designing the surrounding code so that unnecessary work is not performed in front of a no-op.  This seems preferable to spreading hooks throughout the code that basically ask \"do you handle this SQL command?\".  
The specifics of the existing code may dictate otherwise.\n\nDavid J.", "msg_date": "Thu, 16 Jun 2022 00:28:48 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "\n\n> On Jun 16, 2022, at 12:28 AM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> But you are basically proposing a reworking of the existing system into one that makes pretty much any SQL Command something that a TAM can treat as being an optional request by the user;\n\nYes, and I think I'm perfectly correct in asking for that. If the standard you are proposing (albeit as Devil's Advocate) were applied to filesystems, nobody could ever implement /dev/null, on the argument that users have a reasonable expectation that data they write to a file will be there for them to read later. Yet Michael Paquier wrote a blackhole TAM, and although I don't find it terribly useful, I do think it's a reasonable thing for somebody to be able to write. \n\n> whereas today the system presumes that the implementations will respond to these commands.\n\nThat depends on what you mean by \"respond to\". A TAM which implements a tamper resistant audit log responds to update and delete commands with a \"permission denied\" error. A TAM which implements running aggregates implements insert commands by computing and inserting a new running aggregate value and reclaiming space from old running aggregate values when no transaction could any longer see them. You can do this stuff at a higher level with hooks, functions, triggers, and rules, inserting into a heap, and having to periodically vacuum, but why would you want to? That's almost guaranteed to be slower, maybe even orders of magnitude slower. 
\n\n> And to make this change without any core code having such a need.\n\nThe core code won't have any such need, because the core code is content with heap, and the API already accommodates heap. It seems Andres moved the project in the direction of allowing custom TAMs when he created the Table AM interface, and I'm quite pleased that he did so, but it doesn't allow nearly enough flexibility to do all the interesting things a TAM could otherwise do. Consider for example that the multi_insert hook uses a BulkInsertStateData parameter, defined as: \n\ntypedef struct BulkInsertStateData\n{ \n BufferAccessStrategy strategy; /* our BULKWRITE strategy object */\n Buffer current_buf; /* current insertion target page */\n} BulkInsertStateData; \n\nwhich is just the structure heap would want, but what about a TAM that wants to route different tuples to different pages? The \"current_buf\" isn't enough information, and there's no void *extra field, so you're just sunk.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 16 Jun 2022 09:10:06 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" }, { "msg_contents": "\n\n> On Jun 16, 2022, at 12:27 AM, Andres Freund <andres@anarazel.de> wrote:\n> \n>> I don't think I should have to do so. It's like saying, \"I think I should\n>> have freedom of speech\", and you say, \"well, I'm not sure about that; tell\n>> me what you want to say, and I'll decide if I'm going to let you say it\".'\n>> That's not freedom. I think TAM authors should have broad discretion over\n>> anything that the core system doesn't have a compelling interest in\n>> controlling.\n> \n> That's insultingly ridiculous. You can say, do whatever you want, but that\n> doesn't mean I have to be convinced by it (i.e. 
+1 adding an API) - that'd be\n> compelled speech, to go with your image...\n\nIndeed it would be compelled speech, and I'm not trying to compel you, only to convince you. And my apologies if it came across as insulting. I have a lot of respect for you, as do others at EDB, per invariably complimentary comments I've heard others express.\n\n> It's utterly normal to be asked what the use case for a new API is when\n> proposing one.\n\nIt seems like we're talking on two different levels. I've said what the use case is, which is to implement a TAM that doesn't benefit from cluster or vacuum full, without the overhead of needlessly copying itself, and without causing argumentless VACUUM FULL commands to fail. I'm *emphatically* not asking the community to accept the TAM back as a patch. The freedom I'm talking about is the freedom to design and implement such a third-party TAM without seeking community approval of the TAM's merits.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 16 Jun 2022 09:48:34 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Modest proposal to extend TableAM API for controlling cluster\n commands" } ]
[ { "msg_contents": "Hackers,\n\nI have extended the grammar to allow \"USING NOT method [, ...]\" to exclude one or more TAMs in a CREATE TABLE statement. This may sound like a weird thing to do, but it is surprisingly useful when developing new Table Access Methods, particularly when you are developing two or more, not just one. To explain:\n\nDeveloping a new TAM takes an awful lot of testing, and much of it is duplicative of the existing core regression test suite. Leveraging the existing tests saves an awful lot of test development.\n\nWhen developing just one TAM, leveraging the existing tests isn't too hard. Without much work*, you can set default_table_access_method=mytam for the duration of the check-world. You'll get a few test failures this way. Some will be in tests that probe the catalogs to verify that /heap/ is stored there, and instead /mytam/ is found. Others will be tests that are sensitive to the number of rows that fit per page, etc. But a surprising number of tests just pass, at least after you get the TAM itself debugged.\n\nWhen developing two or more TAMs, this falls apart. Some tests may be worth fixing up (perhaps with alternate output files) for \"mytam\", but not for \"columnar_tam\". That might be because the test is checking fundamentally row-store-ish properties of the table, which has no applicability to your column-store-ish TAM. In that case, \"USING NOT columnar_tam\" fixes the test failure when columnar is the default, without preventing the test from testing \"mytam\" when it happens to be the default.\n\nOnce you have enough TAMs developed and deployed, this USING NOT business becomes useful in production. You might have different defaults on different servers, or for different customers, etc., and for a given piece of DDL that you want to release you only want to say which TAMs not to use, not to nail down which TAM must be used.\n\nThoughts? 
I'll hold off posting a patch until the general idea is debated.\n\n\n[*] It takes some extra work to get the TAP tests to play along.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 18:16:21 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Extending USING [heap | mytam | yourtam] grammar and behavior" }, { "msg_contents": "On Wed, Jun 15, 2022 at 06:16:21PM -0700, Mark Dilger wrote:\n> When developing two or more TAMs, this falls apart.  Some tests may\n> be worth fixing up (perhaps with alternate output files) for\n> \"mytam\", but not for \"columnar_tam\".  That might be because the test\n> is checking fundamentally row-store-ish properties of the table,\n> which has no applicability to your column-store-ish TAM.  In that\n> case, \"USING NOT columnar_tam\" fixes the test failure when columnar\n> is the default, without preventing the test from testing \"mytam\"\n> when it happens to be the default. \n\nI think that it is very important for the in-core test suite to remain\ntransparent in terms of options used for table AMs (or compression),\nand this has improved a lot over the last years with options like\nHIDE_TABLEAM and HIDE_TOAST_COMPRESSION.  Things could have actually\nmore ORDER BY clauses to ensure more ordering of the results, as long\nas the tests don't want to stress a specific planning path.  However,\nyour problem is basically that you develop multiple AMs, but you want\nto have regression tests that do checks across more than one table AM\nat the same time.  Am I getting that right?  Why is a grammar\nextension necessary for what looks like a test structure problem when \nthere are interdependencies across multiple AMs developed?\n\n> Once you have enough TAMs developed and deployed, this USING NOT\n> business becomes useful in production. 
You might have different\n> defaults on different servers, or for different customers, etc., and\n> for a given piece of DDL that you want to release you only want to\n> say which TAMs not to use, not to nail down which TAM must be used. \n\nI am not sure to see why this would be something users would actually\nuse in prod. That means to pick up something else than what the\nserver thinks is the best default AM but where somebody does not want\nto trust the default, while generating an error if specifying the\ndefault AM in the USING NOT clause. On top of that\ndefault_table_access_method is user-settable.\n--\nMichael", "msg_date": "Thu, 16 Jun 2022 12:51:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extending USING [heap | mytam | yourtam] grammar and behavior" }, { "msg_contents": "On Wed, Jun 15, 2022 at 8:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On top of that\n> default_table_access_method is user-settable.\n>\n>\nFWIW this proposal acknowledges that and basically leverages it to the\nhilt, turning it into something like search_path. I strongly dislike the\nidea of any workflow that depends on a GUC in this manner. The fact that\nit is user-settable is, IMO, a flaw, not a feature, at least as far as\nproduction settings are concerned.\n\nIt is a novel API for PostgreSQL to rely upon setting a GUC then attaching\n\"unless\" configurations to individual objects to ignore it. And what would\nbe chosen (ultimately fallback is heap?), or whether it would simply error,\nis presently, as you say, undefined.\n\nIn production this general behavior becomes useful only under the condition\nthat among the various named access methods some of them don't even exist\non the server in question, but that a fallback option would be acceptable\nin that case. 
But that suggests extending \"USING\" to accept\nmultiple names, not inventing a \"NOT USING\".\n\nThat all said, I can understand that testing presents its own special\nneeds.  But testing is probably where GUCs shine.  So why not implement\nthis capability as a GUC that is set just before the table is created\ninstead of extending the grammar for it?  Add it to \"developer options\" and\ncall it a day.  Dump/Restore no longer has to care about it, and its value\nonce the table exists is basically zero anyway.\n\nDavid J.", "msg_date": "Wed, 15 Jun 2022 21:33:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extending USING [heap | mytam | yourtam] grammar and behavior" }, { "msg_contents": "\n\n> On Jun 15, 2022, at 8:51 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> However,\n> your problem is basically that you develop multiple AMs, but you want\n> to have regression tests that do checks across more than one table AM\n> at the same time.\n\nIt is true that I test multiple table AMs at the same time, but that's a somewhat different concern.\n\n> Am I getting that right?\n\nNot exactly.\n\n> Why is a grammar\n> extension necessary for what looks like a test structure problem when \n> there are interdependencies across multiple AMs developped?\n\nOk, I didn't want to get into my exact process, because it involves other changes that I don't expect -hackers to want. But basically what I do is:\n\n./configure --with-default-tam=chicago && make && make check-world\n\nThat fails for a few tests, and I manually change the create table statements in tests that are not chicago-compatible to \"using not chicago\". Then\n\n./configure --with-default-tam=detroit && make && make check-world\n\nThat fails for some other set of tests, but note that the tests with \"using not chicago\" are still using detroit in this second run. That wouldn't be true if I'd fixed up the tests in the first run \"using heap\".\n\nThen I can also add my own tests which might make some chicago backed tables plus some detroit backed tables and see how they interact. 
But that's superfluous to the issue of just trying to leverage the existing tests as much as I can without having to reinvent tests to cover \"chicago\", and then reinvent again to cover \"detroit\", and so forth.\n\nIf you develop enough TAMs in parallel, and go with the \"using heap\" solution, you eventually have zero coverage for any of the TAMs, because you'll eventually be \"using heap\" in all the tables of all the tests.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 22:08:00 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Extending USING [heap | mytam | yourtam] grammar and behavior" }, { "msg_contents": "\n\n> On Jun 15, 2022, at 8:51 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> I am not sure to see why this would be something users would actually\n> use in prod. That means to pick up something else than what the\n> server thinks is the best default AM but where somebody does not want\n> to trust the default, while generating an error if specifying the\n> default AM in the USING NOT clause.\n\nSorry for the lack of clarity. I do not suggest raising an error. If you say \"USING NOT foo\", and foo is the default table access method, then you get the same behavior as a \"USING heap\" would have gotten you, otherwise, you get the same behavior as not providing any USING clause at all.\n\nIn future, we might want to create a list of fallback tams rather than just hardcoding \"heap\" as the one and only fallback, but I haven't run into an actual need for that. If you're wondering what \"USING NOT heap\" falls back to, I think that could error, or it could just use heap anyway. Whatever. 
That's why I'm still soliciting for comments at this phase rather than posting a patch.\n\n> On top of that\n> default_table_access_method is user-settable.\n\nYeah, but specifying a \"USING foo\" clause is also open to any user, so I don't see why this matters. \"USING NOT foo\" is just shorthand for checking the current default_table_access_method, and then either appending a \"USING heap\" clause or appending no clause. Since the user can do this anyway, what's the security implication in some syntactic sugar?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Jun 2022 22:49:15 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Extending USING [heap | mytam | yourtam] grammar and behavior" } ]
[ { "msg_contents": "FYI, I happened to notice in the PG docs there are a few integer\nconfiguration parameters that describe themselves as type \"(int)\"\ninstead of \"(integer)\".\n\nIt looked like a small mistake to me; there are only 3 of (int),\nversus 148 of (integer).\n\n~~~\n\ndoc\\src\\sgml\\auth-delay.sgml:\n 29 <term>\n 30: <varname>auth_delay.milliseconds</varname> (<type>int</type>)\n 31 <indexterm>\n\ndoc\\src\\sgml\\config.sgml:\n 4918 <varlistentry id=\"guc-max-logical-replication-workers\"\nxreflabel=\"max_logical_replication_workers\">\n 4919: <term><varname>max_logical_replication_workers</varname>\n(<type>int</type>)\n 4920 <indexterm>\n\ndoc\\src\\sgml\\pgprewarm.sgml:\n 109 <term>\n 110: <varname>pg_prewarm.autoprewarm_interval</varname>\n(<type>int</type>)\n 111 <indexterm>\n\n~~~\n\nPSA a small patch to correct those.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 16 Jun 2022 19:22:15 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "PGDOCS - Integer configuration parameters should say \"(integer)\"" }, { "msg_contents": "On Thu, Jun 16, 2022 at 07:22:15PM +1000, Peter Smith wrote:\n> It looked like a small mistake to me; there are only 3 of (int),\n> versus 148 of (integer).\n\nGrepping around, that's correct. Will fix.\n--\nMichael", "msg_date": "Thu, 16 Jun 2022 20:59:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PGDOCS - Integer configuration parameters should say \"(integer)\"" } ]
[ { "msg_contents": "Dear PostgreSQL Developers,\n\nI'm currently working on a GiST extension (a new index structure) for PostgreSQL\nand I want to make it as customizable as I can. To achieve my goal I'm trying to take\nadvantage of the options GiST support function to provide extra parameters to the\noperator class.\n\nBecause I'm creating a new index structure, I've also developed new operators\nwhere I want to access the value of the operator class parameters as well. My main\nproblem is that I can't do that, because the parameters are only accessible from the\nregistered GiST support functions through specific macros.\n\nTo solve the problem, I've tried to use global variables but it was very inconsistent\nbecause of the complex memory management of the whole system (also, I'm not as\ngreat in C programming as I want to be).\n\nCould you please help, by telling me whether there is any way to store and access\nvalues globally in PostgreSQL? I want to store these values in a way that is not\naffected by restarting the database server or maybe the whole computer.\n\nI would really appreciate your help. Thanks in advance!\n\nBest regards,\nZsolt", "msg_date": "Thu, 16 Jun 2022 15:00:10 +0000", "msg_from": "=?iso-8859-2?Q?Sajti_Zsolt_Zolt=E1n?= <qnwbq9@inf.elte.hu>", "msg_from_op": true, "msg_subject": "Global variable/memory context for PostgreSQL functions" } ]
[ { "msg_contents": "Hi,\nI'm reading the docs (I'm trying to figure out some replication\nthings) and I was wondering why the file references [1] don't match\nthe file names.\n\nMost of the inconsistent items are for `obsolete-*` where the filename\nis actually `appendix-obsolete-*`. But, oddly, afaict, they were\nintroduced with these inconsistent names.\n\nIn one of those cases, the base of the file is also wrong (pgxlogdump\n[2] vs. pgreceivexlog [3]). I believe this was an api change between\n9.3 and 9.4. I know that there are `id=` tags designed to catch old\nreferences, but the comments don't seem to serve that purpose, if they\nare, I was wondering if an additional comment explaining their\ndiscrepancies would be warranted.\n\nIn one case, it's just a missing `-` (`backupmanifest.sgml` vs\n`backup-manifest.sgml`) which feels accidental.\n\n(I do have more technical questions about the docs, but I think I may\ntry a different venue to ask them.)\n\nThanks,\n\n[1] https://github.com/jsoref/postgres/commit/sgml-doc-file-refs\n[2] https://www.postgresql.org/docs/9.3/pgxlogdump.html\n[3] https://www.postgresql.org/docs/9.4/app-pgreceivexlog.html", "msg_date": "Thu, 16 Jun 2022 13:30:19 -0400", "msg_from": "Josh Soref <jsoref@gmail.com>", "msg_from_op": true, "msg_subject": "SGML doc file references" }, { "msg_contents": "On 16.06.22 19:30, Josh Soref wrote:\n> I'm reading the docs (I'm trying to figure out some replication\n> things) and I was wondering why the file references [1] don't match\n> the file names.\n\nI think it was never a goal to absolutely make them match all the time, \nso a lot of the differences might be accidental. 
There are also some \ntooling restrictions for what characters can be in the output file names.\n\n\n", "msg_date": "Thu, 16 Jun 2022 22:04:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SGML doc file references" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> I think it was never a goal to absolutely make them match all the time,\n> so a lot of the differences might be accidental.\n\nok, are they worth fixing? It seems like it'd make sense for files to\nproperly reference other files so that humans don't have to go looking\nfor files that don't exist...\n\n> There are also some tooling restrictions for what characters can be in the output file names.\n\nI don't think that this applies to the changes I suggested in the\npatch I attached in my initial email.\n\n\n", "msg_date": "Fri, 17 Jun 2022 13:52:21 -0400", "msg_from": "Josh Soref <jsoref@gmail.com>", "msg_from_op": true, "msg_subject": "Re: SGML doc file references" }, { "msg_contents": "On 17.06.22 19:52, Josh Soref wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>> I think it was never a goal to absolutely make them match all the time,\n>> so a lot of the differences might be accidental.\n> \n> ok, are they worth fixing?\n\nThat would require renaming either the output files or the input files, \nand people would really not like either one.\n\n\n", "msg_date": "Fri, 17 Jun 2022 21:21:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SGML doc file references" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 17.06.22 19:52, Josh Soref wrote:\n>> ok, are they worth fixing?\n\n> That would require renaming either the output files or the input files, \n> and people would really not like either one.\n\nAgreed that renaming those files is not desirable, but the 
presented\npatch was only fixing erroneous/obsolete comments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jun 2022 15:33:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SGML doc file references" }, { "msg_contents": "On 17.06.22 21:33, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 17.06.22 19:52, Josh Soref wrote:\n>>> ok, are they worth fixing?\n> \n>> That would require renaming either the output files or the input files,\n>> and people would really not like either one.\n> \n> Agreed that renaming those files is not desirable, but the presented\n> patch was only fixing erroneous/obsolete comments.\n\nYeah, I had totally misinterpreted what was being proposed. Of course, \nthe patch is most sensible. Committed.\n\n\n", "msg_date": "Mon, 20 Jun 2022 14:37:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SGML doc file references" } ]
[ { "msg_contents": "libpq contains a lot of\n\n if (foo)\n free(foo);\n\ncalls, where the \"if\" part is unnecessary. This is of course pretty \nharmless, but some functions like scram_free() and freePGconn() have \nbecome so bulky that it becomes annoying. So while I was doing some \nwork in that area I undertook to simplify this.", "msg_date": "Thu, 16 Jun 2022 22:07:33 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "libpq: Remove redundant null pointer checks before free()" }, { "msg_contents": "On Thu, Jun 16, 2022 at 10:07:33PM +0200, Peter Eisentraut wrote:\n> calls, where the \"if\" part is unnecessary. This is of course pretty\n> harmless, but some functions like scram_free() and freePGconn() have become\n> so bulky that it becomes annoying. So while I was doing some work in that\n> area I undertook to simplify this.\n\nSeems fine. Would some of the buildfarm dinosaurs hiccup on that?\ngaur is one that comes into mind. \n--\nMichael", "msg_date": "Fri, 17 Jun 2022 12:25:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: libpq: Remove redundant null pointer checks before free()" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Jun 16, 2022 at 10:07:33PM +0200, Peter Eisentraut wrote:\n>> calls, where the \"if\" part is unnecessary. This is of course pretty\n>> harmless, but some functions like scram_free() and freePGconn() have become\n>> so bulky that it becomes annoying. So while I was doing some work in that\n>> area I undertook to simplify this.\n\n> Seems fine. Would some of the buildfarm dinosaurs hiccup on that?\n> gaur is one that comes into mind. \n\nDoubt it. 
(In any case, gaur/pademelon are unlikely to be seen\nagain after a hardware failure --- I'm working on resurrecting that\nmachine using modern NetBSD on an external drive, but its HPUX\ninstallation probably isn't coming back.)\n\nPOSIX has required free(NULL) to be a no-op since at least SUSv2 (1997).\nEven back then, the machines that failed on it were legacy devices,\nlike then-decade-old SunOS versions. So I don't think that Peter's\nproposal has any portability risk today.\n\nHaving said that, the pattern \"if (x) free(x);\" is absolutely\nubiquitous across our code, and so I'm not sure that I'm on\nboard with undoing it only in libpq. I'd be happier if we made\na push to get rid of it everywhere. Notably, I think the choice\nthat pfree(NULL) is disallowed traces directly to worries about\ncoding-pattern-compatibility with pre-POSIX free(). Should we\nrevisit that?\n\nIndependently of that concern, how much of a back-patch hazard\nmight we create with such changes?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jun 2022 01:11:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq: Remove redundant null pointer checks before free()" }, { "msg_contents": "On 17.06.22 05:25, Michael Paquier wrote:\n> On Thu, Jun 16, 2022 at 10:07:33PM +0200, Peter Eisentraut wrote:\n>> calls, where the \"if\" part is unnecessary. This is of course pretty\n>> harmless, but some functions like scram_free() and freePGconn() have become\n>> so bulky that it becomes annoying. So while I was doing some work in that\n>> area I undertook to simplify this.\n> Seems fine. Would some of the buildfarm dinosaurs hiccup on that?\n> gaur is one that comes into mind.\n\nI'm pretty sure PostgreSQL code already depends on this behavior anyway. 
\n The code just isn't consistent about it.\n\n\n", "msg_date": "Fri, 17 Jun 2022 21:03:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq: Remove redundant null pointer checks before free()" }, { "msg_contents": "On 17.06.22 07:11, Tom Lane wrote:\n> Having said that, the pattern \"if (x) free(x);\" is absolutely\n> ubiquitous across our code, and so I'm not sure that I'm on\n> board with undoing it only in libpq. I'd be happier if we made\n> a push to get rid of it everywhere.\n\nSure, here is a more comprehensive patch set. (It still looks like \nlibpq is the largest chunk.)\n\n> Notably, I think the choice\n> that pfree(NULL) is disallowed traces directly to worries about\n> coding-pattern-compatibility with pre-POSIX free(). Should we\n> revisit that?\n\nYes please, and also repalloc().", "msg_date": "Fri, 17 Jun 2022 21:07:58 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq: Remove redundant null pointer checks before free()" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 17.06.22 07:11, Tom Lane wrote:\n>> Notably, I think the choice\n>> that pfree(NULL) is disallowed traces directly to worries about\n>> coding-pattern-compatibility with pre-POSIX free(). 
Should we\n>> revisit that?\n\n> Yes please, and also repalloc().\n\nrepalloc no, because you wouldn't know which context to do the\nallocation in.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jun 2022 15:31:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq: Remove redundant null pointer checks before free()" }, { "msg_contents": "On Fri, Jun 17, 2022 at 09:03:23PM +0200, Peter Eisentraut wrote:\n> I'm pretty sure PostgreSQL code already depends on this behavior anyway.\n> The code just isn't consistent about it.\n\nIn the frontend, I'd like to think that you are right and that we have\nalready some places doing that. The backend is a different story,\nlike in GetMemoryChunkContext(). That should be easy enough to check\nwith some LD_PRELOAD wizardry, at least.\n--\nMichael", "msg_date": "Sat, 18 Jun 2022 12:17:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: libpq: Remove redundant null pointer checks before free()" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Jun 17, 2022 at 09:03:23PM +0200, Peter Eisentraut wrote:\n>> I'm pretty sure PostgreSQL code already depends on this behavior anyway.\n>> The code just isn't consistent about it.\n\n> In the frontend, I'd like to think that you are right and that we have\n> already some places doing that.\n\nWe do, indeed.\n\n> The backend is a different story,\n> like in GetMemoryChunkContext(). That should be easy enough to check\n> with some LD_PRELOAD wizardry, at least.\n\nHuh? The proposal is to accept the fact that free() tolerates NULL,\nand then maybe make pfree() tolerate it as well. 
I don't think that\nthat needs to encompass any other functions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jun 2022 23:45:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq: Remove redundant null pointer checks before free()" }, { "msg_contents": "On 2022-Jun-17, Peter Eisentraut wrote:\n\n> From 355ef1a68be690d9bb8ee0524226abd648733ce0 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Fri, 17 Jun 2022 12:09:32 +0200\n> Subject: [PATCH v2 3/3] Remove redundant null pointer checks before PQclear\n> and PQconninfofree\n> \n> These functions already had the free()-like behavior of handling NULL\n> pointers as a no-op. But it wasn't documented, so add it explicitly\n> to the documentation, too.\n\nFor PQclear() specifically, one thing that I thought a few days ago\nwould be useful would to have it return (PGresult *) NULL. Then the\nvery common pattern of doing \"PQclear(res); res = NULL;\" could be\nsimplified to \"res = PQclear(res);\", which is nicely compact and is\nlearned instantly.\n\nI've not seen this convention used anywhere else though, and I'm not\nsure I'd advocate it for other functions where we use similar patterns\nsuch as pfree/pg_free, so perhaps it'd become too much of a special\ncase.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 19 Jun 2022 11:55:33 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: libpq: Remove redundant null pointer checks before free()" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> For PQclear() specifically, one thing that I thought a few days ago\n> would be useful would to have it return (PGresult *) NULL. 
Then the\n> very common pattern of doing \"PQclear(res); res = NULL;\" could be\n> simplified to \"res = PQclear(res);\", which is nicely compact and is\n> learned instantly.\n\nThat's a public API unfortunately, and so some people would demand\na libpq.so major version bump if we changed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 Jun 2022 13:38:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq: Remove redundant null pointer checks before free()" } ]
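[Editor's note: the simplification discussed in the thread above can be illustrated with a short, self-contained sketch. C99 (7.20.3.2) guarantees that free(NULL) is a no-op, which is what makes the "if (x) free(x);" guard redundant. The conn_opts struct and helper names below are invented for illustration; they are not libpq code.]

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical connection-options struct; names are invented and are
 * not libpq's.  Because free(NULL) is defined to do nothing, the
 * "if (x) free(x);" guard adds no safety.
 */
typedef struct conn_opts
{
	char	   *host;			/* may be NULL if never set */
	char	   *dbname;			/* may be NULL if never set */
} conn_opts;

/* Old style: redundant NULL guards before free(). */
static void
opts_clear_guarded(conn_opts *o)
{
	if (o->host)
		free(o->host);
	if (o->dbname)
		free(o->dbname);
	o->host = NULL;
	o->dbname = NULL;
}

/* New style: rely on free(NULL) being a no-op. */
static void
opts_clear(conn_opts *o)
{
	free(o->host);
	free(o->dbname);
	o->host = NULL;
	o->dbname = NULL;
}

/* Returns 1 if both styles behave identically on a half-filled struct. */
static int
demo(void)
{
	conn_opts	o;

	o.host = malloc(16);
	assert(o.host != NULL);
	strcpy(o.host, "localhost");
	o.dbname = NULL;			/* never set */

	opts_clear(&o);				/* frees host, silently skips NULL dbname */
	opts_clear_guarded(&o);		/* everything already NULL: still a no-op */

	return o.host == NULL && o.dbname == NULL;
}
```

[As the thread notes, backend pfree() is a different matter: it currently disallows NULL, so this rewrite only applies where plain free(), or a frontend function documented to accept NULL such as PQclear(), is in use.]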
[ { "msg_contents": "While browsing through some of the clock-sweep code I noticed that the \nonly place StrategyNotifyBgWriter() is called now is in \nBackgroundWriterMain()[1]. Presumably this isn't what's desired. If \nnothing else, it means the function's description comment is wrong, as \nare comments in BackgroundWriterMain(). This isn't new; 9.2 shows the \nsame thing and that's when the function was added. I'm not sure what the \nright fix here is, since ISTM joggling bgwriter for every call to \nBufferAlloc() would be overkill.\n\n\n1: \nhttps://doxygen.postgresql.org/freelist_8c.html#aabbd7d3891afc1d8531c3871d08d4b28\n\n\n\n", "msg_date": "Thu, 16 Jun 2022 17:30:14 -0500", "msg_from": "Jim Nasby <nasbyj@amazon.com>", "msg_from_op": true, "msg_subject": "Nothing is using StrategyNotifyBgWriter() anymore" }, { "msg_contents": "Answering my own question... I now see that the wakeup does in fact happen in StrategyGetBuffer(). Sorry for the noise.\r\n\r\nOn 6/16/22, 5:32 PM, \"Jim Nasby\" <nasbyj@amazon.com> wrote:\r\n\r\n While browsing through some of the clock-sweep code I noticed that the \r\n only place StrategyNotifyBgWriter() is called now is in \r\n BackgroundWriterMain()[1]. Presumably this isn't what's desired. If \r\n nothing else, it means the function's description comment is wrong, as \r\n are comments in BackgroundWriterMain(). This isn't new; 9.2 shows the \r\n same thing and that's when the function was added. I'm not sure what the \r\n right fix here is, since ISTM joggling bgwriter for every call to \r\n BufferAlloc() would be overkill.\r\n\r\n\r\n 1: \r\n https://doxygen.postgresql.org/freelist_8c.html#aabbd7d3891afc1d8531c3871d08d4b28\r\n\r\n\r\n\r\n\r\n", "msg_date": "Thu, 16 Jun 2022 22:53:27 +0000", "msg_from": "\"Nasby, Jim\" <nasbyj@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Nothing is using StrategyNotifyBgWriter() anymore" } ]
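[Editor's note: the conclusion of the thread above — that the wakeup happens inside StrategyGetBuffer() rather than on every BufferAlloc() — can be sketched as a one-shot registration protocol. This is a single-threaded toy analogue, not the PostgreSQL code: the real implementation publishes the bgwriter's identity in shared memory via StrategyNotifyBgWriter() and wakes it with SetLatch(); all names below are invented.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for a process latch; invented for illustration. */
typedef struct toy_latch
{
	bool		is_set;
} toy_latch;

/* Like the bgwriter registration kept in the strategy control state. */
static toy_latch *registered_latch = NULL;

/*
 * Analogue of StrategyNotifyBgWriter(): the background writer publishes
 * where wakeups should be sent.
 */
static void
toy_notify_bgwriter(toy_latch *latch)
{
	registered_latch = latch;
}

/*
 * Analogue of the wakeup inside StrategyGetBuffer(): only the first
 * buffer request after registration pays the cost of a wakeup; the
 * registration is then cleared, so later requests skip it entirely
 * until the background writer re-registers.
 */
static bool
toy_get_buffer(void)
{
	if (registered_latch != NULL)
	{
		registered_latch->is_set = true;	/* like SetLatch() */
		registered_latch = NULL;			/* one-shot */
		return true;
	}
	return false;				/* common case: just a NULL test */
}
```

[This one-shot shape is why "joggling bgwriter for every call to BufferAlloc()" isn't needed: after the first post-registration allocation, the per-allocation cost is a single pointer comparison.]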
[ { "msg_contents": "Hi,\n\nMark Callaghan reported a regression in 15, in the post linked here (with\ncomments in the twitter thread, hence linked here)\nhttps://twitter.com/MarkCallaghanDB/status/1537475430227161098\n\nA differential flame graph shows increased time spent doing memory\nallocations, below ExecInitExpr():\nhttps://mdcallag.github.io/reports/22_06_06_ibench.20m.pg.all/l.i0.0.143.15b1.n.svg\n\nQuite the useful comparison.\n\n\nThat lead to me to look at the size of ExprEvalStep - and indeed, it has\n*exploded* in 15. 10-13: 64 bytes, 14: 320 bytes.\n\nThe comment above the union for data fields says:\n\t/*\n\t * Inline data for the operation. Inline data is faster to access, but\n\t * also bloats the size of all instructions. The union should be kept to\n\t * no more than 40 bytes on 64-bit systems (so that the entire struct is\n\t * no more than 64 bytes, a single cacheline on common systems).\n\t */\n\nHowever, jsonexpr/EEOP_JSONEXPR is 296 bytes, and\nhashedscalararrayop/EEOP_HASHED_SCALARARRAYOP is 64 bytes, even though the\nlimit is 40 bytes.\n\nThe EEOP_JSONEXPR stuff was added during 15 development in:\n\ncommit 1a36bc9dba8eae90963a586d37b6457b32b2fed4\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: 2022-03-03 13:11:14 -0500\n\n SQL/JSON query functions\n\n\nthe EEOP_HASHED_SCALARARRAYOP stuff was added during 14 development in:\n\ncommit 50e17ad281b8d1c1b410c9833955bc80fbad4078\nAuthor: David Rowley <drowley@postgresql.org>\nDate: 2021-04-08 23:51:22 +1200\n\n Speedup ScalarArrayOpExpr evaluation\n\n\nUnfortunately ExprEvalStep is public, so I don't think we can fix the 14\nregression. If somebody installed updated server packages while the server is\nrunning, we could end up loading extensions referencing ExprEvalStep (at least\nplpgsql and LLVMJIT).\n\nIt's not great to have an ABI break at this point of the 15 cycle, but I don't\nthink we have a choice. 
Exploding the size of ExprEvalStep by ~4x is bad -\nboth for memory overhead (expressions often have dozens to hundreds of steps)\nand expression evaluation performance.\n\n\nThe Hashed SAO case can perhaps be squeezed sufficiently to fit inline, but\nclearly that's not going to happen for the json case. So we should just move\nthat out of line.\n\n\nMaybe it's worth sticking a StaticAssert() for the struct size somewhere. I'm\na bit wary about that being too noisy, there are some machines with odd\nalignment requirements. Perhaps worth restricting the assertion to x86-64 +\narmv8 or such?\n\n\nIt very well might be that this isn't the full explanation of the regression\nMark observed. E.g. the difference in DecodeDateTime() looks more likely to be\ncaused by 591e088dd5b - but we need to fix the above issue, whether it's the\ncause of the regression Mark observed or not.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Jun 2022 16:31:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> However, jsonexpr/EEOP_JSONEXPR is 296 bytes, and\n> hashedscalararrayop/EEOP_HASHED_SCALARARRAYOP is 64 bytes, even though the\n> limit is 40 bytes.\n\nOops.\n\n> Maybe it's worth sticking a StaticAssert() for the struct size\n> somewhere.\n\nIndeed. I thought we had one already.\n\n> I'm a bit wary about that being too noisy, there are some machines with\n> odd alignment requirements. 
Perhaps worth restricting the assertion to\n> x86-64 + armv8 or such?\n\nI'd put it in first and only reconsider if it shows unfixable problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Jun 2022 19:37:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-06-16 16:31:30 -0700, Andres Freund wrote:\n> The EEOP_JSONEXPR stuff was added during 15 development in:\n>\n> commit 1a36bc9dba8eae90963a586d37b6457b32b2fed4\n> Author: Andrew Dunstan <andrew@dunslane.net>\n> Date: 2022-03-03 13:11:14 -0500\n>\n> SQL/JSON query functions\n\nI'm quite confused about part of the struct definition of this:\n\n\t\t\tstruct JsonCoercionsState\n\t\t\t{\n\t\t\t\tstruct JsonCoercionState\n\t\t\t\t{\n\t\t\t\t\tJsonCoercion *coercion; /* coercion expression */\n\t\t\t\t\tExprState *estate; /* coercion expression state */\n\t\t\t\t}\t\t\tnull,\n\t\t\t\t\t\t\tstring,\n\t\t\t\tnumeric ,\n\t\t\t\t\t\t\tboolean,\n\t\t\t\t\t\t\tdate,\n\t\t\t\t\t\t\ttime,\n\t\t\t\t\t\t\ttimetz,\n\t\t\t\t\t\t\ttimestamp,\n\t\t\t\t\t\t\ttimestamptz,\n\t\t\t\t\t\t\tcomposite;\n\t\t\t}\t\t\tcoercions;\t/* states for coercion from SQL/JSON item\n\t\t\t\t\t\t\t\t\t * types directly to the output type */\n\nWhy on earth do we have coercion state for all these different types? 
That\nreally adds up:\n\n struct {\n JsonExpr * jsexpr; /* 24 8 */\n struct {\n FmgrInfo func; /* 32 48 */\n /* --- cacheline 1 boundary (64 bytes) was 16 bytes ago --- */\n Oid typioparam; /* 80 4 */\n } input; /* 32 56 */\n\n /* XXX last struct has 4 bytes of padding */\n\n NullableDatum * formatted_expr; /* 88 8 */\n NullableDatum * res_expr; /* 96 8 */\n NullableDatum * coercion_expr; /* 104 8 */\n NullableDatum * pathspec; /* 112 8 */\n ExprState * result_expr; /* 120 8 */\n /* --- cacheline 2 boundary (128 bytes) --- */\n ExprState * default_on_empty; /* 128 8 */\n ExprState * default_on_error; /* 136 8 */\n List * args; /* 144 8 */\n void * cache; /* 152 8 */\n struct JsonCoercionsState coercions; /* 160 160 */\n } jsonexpr; /* 24 296 */\n\nAnd why is FmgrInfo stored inline in the struct? Everything else just stores\npointers to FmgrInfo.\n\n\nNow that I look at this: It's a *bad* idea to have subsidiary ExprState inside\nan ExprState. Nearly always the correct thing to do is to build those\nexpressions. There's plenty memory and evaluation overhead in jumping to a\ndifferent expression. 
And I see no reason for doing it that way here?\n\nThis stuff doesn't look ready.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Jun 2022 17:16:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "On Fri, 17 Jun 2022 at 11:31, Andres Freund <andres@anarazel.de> wrote:\n> hashedscalararrayop/EEOP_HASHED_SCALARARRAYOP is 64 bytes, even though the\n> limit is 40 bytes.\n\n> commit 50e17ad281b8d1c1b410c9833955bc80fbad4078\n> Author: David Rowley <drowley@postgresql.org>\n> Date: 2021-04-08 23:51:22 +1200\n>\n> Speedup ScalarArrayOpExpr evaluation\n\nI've put together the attached patch which removes 4 fields from the\nhashedscalararrayop portion of the struct which, once the JSON part is\nfixed, will put sizeof(ExprEvalStep) back down to 64 bytes again.\n\nThe attached patch causes some extra pointer dereferencing to perform\na hashed saop step, so I tested the performance on f4fb45d15 (prior to\nthe JSON patch that pushed the sizeof(ExprEvalStep) up further. I\nfound:\n\nsetup:\ncreate table a (a int);\ninsert into a select x from generate_series(1000000,2000000) x;\n\nbench.sql\nselect * from a where a in(1,2,3,4,5,6,7,8,9,10);\n\nf4fb45d15 + reduce_sizeof_hashedsaop_ExprEvalStep.patch\ndrowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 -M prepared postgres\ntps = 44.841851 (without initial connection time)\ntps = 44.986472 (without initial connection time)\ntps = 44.944315 (without initial connection time)\n\nf4fb45d15\ndrowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 -M prepared postgres\ntps = 44.446127 (without initial connection time)\ntps = 44.614134 (without initial connection time)\ntps = 44.895011 (without initial connection time)\n\n(Patched is ~0.61% faster here)\n\nSo, there appears to be no performance regression due to the extra\nindirection. 
There's maybe even some gains due to the smaller step\nsize.\n\nDavid", "msg_date": "Fri, 17 Jun 2022 14:14:54 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Thu, Jun 16, 2022 at 7:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> So, there appears to be no performance regression due to the extra\n> indirection. There's maybe even some gains due to the smaller step\n> size.\n\nHave you tried this with the insert benchmark [1]?\n\nI've run it myself in the past (when working on B-Tree deduplication).\nIt's quite straightforward to set up and run.\n\n[1] http://smalldatum.blogspot.com/2017/06/the-insert-benchmark.html\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 16 Jun 2022 20:33:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Fri, 17 Jun 2022 at 15:33, Peter Geoghegan <pg@bowt.ie> wrote:\n> Have you tried this with the insert benchmark [1]?\n\nI was mostly focusing on the performance of the hashed saop feature\nafter having removed the additional fields that pushed ExprEvalStep\nover 64 bytes in 14.\n\nI agree it would be good to do further benchmarking to see if there's\nanything else that's snuck into 15 that's slowed that benchmark down,\nbut we can likely work on that after we get the ExprEvalStep size back\nto 64 bytes again.\n\nDavid\n\n\n", "msg_date": "Fri, 17 Jun 2022 16:53:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-06-17 16:53:31 +1200, David Rowley wrote:\n> On Fri, 17 Jun 2022 at 15:33, Peter Geoghegan <pg@bowt.ie> wrote:\n> > Have you tried this with the insert benchmark [1]?\n> \n> I 
was mostly focusing on the performance of the hashed saop feature\n> after having removed the additional fields that pushed ExprEvalStep\n> over 64 bytes in 14.\n> \n> I agree it would be good to do further benchmarking to see if there's\n> anything else that's snuck into 15 that's slowed that benchmark down,\n> but we can likely work on that after we get the ExprEvalStep size back\n> to 64 bytes again.\n\nI did reproduce a regression between 14 and 15, using both pgbench -Mprepared\n-S (scale 1) and TPC-H Q01 (scale 5). Between 7-10% - not good, particularly\nthat that's not been found so far. Fixing the json size issue gets that down\nto ~2%. Not sure what that's caused by yet.\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 16 Jun 2022 22:22:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nOn 2022-06-16 22:22:28 -0700, Andres Freund wrote:\n> On 2022-06-17 16:53:31 +1200, David Rowley wrote:\n> > On Fri, 17 Jun 2022 at 15:33, Peter Geoghegan <pg@bowt.ie> wrote:\n> > > Have you tried this with the insert benchmark [1]?\n> >\n> > I was mostly focusing on the performance of the hashed saop feature\n> > after having removed the additional fields that pushed ExprEvalStep\n> > over 64 bytes in 14.\n> >\n> > I agree it would be good to do further benchmarking to see if there's\n> > anything else that's snuck into 15 that's slowed that benchmark down,\n> > but we can likely work on that after we get the ExprEvalStep size back\n> > to 64 bytes again.\n>\n> I did reproduce a regression between 14 and 15, using both pgbench -Mprepared\n> -S (scale 1) and TPC-H Q01 (scale 5). Between 7-10% - not good, particularly\n> that that's not been found so far. Fixing the json size issue gets that down\n> to ~2%. 
Not sure what that's caused by yet.\n\nThe remaining difference looks like it's largely caused by the\nenable_timeout_after(IDLE_STATS_UPDATE_TIMEOUT, ...) introduced as part of the\npgstats patch. It's only really visible when I pin a single connection pgbench\nto the same CPU core as the server (which gives a ~16% boost here).\n\nIt's not the timeout itself - that we amortize nicely (via 09cf1d522). It's\nthat enable_timeout_after() does a GetCurrentTimestamp().\n\nNot sure yet what the best way to fix that is.\n\nWe could just leave the timer active and add some gating condition indicating\nidleness to the IdleStatsUpdateTimeoutPending body in ProcessInterrupts()?\n\nOr we could add a timeout.c API that specifies the timeout?\npgstat_report_stat() uses GetCurrentTransactionStopTimestamp(), it seems like\nit'd make sense to use the same for arming the timeout?\n\n- Andres\n\n\n", "msg_date": "Thu, 16 Jun 2022 23:24:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "At Thu, 16 Jun 2022 23:24:56 -0700, Andres Freund <andres@anarazel.de> wrote in \n> The remaining difference looks like it's largely caused by the\n> enable_timeout_after(IDLE_STATS_UPDATE_TIMEOUT, ...) introduced as part of the\n> pgstats patch. It's only really visible when I pin a single connection pgbench\n> to the same CPU core as the server (which gives a ~16% boost here).\n> \n> It's not the timeout itself - that we amortize nicely (via 09cf1d522). 
It's\n> that enable_timeout_after() does a GetCurrentTimestamp().\n> \n> Not sure yet what the best way to fix that is.\n> \n> We could just leave the timer active and add some gating condition indicating\n> idleness to the IdleStatsUpdateTimeoutPending body in ProcessInterrupts()?\n> \n> Or we could add a timeout.c API that specifies the timeout?\n\nI sometimes wanted this, But I don't see a simple way to sort multiple\nrelative timeouts in absolute time order. Maybe we can skip\nGetCurrentTimestamp only when inserting the first timeout, but I don't\nthink it benefits this case.\n\n> pgstat_report_stat() uses GetCurrentTransactionStopTimestamp(), it seems like\n> it'd make sense to use the same for arming the timeout?\n\nThis seems like the feasible best fix for this specific issue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Jun 2022 15:54:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "At Fri, 17 Jun 2022 15:54:13 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Or we could add a timeout.c API that specifies the timeout?\n> \n> I sometimes wanted this, But I don't see a simple way to sort multiple\n> relative timeouts in absolute time order. Maybe we can skip\n> GetCurrentTimestamp only when inserting the first timeout, but I don't\n> think it benefits this case.\n\nOr we can use a free-run interval timer and individual down-counter\nfor each timtouts. 
I think we need at-most 0.1s resolution and error\nof long-run timer doesn't harm?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Jun 2022 15:59:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "At Fri, 17 Jun 2022 15:59:26 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 17 Jun 2022 15:54:13 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > Or we could add a timeout.c API that specifies the timeout?\n> > \n> > I sometimes wanted this, But I don't see a simple way to sort multiple\n> > relative timeouts in absolute time order. Maybe we can skip\n> > GetCurrentTimestamp only when inserting the first timeout, but I don't\n> > think it benefits this case.\n> \n> Or we can use a free-run interval timer and individual down-counter\n> for each timtouts. I think we need at-most 0.1s resolution and error\n> of long-run timer doesn't harm?\n\nYeah, stupid. 
We don't want to wake the process at such a high frequency.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Jun 2022 16:05:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "On Thu, Jun 16, 2022 at 10:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 17 Jun 2022 at 11:31, Andres Freund <andres@anarazel.de> wrote:\n> > hashedscalararrayop/EEOP_HASHED_SCALARARRAYOP is 64 bytes, even though the\n> > limit is 40 bytes.\n>\n> > commit 50e17ad281b8d1c1b410c9833955bc80fbad4078\n> > Author: David Rowley <drowley@postgresql.org>\n> > Date: 2021-04-08 23:51:22 +1200\n> >\n> > Speedup ScalarArrayOpExpr evaluation\n>\n> I've put together the attached patch which removes 4 fields from the\n> hashedscalararrayop portion of the struct which, once the JSON part is\n> fixed, will put sizeof(ExprEvalStep) back down to 64 bytes again.\n>\n> The attached patch causes some extra pointer dereferencing to perform\n> a hashed saop step, so I tested the performance on f4fb45d15 (prior to\n> the JSON patch that pushed the sizeof(ExprEvalStep) up further. 
I\n> found:\n>\n> setup:\n> create table a (a int);\n> insert into a select x from generate_series(1000000,2000000) x;\n>\n> bench.sql\n> select * from a where a in(1,2,3,4,5,6,7,8,9,10);\n>\n> f4fb45d15 + reduce_sizeof_hashedsaop_ExprEvalStep.patch\n> drowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 -M prepared postgres\n> tps = 44.841851 (without initial connection time)\n> tps = 44.986472 (without initial connection time)\n> tps = 44.944315 (without initial connection time)\n>\n> f4fb45d15\n> drowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 -M prepared postgres\n> tps = 44.446127 (without initial connection time)\n> tps = 44.614134 (without initial connection time)\n> tps = 44.895011 (without initial connection time)\n>\n> (Patched is ~0.61% faster here)\n>\n> So, there appears to be no performance regression due to the extra\n> indirection. There's maybe even some gains due to the smaller step\n> size.\n\nI didn't see that comment when working on this (it's quite a long\nunioned struct; I concur on adding an assert to catch it).\n\nThis patch looks very reasonable to me though.\n\nJames Coleman\n\n\n", "msg_date": "Fri, 17 Jun 2022 08:33:29 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The remaining difference looks like it's largely caused by the\n> enable_timeout_after(IDLE_STATS_UPDATE_TIMEOUT, ...) introduced as part of the\n> pgstats patch. It's only really visible when I pin a single connection pgbench\n> to the same CPU core as the server (which gives a ~16% boost here).\n\n> It's not the timeout itself - that we amortize nicely (via 09cf1d522). 
It's\n> that enable_timeout_after() does a GetCurrentTimestamp().\n\n> Not sure yet what the best way to fix that is.\n\nMaybe not queue a new timeout if the old one is still active?\n\nBTW, it looks like that patch also falsified this comment\n(postgres.c:4478):\n\n\t\t * At most one of these timeouts will be active, so there's no need to\n\t\t * worry about combining the timeout.c calls into one.\n\nMaybe fixing that end of things would be a simpler way of buying back\nthe delta.\n\n> Or we could add a timeout.c API that specifies the timeout?\n\nDon't think that will help: it'd be morally equivalent to\nenable_timeout_at(), which also has to do GetCurrentTimestamp().\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jun 2022 10:33:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Or we could add a timeout.c API that specifies the timeout?\n\n> Don't think that will help: it'd be morally equivalent to\n> enable_timeout_at(), which also has to do GetCurrentTimestamp().\n\nBTW, if we were willing to drop get_timeout_start_time(), it might\nbe possible to avoid doing GetCurrentTimestamp() in enable_timeout_at,\nin the common case where the specified timestamp is beyond signal_due_at\nso that no setitimer call is needed. But getting the race conditions\nright could be tricky. 
On the whole this doesn't sound like something\nto tackle post-beta.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jun 2022 10:53:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-06-17 14:14:54 +1200, David Rowley wrote:\n> I've put together the attached patch which removes 4 fields from the\n> hashedscalararrayop portion of the struct which, once the JSON part is\n> fixed, will put sizeof(ExprEvalStep) back down to 64 bytes again.\n\n> The attached patch causes some extra pointer dereferencing to perform\n> a hashed saop step, so I tested the performance on f4fb45d15 (prior to\n> the JSON patch that pushed the sizeof(ExprEvalStep) up further. I\n> found:\n\nWhat do you think about the approach prototyped in my patch to move the hash\nFunctionCallInfo into the element_tab? With a tiny bit more work that should\nreduce the amount of dereferincing over the state today, while also keeping\nbelow the limit?\n\n> setup:\n> create table a (a int);\n> insert into a select x from generate_series(1000000,2000000) x;\n> \n> bench.sql\n> select * from a where a in(1,2,3,4,5,6,7,8,9,10);\n> \n> f4fb45d15 + reduce_sizeof_hashedsaop_ExprEvalStep.patch\n> drowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 -M prepared postgres\n> tps = 44.841851 (without initial connection time)\n> tps = 44.986472 (without initial connection time)\n> tps = 44.944315 (without initial connection time)\n> \n> f4fb45d15\n> drowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 -M prepared postgres\n> tps = 44.446127 (without initial connection time)\n> tps = 44.614134 (without initial connection time)\n> tps = 44.895011 (without initial connection time)\n> \n> (Patched is ~0.61% faster here)\n> \n> So, there appears to be no performance regression due to the extra\n> indirection. 
There's maybe even some gains due to the smaller step\n> size.\n\n\"smaller step size\"?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Jun 2022 10:21:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nOn 2022-06-17 10:33:08 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The remaining difference looks like it's largely caused by the\n> > enable_timeout_after(IDLE_STATS_UPDATE_TIMEOUT, ...) introduced as part of the\n> > pgstats patch. It's only really visible when I pin a single connection pgbench\n> > to the same CPU core as the server (which gives a ~16% boost here).\n> \n> > It's not the timeout itself - that we amortize nicely (via 09cf1d522). It's\n> > that enable_timeout_after() does a GetCurrentTimestamp().\n> \n> > Not sure yet what the best way to fix that is.\n> \n> Maybe not queue a new timeout if the old one is still active?\n\nRight now we disable the timer after ReadCommand(). We can of course change\nthat. At first I thought we might need more bookkeeping to do so, to avoid\nProcessInterrupts() triggering pgstat_report_stat() when the timer fires\nlater, but we probably can jury-rig something with DoingCommandRead &&\nIsTransactionOrTransactionBlock() or such.\n\nI guess one advantage of something like this could be that we could possibly\nmove the arming of the timeout to pgstat.c. But that looks like it might be\nmore complicated than really worth it.\n\n\n> BTW, it looks like that patch also falsified this comment\n> (postgres.c:4478):\n> \n> \t\t * At most one of these timeouts will be active, so there's no need to\n> \t\t * worry about combining the timeout.c calls into one.\n\nHm, yea. 
I guess we can just disable them at once.\n\n\n> Maybe fixing that end of things would be a simpler way of buying back\n> the delta.\n\nI don't think that'll do the trick - in the case I'm looking at none of the\nother timers are active...\n\n\n> > Or we could add a timeout.c API that specifies the timeout?\n> \n> Don't think that will help: it'd be morally equivalent to\n> enable_timeout_at(), which also has to do GetCurrentTimestamp().\n\nI should have been more precise - what I meant was a timeout.c API that allows\nthe caller to pass in \"now\", which in this case we'd get from\nGetCurrentTransactionStopTimestamp(), which would avoid the additional\ntimestamp computation.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Jun 2022 10:30:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I should have been more precise - what I meant was a timeout.c API that allows\n> the caller to pass in \"now\", which in this case we'd get from\n> GetCurrentTransactionStopTimestamp(), which would avoid the additional\n> timestamp computation.\n\nI don't care for that one bit: it makes the accuracy of all timeouts\ndependent on how careful that caller is to provide an up-to-date \"now\".\nIn the example at hand, there is WAY too much code between\nSetCurrentTransactionStopTimestamp() and the timer arming to make me\nthink the results will be acceptable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jun 2022 13:43:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-06-17 13:43:49 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I should have been more precise - what I meant was a timeout.c 
API that allows\n> > the caller to pass in \"now\", which in this case we'd get from\n> > GetCurrentTransactionStopTimestamp(), which would avoid the additional\n> > timestamp computation.\n> \n> I don't care for that one bit: it makes the accuracy of all timeouts\n> dependent on how careful that caller is to provide an up-to-date \"now\".\n\nI don't think it'd necessarily have to influence the accuracy of all timeouts\n- but I've not looked at timeout.c much before. From what I understand we use\n'now' for two things: First, to set ->start_time in enable_timeout() and\nsecond to schedule the alarm in schedule_alarm(). An inaccurate start_time\nwon't cause problems for other timers afaics and it looks to me that it\nwouldn't be too hard to only require an accurate 'now' if the new timeout is\nnearest_timeout and now + nearest_timeout < signal_due_at?\n\nIt's probably to complicated to tinker with now tho.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Jun 2022 11:02:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nOn 2022-06-17 10:30:55 -0700, Andres Freund wrote:\n> On 2022-06-17 10:33:08 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > The remaining difference looks like it's largely caused by the\n> > > enable_timeout_after(IDLE_STATS_UPDATE_TIMEOUT, ...) introduced as part of the\n> > > pgstats patch. It's only really visible when I pin a single connection pgbench\n> > > to the same CPU core as the server (which gives a ~16% boost here).\n> > \n> > > It's not the timeout itself - that we amortize nicely (via 09cf1d522). 
It's\n> > > that enable_timeout_after() does a GetCurrentTimestamp().\n> \n> > > Not sure yet what the best way to fix that is.\n> > \n> > Maybe not queue a new timeout if the old one is still active?\n> \n> Right now we disable the timer after ReadCommand(). We can of course change\n> that. At first I thought we might need more bookkeeping to do so, to avoid\n> ProcessInterrupts() triggering pgstat_report_stat() when the timer fires\n> later, but we probably can jury-rig something with DoingCommandRead &&\n> IsTransactionOrTransactionBlock() or such.\n\nHere's a patch for that.\n\nOne thing I noticed is that disable_timeout() calls do\nschedule_alarm(GetCurrentTimestamp()) if there's any other active timeout,\neven if the to-be-disabled timer is already disabled. Of course callers of\ndisable_timeout() can guard against that using get_timeout_active(), but that\nspreads repetitive code around...\n\nI opted to add a fastpath for that, instead of using\nget_timeout_active(). Afaics that's safe to do without disarming the signal\nhandler, but I'd welcome a look from somebody that knows this code.\n\n\n> I guess one advantage of something like this could be that we could possibly\n> move the arming of the timeout to pgstat.c. But that looks like it might be\n> more complicated than really worth it.\n\nI didn't do that yet, but am curious whether others think this would be\npreferable.\n\n\n> > BTW, it looks like that patch also falsified this comment\n> > (postgres.c:4478):\n> > \n> > \t\t * At most one of these timeouts will be active, so there's no need to\n> > \t\t * worry about combining the timeout.c calls into one.\n> \n> Hm, yea. 
I guess we can just disable them at once.\n\nWith the proposed change we don't need to change the separate timeout.c calls to\none, or update the comment, as it should now look the same as 14.\n\n\nI also attached my heavily-WIP patches for the ExprEvalStep issues, I\naccidentally had only included a small part of the contents of the json fix.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 17 Jun 2022 13:06:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "\nOn 2022-06-17 Fr 16:06, Andres Freund wrote:\n>\n>\n> I also attached my heavily-WIP patches for the ExprEvalStep issues, \n\n\nMany thanks\n\n\n> I\n> accidentally had only included a small part of the contents of the json fix.\n>\n\nYeah, that confused me mightily last week :-)\n\nI and a couple of colleagues have looked it over. As far as it goes the\njson fix looks kosher to me. I'll play with it some more.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 21 Jun 2022 17:11:33 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-06-21 17:11:33 -0400, Andrew Dunstan wrote:\n> I and a couple of colleagues have looked it over. As far as it goes the\n> json fix looks kosher to me. I'll play with it some more.\n\nCool.\n\nAny chance you could look at fixing the \"structure\" of the generated\nexpression \"program\". 
The recursive ExecEvalExpr() calls are really not ok...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 Jun 2022 14:25:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "\nOn 2022-06-21 Tu 17:25, Andres Freund wrote:\n> Hi,\n>\n> On 2022-06-21 17:11:33 -0400, Andrew Dunstan wrote:\n>> I and a couple of colleagues have looked it over. As far as it goes the\n>> json fix looks kosher to me. I'll play with it some more.\n> Cool.\n>\n> Any chance you could look at fixing the \"structure\" of the generated\n> expression \"program\". The recursive ExecEvalExpr() calls are really not ok...\n>\n\nYes, but I don't guarantee to have a fix in time for Beta2.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 21 Jun 2022 17:41:07 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Tue, Jun 21, 2022 at 05:41:07PM -0400, Andrew Dunstan wrote:\n> On 2022-06-21 Tu 17:25, Andres Freund wrote:\n>> On 2022-06-21 17:11:33 -0400, Andrew Dunstan wrote:\n>>> I and a couple of colleagues have looked it over. As far as it goes the\n>>> json fix looks kosher to me. I'll play with it some more.\n>>\n>> Cool.\n>>\n>> Any chance you could look at fixing the \"structure\" of the generated\n>> expression \"program\". The recursive ExecEvalExpr() calls are really not ok...\n\nBy how much does the size of ExprEvalStep go down once you don't\ninline the JSON structures as of 0004 in [1]? And what of 0003? The\nJSON portions seem like the largest portion of the cake, though both\nare must-fixes.\n\n> Yes, but I don't guarantee to have a fix in time for Beta2.\n\nIMHO, it would be nice to get something done for beta2. 
Now the\nthread is rather fresh and I guess that more performance study is \nrequired even for 0004, so.. Waiting for beta3 would be a better move at\nthis stage. Is somebody confident enough in the patches proposed?\n0004 looks rather sane, seen from here, at least.\n\n[1]: https://www.postgresql.org/message-id/20220617200605.3moq7dtxua5cxemv@alap3.anarazel.de\n--\nMichael", "msg_date": "Thu, 23 Jun 2022 16:38:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nOn 2022-06-23 16:38:12 +0900, Michael Paquier wrote:\n> On Tue, Jun 21, 2022 at 05:41:07PM -0400, Andrew Dunstan wrote:\n> > On 2022-06-21 Tu 17:25, Andres Freund wrote:\n> >> On 2022-06-21 17:11:33 -0400, Andrew Dunstan wrote:\n> >>> I and a couple of colleagues have looked it over. As far as it goes the\n> >>> json fix looks kosher to me. I'll play with it some more.\n> >>\n> >> Cool.\n> >>\n> >> Any chance you could look at fixing the \"structure\" of the generated\n> >> expression \"program\". The recursive ExecEvalExpr() calls are really not ok...\n>\n> By how much does the size of ExprEvalStep go down once you don't\n> inline the JSON structures as of 0004 in [1]? And what of 0003?\n\n0004 gets us back to 64 bytes, if 0003 is applied first. 0003 alone doesn't\nyield a size reduction, because obviously 0004 is the bigger problem. Applying\njust 0004 you end up with 88 bytes.\n\n\n> The JSON portions seem like the largest portion of the cake, though both are\n> must-fixes.\n\nYep.\n\n\n> > Yes, but I don't guarantee to have a fix in time for Beta2.\n>\n> IMHO, it would be nice to get something done for beta2. 
Now the\n> thread is rather fresh and I guess that more performance study is\n> required even for 0004, so..\n\nI don't think there's a whole lot of performance study needed for 0004 - the\ncurrent state is obviously wrong.\n\nI think Andrew's beta 2 comment was more about my other architectural\ncomplaints around the json expression eval stuff.\n\n\n> Waiting for beta3 would be a better move at this stage. Is somebody confident\n> enough in the patches proposed?\n\n0001 is the one that needs the most careful analysis, I think. 0002 I'd be fine\nwith pushing after reviewing it again. For 0003 David's approach might be\nbetter or worse, it doesn't matter much I think. 0004 is ok I think, perhaps\nwith the exception of quibbling over some naming decisions?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jun 2022 18:51:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "On 2022-06-23 Th 21:51, Andres Freund wrote:\n> Hi,\n>\n> On 2022-06-23 16:38:12 +0900, Michael Paquier wrote:\n>> On Tue, Jun 21, 2022 at 05:41:07PM -0400, Andrew Dunstan wrote:\n>>> On 2022-06-21 Tu 17:25, Andres Freund wrote:\n>>>> On 2022-06-21 17:11:33 -0400, Andrew Dunstan wrote:\n>>>>> I and a couple of colleagues have looked it over. As far as it goes the\n>>>>> json fix looks kosher to me. I'll play with it some more.\n>>>> Cool.\n>>>>\n>>>> Any chance you could look at fixing the \"structure\" of the generated\n>>>> expression \"program\". The recursive ExecEvalExpr() calls are really not ok...\n>> By how much does the size of ExprEvalStep go down once you don't\n>> inline the JSON structures as of 0004 in [1]? And what of 0003?\n> 0004 gets us back to 64 bytes, if 0003 is applied first. 0003 alone doesn't\n> yield a size reduction, because obviously 0004 is the bigger problem. 
Applying\n> just 0004 you end up with 88 bytes.\n>\n>\n>> The JSON portions seem like the largest portion of the cake, though both are\n>> must-fixes.\n> Yep.\n>\n>\n>>> Yes, but I don't guarantee to have a fix in time for Beta2.\n>> IMHO, it would be nice to get something done for beta2. Now the\n>> thread is rather fresh and I guess that more performance study is\n>> required even for 0004, so..\n> I don't think there's a whole lot of performance study needed for 0004 - the\n> current state is obviously wrong.\n>\n> I think Andrew's beta 2 comment was more about my other architectural\n> complains around the json expression eval stuff.\n\n\nRight. That's being worked on but it's not going to be a mechanical fix.\n\n\n>\n>\n>> Waiting for beta3 would a better move at this stage. Is somebody confident\n>> enough in the patches proposed?\n> 0001 is the one that needs to most careful analysis, I think. 0002 I'd be fine\n> with pushing after reviewing it again. For 0003 David's approach might be\n> better or worse, it doesn't matter much I think. 0004 is ok I think, perhaps\n> with the exception of quibbling over some naming decisions?\n>\n>\n\nThe attached very small patch applies on top of your 0002 and deals with\nthe FmgrInfo complaint.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 24 Jun 2022 10:29:06 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Sat, 18 Jun 2022 at 05:21, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-06-17 14:14:54 +1200, David Rowley wrote:\n> > So, there appears to be no performance regression due to the extra\n> > indirection. 
There's maybe even some gains due to the smaller step\n> > size.\n>\n> \"smaller step size\"?\n\nI mean smaller sizeof(ExprEvalStep).\n\nDavid\n\n\n", "msg_date": "Wed, 29 Jun 2022 07:18:27 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Sat, 18 Jun 2022 at 08:06, Andres Freund <andres@anarazel.de> wrote:\n> I also attached my heavily-WIP patches for the ExprEvalStep issues, I\n> accidentally had only included a small part of the contents of the json fix.\n\nI've now looked at the 0003 patch. I like the idea you have about\nmoving some of the additional fields into ScalarArrayOpExprHashTable.\nI think the patch can even go a little further and move the hash_finfo\ninto there too. This means we don't need to dereference the \"op\" in\nsaop_element_hash().\n\nTo make this work, I did need to tag the ScalarArrayOpExpr into the\nExprEvalStep. That's required now since some of the initialization of\nthe hash function fields is delayed until\nExecEvalHashedScalarArrayOp(). We need to know the\nScalarArrayOpExpr's hashfuncid and inputcollid.\n\nYour v2 patch did shift off some of this initialization work to\nExecEvalHashedScalarArrayOp(). The attached v3 takes that a bit\nfurther. This saves a bit more work for ScalarArrayOpExprs that are\nevaluated 0 times.\n\nAnother small thing which I considered doing was to put the\nhash_fcinfo_data field as the final field in\nScalarArrayOpExprHashTable so that we could allocate the memory for\nthe hash_fcinfo_data in the same allocation as the\nScalarArrayOpExprHashTable. This would reduce the pointer\ndereferencing done in saop_element_hash() a bit further. 
I just\ndidn't notice anywhere else where we do that for FunctionCallInfo, so\nI resisted doing this.\n\n(There was also a small bug in your patch where you mistakenly cast to\nan OpExpr instead of ScalarArrayOpExpr when you were fetching the\ninputcollid)\n\nDavid", "msg_date": "Wed, 29 Jun 2022 11:40:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-06-24 10:29:06 -0400, Andrew Dunstan wrote:\n> On 2022-06-23 Th 21:51, Andres Freund wrote:\n> > On 2022-06-23 16:38:12 +0900, Michael Paquier wrote:\n> >> On Tue, Jun 21, 2022 at 05:41:07PM -0400, Andrew Dunstan wrote:\n> >>> Yes, but I don't guarantee to have a fix in time for Beta2.\n> >> IMHO, it would be nice to get something done for beta2. Now the\n> >> thread is rather fresh and I guess that more performance study is\n> >> required even for 0004, so..\n> > I don't think there's a whole lot of performance study needed for 0004 - the\n> > current state is obviously wrong.\n> >\n> > I think Andrew's beta 2 comment was more about my other architectural\n> > complains around the json expression eval stuff.\n> \n> \n> Right. That's being worked on but it's not going to be a mechanical fix.\n\nAny updates here?\n\nI'd mentioned the significant space use due to all JsonCoercionsState for all\nthe types. 
Another related aspect is that this code is just weird - the same\nstruct name (JsonCoercionsState), nested in each other?\n\n struct JsonCoercionsState\n {\n struct JsonCoercionState\n {\n JsonCoercion *coercion; /* coercion expression */\n ExprState *estate; /* coercion expression state */\n } null,\n string,\n numeric ,\n boolean,\n date,\n time,\n timetz,\n timestamp,\n timestamptz,\n composite;\n } coercions; /* states for coercion from SQL/JSON item\n * types directly to the output type */\n\nAlso note the weird numeric indentation that pgindent does...\n\n\n> The attached very small patch applies on top of your 0002 and deals with\n> the FmgrInfo complaint.\n\nNow that the FmgrInfo is part of a separately allocated struct, that doesn't\nseem necessary anymore.\n\n- Andres\n\n\n", "msg_date": "Tue, 5 Jul 2022 11:36:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "\nOn 2022-07-05 Tu 14:36, Andres Freund wrote:\n> Hi,\n>\n> On 2022-06-24 10:29:06 -0400, Andrew Dunstan wrote:\n>> On 2022-06-23 Th 21:51, Andres Freund wrote:\n>>> On 2022-06-23 16:38:12 +0900, Michael Paquier wrote:\n>>>> On Tue, Jun 21, 2022 at 05:41:07PM -0400, Andrew Dunstan wrote:\n>>>>> Yes, but I don't guarantee to have a fix in time for Beta2.\n>>>> IMHO, it would be nice to get something done for beta2. Now the\n>>>> thread is rather fresh and I guess that more performance study is\n>>>> required even for 0004, so..\n>>> I don't think there's a whole lot of performance study needed for 0004 - the\n>>> current state is obviously wrong.\n>>>\n>>> I think Andrew's beta 2 comment was more about my other architectural\n>>> complains around the json expression eval stuff.\n>>\n>> Right. That's being worked on but it's not going to be a mechanical fix.\n> Any updates here?\n\n\nNot yet. A colleague and I are working on it. 
I'll post a status this\nweek if we can't post a fix.\n\n\n>\n> I'd mentioned the significant space use due to all JsonCoercionsState for all\n> the types. Another related aspect is that this code is just weird - the same\n> struct name (JsonCoercionsState), nested in each other?\n>\n> struct JsonCoercionsState\n> {\n> struct JsonCoercionState\n> {\n> JsonCoercion *coercion; /* coercion expression */\n> ExprState *estate; /* coercion expression state */\n> } null,\n> string,\n> numeric ,\n> boolean,\n> date,\n> time,\n> timetz,\n> timestamp,\n> timestamptz,\n> composite;\n> } coercions; /* states for coercion from SQL/JSON item\n> * types directly to the output type */\n>\n> Also note the weird numeric indentation that pgindent does...\n\n\nYeah, we'll try to fix that.\n\n\n>\n>\n>> The attached very small patch applies on top of your 0002 and deals with\n>> the FmgrInfo complaint.\n> Now that the FmgrInfo is part of a separately allocated struct, that doesn't\n> seem necessary anymore.\n\n\nRight, but you complained that we should do it the same way as it's done\nelsewhere, so I thought I'd do that anyway.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 5 Jul 2022 15:04:05 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-06-23 18:51:45 -0700, Andres Freund wrote:\n> > Waiting for beta3 would a better move at this stage. Is somebody confident\n> > enough in the patches proposed?\n> \n> 0001 is the one that needs to most careful analysis, I think. 0002 I'd be fine\n> with pushing after reviewing it again. For 0003 David's approach might be\n> better or worse, it doesn't matter much I think. 
0004 is ok I think, perhaps\n> with the exception of quibbling over some naming decisions?\n\nI don't quite feel comfortable with 0001, without review by others. So my\ncurrent plan is to drop it and use get_timeout_active() \"manually\". We can\nimprove this in HEAD to remove the redundancy.\n\nI've pushed what was 0004, will push what was 0002 with the above change in a\nshort while unless somebody protests PDQ. Then will look at David's edition of\nmy 0003.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Jul 2022 12:08:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nOn 2022-06-29 11:40:45 +1200, David Rowley wrote:\n> On Sat, 18 Jun 2022 at 08:06, Andres Freund <andres@anarazel.de> wrote:\n> > I also attached my heavily-WIP patches for the ExprEvalStep issues, I\n> > accidentally had only included a small part of the contents of the json fix.\n> \n> I've now looked at the 0003 patch. I like the idea you have about\n> moving some of the additional fields into ScalarArrayOpExprHashTable.\n> I think the patch can even go a little further and move the hash_finfo\n> into there too. This means we don't need to dereference the \"op\" in\n> saop_element_hash().\n\nMakes sense.\n\n\n> To make this work, I did need to tag the ScalarArrayOpExpr into the\n> ExprEvalStep. That's required now since some of the initialization of\n> the hash function fields is delayed until\n> ExecEvalHashedScalarArrayOp(). We need to know the\n> ScalarArrayOpExpr's hashfuncid and inputcollid.\n\nMakes sense.\n\n\n> Another small thing which I considered doing was to put the\n> hash_fcinfo_data field as the final field in\n> ScalarArrayOpExprHashTable so that we could allocate the memory for\n> the hash_fcinfo_data in the same allocation as the\n> ScalarArrayOpExprHashTable. 
This would reduce the pointer\n> dereferencing done in saop_element_hash() a bit further. I just\n> didn't notice anywhere else where we do that for FunctionCallInfo, so\n> I resisted doing this.\n\nI think that'd make sense - it does add a bit of size calculation magic, but\nit shouldn't be a problem. I'm fairly sure we do this in other parts of the\ncode.\n\n\n> (There was also a small bug in your patch where you mistakenly cast to\n> an OpExpr instead of ScalarArrayOpExpr when you were fetching the\n> inputcollid)\n\nOoops.\n\n\nAre you good pushing this? I'm fine with you doing so whether you adapt it\nfurther or not.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Jul 2022 17:32:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Thanks for looking at this.\n\nOn Wed, 6 Jul 2022 at 12:32, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-06-29 11:40:45 +1200, David Rowley wrote:\n> > Another small thing which I considered doing was to put the\n> > hash_fcinfo_data field as the final field in\n> > ScalarArrayOpExprHashTable so that we could allocate the memory for\n> > the hash_fcinfo_data in the same allocation as the\n> > ScalarArrayOpExprHashTable. This would reduce the pointer\n> > dereferencing done in saop_element_hash() a bit further. I just\n> > didn't notice anywhere else where we do that for FunctionCallInfo, so\n> > I resisted doing this.\n>\n> I think that'd make sense - it does add a bit of size calculation magic, but\n> it shouldn't be a problem. I'm fairly sure we do this in other parts of the\n> code.\n\nI've now adjusted that. I also changed the hash_finfo field to make\nit so the FmgrInfo is inline rather than a pointer. 
This saves an\nadditional dereference in saop_element_hash() and also saves a\npalloc().\n\nI had to adjust the palloc for the ScalarArrayOpExprHashTable struct\ninto a palloc0 due to the FmgrInfo being inlined. I considered just\nzeroing out the hash_finfo portion but thought it wasn't worth the\nextra code.\n\n> Are you good pushing this? I'm fine with you doing so wether you adapt it\n> further or not.\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Wed, 6 Jul 2022 19:52:08 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "\nOn 2022-07-05 Tu 15:04, Andrew Dunstan wrote:\n> On 2022-07-05 Tu 14:36, Andres Freund wrote:\n>>\n>>>> I think Andrew's beta 2 comment was more about my other architectural\n>>>> complains around the json expression eval stuff.\n>>> Right. That's being worked on but it's not going to be a mechanical fix.\n>> Any updates here?\n>\n> Not yet. A colleague and I are working on it. I'll post a status this\n> week if we can't post a fix.\n\n\nWe're still working on it. We've made substantial progress but there are\nsome tests failing that we need to fix.\n\n\n>> I'd mentioned the significant space use due to all JsonCoercionsState for all\n>> the types. 
Another related aspect is that this code is just weird - the same\n>> struct name (JsonCoercionsState), nested in each other?\n>>\n>> struct JsonCoercionsState\n>> {\n>> struct JsonCoercionState\n>> {\n>> JsonCoercion *coercion; /* coercion expression */\n>> ExprState *estate; /* coercion expression state */\n>> } null,\n>> string,\n>> numeric ,\n>> boolean,\n>> date,\n>> time,\n>> timetz,\n>> timestamp,\n>> timestamptz,\n>> composite;\n>> } coercions; /* states for coercion from SQL/JSON item\n>> * types directly to the output type */\n>>\n>> Also note the weird numeric indentation that pgindent does...\n>\n> Yeah, we'll try to fix that.\n\n\nActually, it's not the same name: JsonCoercionsState vs\nJsonCoercionState. But I agree that it's a subtle enough difference that\nwe should use something more obvious. Maybe JsonCoercionStates instead\nof JsonCoercionsState? The plural at the end would be harder to miss.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 8 Jul 2022 17:05:49 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-07-08 17:05:49 -0400, Andrew Dunstan wrote:\n> Actually, it's not the same name: JsonCoercionsState vs\n> JsonCoercionState. But I agree that it's a subtle enough difference that\n> we should use something more obvious. Maybe JsonCoercionStates instead\n> of JsonCoercionsState? The plural at the end would be harder to miss.\n\nGiven that it's a one-off use struct, why name it? Then we don't have to\nfigure out a name we never use.\n\nI also still would like to understand why we need pre-allocated space for all\nthese types. 
How could multiple datums be coerced in an interleaved manner?\nAnd if that's possible, why can't multiple datums of the same type be coerced\nat the same time?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 10 Jul 2022 17:29:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nOn 2022-07-08 17:05:49 -0400, Andrew Dunstan wrote:\n> On 2022-07-05 Tu 15:04, Andrew Dunstan wrote:\n> > On 2022-07-05 Tu 14:36, Andres Freund wrote:\n> >>\n> >>>> I think Andrew's beta 2 comment was more about my other architectural\n> >>>> complains around the json expression eval stuff.\n> >>> Right. That's being worked on but it's not going to be a mechanical fix.\n> >> Any updates here?\n> >\n> > Not yet. A colleague and I are working on it. I'll post a status this\n> > week if we can't post a fix.\n\n> We're still working on it. We've made substantial progress but there are\n> some tests failing that we need to fix.\n\nI think we need to resolve this soon - or consider the alternatives. A lot of\nthe new json stuff doesn't seem fully baked, so I'm starting to wonder if we\nhave to consider pushing it a release further down.\n\nPerhaps you could post your current state? 
I might be able to help resolving\nsome of the problems.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 15 Jul 2022 14:07:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "On 2022-07-15 Fr 17:07, Andres Freund wrote:\n> Hi,\n>\n> On 2022-07-08 17:05:49 -0400, Andrew Dunstan wrote:\n>> On 2022-07-05 Tu 15:04, Andrew Dunstan wrote:\n>>> On 2022-07-05 Tu 14:36, Andres Freund wrote:\n>>>>>> I think Andrew's beta 2 comment was more about my other architectural\n>>>>>> complains around the json expression eval stuff.\n>>>>> Right. That's being worked on but it's not going to be a mechanical fix.\n>>>> Any updates here?\n>>> Not yet. A colleague and I are working on it. I'll post a status this\n>>> week if we can't post a fix.\n>> We're still working on it. We've made substantial progress but there are\n>> some tests failing that we need to fix.\n> I think we need to resolve this soon - or consider the alternatives. A lot of\n> the new json stuff doesn't seem fully baked, so I'm starting to wonder if we\n> have to consider pushing it a release further down.\n>\n> Perhaps you could post your current state? I might be able to help resolving\n> some of the problems.\n\n\nOk. Here is the state of things. This has proved to be rather more\nintractable than I expected. Almost all the legwork here has been done\nby Amit Langote, for which he deserves both my thanks and considerable\ncredit, but I take responsibility for it.\n\nI just discovered today that this scheme is failing under\n\"force_parallel_mode = regress\". I have as yet no idea if that can be\nfixed simply or not. 
Apart from that I think the main outstanding issue\nis to fill in the gaps in llvm_compile_expr().\n\nIf you have help you can offer that would be very welcome.\n\nI'd still very much like to get this done, but if the decision is we've\nrun out of time I'll be sad but understand.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 18 Jul 2022 15:09:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn Tue, Jul 19, 2022 at 4:09 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2022-07-15 Fr 17:07, Andres Freund wrote:\n> > Perhaps you could post your current state? I might be able to help resolving\n> > some of the problems.\n>\n> Ok. Here is the state of things. This has proved to be rather more\n> intractable than I expected. Almost all the legwork here has been done\n> by Amit Langote, for which he deserves both my thanks and considerable\n> credit, but I take responsibility for it.\n>\n> I just discovered today that this scheme is failing under\n> \"force_parallel_mode = regress\". I have as yet no idea if that can be\n> fixed simply or not.\n\nThe errors Andrew mentions here had to do with a bug of the new\ncoercion evaluation logic. 
The old code in ExecEvalJsonExpr() would\nskip coercion evaluation and thus also the sub-transaction associated\nwith it for some JsonExprs that the new code would not and that didn't\nsit well with the invariant that a parallel worker shouldn't try to\nstart a sub-transaction.\n\nThat bug has been fixed in the attached updated version.\n\n> Apart from that I think the main outstanding issue\n> is to fill in the gaps in llvm_compile_expr().\n\nAbout that, I was wondering if the blocks in llvm_compile_expr() need\nto be hand-coded to match what's added in ExecInterpExpr() or if I've\nmissed some tool that can be used instead?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Jul 2022 20:40:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-07-19 20:40:11 +0900, Amit Langote wrote:\n> About that, I was wondering if the blocks in llvm_compile_expr() need\n> to be hand-coded to match what's added in ExecInterpExpr() or if I've\n> missed some tool that can be used instead?\n\nThe easiest way is to just call an external function for the implementation of\nthe step. 
But yes, otherwise you need to handcraft it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 19 Jul 2022 08:37:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "On Wed, Jul 20, 2022 at 12:37 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-19 20:40:11 +0900, Amit Langote wrote:\n> > About that, I was wondering if the blocks in llvm_compile_expr() need\n> > to be hand-coded to match what's added in ExecInterpExpr() or if I've\n> > missed some tool that can be used instead?\n>\n> The easiest way is to just call an external function for the implementation of\n> the step. But yes, otherwise you need to handcraft it.\n\nOk, thanks.\n\nSo I started updating llvm_compile_expr() for handling the new\nExprEvalSteps that the patch adds to ExecExprInterp(), but quickly\nrealized that code could have been consolidated into less code, or\nIOW, into fewer new ExprEvalSteps. So, I refactored things that way\nand am now retrying adding the code to llvm_compile_expr() based on\nnew, better consolidated, code.\n\nHere's the updated version, without the llvm pieces, in case you'd\nlike to look at it even in this state. I'll post a version with llvm\npieces filled in tomorrow. 
(I have merged the different patches into\none for convenience.)\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 20 Jul 2022 23:09:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Wed, Jul 20, 2022 at 11:09 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Jul 20, 2022 at 12:37 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-07-19 20:40:11 +0900, Amit Langote wrote:\n> > > About that, I was wondering if the blocks in llvm_compile_expr() need\n> > > to be hand-coded to match what's added in ExecInterpExpr() or if I've\n> > > missed some tool that can be used instead?\n> >\n> > The easiest way is to just call an external function for the implementation of\n> > the step. But yes, otherwise you need to handcraft it.\n>\n> Ok, thanks.\n>\n> So I started updating llvm_compile_expr() for handling the new\n> ExprEvalSteps that the patch adds to ExecExprInterp(), but quickly\n> realized that code could have been consolidated into less code, or\n> IOW, into fewer new ExprEvalSteps. So, I refactored things that way\n> and am now retrying adding the code to llvm_compile_expr() based on\n> new, better consolidated, code.\n>\n> Here's the updated version, without the llvm pieces, in case you'd\n> like to look at it even in this state. I'll post a version with llvm\n> pieces filled in tomorrow. 
(I have merged the different patches into\n> one for convenience.)\n\nAnd here's a version with llvm pieces filled in.\n\nBecause I wrote all of it while not really understanding how the LLVM\nconstructs like blocks and branches work, the only reason I think\nthose llvm_compile_expr() additions may be correct is that all the\ntests in jsonb_sqljson.sql pass even if I add the following line at\nthe top:\n\nset jit_above_cost to 0;\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Jul 2022 23:55:16 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Thu, Jul 21, 2022 at 11:55 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n> On Wed, Jul 20, 2022 at 11:09 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n> > On Wed, Jul 20, 2022 at 12:37 AM Andres Freund <andres@anarazel.de>\nwrote:\n> > > On 2022-07-19 20:40:11 +0900, Amit Langote wrote:\n> > > > About that, I was wondering if the blocks in llvm_compile_expr()\nneed\n> > > > to be hand-coded to match what's added in ExecInterpExpr() or if\nI've\n> > > > missed some tool that can be used instead?\n> > >\n> > > The easiest way is to just call an external function for the\nimplementation of\n> > > the step. But yes, otherwise you need to handcraft it.\n> >\n> > Ok, thanks.\n> >\n> > So I started updating llvm_compile_expr() for handling the new\n> > ExprEvalSteps that the patch adds to ExecExprInterp(), but quickly\n> > realized that code could have been consolidated into less code, or\n> > IOW, into fewer new ExprEvalSteps. So, I refactored things that way\n> > and am now retrying adding the code to llvm_compile_expr() based on\n> > new, better consolidated, code.\n> >\n> > Here's the updated version, without the llvm pieces, in case you'd\n> > like to look at it even in this state. 
I'll post a version with llvm\n> > pieces filled in tomorrow. (I have merged the different patches into\n> > one for convenience.)\n>\n> And here's a version with llvm pieces filled in.\n>\n> Because I wrote all of it while not really understanding how the LLVM\n> constructs like blocks and branches work, the only reason I think\n> those llvm_compile_expr() additions may be correct is that all the\n> tests in jsonb_sqljson.sql pass even if I add the following line at\n> the top:\n>\n> set jit_above_cost to 0;\n\nOh and I did build --with-llvm. :-)\n\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 22 Jul 2022 00:19:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On 2022-Jul-21, Amit Langote wrote:\n\n> Because I wrote all of it while not really understanding how the LLVM\n> constructs like blocks and branches work, the only reason I think\n> those llvm_compile_expr() additions may be correct is that all the\n> tests in jsonb_sqljson.sql pass even if I add the following line at\n> the top:\n\nI suggest to build with --enable-coverage, then run the regression tests\nand do \"make coverage-html\" and see if your code appears covered in the\nreport.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"¿Cómo puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)\n\n\n", "msg_date": "Thu, 21 Jul 2022 19:12:35 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "On Fri, Jul 22, 2022 at 2:12 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Jul-21, Amit Langote wrote:\n>\n> > Because I wrote all of it while not really understanding how the LLVM\n> > 
constructs like blocks and branches work, the only reason I think\n> > those llvm_compile_expr() additions may be correct is that all the\n> > tests in jsonb_sqljson.sql pass even if I add the following line at\n> > the top:\n>\n> I suggest to build with --enable-coverage, then run the regression tests\n> and do \"make coverage-html\" and see if your code appears covered in the\n> report.\n\nThanks for the suggestion. I just did and it seems that both the\nadditions to ExecInterpExpr() and to llvm_compile_expr() are well\ncovered.\n\nBTW, the only way I found to *forcefully* exercise llvm_compile_expr()\nis to add `set jit_above_cost to 0` at the top of the test file, or\nare we missing a force_jit_mode, like there is force_parallel_mode?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Jul 2022 12:21:51 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Fri, 22 Jul 2022 at 15:22, Amit Langote <amitlangote09@gmail.com> wrote:\n> BTW, the only way I found to *forcefully* exercise llvm_compile_expr()\n> is to add `set jit_above_cost to 0` at the top of the test file, or\n> are we missing a force_jit_mode, like there is force_parallel_mode?\n\nI don't think we'd need any setting to hide the JIT counters from\nEXPLAIN ANALYZE since those only show with COSTS ON, which we tend not\nto do.\n\nI think for testing, you could just zero all the jit*above_cost GUCs.\n\nIf you look at the config_extra in [1], you'll see that animal runs\nthe tests with modified JIT parameters.\n\nBTW, I was working on code inside llvm_compile_expr() a few days ago\nand I thought I'd gotten the new evaluation steps I was adding correct\nas it worked fine with jit_above_cost=0, but on further testing, it\ncrashed with jit_inline_above_cost=0. 
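For the record, pinning a test session to the JIT paths is just a matter of
zeroing the cost thresholds (stock GUC names, assuming a --with-llvm build):

```sql
SET jit = on;
SET jit_above_cost = 0;           -- JIT-compile every expression
SET jit_inline_above_cost = 0;    -- also exercise function inlining
SET jit_optimize_above_cost = 0;  -- and the expensive optimization passes
```
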
Might be worth doing both to see\nif everything works as intended.\n\nDavid\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2022-07-22%2003%3A04%3A03\n\n\n", "msg_date": "Fri, 22 Jul 2022 16:13:09 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Fri, Jul 22, 2022 at 1:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 22 Jul 2022 at 15:22, Amit Langote <amitlangote09@gmail.com> wrote:\n> > BTW, the only way I found to *forcefully* exercise llvm_compile_expr()\n> > is to add `set jit_above_cost to 0` at the top of the test file, or\n> > are we missing a force_jit_mode, like there is force_parallel_mode?\n>\n> I don't think we'd need any setting to hide the JIT counters from\n> EXPLAIN ANALYZE since those only show with COSTS ON, which we tend not\n> to do.\n\nAh, makes sense.\n\n> I think for testing, you could just zero all the jit*above_cost GUCs.\n>\n> If you look at the config_extra in [1], you'll see that animal runs\n> the tests with modified JIT parameters.\n>\n> BTW, I was working on code inside llvm_compile_expr() a few days ago\n> and I thought I'd gotten the new evaluation steps I was adding correct\n> as it worked fine with jit_above_cost=0, but on further testing, it\n> crashed with jit_inline_above_cost=0. 
Might be worth doing both to see\n> if everything works as intended.\n\nThanks for the pointer.\n\nSo I didn't see things going bust on re-testing with all\njit_*_above_cost parameters set to 0, so maybe the\nllvm_compile_expression() additions are alright.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Jul 2022 14:49:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Fri, Jul 22, 2022 at 2:49 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Jul 22, 2022 at 1:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > BTW, I was working on code inside llvm_compile_expr() a few days ago\n> > and I thought I'd gotten the new evaluation steps I was adding correct\n> > as it worked fine with jit_above_cost=0, but on further testing, it\n> > crashed with jit_inline_above_cost=0. Might be worth doing both to see\n> > if everything works as intended.\n>\n> Thanks for the pointer.\n>\n> So I didn't see things going bust on re-testing with all\n> jit_*_above_cost parameters set to 0, so maybe the\n> llvm_compile_expression() additions are alright.\n\nHere's an updated version of the patch, with mostly cosmetic changes.\nIn particular, I added comments describing the new llvm_compile_expr()\nblobs.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 27 Jul 2022 17:01:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "\nOn 2022-07-27 We 04:01, Amit Langote wrote:\n> On Fri, Jul 22, 2022 at 2:49 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Fri, Jul 22, 2022 at 1:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>> BTW, I was working on code inside llvm_compile_expr() a few days 
ago\n>>> and I thought I'd gotten the new evaluation steps I was adding correct\n>>> as it worked fine with jit_above_cost=0, but on further testing, it\n>>> crashed with jit_inline_above_cost=0. Might be worth doing both to see\n>>> if everything works as intended.\n>> Thanks for the pointer.\n>>\n>> So I didn't see things going bust on re-testing with all\n>> jit_*_above_cost parameters set to 0, so maybe the\n>> llvm_compile_expression() additions are alright.\n> Here's an updated version of the patch, with mostly cosmetic changes.\n> In particular, I added comments describing the new llvm_compile_expr()\n> blobs.\n>\n\n\nAndres,\n\n\nthis work has been done in response to a complaint from you. Does this\naddress your concerns satisfactorily?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 29 Jul 2022 14:27:36 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-07-29 14:27:36 -0400, Andrew Dunstan wrote:\n> this work has been done in response to a complaint from you. Does this\n> address your concerns satisfactorily?\n\nWill look. Was on vacation for the last two weeks...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 1 Aug 2022 16:27:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nOn 2022-07-27 17:01:13 +0900, Amit Langote wrote:\n> Here's an updated version of the patch, with mostly cosmetic changes.\n> In particular, I added comments describing the new llvm_compile_expr()\n> blobs.\n\n- I've asked a couple times before: Why do we need space for every possible\n datatype at once in JsonItemCoercions? 
Can there be multiple \"concurrent\"\n coercions in process?\n\n The whole coercion stuff just seems incredibly clunky (in a slightly\n different shape before this patch). ExecEvalJsonExprItemCoercion() calls\n ExecPrepareJsonItemCoercion(), which gets a pointer to one of the per-type\n elements in JsonItemCoercionsState, dispatching on the type of the json\n object. Then we later call ExecGetJsonItemCoercion() (via a convoluted\n path), which again will dispatch on the type (extracting the json object\n again afaics!), to then somehow eventually get the coerced value.\n\n I cannot make any sense of this. This code should not have been committed\n in this state.\n\n\n- Looks like there's still some recursive expression states, namely\n JsonExprState->{result_coercion, coercions}?\n\n\n- Looks like the JsonExpr code in ExecInitExprRec() is big enough to\n potentially benefit from splitting out into a separate function?\n\n\n- looks like JsonExprPostEvalState could be moved to execExprInterp.c?\n\n\n- I ran the patch against LLVM 14 built with assertions enabled, and it\n triggers an assertion failure:\n\n#3 0x00007f75d165c242 in __GI___assert_fail (\n assertion=0x7f75c278d511 \"getOperand(0)->getType() == getOperand(1)->getType() && \\\"Both operands to ICmp instruction are not of the same type!\\\"\",\n file=0x7f75c2780366 \"/home/andres/src/llvm-project-14/llvm/include/llvm/IR/Instructions.h\", line=1192,\n function=0x7f75c27d9dcc \"void llvm::ICmpInst::AssertOK()\") at assert.c:101\n#4 0x00007f75c2b9b25c in llvm::ICmpInst::AssertOK (this=0x55e019290ca0) at /home/andres/src/llvm-project-14/llvm/include/llvm/IR/Instructions.h:1191\n#5 0x00007f75c2b9b0ea in llvm::ICmpInst::ICmpInst (this=0x55e019290ca0, pred=llvm::CmpInst::ICMP_EQ, LHS=0x55e019290c10, RHS=0x55e01928ce80, NameStr=\"\")\n at /home/andres/src/llvm-project-14/llvm/include/llvm/IR/Instructions.h:1246\n#6 0x00007f75c2b93c99 in llvm::IRBuilderBase::CreateICmp (this=0x55e0192894f0, P=llvm::CmpInst::ICMP_EQ, 
LHS=0x55e019290c10, RHS=0x55e01928ce80, Name=\"\")\n at /home/andres/src/llvm-project-14/llvm/include/llvm/IR/IRBuilder.h:2202\n#7 0x00007f75c2c1bc5d in LLVMBuildICmp (B=0x55e0192894f0, Op=LLVMIntEQ, LHS=0x55e019290c10, RHS=0x55e01928ce80, Name=0x7f75d0d24cbc \"\")\n at /home/andres/src/llvm-project-14/llvm/lib/IR/Core.cpp:3927\n#8 0x00007f75d0d20b1f in llvm_compile_expr (state=0x55e019201380) at /home/andres/src/postgresql/src/backend/jit/llvm/llvmjit_expr.c:2392\n...\n#19 0x000055e0184c16d4 in exec_simple_query (query_string=0x55e01912f6e0 \"SELECT JSON_EXISTS(NULL::jsonb, '$');\") at /home/andres/src/postgresql/src/backend/tcop/postgres.c:1204\n\n this triggers easily interactively - which is nice because that allows to\n dump the types:\n\n p getOperand(0)->getType()->dump() -> prints i64\n p getOperand(1)->getType()->dump() -> prints i32\n\n The immediate issue is that you're setting v_jumpaddrp up as a pointer to a\n pointer to size_t - but then compare it to i32.\n\n\n I first was confused why the code tries to load the jump target\n dynamically. But then I saw that the interpreted code sets it dynamically -\n why? That's completely unnecessary overhead afaics? 
There's just two\n possible jump targets, no?\n\n\n- why is EvalJsonPathVar() in execExprInterp.c, when it's only ever called\n from within jsonpath_exec.c?\n\n- s/JsobbValue/JsonbValue/\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 1 Aug 2022 17:39:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nThanks for looking into this.\n\nOn Tue, Aug 2, 2022 at 9:39 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-27 17:01:13 +0900, Amit Langote wrote:\n> > Here's an updated version of the patch, with mostly cosmetic changes.\n> > In particular, I added comments describing the new llvm_compile_expr()\n> > blobs.\n>\n> - I've asked a couple times before: Why do we need space for every possible\n> datatype at once in JsonItemCoercions? Can there be multiple \"concurrent\"\n> coercions in process?\n\nThis topic has been a head-scratcher for me too from the beginning,\nbut I've since come to understand (convince myself) that we do need\nthe coercions for all possible types, because we don't know the type\nof the JSON item that's going to pop out of the main JSON path\nexpression until we've run it through the JSON path executor that\nExecEvalJson() invokes. So, it's not possible to statically assign\nthe coercion. I am not really sure if different coercions may be used\nin the same query over multiple evaluations of the same JSON path\nexpression, but maybe that's also possible.\n\n> The whole coercion stuff just seems incredibly clunky (in a slightly\n> different shape before this patch). ExecEvalJsonExprItemCoercion() calls\n> ExecPrepareJsonItemCoercion(), which gets a pointer to one of the per-type\n> elements in JsonItemCoercionsState, dispatching on the type of the json\n> object. 
Then we later call ExecGetJsonItemCoercion() (via a convoluted\n> path), which again will dispatch on the type (extracting the json object\n> again afaics!), to then somehow eventually get the coerced value.\n\nI think it might be possible to make this a bit simpler, by not\nleaving anything coercion-related in ExecEvalJsonExpr(). I left some\npieces there, because I thought the error of not finding an\nappropriate coercion must be thrown right away as the code in\nExecEvalJsonExpr() does after calling ExecGetJsonItemCoercion().\n\nExecPrepareJsonItemCoercion() is called later when it's time to\nactually evaluate the coercion. If we move the error path to\nExecPrepareJsonItemCoercion(), both ExecGetJsonItemCoercion() and the\nerror path code in ExecEvalJsonExpr() will be unnecessary. I will\ngive that a try.\n\n> - Looks like there's still some recursive expression states, namely\n> JsonExprState->{result_coercion, coercions}?\n\nSo, the problem with inlining coercion evaluation into the main parent\nJsonExpr's is that it needs to be wrapped in a sub-transaction to\ncatch any errors and return NULL instead. 
I don't know a way to wrap\nExprEvalStep evaluation in a sub-transaction to achieve that effect.\n\n> - Looks like the JsonExpr code in ExecInitExprRec() is big enough to\n> potentially benefit from splitting out into a separate function?\n\nThought about it too, so will do.\n\n> - looks like JsonExprPostEvalState could be moved to execExprInterp.c?\n\nOK, will give that a try.\n\n> - I ran the patch against LLVM 14 built with assertions enabled, and it\n> triggers an assertion failure:\n>\n> #3 0x00007f75d165c242 in __GI___assert_fail (\n> assertion=0x7f75c278d511 \"getOperand(0)->getType() == getOperand(1)->getType() && \\\"Both operands to ICmp instruction are not of the same type!\\\"\",\n> file=0x7f75c2780366 \"/home/andres/src/llvm-project-14/llvm/include/llvm/IR/Instructions.h\", line=1192,\n> function=0x7f75c27d9dcc \"void llvm::ICmpInst::AssertOK()\") at assert.c:101\n> #4 0x00007f75c2b9b25c in llvm::ICmpInst::AssertOK (this=0x55e019290ca0) at /home/andres/src/llvm-project-14/llvm/include/llvm/IR/Instructions.h:1191\n> #5 0x00007f75c2b9b0ea in llvm::ICmpInst::ICmpInst (this=0x55e019290ca0, pred=llvm::CmpInst::ICMP_EQ, LHS=0x55e019290c10, RHS=0x55e01928ce80, NameStr=\"\")\n> at /home/andres/src/llvm-project-14/llvm/include/llvm/IR/Instructions.h:1246\n> #6 0x00007f75c2b93c99 in llvm::IRBuilderBase::CreateICmp (this=0x55e0192894f0, P=llvm::CmpInst::ICMP_EQ, LHS=0x55e019290c10, RHS=0x55e01928ce80, Name=\"\")\n> at /home/andres/src/llvm-project-14/llvm/include/llvm/IR/IRBuilder.h:2202\n> #7 0x00007f75c2c1bc5d in LLVMBuildICmp (B=0x55e0192894f0, Op=LLVMIntEQ, LHS=0x55e019290c10, RHS=0x55e01928ce80, Name=0x7f75d0d24cbc \"\")\n> at /home/andres/src/llvm-project-14/llvm/lib/IR/Core.cpp:3927\n> #8 0x00007f75d0d20b1f in llvm_compile_expr (state=0x55e019201380) at /home/andres/src/postgresql/src/backend/jit/llvm/llvmjit_expr.c:2392\n> ...\n> #19 0x000055e0184c16d4 in exec_simple_query (query_string=0x55e01912f6e0 \"SELECT JSON_EXISTS(NULL::jsonb, '$');\") at 
/home/andres/src/postgresql/src/backend/tcop/postgres.c:1204\n>\n> this triggers easily interactively - which is nice because that allows to\n> dump the types:\n>\n> p getOperand(0)->getType()->dump() -> prints i64\n> p getOperand(1)->getType()->dump() -> prints i32\n>\n> The immediate issue is that you're setting v_jumpaddrp up as a pointer to a\n> pointer to size_t - but then compare it to i32.\n\nOoh, thanks for letting me know. So maybe I am missing some\nllvmjist_emit.h/type.c infrastructure to read an int32 value\n(jumpdone) out of an int32 pointer (&jumpdone)?\n\n> I first was confused why the code tries to load the jump target\n> dynamically. But then I saw that the interpreted code sets it dynamically -\n> why? That's completely unnecessary overhead afaics? There's just two\n> possible jump targets, no?\n\nHmm, I looked at the code for other expressions that jump, especially\nCASE WHEN, but they use ready-made EEOP_JUMP_IF_* steps, which can be\nadded statically. I thought we can't use them in this case, because\nthe conditions are very ad-hoc, like if the JSON path computation\nreturned an \"empty\" item or if the \"error\" flag was set during that\ncomputation, etc.\n\n> - why is EvalJsonPathVar() in execExprInterp.c, when it's only ever called\n> from within jsonpath_exec.c?\n\nHadn't noticed that because the patch didn't really have to touch it,\nbut yes, maybe it makes sense to move it there.\n\n> - s/JsobbValue/JsonbValue/\n\nOops, will fix.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Aug 2022 12:05:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-08-02 12:05:55 +0900, Amit Langote wrote:\n> On Tue, Aug 2, 2022 at 9:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-07-27 17:01:13 +0900, Amit Langote wrote:\n> > > Here's an 
updated version of the patch, with mostly cosmetic changes.\n> > > In particular, I added comments describing the new llvm_compile_expr()\n> > > blobs.\n> >\n> > - I've asked a couple times before: Why do we need space for every possible\n> > datatype at once in JsonItemCoercions? Can there be multiple \"concurrent\"\n> > coercions in process?\n> \n> This topic has been a head-scratcher for me too from the beginning,\n> but I've since come to understand (convince myself) that we do need\n> the coercions for all possible types, because we don't know the type\n> of the JSON item that's going to pop out of the main JSON path\n> expression until we've run it through the JSON path executor that\n> ExecEvalJson() invokes. So, it's not possible to statically assign\n> the coercion.\n\nSure. But that doesn't mean we have to have memory for every possible type *at\nthe same time*.\n\n\n> I am not really sure if different coercions may be used\n> in the same query over multiple evaluations of the same JSON path\n> expression, but maybe that's also possible.\n\nEven if the type can change, I don't think that means we need to have space\nfor multiple types at the same time - there can't be multiple coercions\nhappening at the same time, otherwise there could be two coercions of the same\ntype as well. So we don't need memory for every coercion type.\n\n> \n> > The whole coercion stuff just seems incredibly clunky (in a slightly\n> > different shape before this patch). ExecEvalJsonExprItemCoercion() calls\n> > ExecPrepareJsonItemCoercion(), which gets a pointer to one of the per-type\n> > elements in JsonItemCoercionsState, dispatching on the type of the json\n> > object. 
Then we later call ExecGetJsonItemCoercion() (via a convoluted\n> > path), which again will dispatch on the type (extracting the json object\n> > again afaics!), to then somehow eventually get the coerced value.\n> \n> I think it might be possible to make this a bit simpler, by not\n> leaving anything coercion-related in ExecEvalJsonExpr().\n\nHonestly, this code seems like it should just be rewritten from scratch.\n\n\n> I left some pieces there, because I thought the error of not finding an\n> appropriate coercion must be thrown right away as the code in\n> ExecEvalJsonExpr() does after calling ExecGetJsonItemCoercion().\n> \n> ExecPrepareJsonItemCoercion() is called later when it's time to\n> actually evaluate the coercion. If we move the error path to\n> ExecPrepareJsonItemCoercion(), both ExecGetJsonItemCoercion() and the\n> error path code in ExecEvalJsonExpr() will be unnecessary. I will\n> give that a try.\n\nWhy do we need the separation of prepare and then evaluation? They're executed\nstraight after each other?\n\n\n> > - Looks like there's still some recursive expression states, namely\n> > JsonExprState->{result_coercion, coercions}?\n> \n> So, the problem with inlining coercion evaluation into the main parent\n> JsonExpr's is that it needs to be wrapped in a sub-transaction to\n> catch any errors and return NULL instead. I don't know a way to wrap\n> ExprEvalStep evaluation in a sub-transaction to achieve that effect.\n\nBut we don't need to wrap arbitrary evaluation in a subtransaction - afaics\nthe coercion calls a single function, not an arbitrary expression?\n\n\n> Ooh, thanks for letting me know. So maybe I am missing some\n> llvmjist_emit.h/type.c infrastructure to read an int32 value\n> (jumpdone) out of an int32 pointer (&jumpdone)?\n\nNo, you just need to replace l_ptr(TypeSizeT) with l_ptr(LLVMInt32Type()).\n\n\n> > I first was confused why the code tries to load the jump target\n> > dynamically. 
But then I saw that the interpreted code sets it dynamically -\n> > why? That's completely unnecessary overhead afaics? There's just two\n> > possible jump targets, no?\n> \n> Hmm, I looked at the code for other expressions that jump, especially\n> CASE WHEN, but they use ready-made EEOP_JUMP_IF_* steps, which can be\n> added statically. I thought we can't use them in this case, because\n> the conditions are very ad-hoc, like if the JSON path computation\n> returned an \"empty\" item or if the \"error\" flag was set during that\n> computation, etc.\n\nThe minimal fix would be to return the jump target from the function, and then\njump to that. That at least avoids the roundtrip to memory you have right now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 2 Aug 2022 08:00:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nOn Wed, Aug 3, 2022 at 12:00 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-02 12:05:55 +0900, Amit Langote wrote:\n> > On Tue, Aug 2, 2022 at 9:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2022-07-27 17:01:13 +0900, Amit Langote wrote:\n> > > > Here's an updated version of the patch, with mostly cosmetic changes.\n> > > > In particular, I added comments describing the new llvm_compile_expr()\n> > > > blobs.\n> > >\n> > > - I've asked a couple times before: Why do we need space for every possible\n> > > datatype at once in JsonItemCoercions? 
Can there be multiple \"concurrent\"\n> > > coercions in process?\n> >\n> > This topic has been a head-scratcher for me too from the beginning,\n> > but I've since come to understand (convince myself) that we do need\n> > the coercions for all possible types, because we don't know the type\n> > of the JSON item that's going to pop out of the main JSON path\n> > expression until we've run it through the JSON path executor that\n> > ExecEvalJson() invokes. So, it's not possible to statically assign\n> > the coercion.\n>\n> Sure. But that doesn't mean we have to have memory for every possible type *at\n> the same time*.\n>\n> > I am not really sure if different coercions may be used\n> > in the same query over multiple evaluations of the same JSON path\n> > expression, but maybe that's also possible.\n>\n> Even if the type can change, I don't think that means we need to have space\n> for multiple types at the same time - there can't be multiple coercions\n> happening at the same time, otherwise there could be two coercions of the same\n> type as well. 
So we don't need memory for every coercion type.\n\nDo you find it unnecessary to statically allocate memory for\nJsonItemCoercionState for each possible coercion, as in the following\nstruct definition:\n\ntypedef struct JsonItemCoercionsState\n{\n JsonItemCoercionState null;\n JsonItemCoercionState string;\n JsonItemCoercionState numeric;\n JsonItemCoercionState boolean;\n JsonItemCoercionState date;\n JsonItemCoercionState time;\n JsonItemCoercionState timetz;\n JsonItemCoercionState timestamp;\n JsonItemCoercionState timestamptz;\n JsonItemCoercionState composite;\n} JsonItemCoercionsState;\n\nA given JsonItemCoercionState (note singular Coercion) contains:\n\ntypedef struct JsonItemCoercionState\n{\n /* Expression used to evaluate the coercion */\n JsonCoercion *coercion;\n\n /* ExprEvalStep to compute this coercion's expression */\n int jump_eval_expr;\n} JsonItemCoercionState;\n\njump_eval_expr above is the address in JsonItemCoercions'\nExprState.steps of the 1st ExprEvalStep corresponding to\ncoercion->expr. IIUC, all ExprEvalSteps needed to evaluate an\nexpression and its children must be allocated statically in\nExecInitExprRec(), and none on-the-fly as needed. So, this considers\nall coercions and allocates states of all statically.\n\n> > > The whole coercion stuff just seems incredibly clunky (in a slightly\n> > > different shape before this patch). ExecEvalJsonExprItemCoercion() calls\n> > > ExecPrepareJsonItemCoercion(), which gets a pointer to one of the per-type\n> > > elements in JsonItemCoercionsState, dispatching on the type of the json\n> > > object. 
Then we later call ExecGetJsonItemCoercion() (via a convoluted\n> > > path), which again will dispatch on the type (extracting the json object\n> > > again afaics!), to then somehow eventually get the coerced value.\n> >\n> > I think it might be possible to make this a bit simpler, by not\n> > leaving anything coercion-related in ExecEvalJsonExpr().\n>\n> Honestly, this code seems like it should just be rewritten from scratch.\n\nBased on what I wrote above, please let me know if I've misunderstood\nyour concerns about over-allocation of coercion state. I can try to\nrewrite one more time if I know what this should look like instead.\n\n> > I left some pieces there, because I thought the error of not finding an\n> > appropriate coercion must be thrown right away as the code in\n> > ExecEvalJsonExpr() does after calling ExecGetJsonItemCoercion().\n> >\n> > ExecPrepareJsonItemCoercion() is called later when it's time to\n> > actually evaluate the coercion. If we move the error path to\n> > ExecPrepareJsonItemCoercion(), both ExecGetJsonItemCoercion() and the\n> > error path code in ExecEvalJsonExpr() will be unnecessary. I will\n> > give that a try.\n>\n> Why do we need the separation of prepare and then evaluation? They're executed\n> straight after each other?\n\nExecPrepareJsonItemCoercion() is a helper routine to choose the\ncoercion and extract the Datum out of the JsonbValue produced by the\nEEOP_JSONEXPR_PATH step to feed to the coercion expression's\nExprEvalStep. The coercion evaluation will be done by jumping to said\nstep in ExecInterpExpr().\n\n> > > - Looks like there's still some recursive expression states, namely\n> > > JsonExprState->{result_coercion, coercions}?\n> >\n> > So, the problem with inlining coercion evaluation into the main parent\n> > JsonExpr's is that it needs to be wrapped in a sub-transaction to\n> > catch any errors and return NULL instead. 
I don't know a way to wrap\n> > ExprEvalStep evaluation in a sub-transaction to achieve that effect.\n>\n> But we don't need to wrap arbitrary evaluation in a subtransaction - afaics\n> the coercion calls a single function, not an arbitrary expression?\n\nIt can do EEOP_IOCOERCE for example, and the input/output function may\ncause an error depending on what comes out of the JSON blob.\n\nIIUC, those errors need to be caught to satisfy some SQL/JSON spec.\n\n> > Ooh, thanks for letting me know. So maybe I am missing some\n> > llvmjist_emit.h/type.c infrastructure to read an int32 value\n> > (jumpdone) out of an int32 pointer (&jumpdone)?\n>\n> No, you just need to replace l_ptr(TypeSizeT) with l_ptr(LLVMInt32Type()).\n\nOK, thanks.\n\n> > > I first was confused why the code tries to load the jump target\n> > > dynamically. But then I saw that the interpreted code sets it dynamically -\n> > > why? That's completely unnecessary overhead afaics? There's just two\n> > > possible jump targets, no?\n> >\n> > Hmm, I looked at the code for other expressions that jump, especially\n> > CASE WHEN, but they use ready-made EEOP_JUMP_IF_* steps, which can be\n> > added statically. I thought we can't use them in this case, because\n> > the conditions are very ad-hoc, like if the JSON path computation\n> > returned an \"empty\" item or if the \"error\" flag was set during that\n> > computation, etc.\n>\n> The minimal fix would be to return the jump target from the function, and then\n> jump to that. That at least avoids the roundtrip to memory you have right now.\n\nYou mean like this:\n\n LLVMValueRef v_args[2];\n LLVMValueRef v_ret;\n\n /*\n * Call ExecEvalJsonExprSkip() to decide if JSON path\n * evaluation can be skipped. 
This returns the step\n * address to jump to.\n */\n v_args[0] = v_state;\n v_args[1] = l_ptr_const(op, l_ptr(StructExprEvalStep));\n v_ret = LLVMBuildCall(b,\n llvm_pg_func(mod,\n\"ExecEvalJsonExprSkip\"),\n params, lengthof(params), \"\");\n\nActually, this is how I had started, but never figured out how to jump\nto the address in v_ret. As in, how to extract the plain C int32\nvalue that is the jump address from v_ret, an LLVMValueRef, to use in\nthe following statement:\n\n LLVMBuildBr(b, opblocks[<int32-in-v_ret>]);\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 17:01:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nI tried to look into some of the questions from Amit, but I have e.g. no idea\nwhat exactly the use of subtransactions tries to achieve - afaics 1a36bc9dba8\nis the first patch to introduce needing to evaluate parts of expressions in a\nsubtransaction - but there's not a single comment explaining why.\n\nIt's really hard to understand the new json code. It's a substantial amount of\nnew infrastructure, without any design documentation that I can find. And it's\nnot like it's just code that's equivalent to nearby stuff. To me this falls\nway below our standards and I think it's *months* of work to fix.\n\nEven just the expression evaluation code: EvalJsonPathVar(),\nExecEvalJsonConstructor(), ExecEvalJsonExpr(), ExecEvalJson(). There's one\nlayer of subtransactions in one of the paths in ExecEvalJsonExpr(), another in\nExecEvalJson(). Some paths of ExecEvalJsonExpr() go through subtransactions,\nothers don't.\n\nIt's one thing for a small set of changes to be of this quality. 
But this is\npretty large - a quick summing of diffstat ends up with about 17k lines added,\nof which ~2.5k are docs, ~4.8k are tests.\n\n\nOn 2022-08-04 17:01:48 +0900, Amit Langote wrote:\n> On Wed, Aug 3, 2022 at 12:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > The whole coercion stuff just seems incredibly clunky (in a slightly\n> > > > different shape before this patch). ExecEvalJsonExprItemCoercion() calls\n> > > > ExecPrepareJsonItemCoercion(), which gets a pointer to one of the per-type\n> > > > elements in JsonItemCoercionsState, dispatching on the type of the json\n> > > > object. Then we later call ExecGetJsonItemCoercion() (via a convoluted\n> > > > path), which again will dispatch on the type (extracting the json object\n> > > > again afaics!), to then somehow eventually get the coerced value.\n> > >\n> > > I think it might be possible to make this a bit simpler, by not\n> > > leaving anything coercion-related in ExecEvalJsonExpr().\n> >\n> > Honestly, this code seems like it should just be rewritten from scratch.\n>\n> Based on what I wrote above, please let me know if I've misunderstood\n> your concerns about over-allocation of coercion state. I can try to\n> rewrite one more time if I know what this should look like instead.\n\nI don't know. I don't understand the design of what needs to have error\nhandling, and what not.\n\n\n> > > > I first was confused why the code tries to load the jump target\n> > > > dynamically. But then I saw that the interpreted code sets it dynamically -\n> > > > why? That's completely unnecessary overhead afaics? There's just two\n> > > > possible jump targets, no?\n> > >\n> > > Hmm, I looked at the code for other expressions that jump, especially\n> > > CASE WHEN, but they use ready-made EEOP_JUMP_IF_* steps, which can be\n> > > added statically. 
I thought we can't use them in this case, because\n> > > the conditions are very ad-hoc, like if the JSON path computation\n> > > returned an \"empty\" item or if the \"error\" flag was set during that\n> > > computation, etc.\n> >\n> > The minimal fix would be to return the jump target from the function, and then\n> > jump to that. That at least avoids the roundtrip to memory you have right now.\n>\n> You mean like this:\n>\n> LLVMValueRef v_args[2];\n> LLVMValueRef v_ret;\n>\n> /*\n> * Call ExecEvalJsonExprSkip() to decide if JSON path\n> * evaluation can be skipped. This returns the step\n> * address to jump to.\n> */\n> v_args[0] = v_state;\n> v_args[1] = l_ptr_const(op, l_ptr(StructExprEvalStep));\n> v_ret = LLVMBuildCall(b,\n> llvm_pg_func(mod,\n> \"ExecEvalJsonExprSkip\"),\n> params, lengthof(params), \"\");\n>\n> Actually, this is how I had started, but never figured out how to jump\n> to the address in v_ret. As in, how to extract the plain C int32\n> value that is the jump address from v_ret, an LLVMValueRef, to use in\n> the following statement:\n>\n> LLVMBuildBr(b, opblocks[<int32-in-v_ret>]);\n\nWe could make that work, but even keeping it more similar to your current\ncode, you're already dealing with a variable jump target. Only that you load\nit from a variable, instead of the function return type. 
So you could just\nv_ret instead of v_jumpaddr, and your code would be simpler and faster.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Aug 2022 13:36:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi Andres,\n\nOn Sat, Aug 6, 2022 at 5:37 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-04 17:01:48 +0900, Amit Langote wrote:\n> > On Wed, Aug 3, 2022 at 12:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Honestly, this code seems like it should just be rewritten from scratch.\n> >\n> > Based on what I wrote above, please let me know if I've misunderstood\n> > your concerns about over-allocation of coercion state. I can try to\n> > rewrite one more time if I know what this should look like instead.\n>\n> I don't know. I don't understand the design of what needs to have error\n> handling, and what not.\n\nAFAIK, there are two things that the JSON path code considers can\ncause an error when evaluating a JsonExpr:\n\n* Actual JSON path evaluation in ExecEvalJsonExpr(), because it\ninvokes JsonPath*() family of functions defined in jsonpath_exec.c,\nwhich can possibly cause an error. Actually, I looked again today as\nto what goes on in them and it seems there is a throwErrors variable\nbeing used to catch and prevent any errors found by the JSON path\nmachinery itself, and it has the same value as the throwErrors\nvariable in ExecEvalJsonExpr(). Given that the latter is set per the\nON ERROR specification (throw errors or return NULL / a default\nexpression in lieu), maybe this part doesn't really need a\nsub-transaction. 
To check, I took off the sub-transaction around this\npart and can see that no tests fail.\n\n* Evaluating the coercion expression in ExecEvalJsonExprCoercion(), which\ninvolves passing a user-specified expression through either\nEEOP_IOCOERCE or json_populate_type(), both of which can cause errors\nthat are not suppressible as those in jsonpath_exec.c are. So, this\npart does need a sub-transaction to satisfy the ON ERROR behavior. To\ncheck, I took out the sub-transaction around the coercion evaluation,\nand some tests in jsonb_sqljson do indeed fail, like this, for example:\n\n SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING int);\n- json_value\n-------------\n-\n-(1 row)\n-\n+ERROR: invalid input syntax for type integer: \"aaa\"\n\nNote that both the JSON path expression and the coercion would run as\npart of the one EEOP_JSONEXPR ExecEvalStep before this patch and thus\nwould be wrapped under the same sub-transaction, even if only the\nlatter apparently needs it.\n\nWith this patch, I tried to keep that same behavior, but because the\ncoercion evaluation has now been broken out into its own step, it must\nuse another sub-transaction, given that the same sub-transaction can\nno longer wrap both things. But given my finding that the JSON path\nexpression doesn't really need one, maybe the new EEOP_JSONEXPR_PATH\nstep can run without one, while keeping it for the new\nEEOP_JSONEXPR_COERCION step.\n\n> > > > > I first was confused why the code tries to load the jump target\n> > > > > dynamically. But then I saw that the interpreted code sets it dynamically -\n> > > > > why? That's completely unnecessary overhead afaics? There's just two\n> > > > > possible jump targets, no?\n> > > >\n> > > > Hmm, I looked at the code for other expressions that jump, especially\n> > > > CASE WHEN, but they use ready-made EEOP_JUMP_IF_* steps, which can be\n> > > > added statically. 
I thought we can't use them in this case, because\n> > > > the conditions are very ad-hoc, like if the JSON path computation\n> > > > returned an \"empty\" item or if the \"error\" flag was set during that\n> > > > computation, etc.\n> > >\n> > > The minimal fix would be to return the jump target from the function, and then\n> > > jump to that. That at least avoids the roundtrip to memory you have right now.\n> >\n> > You mean like this:\n> >\n> > LLVMValueRef v_args[2];\n> > LLVMValueRef v_ret;\n> >\n> > /*\n> > * Call ExecEvalJsonExprSkip() to decide if JSON path\n> > * evaluation can be skipped. This returns the step\n> > * address to jump to.\n> > */\n> > v_args[0] = v_state;\n> > v_args[1] = l_ptr_const(op, l_ptr(StructExprEvalStep));\n> > v_ret = LLVMBuildCall(b,\n> > llvm_pg_func(mod,\n> > \"ExecEvalJsonExprSkip\"),\n> > params, lengthof(params), \"\");\n> >\n> > Actually, this is how I had started, but never figured out how to jump\n> > to the address in v_ret. As in, how to extract the plain C int32\n> > value that is the jump address from v_ret, an LLVMValueRef, to use in\n> > the following statement:\n> >\n> > LLVMBuildBr(b, opblocks[<int32-in-v_ret>]);\n>\n> We could make that work, but even keeping it more similar to your current\n> code, you're already dealing with a variable jump target. Only that you load\n> it from a variable, instead of the function return type. So you could just\n> v_ret instead of v_jumpaddr, and your code would be simpler and faster.\n\nAh, I see you mean to continue to use all the LLVMBuildCondBr()s as\nthe code currently does, but use v_ret like in the code above, instead\nof v_jumpaddr, to access the jump address returned by the\nstep-choosing function. I've done that in the updated patch. This\nalso allows us to get rid of all the jumpdone fields in the\nExprEvalStep.\n\nI've also moved the blocks of code in ExecInitExprRec() that\ninitialize the state for JsonExpr and JsonItemCoercions into new\nfunctions. 
I've also moved EvalJsonPathVar() from execExprInterp.c to\njsonpath_exec.c where it's used.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 8 Aug 2022 16:38:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On 8/5/22 4:36 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> I tried to look into some of the questions from Amit, but I have e.g. no idea\r\n> what exactly the use of subtransactions tries to achieve - afaics 1a36bc9dba8\r\n> is the first patch to introduce needing to evaluate parts expressions in a\r\n> subtransaction - but there's not a single comment explaining why.\r\n> \r\n> It's really hard to understand the new json code. It's a substantial amount of\r\n> new infrastructure, without any design documentation that I can find. And it's\r\n> not like it's just code that's equivalent to nearby stuff. To me this falls\r\n> way below our standards and I think it's *months* of work to fix.\r\n> \r\n> Even just the expresion evaluation code: EvalJsonPathVar(),\r\n> ExecEvalJsonConstructor(), ExecEvalJsonExpr(), ExecEvalJson(). There's one\r\n> layer of subtransactions in one of the paths in ExecEvalJsonExpr(), another in\r\n> ExecEvalJson(). Some paths of ExecEvalJsonExpr() go through subtransactions,\r\n> others don't.\r\n> \r\n> It's one thing for a small set of changes to be of this quality. But this is\r\n> pretty large - a quick summing of diffstat ends up with about 17k lines added,\r\n> of which ~2.5k are docs, ~4.8k are tests.\r\n\r\nThe RMT met today to discuss the state of this open item surrounding the \r\nSQL/JSON feature set. We discussed the specific concerns raised about \r\nthe code and debated four different options:\r\n\r\n 1. Do nothing and continue with the community process of stabilizing \r\nthe code without significant redesign\r\n\r\n 2. 
Recommend holding up the v15 release to allow for the code to be \r\nredesigned and fixed (based on Andres' estimates, this would push the \r\nrelease out several months).\r\n\r\n  3. Revert the problematic parts of the code but try to include some \r\nof the features in the v15 release (e.g. JSON_TABLE)\r\n\r\n  4. Revert the feature set and redesign and try to include for v16\r\n\r\nBased on the concerns raised, the RMT is recommending option #4, to \r\nrevert the SQL/JSON changes for v15, and come back with a redesign for v16.\r\n\r\nIf folks think there are some bits we can include in v15, we can \r\nconsider option #3. (Personally, I would like to see if we can keep \r\nJSON_TABLE, but if there is too much overlap with the problematic \r\nportions of the code I am fine with waiting for v16).\r\n\r\nAt this stage in the release process coupled with the concerns, we're a \r\nbit worried about introducing changes that are unpredictable in terms of \r\nstability and maintenance. We also do not want to hold up the release \r\nwhile this feature set goes through a redesign without \r\nagreement on what such a design would look like as well as a timeline.\r\n\r\nNote that the above are the RMT's recommendations; while the RMT can \r\nexplicitly call for a revert, we want to first guide the discussion on \r\nthe best path forward knowing the challenges for including these \r\nfeatures in v15.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 9 Aug 2022 09:59:51 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "\nOn 2022-08-09 Tu 09:59, Jonathan S. 
Katz wrote:\n> On 8/5/22 4:36 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> I tried to look into some of the questions from Amit, but I have e.g.\n>> no idea\n>> what exactly the use of subtransactions tries to achieve - afaics\n>> 1a36bc9dba8\n>> is the first patch to introduce needing to evaluate parts expressions\n>> in a\n>> subtransaction - but there's not a single comment explaining why.\n>>\n>> It's really hard to understand the new json code. It's a substantial\n>> amount of\n>> new infrastructure, without any design documentation that I can find.\n>> And it's\n>> not like it's just code that's equivalent to nearby stuff. To me this\n>> falls\n>> way below our standards and I think it's *months* of work to fix.\n>>\n>> Even just the expresion evaluation code: EvalJsonPathVar(),\n>> ExecEvalJsonConstructor(), ExecEvalJsonExpr(), ExecEvalJson().\n>> There's one\n>> layer of subtransactions in one of the paths in ExecEvalJsonExpr(),\n>> another in\n>> ExecEvalJson(). Some paths of ExecEvalJsonExpr() go through\n>> subtransactions,\n>> others don't.\n>>\n>> It's one thing for a small set of changes to be of this quality. But\n>> this is\n>> pretty large - a quick summing of diffstat ends up with about 17k\n>> lines added,\n>> of which ~2.5k are docs, ~4.8k are tests.\n>\n> The RMT met today to discuss the state of this open item surrounding\n> the SQL/JSON feature set. We discussed the specific concerns raised\n> about the code and debated four different options:\n>\n>   1. Do nothing and continue with the community process of stabilizing\n> the code without significant redesign\n>\n>   2. Recommend holding up the v15 release to allow for the code to be\n> redesigned and fixed (as based on Andres' estimates, this would push\n> the release out several months).\n>\n>   3. Revert the problematic parts of the code but try to include some\n> of the features in the v15 release (e.g. JSON_TABLE)\n>\n>   4. 
Revert the feature set and redesign and try to include for v16\n>\n> Based on the concerns raised, the RMT is recommending option #4, to\n> revert the SQL/JSON changes for v15, and come back with a redesign for\n> v16.\n>\n> If folks think there are some bits we can include in v15, we can\n> consider option #3. (Personally, I would like to see if we can keep\n> JSON_TABLE, but if there is too much overlap with the problematic\n> portions of the code I am fine with waiting for v16).\n>\n> At this stage in the release process coupled with the concerns, we're\n> a bit worried about introducing changes that are unpredictable in\n> terms of stability and maintenance. We also do not want to hold up the\n> release while this feature set is goes through a redesign without\n> agreement on what such a design would look like as well as a timeline.\n>\n> Note that the above are the RMT's recommendations; while the RMT can\n> explicitly call for a revert, we want to first guide the discussion on\n> the best path forward knowing the challenges for including these\n> features in v15.\n>\n>\n\nI very much doubt option 3 is feasible. The parts that are controversial\ngo back at least in part to the first patches of the series. Trying to\nsalvage anything would almost certainly be more disruptive than trying\nto fix it.\n\nI'm not sure what the danger is to stability, performance or correctness\nin applying the changes Amit has proposed for release 15. 
But if that\ndanger is judged to be too great then I agree we should revert.\n\nI should add that I'm very grateful to Amit for his work, and I'm sure\nit isn't wasted effort, whatever the decision.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 9 Aug 2022 11:03:04 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On 8/9/22 11:03 AM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-09 Tu 09:59, Jonathan S. Katz wrote:\r\n\r\n>> The RMT met today to discuss the state of this open item surrounding\r\n>> the SQL/JSON feature set. We discussed the specific concerns raised\r\n>> about the code and debated four different options:\r\n>>\r\n>>   1. Do nothing and continue with the community process of stabilizing\r\n>> the code without significant redesign\r\n>>\r\n>>   2. Recommend holding up the v15 release to allow for the code to be\r\n>> redesigned and fixed (as based on Andres' estimates, this would push\r\n>> the release out several months).\r\n>>\r\n>>   3. Revert the problematic parts of the code but try to include some\r\n>> of the features in the v15 release (e.g. JSON_TABLE)\r\n>>\r\n>>   4. Revert the feature set and redesign and try to include for v16\r\n>>\r\n>> Based on the concerns raised, the RMT is recommending option #4, to\r\n>> revert the SQL/JSON changes for v15, and come back with a redesign for\r\n>> v16.\r\n>>\r\n>> If folks think there are some bits we can include in v15, we can\r\n>> consider option #3. 
(Personally, I would like to see if we can keep\r\n>> JSON_TABLE, but if there is too much overlap with the problematic\r\n>> portions of the code I am fine with waiting for v16).\r\n>>\r\n>> At this stage in the release process coupled with the concerns, we're\r\n>> a bit worried about introducing changes that are unpredictable in\r\n>> terms of stability and maintenance. We also do not want to hold up the\r\n>> release while this feature set is goes through a redesign without\r\n>> agreement on what such a design would look like as well as a timeline.\r\n>>\r\n>> Note that the above are the RMT's recommendations; while the RMT can\r\n>> explicitly call for a revert, we want to first guide the discussion on\r\n>> the best path forward knowing the challenges for including these\r\n>> features in v15.\r\n>>\r\n>>\r\n> \r\n> I very much doubt option 3 is feasible. The parts that are controversial\r\n> go back at least in part to the first patches of the series. Trying to\r\n> salvage anything would almost certainly be more disruptive than trying\r\n> to fix it.\r\n> \r\n> I'm not sure what the danger is to stability, performance or correctness\r\n> in applying the changes Amit has proposed for release 15. But if that\r\n> danger is judged to be too great then I agree we should revert.\r\n\r\nSpeaking personally, I would like to see what we could do to include \r\nsupport for this batch of the SQL/JSON features in v15. What is included \r\nlooks like it closes most of the gap on what we've been missing \r\nsyntactically since the standard was adopted, and the JSON_TABLE work is \r\nvery convenient for converting JSON data into a relational format. I \r\nbelieve having this feature set is important for maintaining standards \r\ncompliance, interoperability, tooling support, and general usability. \r\nPlus, JSON still seems to be pretty popular :)\r\n\r\nRereading the thread for the umpteenth time, I have seen Amit working \r\nthrough Andres' concerns. 
From my read, the ones that seem pressing are:\r\n\r\n* Lack of design documentation, which may be leading to some of the \r\ngeneral design concerns\r\n* The use of subtransactions within the executor, though docs \r\nexplaining the decisions on that could alleviate it (I realize this is a \r\nbig topic and any summary I give won't capture the full nuance)\r\n* Debate on how to handle the type coercions\r\n\r\n(Please correct me if I've missed anything).\r\n\r\nI hope that these can be addressed satisfactorily in a reasonable (read: \r\nnow a much shorter) timeframe so we can include the SQL/JSON work in v15.\r\n\r\nWith my RMT hat on, the issue is that we're now at beta 3 and we still \r\nhave not reached a resolution on this open item. Even if it were \r\ncommitted tomorrow, we would definitely need a beta 4, and we would want \r\nto let the code bake a bit more to ensure we get adequate test coverage \r\non it. This would likely put the release date into October, presuming we \r\nhave not found any other issues that could cause a release delay.\r\n\r\nWith my advocacy hat on, we're at the point in the release cycle where I \r\nkick off the GA release process (e.g. announcement drafting). Not \r\nknowing the final status of a feature that's likely to be highlighted \r\nmakes it difficult to write said release as well as kick off the other \r\nmachinery (e.g. translations). If there is at least a decision on next \r\nsteps, I can adjust the GA release process timeline.\r\n\r\n> I should add that I'm very grateful to Amit for his work, and I'm sure\r\n> it isn't wasted effort, whatever the decision.\r\n\r\n+1. While I've been quiet on this thread to date, I have definitely seen \r\nAmit working hard on addressing the concerns.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 9 Aug 2022 14:04:48 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-08-09 14:04:48 -0400, Jonathan S. Katz wrote:\n> On 8/9/22 11:03 AM, Andrew Dunstan wrote:\n> > \n> > On 2022-08-09 Tu 09:59, Jonathan S. Katz wrote:\n> \n> > > The RMT met today to discuss the state of this open item surrounding\n> > > the SQL/JSON feature set. We discussed the specific concerns raised\n> > > about the code and debated four different options:\n> > > \n> > > � 1. Do nothing and continue with the community process of stabilizing\n> > > the code without significant redesign\n> > > \n> > > � 2. Recommend holding up the v15 release to allow for the code to be\n> > > redesigned and fixed (as based on Andres' estimates, this would push\n> > > the release out several months).\n\nObviously that's a question of the resources brought to bear.\n\n From my angle: I've obviously some of my own work to take care of as well, but\nit's also just hard to improve something that includes a lot of undocumented\ndesign decisions.\n\n\n> > > � 3. Revert the problematic parts of the code but try to include some\n> > > of the features in the v15 release (e.g. JSON_TABLE)\n\n> > I very much doubt option 3 is feasible. The parts that are controversial\n> > go back at least in part to the first patches of the series. Trying to\n> > salvage anything would almost certainly be more disruptive than trying\n> > to fix it.\n\nAgreed.\n\n\n> > > � 4. Revert the feature set and redesign and try to include for v16\n> > > \n> > > Based on the concerns raised, the RMT is recommending option #4, to\n> > > revert the SQL/JSON changes for v15, and come back with a redesign for\n> > > v16.\n> > > \n> > > If folks think there are some bits we can include in v15, we can\n> > > consider option #3. 
(Personally, I would like to see if we can keep\n> > > JSON_TABLE, but if there is too much overlap with the problematic\n> > > portions of the code I am fine with waiting for v16).\n> > > \n> > > At this stage in the release process coupled with the concerns, we're\n> > > a bit worried about introducing changes that are unpredictable in\n> > > terms of stability and maintenance. We also do not want to hold up the\n> > > release while this feature set is goes through a redesign without\n> > > agreement on what such a design would look like as well as a timeline.\n> > > \n> > > Note that the above are the RMT's recommendations; while the RMT can\n> > > explicitly call for a revert, we want to first guide the discussion on\n> > > the best path forward knowing the challenges for including these\n> > > features in v15.\n\nUnless we decide on 4 immediately, I think it might be worth starting a\nseparate thread to get more attention. The subject doesn't necessarily have\neveryone follow along.\n\n\n\n> > I'm not sure what the danger is to stability, performance or correctness\n> > in applying the changes Amit has proposed for release 15. But if that\n> > danger is judged to be too great then I agree we should revert.\n\nMy primary problem is that as-is the code is nearly unmaintainable. It's hard\nfor Amit to fix that, given that he's not one of the original authors.\n\n\n> Rereading the thread for the umpteenth time, I have seen Amit working\n> through Andres' concerns. From my read, the ones that seem pressing are:\n> \n> * Lack of design documentation, which may be leading to some of the general\n> design concerns\n\n> * The use of substransactions within the executor, though docs explaining\n> the decisions on that could alleviate it (I realize this is a big topic and\n> any summary I give won't capture the full nuance)\n\nI don't think subtransactions per-se are a fundamental problem. 
I think the\nerror handling implementation is ridiculously complicated, and while I started\nto hack on improving it, I stopped when I really couldn't understand what\nerrors it actually needs to handle when and why.\n\n\n> * Debate on how to handle the type coercions\n\nThat's partially related to the error handling issue above.\n\nOne way this code could be drastically simplified is to force all\ntype-coercions to go through the \"io coercion\" path, which could be\nimplemented as a single execution step (which thus could trivially\nstart/finish a subtransaction) and would remove a lot of the complicated code\naround coercions.\n\n\n> > I should add that I'm very grateful to Amit for his work, and I'm sure\n> > it isn't wasted effort, whatever the decision.\n> \n> +1. While I've been quiet on this thread to date, I have definitely seen\n> Amit working hard on addressing the concerns.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Aug 2022 11:57:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Speaking personally, I would like to see what we could do to include \n> support for this batch of the SQL/JSON features in v15. What is included \n> looks like it closes most of the gap on what we've been missing \n> syntactically since the standard was adopted, and the JSON_TABLE work is \n> very convenient for converting JSON data into a relational format. I \n> believe having this feature set is important for maintaining standards \n> compliance, interoperability, tooling support, and general usability. 
\n> Plus, JSON still seems to be pretty popular :)\n> ...\n> I hope that these can be addressed satisfactorily in a reasonable (read: \n> now a much shorter) timeframe so we can include the SQL/JSON work in v15.\n\nWe have delayed releases for $COOL_FEATURE in the past, and I think\nour batting average on that is still .000: not once has it worked out\nwell. I think we're better off getting the pain over with quickly,\nso I regretfully vote for revert. And for a full redesign/rewrite\nbefore we try again; based on Andres' comments, it needs that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Aug 2022 15:17:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-08-09 15:17:44 -0400, Tom Lane wrote:\n> We have delayed releases for $COOL_FEATURE in the past, and I think\n> our batting average on that is still .000: not once has it worked out\n> well.\n\nI think it semi worked when jsonb (?) first went in - it took a while and a\nlot of effort from a lot of people, but in the end we made it work, and it was\na success from our user's perspectives, I think. OTOH, it's not a great sign\nthis is around json again...\n\n- Andres\n\n\n", "msg_date": "Tue, 9 Aug 2022 12:22:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "On 8/9/22 3:22 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2022-08-09 15:17:44 -0400, Tom Lane wrote:\r\n>> We have delayed releases for $COOL_FEATURE in the past, and I think\r\n>> our batting average on that is still .000: not once has it worked out\r\n>> well.\r\n> \r\n> I think it semi worked when jsonb (?) 
first went in - it took a while and a\r\n> lot of effort from a lot of people, but in the end we made it work, and it was\r\n> a success from our user's perspectives, I think. \r\n\r\nYeah, this was the example I was thinking of. To continue with the \r\nbaseball analogy, it was a home-run from a PR perspective, and I can say \r\nas a power user at the time, the 9.4 JSONB representation worked well \r\nfor my use case. Certainly newer functionality has made JSON easier to \r\nwork with in PG.\r\n\r\n(I can't remember what the 9.5 hold up was).\r\n\r\nThe cases where we either delayed/punted on $COOL_FEATURE that cause me \r\nconcern are the ones where we say \"OK, well fix this in the next \r\nrelease\" and we are then waiting, 2, 3, 4 releases for the work to be \r\ncompleted. And to be clear, I'm thinking of this as \"known issues\" vs. \r\n\"iterating towards the whole solution\".\r\n\r\n> OTOH, it's not a great sign this is around json again...\r\n\r\nYeah, I was thinking about that too.\r\n\r\nPer Andres comment upthread, let's open a new thread to discuss the \r\nSQL/JSON + v15 topic to improve visibility and get more feedback. I can \r\ndo that shortly.\r\n\r\nWe can continue with the technical discussion in here.\r\n\r\nJonathan", "msg_date": "Tue, 9 Aug 2022 15:50:36 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On 8/9/22 2:57 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2022-08-09 14:04:48 -0400, Jonathan S. Katz wrote:\r\n\r\n>>>>   2. 
Recommend holding up the v15 release to allow for the code to be\r\n>>>> redesigned and fixed (as based on Andres' estimates, this would push\r\n>>>> the release out several months).\r\n> \r\n> Obviously that's a question of the resources brought to bear.\r\n> \r\n> From my angle: I've obviously some of my own work to take care of as well, but\r\n> it's also just hard to improve something that includes a lot of undocumented\r\n> design decisions.\r\n\r\n*nod*\r\n\r\n>>>>   4. Revert the feature set and redesign and try to include for v16\r\n\r\n> Unless we decide on 4 immediately, I think it might be worth starting a\r\n> separate thread to get more attention. The subject doesn't necessarily have\r\n> everyone follow along.\r\n\r\n*nod* I'll do this shortly.\r\n\r\n\r\n>> Rereading the thread for the umpteenth time, I have seen Amit working\r\n>> through Andres' concerns. From my read, the ones that seem pressing are:\r\n>>\r\n>> * Lack of design documentation, which may be leading to some of the general\r\n>> design concerns\r\n> \r\n>> * The use of substransactions within the executor, though docs explaining\r\n>> the decisions on that could alleviate it (I realize this is a big topic and\r\n>> any summary I give won't capture the full nuance)\r\n> \r\n> I don't think subtransactions per-se are a fundamental problem. I think the\r\n> error handling implementation is ridiculously complicated, and while I started\r\n> to hack on improving it, I stopped when I really couldn't understand what\r\n> errors it actually needs to handle when and why.\r\n\r\nAh, thanks for the clarification. 
That makes sense.\r\n\r\n>> * Debate on how to handle the type coercions\r\n> \r\n> That's partially related to the error handling issue above.\r\n> \r\n> One way this code could be drastically simplified is to force all\r\n> type-coercions to go through the \"io coercion\" path, which could be\r\n> implemented as a single execution step (which thus could trivially\r\n> start/finish a subtransaction) and would remove a lot of the complicated code\r\n> around coercions.\r\n\r\nIf we went down this path, would this make you feel more comfortable \r\nwith including this work in the v15 release?\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 9 Aug 2022 15:59:44 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "\nOn 2022-08-09 Tu 15:50, Jonathan S. Katz wrote:\n> On 8/9/22 3:22 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2022-08-09 15:17:44 -0400, Tom Lane wrote:\n>>> We have delayed releases for $COOL_FEATURE in the past, and I think\n>>> our batting average on that is still .000: not once has it worked out\n>>> well.\n>>\n>> I think it semi worked when jsonb (?) first went in - it took a while\n>> and a\n>> lot of effort from a lot of people, but in the end we made it work,\n>> and it was\n>> a success from our user's perspectives, I think. \n>\n> Yeah, this was the example I was thinking of. To continue with the\n> baseball analogy, it was a home-run from a PR perspective, and I can\n> say as a power user at the time, the 9.4 JSONB representation worked\n> well for my use case. 
Certainly newer functionality has made JSON\n> easier to work with in PG.\n>\n> (I can't remember what the 9.5 hold up was).\n>\n> The cases where we either delayed/punted on $COOL_FEATURE that cause\n> me concern are the ones where we say \"OK, well fix this in the next\n> release\" and we are then waiting, 2, 3, 4 releases for the work to be\n> completed. And to be clear, I'm thinking of this as \"known issues\" vs.\n> \"iterating towards the whole solution\".\n\n\nWhere we ended up with jsonb was a long way from where we started, but\ntechnical difficulties were largely confined because it didn't involve\nanything like the parser or the expression evaluation code. Here the SQL\nStandards Committee has imposed a pretty substantial technical burden on\nus and the code that Andres complains of is attempting to deal with that.\n\n\n>\n>> OTOH, it's not a great sign  this is around json again...\n>\n> Yeah, I was thinking about that too.\n\n\nOuch :-(\n\nI think after 10 years of being involved with our JSON features, I'm\ngoing to take a substantial break on that front.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 9 Aug 2022 16:15:28 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On 8/9/22 4:15 PM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-09 Tu 15:50, Jonathan S. Katz wrote:\r\n>> On 8/9/22 3:22 PM, Andres Freund wrote:\r\n\r\n>>\r\n>>> OTOH, it's not a great sign  this is around json again...\r\n>>\r\n>> Yeah, I was thinking about that too.\r\n> \r\n> \r\n> Ouch :-(\r\n> \r\n> I think after 10 years of being involved with our JSON features, I'm\r\n> going to take a substantial break on that front.\r\n\r\nI hope that wasn't taken as a sleight, but just an observation. There \r\nare other feature areas where I can make similar observations. 
All this \r\nwork around a database system is challenging as there are many \r\nconsiderations that need to be made.\r\n\r\nYou've done an awesome job driving the JSON work forward and it is \r\ngreatly appreciated.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 9 Aug 2022 16:19:33 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "\nOn 2022-08-09 Tu 16:19, Jonathan S. Katz wrote:\n> On 8/9/22 4:15 PM, Andrew Dunstan wrote:\n>>\n>> On 2022-08-09 Tu 15:50, Jonathan S. Katz wrote:\n>>> On 8/9/22 3:22 PM, Andres Freund wrote:\n>\n>>>\n>>>> OTOH, it's not a great sign  this is around json again...\n>>>\n>>> Yeah, I was thinking about that too.\n>>\n>>\n>> Ouch :-(\n>>\n>> I think after 10 years of being involved with our JSON features, I'm\n>> going to take a substantial break on that front.\n>\n> I hope that wasn't taken as a sleight, but just an observation. There\n> are other feature areas where I can make similar observations. All\n> this work around a database system is challenging as there are many\n> considerations that need to be made.\n>\n> You've done an awesome job driving the JSON work forward and it is\n> greatly appreciated.\n>\n>\n\n\nThanks, I appreciate that (and I wasn't fishing for compliments). It's\nmore that I feel a bit tired of it ... 
some of my colleagues will\nconfirm that I've been saying this for a while, so it's not spurred by\nthis setback.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 9 Aug 2022 16:27:32 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On Wed, Aug 10, 2022 at 3:57 AM Andres Freund <andres@anarazel.de> wrote:\n> One way this code could be drastically simplified is to force all\n> type-coercions to go through the \"io coercion\" path, which could be\n> implemented as a single execution step (which thus could trivially\n> start/finish a subtransaction) and would remove a lot of the complicated code\n> around coercions.\n\nCould you please clarify how you think we might do the io coercion\nwrapped with a subtransaction all as a single execution step? I\nwould've thought that we couldn't do the sub-transaction without\nleaving ExecInterpExpr() anyway, so maybe you meant the io coercion\nitself was done using some code outside ExecInterpExpr()?\n\nThe current JsonExpr code does it by recursively calling\nExecInterpExpr() using the nested ExprState expressly for the\ncoercion.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Aug 2022 22:27:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "On 8/10/22 9:27 AM, Amit Langote wrote:\r\n> On Wed, Aug 10, 2022 at 3:57 AM Andres Freund <andres@anarazel.de> wrote:\r\n>> One way this code could be drastically simplified is to force all\r\n>> type-coercions to go through the \"io coercion\" path, which could be\r\n>> implemented as a single execution step (which thus could trivially\r\n>> start/finish a subtransaction) and would remove a 
lot of the complicated code\r\n>> around coercions.\r\n> \r\n> Could you please clarify how you think we might do the io coercion\r\n> wrapped with a subtransaction all as a single execution step? I\r\n> would've thought that we couldn't do the sub-transaction without\r\n> leaving ExecInterpExpr() anyway, so maybe you meant the io coercion\r\n> itself was done using some code outside ExecInterpExpr()?\r\n> \r\n> The current JsonExpr code does it by recursively calling\r\n> ExecInterpExpr() using the nested ExprState expressly for the\r\n> coercion.\r\n\r\nWith RMT hat on, Andres do you have any thoughts on this?\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Thu, 11 Aug 2022 13:08:27 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2022-08-11 13:08:27 -0400, Jonathan S. Katz wrote:\n> On 8/10/22 9:27 AM, Amit Langote wrote:\n> > On Wed, Aug 10, 2022 at 3:57 AM Andres Freund <andres@anarazel.de> wrote:\n> > > One way this code could be drastically simplified is to force all\n> > > type-coercions to go through the \"io coercion\" path, which could be\n> > > implemented as a single execution step (which thus could trivially\n> > > start/finish a subtransaction) and would remove a lot of the complicated code\n> > > around coercions.\n> > \n> > Could you please clarify how you think we might do the io coercion\n> > wrapped with a subtransaction all as a single execution step? 
I\n> > would've thought that we couldn't do the sub-transaction without\n> > leaving ExecInterpExpr() anyway, so maybe you meant the io coercion\n> > itself was done using some code outside ExecInterpExpr()?\n> > \n> > The current JsonExpr code does it by recursively calling\n> > ExecInterpExpr() using the nested ExprState expressly for the\n> > coercion.\n\nThe basic idea is to rip out all the type-dependent stuff out and replace it\nwith a single JSON_IOCERCE step, which has a parameter about whether to wrap\nthings in a subtransaction or not. That step would always perform the coercion\nby calling the text output function of the input and the text input function\nof the output.\n\n\n> With RMT hat on, Andres do you have any thoughts on this?\n\nI think I need to prototype how it'd look like to give a more detailed\nanswer. I have a bunch of meetings over the next few hours, but after that I\ncan give it a shot.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Aug 2022 10:17:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Maybe it's worth sticking a StaticAssert() for the struct size\n>> somewhere.\n\n> Indeed. I thought we had one already.\n\n>> I'm a bit wary about that being too noisy, there are some machines with\n>> odd alignment requirements. 
Perhaps worth restricting the assertion to\n>> x86-64 + armv8 or such?\n\n> I'd put it in first and only reconsider if it shows unfixable problems.\n\nNow that we've got the sizeof(ExprEvalStep) under control, shouldn't\nwe do the attached?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 22 Feb 2023 16:34:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2023-02-22 16:34:44 -0500, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> Maybe it's worth sticking a StaticAssert() for the struct size\n> >> somewhere.\n> \n> > Indeed. I thought we had one already.\n> \n> >> I'm a bit wary about that being too noisy, there are some machines with\n> >> odd alignment requirements. Perhaps worth restricting the assertion to\n> >> x86-64 + armv8 or such?\n> \n> > I'd put it in first and only reconsider if it shows unfixable problems.\n> \n> Now that we've got the sizeof(ExprEvalStep) under control, shouldn't\n> we do the attached?\n\nIndeed. Pushed.\n\nLet's hope there's no rarely used architecture with odd alignment rules.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Feb 2023 14:47:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "On Wed, Feb 22, 2023 at 5:47 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-02-22 16:34:44 -0500, Tom Lane wrote:\n> > I wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > >> Maybe it's worth sticking a StaticAssert() for the struct size\n> > >> somewhere.\n> >\n> > > Indeed. I thought we had one already.\n> >\n> > >> I'm a bit wary about that being too noisy, there are some machines\n> with\n> > >> odd alignment requirements. 
Perhaps worth restricting the assertion to\n> > >> x86-64 + armv8 or such?\n> >\n> > > I'd put it in first and only reconsider if it shows unfixable problems.\n> >\n> > Now that we've got the sizeof(ExprEvalStep) under control, shouldn't\n> > we do the attached?\n>\n> Indeed. Pushed.\n>\n> Let's hope there's no rarely used architecture with odd alignment rules.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n>\nI have a question about this that may affect some of my future work.\n\nMy not-ready-for-16 work on CAST( ... ON DEFAULT ... ) involved making\nFuncExpr/IoCoerceExpr/ArrayCoerceExpr have a safe_mode flag, and that\nnecessitates adding a reserror boolean to ExprEvalStep for subsequent steps\nto test if the error happened.\n\nWill that change be throwing some architectures over the 64 byte count?\n
", "msg_date": "Thu, 23 Feb 2023 13:39:14 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> My not-ready-for-16 work on CAST( ... ON DEFAULT ... ) involved making\n> FuncExpr/IoCoerceExpr/ArrayCoerceExpr have a safe_mode flag, and that\n> necessitates adding a reserror boolean to ExprEvalStep for subsequent steps\n> to test if the error happened.\n\nWhy do you want it in ExprEvalStep ... couldn't it be in ExprState?\nI can't see why you'd need more than one at a time during evaluation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Feb 2023 13:56:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" }, { "msg_contents": "Hi,\n\nOn 2023-02-23 13:39:14 -0500, Corey Huinker wrote:\n> My not-ready-for-16 work on CAST( ... ON DEFAULT ... ) involved making\n> FuncExpr/IoCoerceExpr/ArrayCoerceExpr have a safe_mode flag, and that\n> necessitates adding a reserror boolean to ExprEvalStep for subsequent steps\n> to test if the error happened.\n\nI think if that requires adding a new variable to each ExprEvalStep, it's\nDOA. The overhead would be too high. 
But I don't see why it would need to be\nadded all ExprEvalSteps instead of individual steps, or perhaps ExprEvalState?\n\n\n> Will that change be throwing some architectures over the 64 byte count?\n\nIt would.\n\nI find the 'pahole' tool very useful for looking at struct layout.\n\n\nstruct ExprEvalStep {\n intptr_t opcode; /* 0 8 */\n Datum * resvalue; /* 8 8 */\n _Bool * resnull; /* 16 8 */\n union {\n struct {\n int last_var; /* 24 4 */\n _Bool fixed; /* 28 1 */\n\n /* XXX 3 bytes hole, try to pack */\n\n TupleDesc known_desc; /* 32 8 */\n const TupleTableSlotOps * kind; /* 40 8 */\n } fetch; /* 24 24 */\n...\n struct {\n Datum * values; /* 24 8 */\n _Bool * nulls; /* 32 8 */\n int nelems; /* 40 4 */\n MinMaxOp op; /* 44 4 */\n FmgrInfo * finfo; /* 48 8 */\n FunctionCallInfo fcinfo_data; /* 56 8 */\n } minmax; /* 24 40 */\n...\n\n } d; /* 24 40 */\n\n /* size: 64, cachelines: 1, members: 4 */\n};\n\n\nWe don't have memory to spare in the \"general\" portion of ExprEvalStep\n(currently 24 bytes), as several of the type-specific portions are already 40\nbytes large.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Feb 2023 11:35:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "Hi,\n\nOn 2023-02-23 13:56:56 -0500, Tom Lane wrote:\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > My not-ready-for-16 work on CAST( ... ON DEFAULT ... ) involved making\n> > FuncExpr/IoCoerceExpr/ArrayCoerceExpr have a safe_mode flag, and that\n> > necessitates adding a reserror boolean to ExprEvalStep for subsequent steps\n> > to test if the error happened.\n> \n> Why do you want it in ExprEvalStep ... couldn't it be in ExprState?\n> I can't see why you'd need more than one at a time during evaluation.\n\nI don't know exactly what CAST( ... ON DEFAULT ... 
) is aiming for - I guess\nit wants to assign a different value when the cast fails? Is the default\nexpression a constant, or does it need to be runtime evaluated? If a const,\nthen the cast steps just could assign the new value. If runtime evaluation is\nneeded I'd expect the various coerce steps to jump to the value implementing\nthe default expression in case of a failure.\n\nSo I'm not sure we even need a reserror field in ExprState.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Feb 2023 11:39:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to\n ExprEvalStep size" }, { "msg_contents": "On Thu, Feb 23, 2023 at 2:39 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-02-23 13:56:56 -0500, Tom Lane wrote:\n> > Corey Huinker <corey.huinker@gmail.com> writes:\n> > > My not-ready-for-16 work on CAST( ... ON DEFAULT ... ) involved making\n> > > FuncExpr/IoCoerceExpr/ArrayCoerceExpr have a safe_mode flag, and that\n> > > necessitates adding a reserror boolean to ExprEvalStep for subsequent\n> steps\n> > > to test if the error happened.\n> >\n> > Why do you want it in ExprEvalStep ... couldn't it be in ExprState?\n> > I can't see why you'd need more than one at a time during evaluation.\n>\n> I don't know exactly what CAST( ... ON DEFAULT ... ) is aiming for - I\n> guess\n> it wants to assign a different value when the cast fails? Is the default\n> expression a constant, or does it need to be runtime evaluated? If a\n> const,\n> then the cast steps just could assign the new value. If runtime evaluation\n> is\n> needed I'd expect the various coerce steps to jump to the value\n> implementing\n> the default expression in case of a failure.\n>\n\nThe default expression is itself a cast expression. 
So CAST (expr1 AS\nsome_type DEFAULT expr2 ON ERROR) would basically be a safe-mode cast of\nexpr1 to some_type, and only upon failure would the non-safe cast of expr2\nto some_type be executed. Granted, the most common use case would be for\nexpr2 to be a constant or something that folds into a constant, but the\nproposed spec allows for it.\n\nMy implementation involved adding a setting to CoalesceExpr that tested for\nerror flags rather than null flags, hence putting it in ExprEvalStep and\nExprState (perhaps mistakenly). Copying and adapting EEOP_JUMP_IF_NOT_NULL\nlead me to this:\n\n EEO_CASE(EEOP_JUMP_IF_NOT_ERROR)\n {\n /* Transfer control if current result is non-error */\n if (!*op->reserror)\n {\n *op->reserror = false;\n EEO_JUMP(op->d.jump.jumpdone);\n }\n\n /* reset error flag */\n *op->reserror = false;\n\n EEO_NEXT();\n }\n
", "msg_date": "Fri, 24 Feb 2023 04:44:03 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 15 (and to a smaller degree 14) regression due to ExprEvalStep\n size" } ]
[ { "msg_contents": "Hi hackers,\n\n I thought it would be nice to have an configuration example of the pg_prewarm extension.\n Therefore, I have written an example of a basic configuration.\n\n ---\n Regards,\n DongWook Lee", "msg_date": "Sat, 18 Jun 2022 17:55:41 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "doc: pg_prewarm add configuration example" }, { "msg_contents": "On 22/06/29 03:57오후, Jacob Champion wrote:\n> On 6/18/22 01:55, Dong Wook Lee wrote:\n> > Hi hackers,\n> > \n> > I thought it would be nice to have an configuration example of the pg_prewarm extension.\n> > Therefore, I have written an example of a basic configuration.\n> \n> [offlist]\n> \n> Hi Dong Wook, I saw a commitfest entry registered for some of your other\n> patches, but not for this one. Quick reminder to register it in the\n> Documentation section if that was your intent.\n\nThank you very much for letting me know.\nThe patch is so trivial that I thought about whether to post it on commitfest,\nbut I think it would be better to post it.\n\n> \n> Thanks,\n> --Jacob\n> \n> [NOTE] This is part of a mass communication prior to the July\n> commitfest, which begins in two days. 
If I've made a mistake -- for\n> example, if the patch has already been registered, withdrawn, or\n> committed -- just let me know, and sorry for the noise.\n\n\n", "msg_date": "Thu, 30 Jun 2022 14:40:50 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "Re: doc: pg_prewarm add configuration example" }, { "msg_contents": "On Thu, Jun 30, 2022 at 02:40:50PM +0900, Dong Wook Lee wrote:\n> On 22/06/29 03:57오후, Jacob Champion wrote:\n> > On 6/18/22 01:55, Dong Wook Lee wrote:\n> > > Hi hackers,\n> > > \n> > > I thought it would be nice to have an configuration example of the pg_prewarm extension.\n> > > Therefore, I have written an example of a basic configuration.\n> > \n> > [offlist]\n> > \n> > Hi Dong Wook, I saw a commitfest entry registered for some of your other\n> > patches, but not for this one. Quick reminder to register it in the\n> > Documentation section if that was your intent.\n> \n> Thank you very much for letting me know.\n> The patch is so trivial that I thought about whether to post it on commitfest,\n> but I think it would be better to post it.\n\nI have applied this, with adjustments, to all supported versions back to\nPG 11. PG 10 had docs different enough that I skipped it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Fri, 8 Jul 2022 18:38:30 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: doc: pg_prewarm add configuration example" } ]
[ { "msg_contents": "Extension version strings need to be quoted. Either double or single\nquotes will work. In released psql clients, tab completion offers double\nquoted suggestions:\n\nalter extension pg_trgm update TO <tab><tab>\n\"1.3\" \"1.4\" \"1.5\" \"1.6\"\n\nBut commit 02b8048ba5 broke that, it now offers unquoted version strings\nwhich if used as offered then lead to syntax errors.\n\nThe code change seems to have been intentional, but I don't think the\nbehavior change was intended. While the version string might not be an\nidentifier, it still needs to be treated as if it were one.\nPutting pg_catalog.quote_ident back\ninto Query_for_list_of_available_extension_versions* fixes it, but might\nnot be the best way to fix it.\n\ncommit 02b8048ba5dc36238f3e7c3c58c5946220298d71 (HEAD, refs/bisect/bad)\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Sun Jan 30 13:33:23 2022 -0500\n\n psql: improve tab-complete's handling of variant SQL names.\n\n\nCheer,\n\nJeff\n
", "msg_date": "Sat, 18 Jun 2022 14:09:48 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "15beta1 tab completion of extension versions" }, { "msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> Extension version strings need to be quoted. Either double or single\n> quotes will work. In released psql clients, tab completion offers double\n> quoted suggestions:\n> But commit 02b8048ba5 broke that, it now offers unquoted version strings\n> which if used as offered then lead to syntax errors.\n\nOoops.\n\n> The code change seems to have been intentional, but I don't think the\n> behavior change was intended.\n\nGiven the comments about it, I'm sure I tested the behavior somewhere\nalong the line --- but I must not have done so with the final logic\nof _complete_from_query.\n\n> Putting pg_catalog.quote_ident back\n> into Query_for_list_of_available_extension_versions* fixes it, but might\n> not be the best way to fix it.\n\nYeah, that seems like the appropriate fix. Done, thanks for the report!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jun 2022 19:49:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 15beta1 tab completion of extension versions" }, { "msg_contents": "I wrote:\n> Jeff Janes <jeff.janes@gmail.com> writes:\n>> Putting pg_catalog.quote_ident back\n>> into Query_for_list_of_available_extension_versions* fixes it, but might\n>> not be the best way to fix it.\n\n> Yeah, that seems like the appropriate fix. Done, thanks for the report!\n\nActually ... after further thought it seems like maybe we should\nmake this more like other cases rather than less so. 
ISTM that much\nof the issue here is somebody's decision that \"TO version\" should be\noffered as a completion of \"UPDATE\", which is unlike the way we do this\nanywhere else --- the usual thing is to offer \"UPDATE TO\" as a single\ncompletion. So I'm thinking about the attached.\n\nThis behaves a little differently from the old code. In v14,\n\talter extension pg_trgm upd<TAB>\ngives you\n\talter extension pg_trgm update<space>\nand another <TAB> produces\n\talter extension pg_trgm update TO \"1.\n\nWith this,\n\talter extension pg_trgm upd<TAB>\ngives you\n\talter extension pg_trgm update to<space>\nand another <TAB> produces\n\talter extension pg_trgm update to \"1.\n\nThat seems more consistent with other cases, and it's the same\nnumber of <TAB> presses.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 19 Jun 2022 00:56:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 15beta1 tab completion of extension versions" }, { "msg_contents": "On Sun, Jun 19, 2022 at 12:56:13AM -0400, Tom Lane wrote:\n> Actually ... after further thought it seems like maybe we should\n> make this more like other cases rather than less so. ISTM that much\n> of the issue here is somebody's decision that \"TO version\" should be\n> offered as a completion of \"UPDATE\", which is unlike the way we do this\n> anywhere else --- the usual thing is to offer \"UPDATE TO\" as a single\n> completion. So I'm thinking about the attached.\n\nWhich are the older completions that offer \"UPDATE TO\"? I don't see any.\n\n> This behaves a little differently from the old code. 
In v14,\n> \talter extension pg_trgm upd<TAB>\n> gives you\n> \talter extension pg_trgm update<space>\n> and another <TAB> produces\n> \talter extension pg_trgm update TO \"1.\n> \n> With this,\n> \talter extension pg_trgm upd<TAB>\n> gives you\n> \talter extension pg_trgm update to<space>\n> and another <TAB> produces\n> \talter extension pg_trgm update to \"1.\n> \n> That seems more consistent with other cases, and it's the same\n> number of <TAB> presses.\n\nI think it makes sense to send UPDATE TO as a single completion in places\nwhere no valid command can have the UPDATE without the TO. CREATE RULE foo AS\nON UPDATE TO is a candidate, though CREATE RULE completion doesn't do that\ntoday. \"ALTER EXTENSION hstore UPDATE;\" is a valid command (updates to the\ncontrol file default version). Hence, I think the v14 behavior was better.\n\n\n", "msg_date": "Sun, 3 Jul 2022 01:32:17 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: 15beta1 tab completion of extension versions" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> I think it makes sense to send UPDATE TO as a single completion in places\n> where no valid command can have the UPDATE without the TO. CREATE RULE foo AS\n> ON UPDATE TO is a candidate, though CREATE RULE completion doesn't do that\n> today. \"ALTER EXTENSION hstore UPDATE;\" is a valid command (updates to the\n> control file default version). Hence, I think the v14 behavior was better.\n\nHmm ... good point, let me think about that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Jul 2022 10:29:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 15beta1 tab completion of extension versions" }, { "msg_contents": "I wrote:\n> Noah Misch <noah@leadboat.com> writes:\n>> \"ALTER EXTENSION hstore UPDATE;\" is a valid command (updates to the\n>> control file default version). Hence, I think the v14 behavior was better.\n\n> Hmm ... 
good point, let me think about that.\n\nAfter consideration, my preferred solution is just this:\n\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\nindex 463cac9fb0..c5cafe6f4b 100644\n--- a/src/bin/psql/tab-complete.c\n+++ b/src/bin/psql/tab-complete.c\n@@ -1927,7 +1927,7 @@ psql_completion(const char *text, int start, int end)\n \n \t/* ALTER EXTENSION <name> */\n \telse if (Matches(\"ALTER\", \"EXTENSION\", MatchAny))\n-\t\tCOMPLETE_WITH(\"ADD\", \"DROP\", \"UPDATE TO\", \"SET SCHEMA\");\n+\t\tCOMPLETE_WITH(\"ADD\", \"DROP\", \"UPDATE\", \"SET SCHEMA\");\n \n \t/* ALTER EXTENSION <name> UPDATE */\n \telse if (Matches(\"ALTER\", \"EXTENSION\", MatchAny, \"UPDATE\"))\n\nThis will require one extra <TAB> when what you want is to update to\na specific version, but I doubt that that's going to bother anyone\nvery much. I don't want to try to resurrect the v14 behavior exactly\nbecause it's too much of a mess from a quoting standpoint.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Jul 2022 14:00:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 15beta1 tab completion of extension versions" }, { "msg_contents": "On Sun, Jul 03, 2022 at 02:00:59PM -0400, Tom Lane wrote:\n> I wrote:\n> > Noah Misch <noah@leadboat.com> writes:\n> >> \"ALTER EXTENSION hstore UPDATE;\" is a valid command (updates to the\n> >> control file default version). Hence, I think the v14 behavior was better.\n> \n> > Hmm ... 
good point, let me think about that.\n> \n> After consideration, my preferred solution is just this:\n> \n> diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\n> index 463cac9fb0..c5cafe6f4b 100644\n> --- a/src/bin/psql/tab-complete.c\n> +++ b/src/bin/psql/tab-complete.c\n> @@ -1927,7 +1927,7 @@ psql_completion(const char *text, int start, int end)\n> \n> \t/* ALTER EXTENSION <name> */\n> \telse if (Matches(\"ALTER\", \"EXTENSION\", MatchAny))\n> -\t\tCOMPLETE_WITH(\"ADD\", \"DROP\", \"UPDATE TO\", \"SET SCHEMA\");\n> +\t\tCOMPLETE_WITH(\"ADD\", \"DROP\", \"UPDATE\", \"SET SCHEMA\");\n> \n> \t/* ALTER EXTENSION <name> UPDATE */\n> \telse if (Matches(\"ALTER\", \"EXTENSION\", MatchAny, \"UPDATE\"))\n> \n> This will require one extra <TAB> when what you want is to update to\n> a specific version, but I doubt that that's going to bother anyone\n> very much. I don't want to try to resurrect the v14 behavior exactly\n> because it's too much of a mess from a quoting standpoint.\n\nWorks for me, and I agree the patch implements that successfully. \"ALTER\nEXTENSION x UPDATE;\" is an infrequent command, and \"ALTER EXTENSION x UPDATE\nTO ...\" is even less frequent. It's not worth much special effort to shave\n<TAB> steps.\n\n\n", "msg_date": "Sun, 3 Jul 2022 12:11:22 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: 15beta1 tab completion of extension versions" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sun, Jul 03, 2022 at 02:00:59PM -0400, Tom Lane wrote:\n>> This will require one extra <TAB> when what you want is to update to\n>> a specific version, but I doubt that that's going to bother anyone\n>> very much. I don't want to try to resurrect the v14 behavior exactly\n>> because it's too much of a mess from a quoting standpoint.\n\n> Works for me, and I agree the patch implements that successfully. 
\"ALTER\n> EXTENSION x UPDATE;\" is an infrequent command, and \"ALTER EXTENSION x UPDATE\n> TO ...\" is even less frequent. It's not worth much special effort to shave\n> <TAB> steps.\n\nDone that way then. Thanks for the report.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Jul 2022 15:28:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 15beta1 tab completion of extension versions" } ]
[ { "msg_contents": "Hi hackers,\n\nIn several conversations I had recently with colleagues it was pointed\nout that it would be great if PostgreSQL supported COPY to/from\nParquet and other formats. I've found a corresponding discussion [1]\non pgsql-general@. The consensus reached back in 2018 seems to be that\nthis shouldn't be implemented in the core but rather an API should be\nprovided for the extensions. To my knowledge this was never\nimplemented though.\n\nI would like to invest some time into providing a corresponding patch\nfor the core and implementing \"pg_copy_parquet\" extension as a\npractical example, and yet another, a bit simpler, extension as an API\nusage example for the core codebase. I just wanted to double-check\nthat this is still a wanted feature and no one on pgsql-hackers@\nobjects the idea.\n\nAny feedback, suggestions and ideas are most welcome.\n\n[1]: https://postgr.es/m/20180210151304.fonjztsynewldfba%40gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 20 Jun 2022 18:05:37 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Make COPY extendable in order to support Parquet and other formats" }, { "msg_contents": "On Mon, Jun 20, 2022 at 8:35 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> I would like to invest some time into providing a corresponding patch\n> for the core and implementing \"pg_copy_parquet\" extension as a\n> practical example, and yet another, a bit simpler, extension as an API\n> usage example for the core codebase. I just wanted to double-check\n> that this is still a wanted feature and no one on pgsql-hackers@\n> objects the idea.\n\nAn extension just for COPY to/from parquet looks limited in\nfunctionality. Shouldn't this be viewed as an FDW or Table AM support\nfor parquet or other formats? Of course the later is much larger in\nscope compared to the first one. 
But there may already be efforts\nunderway [1]\nhttps://www.postgresql.org/about/news/parquet-s3-fdw-01-was-newly-released-2179/\n\nI have not used it myself or worked with it.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 21 Jun 2022 15:08:27 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" }, { "msg_contents": "Hi Ashutosh,\n\n> An extension just for COPY to/from parquet looks limited in\n> functionality. Shouldn't this be viewed as an FDW or Table AM support\n> for parquet or other formats? Of course the later is much larger in\n> scope compared to the first one. But there may already be efforts\n> underway\n> https://www.postgresql.org/about/news/parquet-s3-fdw-01-was-newly-released-2179/\n\nMany thanks for sharing your thoughts on this!\n\nWe are using parquet_fdw [2] but this is a read-only FDW.\n\nWhat users typically need is to dump their data as fast as possible in\na given format and either to upload it to the cloud as historical data\nor to transfer it to another system (Spark, etc). The data can be\naccessed later if needed, as read only one.\n\nNote that when accessing the historical data with parquet_fdw you\nbasically have a zero ingestion time.\n\nAnother possible use case is transferring data to PostgreSQL from\nanother source. Here the requirements are similar - the data should be\ndumped as fast as possible from the source, transferred over the\nnetwork and imported as fast as possible.\n\nIn other words, personally I'm unaware of use cases when somebody\nneeds a complete read/write FDW or TableAM implementation for formats\nlike Parquet, ORC, etc. 
Also to my knowledge they are not particularly\noptimized for this.\n\n[2]: https://github.com/adjust/parquet_fdw\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 21 Jun 2022 12:56:21 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" }, { "msg_contents": "On Tue, Jun 21, 2022 at 3:26 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n\n>\n> In other words, personally I'm unaware of use cases when somebody\n> needs a complete read/write FDW or TableAM implementation for formats\n> like Parquet, ORC, etc. Also to my knowledge they are not particularly\n> optimized for this.\n>\n\nIIUC, you want extensibility in FORMAT argument to COPY command\nhttps://www.postgresql.org/docs/current/sql-copy.html. Where the\nformat is pluggable. That seems useful.\nAnother option is to dump the data in csv format but use external\nutility to convert csv to parquet or whatever other format is. I\nunderstand that that's not going to be as efficient as dumping\ndirectly in the desired format.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 22 Jun 2022 16:59:16 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" }, { "msg_contents": "Hi Ashutosh,\n\n> IIUC, you want extensibility in FORMAT argument to COPY command\n> https://www.postgresql.org/docs/current/sql-copy.html. Where the\n> format is pluggable. That seems useful.\n> Another option is to dump the data in csv format but use external\n> utility to convert csv to parquet or whatever other format is. I\n> understand that that's not going to be as efficient as dumping\n> directly in the desired format.\n\nExactly. 
However, to clarify, I suspect this may be a bit more\ninvolved than simply extending the FORMAT arguments.\n\nThis change per se will not be extremely useful. Currently nothing\nprevents an extension author to iterate over a table using\nheap_open(), heap_getnext(), etc API and dump its content in any\nformat. The user will have to write \"dump_table(foo, filename)\"\ninstead of \"COPY ...\" but that's not a big deal.\n\nThe problem is that every new extension has to re-invent things like\nfiguring out the schema, the validation of the data, etc. If we could\ndo this in the core so that an extension author has to implement only\nthe minimal format-dependent list of callbacks that would be really\ngreat. In order to make the interface practical though one will have\nto implement a practical extension as well, for instance, a Parquet\none.\n\nThis being said, if it turns out that for some reason this is not\nrealistic to deliver, ending up with simply extending this part of the\nsyntax a bit should be fine too.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 22 Jun 2022 14:51:49 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" }, { "msg_contents": "Hi,\n\nOn 2022-06-22 16:59:16 +0530, Ashutosh Bapat wrote:\n> On Tue, Jun 21, 2022 at 3:26 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> \n> >\n> > In other words, personally I'm unaware of use cases when somebody\n> > needs a complete read/write FDW or TableAM implementation for formats\n> > like Parquet, ORC, etc. Also to my knowledge they are not particularly\n> > optimized for this.\n> >\n> \n> IIUC, you want extensibility in FORMAT argument to COPY command\n> https://www.postgresql.org/docs/current/sql-copy.html. Where the\n> format is pluggable. That seems useful.\n\nAgreed.\n\nBut I think it needs quite a bit of care. 
Just plugging in a bunch of per-row\n(or worse, per field) switches to COPYs input / output parsing will make the\ncode even harder to read and even slower.\n\nI suspect that we'd first need a patch to refactor the existing copy code a\ngood bit to clean things up. After that it hopefully will be possible to plug\nin a new format without being too intrusive.\n\nI know little about parquet - can it support FROM STDIN efficiently?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Jun 2022 16:49:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-06-22 16:59:16 +0530, Ashutosh Bapat wrote:\n>> IIUC, you want extensibility in FORMAT argument to COPY command\n>> https://www.postgresql.org/docs/current/sql-copy.html. Where the\n>> format is pluggable. That seems useful.\n\n> Agreed.\n\nDitto.\n\n> I suspect that we'd first need a patch to refactor the existing copy code a\n> good bit to clean things up. After that it hopefully will be possible to plug\n> in a new format without being too intrusive.\n\nI think that step 1 ought to be to convert the existing formats into\nplug-ins, and demonstrate that there's no significant loss of performance.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jun 2022 22:51:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" }, { "msg_contents": "Andres, Tom,\n\n> > I suspect that we'd first need a patch to refactor the existing copy code a\n> > good bit to clean things up. 
After that it hopefully will be possible to plug\n> > in a new format without being too intrusive.\n>\n> I think that step 1 ought to be to convert the existing formats into\n> plug-ins, and demonstrate that there's no significant loss of performance.\n\nYep, this looks like a promising strategy to me too.\n\n> I know little about parquet - can it support FROM STDIN efficiently?\n\nParquet is a compressed binary format with data grouped by columns\n[1]. I wouldn't assume that this is a primary use case for this\nparticular format.\n\n[1]: https://parquet.apache.org/docs/file-format/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 23 Jun 2022 11:38:29 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" }, { "msg_contents": "Hi,\n\nOn 2022-06-23 11:38:29 +0300, Aleksander Alekseev wrote:\n> > I know little about parquet - can it support FROM STDIN efficiently?\n> \n> Parquet is a compressed binary format with data grouped by columns\n> [1]. I wouldn't assume that this is a primary use case for this\n> particular format.\n\nIMO decent COPY FROM / TO STDIN support is crucial, because otherwise you\ncan't do COPY from/to a client. Which would make the feature unusable for\nanybody not superuser, including just about all users of hosted PG.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jun 2022 18:45:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" }, { "msg_contents": "\nOn 2022-06-23 Th 21:45, Andres Freund wrote:\n> Hi,\n>\n> On 2022-06-23 11:38:29 +0300, Aleksander Alekseev wrote:\n>>> I know little about parquet - can it support FROM STDIN efficiently?\n>> Parquet is a compressed binary format with data grouped by columns\n>> [1]. 
I wouldn't assume that this is a primary use case for this\n>> particular format.\n> IMO decent COPY FROM / TO STDIN support is crucial, because otherwise you\n> can't do COPY from/to a client. Which would make the feature unusable for\n> anybody not superuser, including just about all users of hosted PG.\n>\n\n+1\n\n\nNote that Parquet puts the metadata at the end of each file, which makes\nit nice to write but somewhat unfriendly for streaming readers, which\nwould have to accumulate the whole file in order to process it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 24 Jun 2022 10:14:01 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" }, { "msg_contents": "Hi Andrew,\n\n> > IMO decent COPY FROM / TO STDIN support is crucial, because otherwise you\n> > can't do COPY from/to a client. Which would make the feature unusable for\n> > anybody not superuser, including just about all users of hosted PG.\n> >\n>\n> +1\n>\n> Note that Parquet puts the metadata at the end of each file, which makes\n> it nice to write but somewhat unfriendly for streaming readers, which\n> would have to accumulate the whole file in order to process it.\n\nIt's not necessarily that bad since data is divided into pages, each\npage can be processed separately. However personally I have limited\nexperience with Parquet at this point. Some experimentation is\nrequired. I will keep in mind the requirement regarding COPY FROM / TO\nSTDIN.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 24 Jun 2022 18:04:26 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Make COPY extendable in order to support Parquet and other\n formats" } ]
[ { "msg_contents": "Hi all,\n\nIf there's not already a manager for the upcoming (July 2022)\ncommitfest, I'd like to volunteer. (And if there is, I'm happy to\nassist and start learning the ropes, if that would be helpful.)\n\n--Jacob\n\n\n", "msg_date": "Mon, 20 Jun 2022 12:29:57 -0500", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "CFM for 2022-07" }, { "msg_contents": "> On 20 Jun 2022, at 19:29, Jacob Champion <jchampion@timescale.com> wrote:\n\n> If there's not already a manager for the upcoming (July 2022)\n> commitfest, I'd like to volunteer.\n\n+1\n\n> (And if there is, I'm happy to assist and start learning the ropes, if that\n> would be helpful.)\n\n\nTurning it on it's head, I'm happy to assist you.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 20 Jun 2022 19:38:20 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: CFM for 2022-07" }, { "msg_contents": "On Mon, Jun 20, 2022 at 12:38 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 20 Jun 2022, at 19:29, Jacob Champion <jchampion@timescale.com> wrote:\n> > (And if there is, I'm happy to assist and start learning the ropes, if that\n> > would be helpful.)\n>\n> Turning it on it's head, I'm happy to assist you.\n\nThanks Daniel, that'd be great!\n\n--Jacob\n\n\n", "msg_date": "Mon, 20 Jun 2022 12:57:22 -0500", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: CFM for 2022-07" }, { "msg_contents": "On Mon, Jun 20, 2022 at 12:57:22PM -0500, Jacob Champion wrote:\n> On Mon, Jun 20, 2022 at 12:38 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Turning it on it's head, I'm happy to assist you.\n> \n> Thanks Daniel, that'd be great!\n\nWow. 
Thanks, both of you.\n--\nMichael", "msg_date": "Tue, 21 Jun 2022 08:58:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: CFM for 2022-07" }, { "msg_contents": "If nobody has already volunteered for the next upcoming commitfest.\nI'd like to volunteer. I think early to say is better as always.\n\n--\nIbrar Ahmed", "msg_date": "Tue, 5 Jul 2022 08:17:26 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "CFM Manager" }, { "msg_contents": "On Tue, Jul 05, 2022 at 08:17:26AM +0500, Ibrar Ahmed wrote:\n> If nobody has already volunteered for the next upcoming commitfest.\n> I'd like to volunteer. I think early to say is better as always.\n\nJacob and Daniel have already volunteered. Based on the number of\npatches at hand (305 in total), getting more help is always welcome, I\nguess.\n--\nMichael", "msg_date": "Tue, 5 Jul 2022 12:50:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Tue, Jul 5, 2022 at 8:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jul 05, 2022 at 08:17:26AM +0500, Ibrar Ahmed wrote:\n> > If nobody has already volunteered for the next upcoming commitfest.\n> > I'd like to volunteer. I think early to say is better as always.\n>\n> Jacob and Daniel have already volunteered. 
Based on the number of\n> patches at hand (305 in total), getting more help is always welcome, I\n> guess.\n> --\n> Michael\n>\nI am happy to help, but I am talking about the next one.\n\n\n-- \nIbrar Ahmed", "msg_date": "Tue, 5 Jul 2022 16:31:54 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CFM Manager" },
{ "msg_contents": "On Tue, Jul 5, 2022 at 4:31 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Tue, Jul 5, 2022 at 8:50 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>> On Tue, Jul 05, 2022 at 08:17:26AM +0500, Ibrar Ahmed wrote:\n>> > If nobody has already volunteered for the next upcoming commitfest.\n>> > I'd like to volunteer. I think early to say is better as always.\n>>\n>> Jacob and Daniel have already volunteered. Based on the number of\n>> patches at hand (305 in total), getting more help is always welcome, I\n>> guess.\n>> --\n>> Michael\n>>\n> I am happy to help, but I am talking about the next one.\n>\n>\n> --\n> Ibrar Ahmed\n>\nIs anybody else volunteer for that, if not I am ready to take that\nresposibility.\n\n\n-- \nIbrar Ahmed", "msg_date": "Thu, 11 Aug 2022 15:13:34 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Thu, Aug 11, 2022 at 3:14 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> Is anybody else volunteer for that, if not I am ready to take that resposibility.\n\nHi Ibrar,\n\nI don't think I've seen anyone else volunteer. I'd wait for a\ncommitter to confirm that you've got the job, though.\n\nAll: we're rapidly approaching the next CF, so if someone from the\ncrowd could chime in, it would probably help Ibrar prepare.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 22 Aug 2022 09:14:33 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> On Thu, Aug 11, 2022 at 3:14 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>> Is anybody else volunteer for that, if not I am ready to take that resposibility.\n\n> Hi Ibrar,\n\n> I don't think I've seen anyone else volunteer. 
I'd wait for a\n> committer to confirm that you've got the job, though.\n\nYou attribute more organization to this than actually exists ;-)\n\nIf Ibrar wants the job I think it's his.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Aug 2022 12:40:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Mon, Aug 22, 2022 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> You attribute more organization to this than actually exists ;-)\n\nHa, fair enough!\n\n> If Ibrar wants the job I think it's his.\n\nExcellent. Ibrar, I'll be updating the CFM checklist [1] over the next\ncouple of weeks. I'll try to have sections of it touched up by the\ntime you're due to use them. Let me know if there's anything in\nparticular that is confusing or needs more explanation.\n\nThanks,\n--Jacob\n\n[1] https://wiki.postgresql.org/wiki/CommitFest_Checklist\n\n\n", "msg_date": "Mon, 22 Aug 2022 09:47:19 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Mon, Aug 22, 2022 at 9:47 PM Jacob Champion <jchampion@timescale.com>\nwrote:\n\n> On Mon, Aug 22, 2022 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > You attribute more organization to this than actually exists ;-)\n>\n> Ha, fair enough!\n>\n> > If Ibrar wants the job I think it's his.\n>\n> Excellent. Ibrar, I'll be updating the CFM checklist [1] over the next\n> couple of weeks. I'll try to have sections of it touched up by the\n> time you're due to use them. 
Let me know if there's anything in\n> particular that is confusing or needs more explanation.\n>\n> Thanks,\n> --Jacob\n>\n> [1] https://wiki.postgresql.org/wiki/CommitFest_Checklist\n>\n>\n> Thanks, I will start working.\n\n-- \n\nIbrar Ahmed.\nSenior Software Engineer, PostgreSQL Consultant.", "msg_date": "Tue, 23 Aug 2022 01:26:08 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmed@percona.com>", "msg_from_op": false, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Tue, 23 Aug 2022 at 1:26 AM, Ibrar Ahmed <ibrar.ahmed@percona.com> wrote:\n\n>\n>\n> On Mon, Aug 22, 2022 at 9:47 PM Jacob Champion <jchampion@timescale.com>\n> wrote:\n>\n>> On Mon, Aug 22, 2022 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > You attribute more organization to this than actually exists ;-)\n>>\n>> Ha, fair enough!\n>>\n>> > If Ibrar wants the job I think it's his.\n>>\n>> Excellent. Ibrar, I'll be updating the CFM checklist [1] over the next\n>> couple of weeks. I'll try to have sections of it touched up by the\n>> time you're due to use them. 
Let me know if there's anything in\n>> particular that is confusing or needs more explanation.\n>>\n>> Thanks,\n>> --Jacob\n>>\n>> [1] https://wiki.postgresql.org/wiki/CommitFest_Checklist\n>>\n>>\n>> Thanks, I will start working.\n>\n\nI’d like to assist.\n\n\n>\n> --\n>\n> Ibrar Ahmed.\n> Senior Software Engineer, PostgreSQL Consultant.\n>\n", "msg_date": "Tue, 23 Aug 2022 01:46:45 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Tue, Aug 23, 2022 at 1:46 AM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n>\n>\n> On Tue, 23 Aug 2022 at 1:26 AM, Ibrar Ahmed <ibrar.ahmed@percona.com>\n> wrote:\n>\n>>\n>>\n>> On Mon, Aug 22, 2022 at 9:47 PM Jacob Champion <jchampion@timescale.com>\n>> wrote:\n>>\n>>> On Mon, Aug 22, 2022 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> > You attribute more organization to this than actually exists ;-)\n>>>\n>>> Ha, fair enough!\n>>>\n>>> > If Ibrar wants the job I think it's his.\n>>>\n>>> Excellent. Ibrar, I'll be updating the CFM checklist [1] over the next\n>>> couple of weeks. 
I'll try to have sections of it touched up by the\n>>> time you're due to use them. Let me know if there's anything in\n>>> particular that is confusing or needs more explanation.\n>>>\n>>> Thanks,\n>>> --Jacob\n>>>\n>>> [1] https://wiki.postgresql.org/wiki/CommitFest_Checklist\n>>>\n>>>\n>>> Thanks, I will start working.\n>>\n>\n> I’d like to assist.\n>\n> Thanks, Hamid\n\nThis will help to complete the tasks. I start looking at that; I will let\nyou know how we both\nmanage to share the load\n\n--\nIbrar Ahmed\n", "msg_date": "Tue, 23 Aug 2022 01:49:48 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmed@percona.com>", "msg_from_op": false, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Mon, Aug 22, 2022 at 1:50 PM Ibrar Ahmed <ibrar.ahmed@percona.com> wrote:\n> This will help to complete the tasks. I start looking at that; I will let you know how we both\n> manage to share the load\n\nI have updated the CFM checklist through the \"2 days before CF\"\nsection. 
Let me know if you have questions/suggestions.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 23 Aug 2022 09:27:18 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Tue, Aug 23, 2022 at 9:27 AM Jacob Champion <jchampion@timescale.com> wrote:\n> I have updated the CFM checklist through the \"2 days before CF\"\n> section. Let me know if you have questions/suggestions.\n\nI've additionally removed references to \"shame emails\" for\nnon-reviewers; I don't think CFMs are doing that anymore and I don't\nthink it'd be particularly productive anyway. I've also partially\nrewritten the \"every 5 to 7 days\" suggestions.\n\nI still have yet to update the section \"5 to 7 days before end of CF\"\nand onward.\n\n--Jacob\n\n\n", "msg_date": "Thu, 8 Sep 2022 14:34:19 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Thu, Sep 8, 2022 at 2:34 PM Jacob Champion <jchampion@timescale.com> wrote:\n> I still have yet to update the section \"5 to 7 days before end of CF\"\n> and onward.\n\nWell, I've saved the hardest part for last...\n\nIbrar, Hamid, have the checklist rewrites been helpful so far? Are you\nplanning on doing an (optional!) triage, and if so, are there any\npieces in particular you'd like me to document?\n\n--Jacob\n\n\n", "msg_date": "Tue, 20 Sep 2022 10:45:36 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: CFM Manager" }, { "msg_contents": "On Tue, Sep 20, 2022 at 10:45 PM Jacob Champion <jchampion@timescale.com>\nwrote:\n\n> On Thu, Sep 8, 2022 at 2:34 PM Jacob Champion <jchampion@timescale.com>\n> wrote:\n> > I still have yet to update the section \"5 to 7 days before end of CF\"\n> > and onward.\n>\n> Well, I've saved the hardest part for last...\n>\n> Ibrar, Hamid, have the checklist rewrites been helpful so far? 
Are you\n> planning on doing an (optional!) triage, and if so, are there any\n> pieces in particular you'd like me to document?\n>\n> --Jacob\n>\n\nI think we are good now, thanks Jacob, for the effort.\n\n-- \nIbrar Ahmed\n", "msg_date": "Mon, 26 Sep 2022 15:17:44 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CFM Manager" } ]
[ { "msg_contents": "Hi hackers!\n\nRecently we faced a problem with one of our production clusters. We use a\ncascade replication setup in this cluster, that is: master, standby (r1),\nand cascade standby (r2). From time to time, the replication lag on r1 used\nto grow, while on r2 it did not. Analysis showed that r1 startup process\nwas spending a lot of time in reading wal from disk. Increasing\n/sys/block/md2/queue/read_ahead_kb to 16384 (from 0) helps in this case.\nMaybe we can add fadvise call in postgresql startup, so it would not be\nnecessary to change settings on the hypervisor?", "msg_date": "Tue, 21 Jun 2022 10:36:55 +0300", "msg_from": "Kirill Reshke <reshke@double.cloud>", "msg_from_op": true, "msg_subject": "Use fadvise in wal replay" }, { "msg_contents": "On Tue, Jun 21, 2022 at 1:07 PM Kirill Reshke <reshke@double.cloud> wrote:\n>\n> Recently we faced a problem with one of our production clusters. We use a cascade replication setup in this cluster, that is: master, standby (r1), and cascade standby (r2). From time to time, the replication lag on r1 used to grow, while on r2 it did not. Analysis showed that r1 startup process was spending a lot of time in reading wal from disk. Increasing /sys/block/md2/queue/read_ahead_kb to 16384 (from 0) helps in this case. 
Maybe we can add fadvise call in postgresql startup, so it would not be necessary to change settings on the hypervisor?\n>\n\nI wonder if the newly introduced \"recovery_prefetch\" [1] for PG-15 can\nhelp your case?\n\n[1] - https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 21 Jun 2022 15:05:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "\n\n> On 21 Jun 2022, at 12:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> I wonder if the newly introduced \"recovery_prefetch\" [1] for PG-15 can\n> help your case?\n\nAFAICS recovery_prefetch tries to prefetch main fork, but does not try to prefetch WAL itself before reading it. Kirill is trying to solve the problem of reading WAL segments that are out of OS page cache.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 21 Jun 2022 12:48:55 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": ">> > On 21 Jun 2022, at 12:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> > I wonder if the newly introduced \"recovery_prefetch\" [1] for PG-15 can\n>> > help your case?\n>> \n>> AFAICS recovery_prefetch tries to prefetch main fork, but does not try to\n>> prefetch WAL itself before reading it. 
Kirill is trying to solve the problem of\n>> reading WAL segments that are out of OS page cache.\n\nIt seems that it is always set to 128 (kB) by default, another thing is that having (default) 16MB WAL segments might also hinder the readahead heuristics compared to having configured the bigger WAL segment size.\n\nMaybe the important question is why would be readahead mechanism be disabled in the first place via /sys | blockdev ?\n\n-J.\n\n\n", "msg_date": "Tue, 21 Jun 2022 10:20:14 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: Use fadvise in wal replay" }, { "msg_contents": "\n\n> On 21 Jun 2022, at 13:20, Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> \n> Maybe the important question is why would be readahead mechanism be disabled in the first place via /sys | blockdev ?\n\nBecause database should know better than OS which data needs to be prefetched and which should not. Big OS readahead affects index scan performance.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 21 Jun 2022 13:24:01 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "> > Maybe the important question is why would be readahead mechanism be\n> disabled in the first place via /sys | blockdev ?\n> \n> Because database should know better than OS which data needs to be\n> prefetched and which should not. Big OS readahead affects index scan\n> performance.\n\nOK fair point, however the patch here is adding 1 syscall per XLOG_BLCKSZ which is not cheap either. The code is already hot and there is example from the past where syscalls were limiting the performance [1]. Maybe it could be prefetching in larger batches (128kB? 1MB? 16MB?) 
?\n\n-J.\n\n[1] - https://commitfest.postgresql.org/28/2606/\n\n\n\n\n", "msg_date": "Tue, 21 Jun 2022 10:32:48 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: Use fadvise in wal replay" }, { "msg_contents": "On Tue, Jun 21, 2022 at 10:33 PM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> > > Maybe the important question is why would be readahead mechanism be\n> > disabled in the first place via /sys | blockdev ?\n> >\n> > Because database should know better than OS which data needs to be\n> > prefetched and which should not. Big OS readahead affects index scan\n> > performance.\n>\n> OK fair point, however the patch here is adding 1 syscall per XLOG_BLCKSZ which is not cheap either. The code is already hot and there is example from the past where syscalls were limiting the performance [1]. Maybe it could be prefetching in larger batches (128kB? 1MB? 16MB?) ?\n\nI've always thought we'd want to tell it about the *next* segment\nfile, to smooth the transition from one file to the next, something\nlike the attached (not tested).", "msg_date": "Tue, 21 Jun 2022 22:51:38 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Tue, Jun 21, 2022 at 4:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Jun 21, 2022 at 10:33 PM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> > > > Maybe the important question is why would be readahead mechanism be\n> > > disabled in the first place via /sys | blockdev ?\n> > >\n> > > Because database should know better than OS which data needs to be\n> > > prefetched and which should not. Big OS readahead affects index scan\n> > > performance.\n> >\n> > OK fair point, however the patch here is adding 1 syscall per XLOG_BLCKSZ which is not cheap either. The code is already hot and there is example from the past where syscalls were limiting the performance [1]. 
Maybe it could be prefetching in larger batches (128kB? 1MB? 16MB?) ?\n>\n> I've always thought we'd want to tell it about the *next* segment\n> file, to smooth the transition from one file to the next, something\n> like the attached (not tested).\n\nYes, it makes sense to prefetch the \"future\" WAL files that \"may be\"\nneeded for recovery (crash recovery/archive or PITR recovery/standby\nrecovery), not the current WAL file. Having said that, it's not a\ngreat idea (IMO) to make the WAL readers do the prefetching; instead, WAL\nprefetching can be delegated to a new background worker or existing bg\nwriter or checkpointer which gets started during recovery.\n\nAlso, it's a good idea to measure the benefits with and without WAL\nprefetching for all recovery types - crash recovery/archive or PITR\nrecovery/standby recovery. For standby recovery, the WAL files may be\nin OS cache if there wasn't a huge apply lag.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 21 Jun 2022 16:33:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Tue, Jun 21, 2022 at 3:18 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > On 21 Jun 2022, at 12:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I wonder if the newly introduced \"recovery_prefetch\" [1] for PG-15 can\n> > help your case?\n>\n> AFAICS recovery_prefetch tries to prefetch main fork, but does not try to prefetch WAL itself before reading it. Kirill is trying to solve the problem of reading WAL segments that are out of OS page cache.\n>\n\nOkay, but normally the WAL written by walreceiver is read by the\nstartup process soon after it's written as indicated in code comments\n(get_sync_bit()). 
So, what is causing the delay here which makes the\nstartup process perform physical reads?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 21 Jun 2022 16:54:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Tue, Jun 21, 2022 at 4:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 21, 2022 at 3:18 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> > > On 21 Jun 2022, at 12:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I wonder if the newly introduced \"recovery_prefetch\" [1] for PG-15 can\n> > > help your case?\n> >\n> > AFAICS recovery_prefetch tries to prefetch main fork, but does not try to prefetch WAL itself before reading it. Kirill is trying to solve the problem of reading WAL segments that are out of OS page cache.\n> >\n>\n> Okay, but normally the WAL written by walreceiver is read by the\n> startup process soon after it's written as indicated in code comments\n> (get_sync_bit()). So, what is causing the delay here which makes the\n> startup process perform physical reads?\n\nThat's not always true. 
If there's a huge apply lag and/or\nrestartpoint is infrequent/frequent or there are many reads on the\nstandby - in all of these cases the OS cache can replace the WAL from\nit causing the startup process to hit the disk for WAL reading.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 21 Jun 2022 17:41:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Tue, Jun 21, 2022 at 5:41 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Jun 21, 2022 at 4:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 21, 2022 at 3:18 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > >\n> > > > On 21 Jun 2022, at 12:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I wonder if the newly introduced \"recovery_prefetch\" [1] for PG-15 can\n> > > > help your case?\n> > >\n> > > AFAICS recovery_prefetch tries to prefetch main fork, but does not try to prefetch WAL itself before reading it. Kirill is trying to solve the problem of reading WAL segments that are our of OS page cache.\n> > >\n> >\n> > Okay, but normally the WAL written by walreceiver is read by the\n> > startup process soon after it's written as indicated in code comments\n> > (get_sync_bit()). So, what is causing the delay here which makes the\n> > startup process perform physical reads?\n>\n> That's not always true. If there's a huge apply lag and/or\n> restartpoint is infrequent/frequent or there are many reads on the\n> standby - in all of these cases the OS cache can replace the WAL from\n> it causing the startup process to hit the disk for WAL reading.\n>\n\nIt is possible that due to one or more these reasons startup process\nhas to physically read the WAL. I think it is better to find out what\nis going on for the OP. AFAICS, there is no mention of any other kind\nof reads on the problematic standby. 
As per the analysis shared in the\ninitial email, the replication lag is due to disk reads, so there\ndoesn't seem to be a very clear theory as to why the OP is seeing disk\nreads.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 21 Jun 2022 18:42:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "> On Tue, Jun 21, 2022 at 10:33 PM Jakub Wartak <Jakub.Wartak@tomtom.com>\r\n> wrote:\r\n> > > > Maybe the important question is why would be readahead mechanism\r\n> > > > be\r\n> > > disabled in the first place via /sys | blockdev ?\r\n> > >\r\n> > > Because database should know better than OS which data needs to be\r\n> > > prefetched and which should not. Big OS readahead affects index scan\r\n> > > performance.\r\n> >\r\n> > OK fair point, however the patch here is adding 1 syscall per XLOG_BLCKSZ\r\n> which is not cheap either. The code is already hot and there is example from the\r\n> past where syscalls were limiting the performance [1]. Maybe it could be\r\n> prefetching in larger batches (128kB? 1MB? 16MB?) ?\r\n> \r\n> I've always thought we'd want to tell it about the *next* segment file, to\r\n> smooth the transition from one file to the next, something like the attached (not\r\n> tested).\r\n\r\nHey Thomas! \r\n\r\nApparently it's false theory. Redo-bench [1] results (1st is total recovery time in seconds, 3.1GB pgdata (out of which 2.6 pg_wals/166 files). Redo-bench was slightly hacked to drop fs caches always after copying so that there is nothing in fscache (both no pgdata and no pg_wals; shared fs). 
M_io_c is at default (10), recovery_prefetch same (try; on by default)\r\n\r\nmaster, default Linux readahead (128kb):\r\n33.979, 0.478\r\n35.137, 0.504\r\n34.649, 0.518\r\n\r\nmaster, blockdev --setra 0 /dev/nvme0n1: \r\n53.151, 0.603\r\n58.329, 0.525\r\n52.435, 0.536\r\n\r\nmaster, with yours patch (readaheads disabled) -- double checked, calls to fadvise64(offset=0 len=0) were there\r\n58.063, 0.593\r\n51.369, 0.574\r\n51.716, 0.59\r\n\r\nmaster, with Kirill's original patch (readaheads disabled)\r\n38.25, 1.134\r\n36.563, 0.582\r\n37.711, 0.584\r\n\r\nI've noted also that in both cases POSIX_FADV_SEQUENTIAL is being used instead of WILLNEED (?). \r\nI haven't quantified the tradeoff of master vs Kirill's with readahead, but I think that 1 additional syscall is not going to be cheap just for non-standard OS configurations (?)\r\n\r\n-J.\r\n\r\n[1] - https://github.com/macdice/redo-bench\r\n", "msg_date": "Tue, 21 Jun 2022 13:59:20 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: Use fadvise in wal replay" }, { "msg_contents": "\n\n> On 21 Jun 2022, at 16:59, Jakub Wartak <jakub.wartak@tomtom.com> wrote:\nOh, wow, your benchmarks show really impressive improvement.\n\n> I think that 1 additional syscall is not going to be cheap just for non-standard OS configurations\nAlso we can reduce number of syscalls by something like\n\n#if defined(USE_POSIX_FADVISE) && defined(POSIX_FADV_WILLNEED)\n if ((readOff % (8 * XLOG_BLCKSZ)) == 0)\n posix_fadvise(readFile, readOff + XLOG_BLCKSZ, XLOG_BLCKSZ * 8, POSIX_FADV_WILLNEED);\n#endif\n\nand maybe define\\reuse the some GUC to control number of prefetched pages at once.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 21 Jun 2022 20:24:21 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": ">\n> > On 21 Jun 2022, at 16:59, Jakub Wartak <jakub.wartak@tomtom.com> 
wrote:\n> Oh, wow, your benchmarks show really impressive improvement.\n>\n\nFWIW I was trying to speedup long sequential file reads in Postgres using\nfadvise hints. I've found no detectable improvements.\nThen I've written 1Mb - 1Gb sequential read test with both fadvise\nPOSIX_FADV_WILLNEED\nand POSIX_FADV_SEQUENTIAL in Linux. The only improvement I've found was\n\n1. when the size of read was around several Mb and fadvise len also around\nseveral Mb.\n2. when before fadvise and the first read there was a delay (which was\nsupposedly used by OS for reading into prefetch buffer)\n3. If I read sequential blocks I saw speedup only on first ones. Overall\nread speed of say 1Gb file remained unchanged no matter what.\n\nI became convinced that if I read something long, OS does necessary\nspeedups automatically (which is also in agreement with fadvise manual/code\ncomments).\nCould you please elaborate how you got the results with that big\ndifference? (Though I'm not against fadvise usage, at worst it is expected\nto be useless).\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n", "msg_date": "Tue, 21 Jun 2022 21:52:56 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "\n\n> On 21 Jun 2022, at 20:52, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> > On 21 Jun 2022, at 16:59, Jakub Wartak <jakub.wartak@tomtom.com> wrote:\n> Oh, wow, your benchmarks show really impressive improvement.\n> \n> FWIW I was trying to speedup long sequential file reads in Postgres using fadvise hints. I've found no detectable improvements.\n> Then I've written 1Mb - 1Gb sequential read test with both fadvise POSIX_FADV_WILLNEED and POSIX_FADV_SEQUENTIAL in Linux.\nDid you drop caches?\n\n> The only improvement I've found was \n> \n> 1. when the size of read was around several Mb and fadvise len also around several Mb. \n> 2. when before fadvise and the first read there was a delay (which was supposedly used by OS for reading into prefetch buffer)\nThat's the case of startup process: you read a xlog page, then redo records from this page.\n> 3. If I read sequential blocks I saw speedup only on first ones. Overall read speed of say 1Gb file remained unchanged no matter what.\n> \n> I became convinced that if I read something long, OS does necessary speedups automatically (which is also in agreement with fadvise manual/code comments).\n> Could you please elaborate how you got the results with that big\ndifference? 
(Though I'm not against fadvise usage, at worst it is expected to be useless).\n\nFWIW we with Kirill observed drastically reduced lag on a production server when running the patched version. Fadvise surely works :) The question is how to use it optimally.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 22 Jun 2022 13:07:07 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Wed, Jun 22, 2022 at 2:07 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n>\n>\n> > On 21 Jun 2022, at 20:52, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> >\n> > > On 21 Jun 2022, at 16:59, Jakub Wartak <jakub.wartak@tomtom.com>\n> wrote:\n> > Oh, wow, your benchmarks show really impressive improvement.\n> >\n> > FWIW I was trying to speedup long sequential file reads in Postgres\n> using fadvise hints. I've found no detectable improvements.\n> > Then I've written 1Mb - 1Gb sequential read test with both fadvise\n> POSIX_FADV_WILLNEED and POSIX_FADV_SEQUENTIAL in Linux.\n> Did you drop caches?\n>\nYes. I saw nothing changes speed of long file (50Mb+) read.\n\n> > The only improvement I've found was\n> >\n> > 1. when the size of read was around several Mb and fadvise len also\n> around several Mb.\n> > 2. when before fadvise and the first read there was a delay (which was\n> supposedly used by OS for reading into prefetch buffer)\n> That's the case of startup process: you read a xlog page, then redo\n> records from this page.\n>\nThen I'd guess that your speedup is due to speeding up the first several\nMb's in many files opened (and delay for kernel prefetch is due to some\nother reason). 
That may differ from the case I've tried to measure speedup\nand this could be the cause of speedup in your case.\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n", "msg_date": "Wed, 22 Jun 2022 14:26:42 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "\n\n> On 22 Jun 2022, at 13:26, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> Then I'd guess that your speedup is due to speeding up the first several Mb's in many files opened\nI think in this case Thomas' approach of prefetching next WAL segment would do better. But Jakub observed opposite results.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 22 Jun 2022 13:56:20 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": ">> > On 21 Jun 2022, at 16:59, Jakub Wartak <jakub.wartak@tomtom.com> wrote:\n>> Oh, wow, your benchmarks show really impressive improvement.\n>> \n>> > I think that 1 additional syscall is not going to be cheap just for\n>> > non-standard OS configurations\n>> Also we can reduce number of syscalls by something like\n>> \n>> #if defined(USE_POSIX_FADVISE) && defined(POSIX_FADV_WILLNEED)\n>> if ((readOff % (8 * XLOG_BLCKSZ)) == 0)\n>> posix_fadvise(readFile, readOff + XLOG_BLCKSZ, XLOG_BLCKSZ * 8,\n>> POSIX_FADV_WILLNEED); #endif\n>> \n>> and maybe define\\reuse the some GUC to control number of prefetched pages\n>> at once.\n\nHi, I was thinking the same, so I got the patch (attached) to the point it gets the identical performance with and without readahead enabled:\n\nbaseline, master, default Linux readahead (128kb):\n33.979, 0.478\n35.137, 0.504\n34.649, 0.518\n\nmaster+patched, readahead disabled:\n34.338, 0.528\n34.568, 0.575\n34.007, 1.136\n\nmaster+patched, readahead enabled (as default):\n33.935, 0.523\n34.109, 0.501\n33.408, 0.557\n\nThoughts?\n\nNotes:\n- 
no GUC, as the default/identical value seems to be the best\n- POSIX_FADV_SEQUENTIAL is apparently much slower and doesn't seem to have effect from xlogreader.c at all while _WILLNEED does (testing again contradicts \"common wisdom\"?)\n\n-J.", "msg_date": "Thu, 23 Jun 2022 08:50:51 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: Use fadvise in wal replay" }, { "msg_contents": "\n\n> 23 июня 2022 г., в 13:50, Jakub Wartak <Jakub.Wartak@tomtom.com> написал(а):\n> \n> Thoughts?\nThe patch leaves 1st 128KB chunk unprefetched. Does it worth to add and extra branch for 120KB after 1st block when readOff==0?\nOr maybe do\n+\t\tposix_fadvise(readFile, readOff + XLOG_BLCKSZ, RACHUNK, POSIX_FADV_WILLNEED);\ninstead of\n+\t\tposix_fadvise(readFile, readOff + RACHUNK , RACHUNK, POSIX_FADV_WILLNEED);\n?\n\n> Notes:\n> - no GUC, as the default/identical value seems to be the best\nI think adding this performance boost on most systems definitely worth 1 syscall per 16 pages. And I believe 128KB to be optimal for most storages. And having no GUCs sounds great.\n\nBut storage systems might be different, far beyond benchmarks.\nAll in all, I don't have strong opinion on having 1 or 0 GUCs to configure this.\n\nI've added patch to the CF.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 23 Jun 2022 14:30:45 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "Hey Andrey,\r\n\r\n> > 23 июня 2022 г., в 13:50, Jakub Wartak <Jakub.Wartak@tomtom.com>\r\n> написал(а):\r\n> >\r\n> > Thoughts?\r\n> The patch leaves 1st 128KB chunk unprefetched. 
Does it worth to add and extra\r\n> branch for 120KB after 1st block when readOff==0?\r\n> Or maybe do\r\n> +\t\tposix_fadvise(readFile, readOff + XLOG_BLCKSZ, RACHUNK,\r\n> POSIX_FADV_WILLNEED);\r\n> instead of\r\n> +\t\tposix_fadvise(readFile, readOff + RACHUNK , RACHUNK,\r\n> POSIX_FADV_WILLNEED);\r\n> ?\r\n\r\n> > Notes:\r\n> > - no GUC, as the default/identical value seems to be the best\r\n> I think adding this performance boost on most systems definitely worth 1 syscall\r\n> per 16 pages. And I believe 128KB to be optimal for most storages. And having\r\n> no GUCs sounds great.\r\n> \r\n> But storage systems might be different, far beyond benchmarks.\r\n> All in all, I don't have strong opinion on having 1 or 0 GUCs to configure this.\r\n> \r\n> I've added patch to the CF.\r\n\r\nCool. As for GUC I'm afraid there's going to be resistance of adding yet another GUC (to avoid many knobs). Ideally it would be nice if we had some advanced/deep/hidden parameters , but there isn't such thing.\r\nMaybe another option would be to use (N * maintenance_io_concurrency * XLOG_BLCKSZ), so N=1 that's 80kB and N=2 160kB (pretty close to default value, and still can be tweaked by enduser). 
Let's wait what others say?\r\n\r\n-J.\r\n", "msg_date": "Thu, 23 Jun 2022 09:49:31 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: Use fadvise in wal replay" }, { "msg_contents": "On Thu, Jun 23, 2022 at 09:49:31AM +0000, Jakub Wartak wrote:\n> it would be nice if we had some advanced/deep/hidden parameters , but there isn't such thing.\n\nThere's DEVELOPER_OPTIONS gucs, although I don't know if this is a good fit for\nthat.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 23 Jun 2022 09:06:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "\n\n> On 23 Jun 2022, at 12:50, Jakub Wartak <jakub.wartak@tomtom.com> wrote:\n> \n> Thoughts?\n\nI've looked into the patch one more time. And I propose to change this line\n+\t\tposix_fadvise(readFile, readOff + RACHUNK, RACHUNK, POSIX_FADV_WILLNEED);\nto\n+\t\tposix_fadvise(readFile, readOff + XLOG_BLCKSZ, RACHUNK, POSIX_FADV_WILLNEED);\n\nCurrently first 128Kb of the file are not prefetched. But I expect that this change will produce similar performance results. I propose this change only for consistency, so we prefetch all data that we did not prefetch yet and going to read. What do you think?\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 18 Jul 2022 18:04:47 +0400", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Thu, Jun 23, 2022 at 5:49 AM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> Cool. As for GUC I'm afraid there's going to be resistance of adding yet another GUC (to avoid many knobs). 
Ideally it would be nice if we had some advanced/deep/hidden parameters, but there is no such thing.\n> Maybe another option would be to use (N * maintenance_io_concurrency * XLOG_BLCKSZ), so N=1 gives 80kB and N=2 gives 160kB (pretty close to the default value, and still tweakable by the end user). Let's wait and see what others say.\n\nI don't think adding more parameters is a problem intrinsically. A\ngood question to ask, though, is how the user is supposed to know what\nvalue they should configure. If we don't have any idea what value is\nlikely to be optimal, odds are users won't either.\n\nIt's not very clear to me that we have any kind of agreement on what\nthe basic approach should be here, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jul 2022 15:55:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "\n\n> On 18 Jul 2022, at 22:55, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Jun 23, 2022 at 5:49 AM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n>> Cool. As for the GUC, I'm afraid there's going to be resistance to adding yet another GUC (to avoid many knobs). Ideally it would be nice if we had some advanced/deep/hidden parameters, but there is no such thing.\n>> Maybe another option would be to use (N * maintenance_io_concurrency * XLOG_BLCKSZ), so N=1 gives 80kB and N=2 gives 160kB (pretty close to the default value, and still tweakable by the end user). Let's wait and see what others say.\n> \n> I don't think adding more parameters is a problem intrinsically. A\n> good question to ask, though, is how the user is supposed to know what\n> value they should configure. If we don't have any idea what value is\n> likely to be optimal, odds are users won't either.\nWe know that 128KB is optimal on some representative configuration and that changing the value won't really affect performance much. 128KB is marginally better than 8KB and removes some theoretical concerns about extra syscalls.\n\n> It's not very clear to me that we have any kind of agreement on what\n> the basic approach should be here, though.\n\nActually, the only question is the offset from the current read position: should it be 1 block or a full readahead chunk. Again, this does not change anything; it is just a matter of choice.\n\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 4 Aug 2022 19:18:39 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Thu, Aug 4, 2022 at 9:48 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > On 18 Jul 2022, at 22:55, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Jun 23, 2022 at 5:49 AM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n\nI have a fundamental question on the overall idea - how beneficial will it\nbe if the process that's reading the current WAL page only does\n(at least attempts) the prefetching of future WAL pages? 
Won't the\nbenefit be higher if \"some\" other background process does prefetching?\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n", "msg_date": "Fri, 5 Aug 2022 18:32:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "Hi Bharath,\n\nthank you for the suggestion.\n\n> On 5 Aug 2022, at 16:02, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Thu, Aug 4, 2022 at 9:48 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> \n>>> On 18 Jul 2022, at 22:55, Robert Haas <robertmhaas@gmail.com> wrote:\n>>> \n>>> On Thu, Jun 23, 2022 at 5:49 AM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> \n> I have a fundamental question on the overall idea - how beneficial will it\n> be if the process that's reading the current WAL page only does\n> (at least attempts) the prefetching of future WAL pages? Won't the\n> benefit be higher if \"some\" other background process does prefetching?\n\nIMO prefetching from another thread would have a negative effect.\nThe fadvise() call is non-blocking; the startup process won't do IO. It just informs the kernel to schedule an asynchronous page read.\nOn the other hand, synchronization with another process might cost more than fadvise().\n\nAnyway, the cost of calling fadvise() once per 16 page reads is negligible.\n\nThank you!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 6 Aug 2022 08:23:12 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Sat, Aug 6, 2022 at 10:53 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> Hi Bharath,\n>\n> thank you for the suggestion.\n>\n> > On 5 Aug 2022, at 16:02, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Aug 4, 2022 at 9:48 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >>\n> >>> On 18 Jul 2022, at 22:55, Robert Haas <robertmhaas@gmail.com> wrote:\n> >>>\n> >>> On Thu, Jun 23, 2022 at 5:49 AM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> >\n> > I have a fundamental question on the overall idea - how beneficial will it\n> > be if the process that's reading the current WAL page only does\n> > (at least attempts) the prefetching of future WAL pages? Won't the\n> > benefit be higher if \"some\" other background process does prefetching?\n>\n> IMO prefetching from another thread would have a negative effect.\n> The fadvise() call is non-blocking; the startup process won't do IO. It just informs the kernel to schedule an asynchronous page read.\n> On the other hand, synchronization with another process might cost more than fadvise().\n\nHm, the POSIX_FADV_WILLNEED flag makes fadvise() non-blocking.\n\n> Anyway, the cost of calling fadvise() once per 16 page reads is negligible.\n\nAgreed. Why can't we just prefetch the entire WAL file once whenever it\nis opened for the first time? Does the OS have any limitations on the max\nsize to prefetch at once? 
It may sound aggressive, but it avoids\nfadvise() system calls; this will be especially useful if there are\nmany WAL files to recover (crash, PITR or standby recovery);\neventually we would want the whole WAL file to be prefetched.\n\nIf prefetching the entire WAL file is okay, we could further do this:\n1) prefetch in XLogFileOpen() and all of the segment_open callbacks, 2)\nrelease in XLogFileClose (it's being done right now) and all of the\nsegment_close callbacks - do this perhaps optionally.\n\nAlso, can't we use the existing function FilePrefetch()? That way,\nthere is no need for a new wait event type.\n\nThoughts?\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n", "msg_date": "Sun, 7 Aug 2022 07:09:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "\n\n> On 7 Aug 2022, at 06:39, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> Agreed. Why can't we just prefetch the entire WAL file once whenever it\n> is opened for the first time? Does the OS have any limitations on the max\n> size to prefetch at once? It may sound aggressive, but it avoids\n> fadvise() system calls; this will be especially useful if there are\n> many WAL files to recover (crash, PITR or standby recovery);\n> eventually we would want the whole WAL file to be prefetched.\n> \n> If prefetching the entire WAL file is okay, we could further do this:\n> 1) prefetch in XLogFileOpen() and all of the segment_open callbacks, 2)\n> release in XLogFileClose (it's being done right now) and all of the\n> segment_close callbacks - do this perhaps optionally.\n> \n> Also, can't we use the existing function FilePrefetch()? That way,\n> there is no need for a new wait event type.\n> \n> Thoughts?\n\nThomas expressed this idea upthread. Benchmarks done by Jakub showed that this approach had no significant improvement over the existing master code.\nThe same benchmarks showed an almost 1.5x improvement with readahead in 8Kb or 128Kb chunks.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sun, 7 Aug 2022 21:41:16 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Sun, Aug 7, 2022 at 9:41 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n\nHi everyone. The patch is 16 lines, looks harmless, and has proven\nbenefits. I'm moving this into RfC.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Sat, 12 Nov 2022 14:01:50 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "Hi, hackers!\n\nOn Sun, 13 Nov 2022 at 02:02, Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> On Sun, Aug 7, 2022 at 9:41 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n>\n> Hi everyone. The patch is 16 lines, looks harmless, and has proven\n> benefits. I'm moving this into RfC.\n\nAs I've written earlier in the thread, we cannot gain much from this\noptimization. The results from Jakub show around a 2% difference:\n\n>baseline, master, default Linux readahead (128kb):\n>33.979, 0.478\n>35.137, 0.504\n>34.649, 0.518>\n\n>master+patched, readahead disabled:\n>34.338, 0.528\n>34.568, 0.575\n>34.007, 1.136\n\n>master+patched, readahead enabled (as default):\n>33.935, 0.523\n>34.109, 0.501\n>33.408, 0.557\n\nOn the other hand, the patch indeed is tiny and I don't think the\nproposed advise can ever make things bad. 
So, I've looked through the\npatch again and I agree it can be committed in the current state.\n\nKind regards,\nPavel Borisov,\nSupabase\n\n\n", "msg_date": "Sat, 26 Nov 2022 01:10:57 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Sat, 26 Nov 2022 at 01:10, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> Hi, hackers!\n>\n> On Sun, 13 Nov 2022 at 02:02, Andrey Borodin <amborodin86@gmail.com> wrote:\n> >\n> > On Sun, Aug 7, 2022 at 9:41 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > >\n> >\n> > Hi everyone. The patch is 16 lines, looks harmless, and has proven\n> > benefits. I'm moving this into RfC.\n>\n> As I've written earlier in the thread, we cannot gain much from this\n> optimization. The results from Jakub show around a 2% difference:\n>\n> >baseline, master, default Linux readahead (128kb):\n> >33.979, 0.478\n> >35.137, 0.504\n> >34.649, 0.518>\n>\n> >master+patched, readahead disabled:\n> >34.338, 0.528\n> >34.568, 0.575\n> >34.007, 1.136\n>\n> >master+patched, readahead enabled (as default):\n> >33.935, 0.523\n> >34.109, 0.501\n> >33.408, 0.557\n>\n> On the other hand, the patch indeed is tiny and I don't think the\n> proposed advise can ever make things bad. So, I've looked through the\n> patch again and I agree it can be committed in the current state.\n\nMy mailer corrected my previous message. The right one is:\n\"On the other hand, the patch indeed is tiny and I don't think the\nproposed _fadvise_ can ever make things bad\".\n\nPavel.\n\n\n", "msg_date": "Sat, 26 Nov 2022 01:31:06 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Fri, Nov 25, 2022 at 1:12 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> As I've written earlier in the thread, we cannot gain much from this\n> optimization. The results from Jakub show around a 2% difference:\n>\n> >baseline, master, default Linux readahead (128kb):\n> >33.979, 0.478\n> >35.137, 0.504\n> >34.649, 0.518>\n>\n> >master+patched, readahead disabled:\n> >34.338, 0.528\n> >34.568, 0.575\n> >34.007, 1.136\n>\n> >master+patched, readahead enabled (as default):\n> >33.935, 0.523\n> >34.109, 0.501\n> >33.408, 0.557\n>\n\nThe performance benefit shows up only when readahead is disabled. And\non many workloads readahead brings unneeded data into the page cache, so\ndisabling it is the preferred configuration.\nIn this particular case, the time to apply WAL decreases from 53s to 33s.\n\nThanks!\n\nBest Regards, Andrey Borodin.\n\n\n", "msg_date": "Sun, 27 Nov 2022 10:56:54 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "Hi,\n\nI looked at this patch today. The change is fairly simple, so I decided\nto do a benchmark. To prepare, I created a cluster with a 1GB database,\ncreated a backup, and ran a 1h UPDATE workload with WAL archiving. Then,\nthe actual benchmark does this:\n\n1. restore the datadir backup\n2. copy the WAL from archive\n3. drop caches\n4. start the cluster, measure time until end of recovery\n\nI did this with master/patched, and with/without Linux readahead. 
And I\ndid this on two different machines - both have SSD storage, one (i5) has\na RAID of SATA SSD devices, the other one (xeon) has a single NVMe SSD.\n\nThe results (in seconds) look like this (the amount of WAL is different\non each machine, so the timings are not comparable).\n\n   host     ra    master    patched\n  --------------------------------------\n   i5        0       615        513\n           256       392        396\n  --------------------------------------\n   xeon      0      2113       1436\n           256      1487       1460\n\nOn i5 (smaller machine with RAID of 6 x SATA SSD), with read-ahead\nenabled it takes ~390 seconds, and the patch makes no difference.\nWithout read-ahead, it takes ~615 seconds, and the patch does help a\nbit, but it's hardly competitive with read-ahead.\n\nNote: 256 is the read-ahead per physical device; the read-ahead value for\nthe whole RAID is 6x that, i.e. 1536. I was speculating that maybe the\nhard-coded 128kB RACHUNK is not sufficient, so I tried with 512kB. But\nthat actually made it worse, and the timing deteriorated to ~640s (that\nis, slower than master without read-ahead).\n\nOn the xeon (with NVMe SSD), it's different - the patch seems about as\neffective as regular read-ahead. So that's good.\n\n\nSo I'm a bit unsure about this patch. It doesn't seem like it can perform\nbetter than read-ahead (although perhaps it does, on a different storage\nsystem).\n\nWith disabled read-ahead it helps (at least a bit), although I'm not\nreally convinced there are good reasons to run without read-ahead. The\nreason for doing that was described like this:\n\n> Because database should know better than OS which data needs to be\n> prefetched and which should not. Big OS readahead affects index scan\n> performance.\n\nI don't recall seeing such an issue, and I can't find anything like that in\nour mailinglist archives either. Sure, that doesn't mean it can't\nhappen, and read-ahead is a heuristic so it can do weird stuff. But in\nmy experience it tends to work fairly well. The issues I've seen are\ngenerally in the opposite direction, i.e. read-ahead not kicking in.\n\n\nAnyway, it's not my intent to prevent this patch from getting committed, if\nsomeone wishes to do that. But I'm not quite convinced it actually helps\nwith a practical issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Jan 2023 22:19:10 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "Hi,\n\nOn 2023-01-19 22:19:10 +0100, Tomas Vondra wrote:\n> So I'm a bit unsure about this patch. It doesn't seem like it can perform\n> better than read-ahead (although perhaps it does, on a different storage\n> system).\n\nI really don't see the point of the patch as-is. It's not going to help OSs\nwithout useful readahead, because those don't have posix_fadvise either. And\na hardcoded readahead distance isn't useful - on e.g. cloud storage 128kB won't\nbe long enough to overcome network latency.\n\nI could see a *tad* more point in using posix_fadvise(fd, 0, 0,\nPOSIX_FADV_SEQUENTIAL) when opening WAL files, as that'll increase the kernel\nreadahead window over what's configured.\n\n\n> With disabled read-ahead it helps (at least a bit), although I'm not\n> really convinced there are good reasons to run without read-ahead. The\n> reason for doing that was described like this:\n\nAgreed. Postgres currently totally crashes and burns without OS\nreadahead. Buffered IO without readahead makes no sense. So I just don't see\nthe point of this patch.\n\n\n> > Because database should know better than OS which data needs to be\n> > prefetched and which should not. Big OS readahead affects index scan\n> > performance.\n\nI don't disagree fundamentally. 
But that doesn't make this patch a useful\nstarting point.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Jan 2023 15:19:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" }, { "msg_contents": "On Thu, 19 Jan 2023 at 18:19, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-01-19 22:19:10 +0100, Tomas Vondra wrote:\n>\n> > So I'm a bit unsure about this patch. It doesn't seem like it can perform\n> > better than read-ahead (although perhaps it does, on a different storage\n> > system).\n>\n> I really don't see the point of the patch as-is.\n...\n> I don't disagree fundamentally. But that doesn't make this patch a useful\n> starting point.\n\nIt sounds like this patch has gotten off on the wrong foot and is not\nworth moving forward to the next commitfest. Hopefully starting over\nfrom a different approach might target I/O that is more amenable to\nfadvise. I'll mark it RwF.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Sat, 8 Apr 2023 23:01:24 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use fadvise in wal replay" } ]
[ { "msg_contents": "I suggest that we add the gcc (also clang) option -ftabstop=4.\n\nThis has two effects: First, it produces more accurate column numbers \nin compiler errors and correctly indents the code excerpts that the \ncompiler shows with those. Second, it enables the compiler's detection \nof confusingly indented code to work more correctly. (But the latter is \nonly a potential problem for code that does not use tabs for \nindentation, so it would be mostly a help during development with sloppy \neditor setups.)\n\nAttached is a patch to discover the option in configure.\n\nOne bit of trickery not addressed yet is that we might want to strip out \nthe option and not expose it through PGXS, since we don't know what \nwhitespacing rules external code uses.\n\nThoughts?", "msg_date": "Tue, 21 Jun 2022 12:49:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "gcc -ftabstop option" }, { "msg_contents": "At Tue, 21 Jun 2022 12:49:24 +0200, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> I suggest that we add the gcc (also clang) option -ftabstop=4.\n> \n> This has two effects: First, it produces more accurate column numbers\n> in compiler errors and correctly indents the code excerpts that the\n> compiler shows with those. Second, it enables the compiler's\n> detection of confusingly indented code to work more correctly. 
(But\n> the latter is only a potential problem for code that does not use tabs\n> for indentation, so it would be mostly a help during development with\n> sloppy editor setups.)\n> \n> Attached is a patch to discover the option in configure.\n> \n> One bit of trickery not addressed yet is that we might want to strip\n> out the option and not expose it through PGXS, since we don't know\n> what whitespacing rules external code uses.\n>\n> Thoughts?\n\nThere's no strong reason for everyone to accept that change, but I\nmyself don't mind whether the option is applied or not.\n\n\nThere was a related discussion.\n\nhttps://www.postgresql.org/message-id/BDE54C55-438C-48E9-B2A3-08EB3A0CBB9F%40yesql.se\n\n> The -ftabstop option is (to a large extent, not entirely) to warn when tabs and\n> spaces are mixed creating misleading indentation that the author didn't even\n> notice due to tabwidth settings? ISTM we are better off getting these warnings\n> than suppressing them.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 22 Jun 2022 09:36:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: gcc -ftabstop option" }, { "msg_contents": "> At Tue, 21 Jun 2022 12:49:24 +0200, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n>> One bit of trickery not addressed yet is that we might want to strip\n>> out the option and not expose it through PGXS, since we don't know\n>> what whitespacing rules external code uses.\n\nThis part seems like a bigger problem than the option is worth.\nI agree that we can't assume third-party code prefers 4-space tabs.\n\nKyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> There was a related discussion.\n> https://www.postgresql.org/message-id/BDE54C55-438C-48E9-B2A3-08EB3A0CBB9F%40yesql.se\n>> The -ftabstop option is (to a large extent, not entirely) to warn when tabs and\n>> spaces are mixed creating misleading indentation 
that the author didn't even\n>> notice due to tabwidth settings? ISTM we are better off getting these warnings\n>> than suppressing them.\n\nIsn't that kind of redundant given that (a) we have git whitespace\nwarnings about this and (b) pgindent will take care of any such\nproblems in the end?\n\nI'll grant the point about compiler warnings possibly not lining up\nprecisely. But that's yet to bother me personally, so maybe I'm\nunderestimating its value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Jun 2022 20:48:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: gcc -ftabstop option" }, { "msg_contents": "> On 21 Jun 2022, at 12:49, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> Second, it enables the compiler's detection of confusingly indented code to work more correctly.\n\nWouldn't we also need to add -Wmisleading-indentation for this to trigger any\nwarnings? Adding -ftabstop only allows the compiler to be able to properly\ndetect it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 22 Jun 2022 21:48:53 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: gcc -ftabstop option" }, { "msg_contents": "On 22.06.22 21:48, Daniel Gustafsson wrote:\n>> On 21 Jun 2022, at 12:49, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n>> Second, it enables the compiler's detection of confusingly indented code to work more correctly.\n> \n> Wouldn't we also need to add -Wmisleading-indentation for this to trigger any\n> warnings?\n\nThat is included in -Wall.\n", "msg_date": "Thu, 23 Jun 2022 09:09:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: gcc -ftabstop option" } ]
[ { "msg_contents": "Hi,\r\n\r\nThe RMT[1] with the release team has set a date of June 30, 2022 for the \r\nPostgreSQL 15 Beta 2 release. We encourage you to try to close as many \r\nopen items[2] prior to the release.\r\n\r\nIf you are working on patches for Beta 2, please be sure that they are \r\ncommitted no later than June 26, 2022 AoE[3].\r\n\r\nThanks!\r\n\r\nJohn, Jonathan, Michael\r\n\r\n[1] https://wiki.postgresql.org/wiki/Release_Management_Team\r\n[2] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\r\n[3] https://en.wikipedia.org/wiki/Anywhere_on_Earth", "msg_date": "Tue, 21 Jun 2022 09:27:46 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 15 Beta 2 release" }, { "msg_contents": "\nOn 2022-06-21 Tu 09:27, Jonathan S. Katz wrote:\n> Hi,\n>\n> The RMT[1] with the release team has set a date of June 30, 2022 for\n> the PostgreSQL 15 Beta 2 release. We encourage you to try to close as\n> many open items[2] prior to the release.\n>\n> If you are working on patches for Beta 2, please be sure that they are\n> committed no later than June 26, 2022 AoE[3].\n>\n> Thanks!\n>\n> John, Jonathan, Michael\n\n\nNot quite sure why I'm listed against the OAT hook issue; all I did was\ncommit a test that exposed the long-existing problem :-)\n\nIt looks like Michael at least has some ideas about fixing it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 21 Jun 2022 17:36:35 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 15 Beta 2 release" }, { "msg_contents": "On Tue, Jun 21, 2022 at 05:36:35PM -0400, Andrew Dunstan wrote:\n> Not quite sure why I'm listed against the OAT hook issue; all I did was\n> commit a test that exposed the long-existing problem :-)\n\nYes, we've discussed this open item and came to the conclusion that\nassigning it to you is not really fair, so 
don't worry :)\n\n> It looks like Michael at least has some ideas about fixing it.\n\nI am planning to send an update on this thread today, but that's not\ngoing to be an amazing thing.\n--\nMichael", "msg_date": "Wed, 22 Jun 2022 09:01:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 15 Beta 2 release" } ]
[ { "msg_contents": "Hi,\n\nProblem: Today when a data page is corrupted in the primary postgres with\nphysical replication (sync or async standbys), there seems to be no way to\nrepair it easily and we rely on PITR to recreate the postgres server or\ndrop the corrupted table (of course this is not an option for important\ncustomer tables, but may be okay for some maintenance or temporary tables).\nPITR is costly to do in a production environment oftentimes as it involves\ncreation of the full-blown postgres from the base backup and causing\ndowntime for the customers.\n\nSolution: Make use of the uncorrupted page present in sync or async\nstandby. The proposed tool/extension pg_page_repair (as we call it) can\nfetch the uncorrupted page from sync or async standby and overwrite the\ncorrupted one on the primary. Yes, there will be a challenge in making sure\nthat the WAL is replayed completely and standby is up-to-date so that we\nare sure that stale pages are not copied across. A simpler idea could be\nthat the pg_page_repair can wait until the standby replays/catches up with\nthe primary's flush LSN before fetching the uncorrupted page. A downside of\nthis approach is that the pg_page_repair waits for long or rather\ninfinitely if the replication lag is huge. As we all know that the\nreplication lag is something a good postgres solution will always monitor\nto keep it low, if true, the pg_page_repair is guaranteed to not wait for\nlonger. Another idea could be that the pg_page_repair gets the base page\nfrom the standby and applies all the WAL records pertaining to the\ncorrupted page using the base page to get the uncorrupted page. This\nrequires us to pull the replay logic from the core to pg_page_repair which\nisn't easy. Hence we propose to go with approach 1, but open to discuss on\napproach 2 as well. 
We suppose that the solution proposed in this thread\nholds good even for pages corresponding to indexes.\n\nImplementation Choices: pg_page_repair can either take the corrupted page\ninfo (db id, rel id, block number etc.) or just a relation name and\nautomatically figure out the corrupted page using pg_checksums for instance\nor just database name and automatically figure out all the corrupted pages.\nIt can either repair the corrupted pages online (only the corrupted table\nis inaccessible, the server continues to run) or take downtime if there are\nmany corrupted pages.\n\nFuture Scope: pg_page_repair can be integrated to the core so that the\npostgres will repair the pages automatically without manual intervention.\n\nOther Solutions: We did consider an approach where the tool could obtain\nthe FPI from WAL and replay till the latest WAL record to repair the page.\nBut there could be limitations such as FPI and related WAL not being\navailable in primary/archive location.\n\nThoughts?\n\nCredits (cc-ed): thanks to SATYANARAYANA NARLAPURAM for initial thoughts\nand thanks to Bharath Rupireddy, Chen Liang, mahendrakar s and Rohan Kumar\nfor internal discussions.\n\nThanks, RKN\n\n\nHi,Problem: Today when a data page is corrupted in the primary postgres with physical replication (sync or async standbys), there seems to be no way to repair it easily and we rely on PITR to recreate the postgres server or drop the corrupted table (of course this is not an option for important customer tables, but may be okay for some maintenance or temporary tables). PITR is costly to do in a production environment oftentimes as it involves creation of the full-blown postgres from the base backup and causing downtime for the customers.Solution: Make use of the uncorrupted page present in sync or async standby. The proposed tool/extension pg_page_repair (as we call it) can fetch the uncorrupted page from sync or async standby and overwrite the corrupted one on the primary. 
Yes, there will be a challenge in making sure that the WAL is replayed completely and standby is up-to-date so that we are sure that stale pages are not copied across. A simpler idea could be that the pg_page_repair can wait until the standby replays/catches up with the primary's flush LSN before fetching the uncorrupted page. A downside of this approach is that the pg_page_repair waits for long or rather infinitely if the replication lag is huge. As we all know that the replication lag is something a good postgres solution will always monitor to keep it low, if true, the pg_page_repair is guaranteed to not wait for longer. Another idea could be that the pg_page_repair gets the base page from the standby and applies all the WAL records pertaining to the corrupted page using the base page to get the uncorrupted page. This requires us to pull the replay logic from the core to pg_page_repair which isn't easy. Hence we propose to go with approach 1, but open to discuss on approach 2 as well. We suppose that the solution proposed in this thread holds good even for pages corresponding to indexes.Implementation Choices: pg_page_repair can either take the corrupted page info (db id, rel id, block number etc.) or just a relation name and automatically figure out the corrupted page using pg_checksums for instance or just database name and automatically figure out all the corrupted pages. It can either repair the corrupted pages online (only the corrupted table is inaccessible, the server continues to run) or take downtime if there are many corrupted pages.Future Scope: pg_page_repair can be integrated to the core so that the postgres will repair the pages automatically without manual intervention.Other Solutions: We did consider an approach where the tool could obtain the FPI from WAL and replay till the latest WAL record to repair the page. 
But there could be limitations such as FPI and related WAL not being available in primary/archive location.Thoughts?Credits (cc-ed): thanks to SATYANARAYANA NARLAPURAM for initial thoughts and thanks to Bharath Rupireddy, Chen Liang, mahendrakar s and Rohan Kumar for internal discussions.Thanks, RKN", "msg_date": "Wed, 22 Jun 2022 11:14:34 +0530", "msg_from": "RKN Sai Krishna <rknsaiforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_page_repair: a tool/extension to repair corrupted pages in\n postgres with streaming/physical replication" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 22, 2022 at 2:44 PM RKN Sai Krishna\n<rknsaiforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> Problem: Today when a data page is corrupted in the primary postgres with physical replication (sync or async standbys), there seems to be no way to repair it easily and we rely on PITR to recreate the postgres server or drop the corrupted table (of course this is not an option for important customer tables, but may be okay for some maintenance or temporary tables). PITR is costly to do in a production environment oftentimes as it involves creation of the full-blown postgres from the base backup and causing downtime for the customers.\n>\n> Solution: Make use of the uncorrupted page present in sync or async standby. The proposed tool/extension pg_page_repair (as we call it) can fetch the uncorrupted page from sync or async standby and overwrite the corrupted one on the primary. Yes, there will be a challenge in making sure that the WAL is replayed completely and standby is up-to-date so that we are sure that stale pages are not copied across. A simpler idea could be that the pg_page_repair can wait until the standby replays/catches up with the primary's flush LSN before fetching the uncorrupted page. A downside of this approach is that the pg_page_repair waits for long or rather infinitely if the replication lag is huge. 
As we all know that the replication lag is something a good postgres solution will always monitor to keep it low, if true, the pg_page_repair is guaranteed to not wait for longer. Another idea could be that the pg_page_repair gets the base page from the standby and applies all the WAL records pertaining to the corrupted page using the base page to get the uncorrupted page. This requires us to pull the replay logic from the core to pg_page_repair which isn't easy. Hence we propose to go with approach 1, but open to discuss on approach 2 as well. We suppose that the solution proposed in this thread holds good even for pages corresponding to indexes.\n\nI'm interested in this topic and recalled I did some research on the\nfirst idea while writing experimental code several years ago[1].\n\nThe corruption that can be fixed by this feature is mainly physical\ncorruption, for example, introduced by storage array cache corruption,\narray firmware bugs, filesystem bugs, is that right? Logically corrupt\nblocks are much more likely to have been introduced as a result of a\nfailure or a bug in PostgreSQL, which would end up propagating to\nphysical standbys.\n\n>\n> Implementation Choices: pg_page_repair can either take the corrupted page info (db id, rel id, block number etc.) or just a relation name and automatically figure out the corrupted page using pg_checksums for instance or just database name and automatically figure out all the corrupted pages. 
It can either repair the corrupted pages online (only the corrupted table is inaccessible, the server continues to run) or take downtime if there are many corrupted pages.\n\nSince the server must be shut down cleanly before running pg_checksums,\nif we want to verify page checksums while the server is running\nwe would need the online checksum verification we discussed\nbefore[2].\n\n>\n> Future Scope: pg_page_repair can be integrated to the core so that the postgres will repair the pages automatically without manual intervention.\n>\n> Other Solutions: We did consider an approach where the tool could obtain the FPI from WAL and replay till the latest WAL record to repair the page. But there could be limitations such as FPI and related WAL not being available in primary/archive location.\n\nHow do we find the FPI of the corrupted page effectively from WAL? We\ncould scan WAL records backward, but that could take quite a long\ntime.\n\nRegards,\n\n[1] https://github.com/MasahikoSawada/pgtools/tree/master/page_repair\n[2] https://www.postgresql.org/message-id/CAOBaU_aVvMjQn%3Dge5qPiJOPMmOj5%3Dii3st5Q0Y%2BWuLML5sR17w%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 22 Jun 2022 16:17:35 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_page_repair: a tool/extension to repair corrupted pages in\n postgres with streaming/physical replication" } ]
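The "approach 1" flow discussed in this thread — detect a page whose checksum no longer matches, wait until the standby has replayed past the primary's flush LSN, then overwrite the corrupted page with the standby's copy — can be sketched as follows. This is purely an illustrative model: `Page`, `page_checksum`, and `repair_page` are invented stand-ins, not PostgreSQL's real page layout, checksum algorithm, or any actual pg_page_repair API.

```python
# Toy model of the repair flow: checksum check, lag guard, then page copy.
from dataclasses import dataclass


def page_checksum(data: bytes) -> int:
    """Stand-in for a page checksum (simple FNV-1a over the page bytes)."""
    h = 0x811C9DC5
    for b in data:
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF
    return h


@dataclass
class Page:
    data: bytes
    checksum: int


def make_page(data: bytes) -> Page:
    return Page(data=data, checksum=page_checksum(data))


def page_is_corrupted(page: Page) -> bool:
    return page_checksum(page.data) != page.checksum


def repair_page(primary: dict, standby: dict, blkno: int,
                standby_replay_lsn: int, primary_flush_lsn: int) -> bool:
    """Overwrite a corrupted primary page with the standby's copy, but only
    once the standby has replayed up to the primary's flush LSN, so that a
    stale page cannot be copied across."""
    if not page_is_corrupted(primary[blkno]):
        return False                      # nothing to repair
    if standby_replay_lsn < primary_flush_lsn:
        raise RuntimeError("standby lagging; wait before fetching the page")
    candidate = standby[blkno]
    if page_is_corrupted(candidate):
        raise RuntimeError("standby copy is corrupted too")
    primary[blkno] = candidate
    return True
```

The LSN guard is the crux of the thread's "simpler idea": without it, a standby that has not yet replayed the latest changes to the block could hand back a stale (though checksum-valid) page.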
[ { "msg_contents": "Hi hackers,\n\nI think there's a missing reference to pgstat_replslot.c in pgstat.c.\n\nAttached a tiny patch to fix it.\n\nRegards,\nBertrand", "msg_date": "Wed, 22 Jun 2022 08:29:03 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Missing reference to pgstat_replslot.c in pgstat.c" }, { "msg_contents": "On Wed, Jun 22, 2022 at 3:29 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi hackers,\n>\n> I think there's a missing reference to pgstat_replslot.c in pgstat.c.\n>\n> Attached a tiny patch to fix it.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 22 Jun 2022 15:45:32 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing reference to pgstat_replslot.c in pgstat.c" }, { "msg_contents": "Hi,\n\nOn 2022-06-22 08:29:03 +0200, Drouvot, Bertrand wrote:\n> I think there's a missing reference to pgstat_replslot.c in pgstat.c.\n\nIndeed...\n\n> Attached a tiny patch to fix it.\n\nThanks. Pushed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Jun 2022 17:01:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missing reference to pgstat_replslot.c in pgstat.c" }, { "msg_contents": "Hi,\n\nOn 6/23/22 2:01 AM, Andres Freund wrote:\n> Hi,\n>\n> On 2022-06-22 08:29:03 +0200, Drouvot, Bertrand wrote:\n>> I think there's a missing reference to pgstat_replslot.c in pgstat.c.\n> Indeed...\n>\n>> Attached a tiny patch to fix it.\n> Thanks. Pushed.\n\nThanks!\n\nBertrand\n\n\n\n", "msg_date": "Thu, 23 Jun 2022 10:07:41 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Missing reference to pgstat_replslot.c in pgstat.c" } ]
[ { "msg_contents": "This possible change was alluded to in the meson thread at [0].\n\nThe proposal is to move the list of available languages from nls.mk into \na separate file called po/LINGUAS. Advantages:\n\n- It keeps the parts notionally managed by programmers (nls.mk)\n separate from the parts notionally managed by translators (LINGUAS).\n\n- It's the standard practice recommended by the Gettext manual\n nowadays.\n\n- The Meson build system also supports this layout (and of course\n doesn't know anything about our custom nls.mk), so this would enable\n sharing the list of languages between the two build systems.\n\n(The MSVC build system currently finds all po files by globbing, so it \nis not affected by this change.)\n\nIn practice, the list of languages is updated mostly by means of the \ncp-po script that I use to update the translations before releases [1], \nand I have a small patch ready for that to adapt to this change. (In \nany case, that wouldn't be needed until the beta of PG16.)\n\n\n[0]: \nhttps://www.postgresql.org/message-id/bfcd5353-0fb3-a05c-6f62-164d98c5689d@enterprisedb.com\n\n[1]: \nhttps://git.postgresql.org/gitweb/?p=pgtranslation/admin.git;a=blob;f=cp-po;h=d4ae9285697ba110228b6e01c8339b1d0f8c3458;hb=HEAD", "msg_date": "Wed, 22 Jun 2022 12:58:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "NLS: Put list of available languages into LINGUAS files" } ]
[ { "msg_contents": "macOS has traditionally used extension .dylib for shared libraries (used \nat build time) and .so for dynamically loaded modules (used by \ndlopen()). This complicates the build system a bit. Also, Meson uses \n.dylib for both, so it would be worth unifying this in order to be able \nto get equal build output.\n\nThere doesn't appear to be any reason to use any particular extension \nfor dlopened modules, since dlopen() will accept anything and PostgreSQL \nis well-factored to be able to deal with any extension. Other software \npackages that I have handy appear to be about 50/50 split on which \nextension they use for their plugins. So it seems possible to change \nthis safely.", "msg_date": "Wed, 22 Jun 2022 13:12:21 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Unify DLSUFFIX on Darwin" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> macOS has traditionally used extension .dylib for shared libraries (used \n> at build time) and .so for dynamically loaded modules (used by \n> dlopen()). This complicates the build system a bit. Also, Meson uses \n> .dylib for both, so it would be worth unifying this in order to be able \n> to get equal build output.\n\n> There doesn't appear to be any reason to use any particular extension \n> for dlopened modules, since dlopen() will accept anything and PostgreSQL \n> is well-factored to be able to deal with any extension. Other software \n> packages that I have handy appear to be about 50/50 split on which \n> extension they use for their plugins. 
So it seems possible to change \n> this safely.\n\nDoesn't this amount to a fundamental ABI break for extensions?\nYesterday they had to ship foo.so, today they have to ship foo.dylib.\n\nI'm not against the idea if we can avoid widespread extension\nbreakage, but that part seems like a problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jun 2022 09:45:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unify DLSUFFIX on Darwin" }, { "msg_contents": "On 22.06.22 15:45, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> macOS has traditionally used extension .dylib for shared libraries (used\n>> at build time) and .so for dynamically loaded modules (used by\n>> dlopen()). This complicates the build system a bit. Also, Meson uses\n>> .dylib for both, so it would be worth unifying this in order to be able\n>> to get equal build output.\n> \n>> There doesn't appear to be any reason to use any particular extension\n>> for dlopened modules, since dlopen() will accept anything and PostgreSQL\n>> is well-factored to be able to deal with any extension. Other software\n>> packages that I have handy appear to be about 50/50 split on which\n>> extension they use for their plugins. So it seems possible to change\n>> this safely.\n> \n> Doesn't this amount to a fundamental ABI break for extensions?\n> Yesterday they had to ship foo.so, today they have to ship foo.dylib.\n\nExtensions generally only load the module files using the extension-free \nbase name. And if they do specify the extension, they should use the \nprovided DLSUFFIX variable and not hardcode it. 
So I don't see how this \nwould be a problem.\n\n\n", "msg_date": "Fri, 24 Jun 2022 13:26:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Unify DLSUFFIX on Darwin" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 22.06.22 15:45, Tom Lane wrote:\n>> Doesn't this amount to a fundamental ABI break for extensions?\n>> Yesterday they had to ship foo.so, today they have to ship foo.dylib.\n\n> Extensions generally only load the module files using the extension-free \n> base name. And if they do specify the extension, they should use the \n> provided DLSUFFIX variable and not hardcode it. So I don't see how this \n> would be a problem.\n\nHm. Since we force people to recompile extensions for new major versions\nanyway, maybe it'd be all right. I'm sure there is *somebody* out there\nwho will have to adjust their build scripts, but it does seem like it\nshouldn't be much worse than other routine API changes.\n\n[ thinks for a bit... ] Might be worth double-checking that pg_upgrade\ndoesn't get confused in a cross-version upgrade. A quick grep doesn't\nfind that it refers to DLSUFFIX anywhere, but it definitely does pay\nattention to extensions' shared library names.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jun 2022 10:13:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unify DLSUFFIX on Darwin" }, { "msg_contents": "\nOn 2022-06-24 Fr 10:13, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 22.06.22 15:45, Tom Lane wrote:\n>>> Doesn't this amount to a fundamental ABI break for extensions?\n>>> Yesterday they had to ship foo.so, today they have to ship foo.dylib.\n>> Extensions generally only load the module files using the extension-free \n>> base name. And if they do specify the extension, they should use the \n>> provided DLSUFFIX variable and not hardcode it. 
So I don't see how this \n>> would be a problem.\n> Hm. Since we force people to recompile extensions for new major versions\n> anyway, maybe it'd be all right. I'm sure there is *somebody* out there\n> who will have to adjust their build scripts, but it does seem like it\n> shouldn't be much worse than other routine API changes.\n>\n> [ thinks for a bit... ] Might be worth double-checking that pg_upgrade\n> doesn't get confused in a cross-version upgrade. A quick grep doesn't\n> find that it refers to DLSUFFIX anywhere, but it definitely does pay\n> attention to extensions' shared library names.\n>\n> \t\t\t\n\n\nThe buildfarm client uses `make show_dl_suffix` to determine filenames\nto look for when seeing if an installation is complete. It looks like\nthat will continue to work.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 28 Jun 2022 11:12:32 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Unify DLSUFFIX on Darwin" }, { "msg_contents": "On 24.06.22 16:13, Tom Lane wrote:\n> [ thinks for a bit... ] Might be worth double-checking that pg_upgrade\n> doesn't get confused in a cross-version upgrade. A quick grep doesn't\n> find that it refers to DLSUFFIX anywhere, but it definitely does pay\n> attention to extensions' shared library names.\n\npg_upgrade just checks that it can \"LOAD\" whatever it finds in probin. \nSo this will work if extensions use the recommended extension-free file \nnames. If they don't, they should get a clean failure.\n\nIf this becomes a problem in practice, we could make pg_dump \nautomatically adjust the probin on upgrade from an old version.\n\nI have committed this now. We can see how it goes.\n\n\n", "msg_date": "Wed, 6 Jul 2022 08:08:07 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Unify DLSUFFIX on Darwin" } ]
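One reason the suffix change is safe, as discussed above, is that module lookup appends DLSUFFIX when the requested name has no extension, so `MODULES = foo` style extensions keep working whichever suffix the platform uses. A rough model of that name expansion (an illustration only, not PostgreSQL's actual expand_dynamic_library_name() code):

```python
import os


def expand_module_name(name, dlsuffix, available):
    """Pick the file to dlopen: try the name as given and, if it carries no
    extension, also the name with the platform DLSUFFIX appended.
    'available' stands in for the files present on disk."""
    candidates = [name]
    if not os.path.splitext(name)[1]:
        candidates.append(name + dlsuffix)
    for cand in candidates:
        if cand in available:
            return cand
    return None
```

With only `plpgsql.dylib` on disk, loading `plpgsql` succeeds under either suffix convention, while a hardcoded `plpgsql.so` does not — which is why extensions should use the provided DLSUFFIX variable rather than hardcoding `.so`.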
[ { "msg_contents": "Hi hackers,\n\nThe SYSTEM_USER is a SQL reserved word as mentioned in [1] and is \ncurrently not implemented.\n\nPlease find attached a patch proposal to make use of the SYSTEM_USER so \nthat it returns the authenticated identity (if any) (aka authn_id in the \nPort struct).\n\nIndeed in some circumstances, the authenticated identity is not the \nSESSION_USER and then the information is lost from the connection point \nof view (it could still be retrieved thanks to commit 9afffcb833 and \nlog_connections set to on).\n\n_Example 1, using the gss authentication._\n\nSay we have this entry in pg_hba.conf:\n\nhost all all 0.0.0.0/0 gss map=mygssmap\n\nand the related mapping in pg_ident.conf\n\nmygssmap   /^(.*@.*)\\\\.LOCAL$    mary\n\nThen, connecting with a valid Kerberos Ticket that contains \n“bertrand@BDTFOREST.LOCAL” as the default principal that way: psql -U \nmary -h myhostname -d postgres,\n\nwe will get:\n\npostgres=> select current_user, session_user;\n  current_user | session_user\n--------------+--------------\n  mary         | mary\n(1 row)\n\nWhile the SYSTEM_USER would produce the Kerberos principal:\n\npostgres=> select system_user;\n        system_user\n--------------------------\nbertrand@BDTFOREST.LOCAL\n(1 row)\n\n_Example 2, using the peer authentication._\n\nSay we have this entry in pg_hba.conf:\n\nlocal all john peer map=mypeermap\n\nand the related mapping in pg_ident.conf\n\nmypeermap postgres john\n\nThen connected locally as the system user postgres and connecting to the \ndatabase that way: psql -U john -d postgres, we will get:\n\npostgres=> select current_user, session_user;\n  current_user | session_user\n--------------+--------------\n  john         | john\n(1 row)\n\nWhile the SYSTEM_USER would produce the system user that requested the \nconnection:\n\npostgres=> select system_user;\n  system_user\n-------------\n  postgres\n(1 row)\n\nThanks to those examples we have seen some situations where the 
\ninformation related to the authenticated identity has been lost from the \nconnection point of view (means not visible in the current_session or in \nthe session_user).\n\nThe purpose of this patch is to make it visible through the SYSTEM_USER \nsql reserved word.\n\n_Remarks: _\n\n- In case port->authn_id is NULL then the patch is returning the \nSESSION_USER for the SYSTEM_USER. Perhaps it should return NULL instead.\n\n- There is another thread [2] to expose port->authn_id to extensions and \ntriggers thanks to a new API. This thread [2] leads to discussions about \nproviding this information to the parallel workers too. While the new \nMyClientConnectionInfo being discussed in [2] could be useful to hold \nthe client information that needs to be shared between the backend and \nany parallel workers, it does not seem to be needed in the case \nport->authn_id is exposed through SYSTEM_USER (like it is not for \nCURRENT_USER and SESSION_USER).\n\nI will add this patch to the next commitfest.\nI look forward to your feedback.\n\nBertrand\n\n[1]: https://www.postgresql.org/docs/current/sql-keywords-appendix.html\n[2]: \nhttps://www.postgresql.org/message-id/flat/793d990837ae5c06a558d58d62de9378ab525d83.camel%40vmware.com", "msg_date": "Wed, 22 Jun 2022 15:25:22 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "SYSTEM_USER reserved word implementation" }, { "msg_contents": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> Please find attached a patch proposal to make use of the SYSTEM_USER so \n> that it returns the authenticated identity (if any) (aka authn_id in the \n> Port struct).\n\nOn what grounds do you argue that that's the appropriate meaning of\nSYSTEM_USER?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jun 2022 09:49:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On 
6/22/22 09:49, Tom Lane wrote:\n> \"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n>> Please find attached a patch proposal to make use of the SYSTEM_USER so \n>> that it returns the authenticated identity (if any) (aka authn_id in the \n>> Port struct).\n> \n> On what grounds do you argue that that's the appropriate meaning of\n> SYSTEM_USER?\n\n\nWhat else do you imagine it might mean?\n\nHere is SQL Server interpretation for example:\n\nhttps://docs.microsoft.com/en-us/sql/t-sql/functions/system-user-transact-sql?view=sql-server-ver16\n\nAnd Oracle:\nhttp://luna-ext.di.fc.ul.pt/oracle11g/timesten.112/e13070/ttsql257.htm#i1120532\n\n \"SYSTEM_USER\n\n Returns the name of the current data store user\n as identified by the operating system.\"\n\n\nSeems equivalent.\n\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 22 Jun 2022 10:12:26 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 6/22/22 09:49, Tom Lane wrote:\n>> On what grounds do you argue that that's the appropriate meaning of\n>> SYSTEM_USER?\n\n> What else do you imagine it might mean?\n\nI was hoping for some citation of the SQL spec.\n\n> Here is SQL Server interpretation for example:\n> \"SYSTEM_USER\n> Returns the name of the current data store user\n> as identified by the operating system.\"\n\nMeh. That's as clear as mud. (a) A big part of the question here\nis what is meant by \"current\" user, in the face of operations like\nSET ROLE. (b) \"as identified by the operating system\" does more to\nconfuse me than anything else. 
The operating system only deals in\nOS user names; does that wording mean that what you get back is an OS\nuser name rather than a SQL role name?\n\nMy immediate guess would be that the SQL committee only intends\nto deal in SQL role names and therefore SYSTEM_USER is defined\nto return one of those, but I've not gone looking in the spec\nto be sure.\n\nI'm also not that clear on what we expect authn_id to be, but\na quick troll in the code makes it look like it's not necessarily\na SQL role name, but might be some external identifier such as a\nKerberos principal. If that's the case I think it's going to be\ninappropriate to use SQL-spec syntax to return it. I don't object\nto inventing some PG-specific function for the purpose, though.\n\nBTW, are there any security concerns about exposing such identifiers?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jun 2022 10:51:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On 6/22/22 10:51, Tom Lane wrote:\n> My immediate guess would be that the SQL committee only intends\n> to deal in SQL role names and therefore SYSTEM_USER is defined\n> to return one of those, but I've not gone looking in the spec\n> to be sure.\n\nI only have a draft copy, but in SQL 2016 I find relatively thin \ndocumentation for what SYSTEM_USER is supposed to represent:\n\n The value specified by SYSTEM_USER is equal to an\n implementation-defined string that represents the\n operating system user who executed the SQL-client\n module that contains the externally-invoked procedure\n whose execution caused the SYSTEM_USER <general value\n specification> to be evaluated.\n\n> I'm also not that clear on what we expect authn_id to be, but\n> a quick troll in the code makes it look like it's not necessarily\n> a SQL role name, but might be some external identifier such as a\n> Kerberos principal. 
If that's the case I think it's going to be\n> inappropriate to use SQL-spec syntax to return it. I don't object\n> to inventing some PG-specific function for the purpose, though.\n\nTo me the Kerberos principal makes perfect sense given the definition above.\n\n> BTW, are there any security concerns about exposing such identifiers?\n\nOn the contrary, I would argue that not having the identifier for the \nexternal \"user\" available is a security concern. Ideally you want to be \nable to trace actions inside Postgres to the actual user that invoked them.\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 22 Jun 2022 11:10:26 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 6/22/22 10:51, Tom Lane wrote:\n>> My immediate guess would be that the SQL committee only intends\n>> to deal in SQL role names and therefore SYSTEM_USER is defined\n>> to return one of those, but I've not gone looking in the spec\n>> to be sure.\n\n> I only have a draft copy, but in SQL 2016 I find relatively thin \n> documentation for what SYSTEM_USER is supposed to represent:\n\n> The value specified by SYSTEM_USER is equal to an\n> implementation-defined string that represents the\n> operating system user who executed the SQL-client\n> module that contains the externally-invoked procedure\n> whose execution caused the SYSTEM_USER <general value\n> specification> to be evaluated.\n\nHuh. Okay, if it's implementation-defined then we can define it\nas \"whatever auth.c put into authn_id\". 
Objection withdrawn.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jun 2022 11:15:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Wed, Jun 22, 2022 at 8:10 AM Joe Conway <mail@joeconway.com> wrote:\n> On the contrary, I would argue that not having the identifier for the\n> external \"user\" available is a security concern. Ideally you want to be\n> able to trace actions inside Postgres to the actual user that invoked them.\n\nIf auditing is also the use case for SYSTEM_USER, you'll probably want\nto review the arguments for making it available to parallel workers\nthat were made in the other thread [1].\n\nInitial comments on the patch:\n\n> In case port->authn_id is NULL then the patch is returning the SESSION_USER for the SYSTEM_USER. Perhaps it should return NULL instead.\n\nIf the spec says that SYSTEM_USER \"represents the operating system\nuser\", but we don't actually know who that user was (authn_id is\nNULL), then I think SYSTEM_USER should also be NULL so as not to\nmislead auditors.\n\n> --- a/src/backend/utils/init/miscinit.c\n> +++ b/src/backend/utils/init/miscinit.c\n> @@ -473,6 +473,7 @@ static Oid AuthenticatedUserId = InvalidOid;\n> static Oid SessionUserId = InvalidOid;\n> static Oid OuterUserId = InvalidOid;\n> static Oid CurrentUserId = InvalidOid;\n> +static const char *SystemUser = NULL;\n>\n> /* We also have to remember the superuser state of some of these levels */\n> static bool AuthenticatedUserIsSuperuser = false;\n\nWhat's the rationale for introducing a new global for this? 
A downside\nis that now there are two sources of truth, for a security-critical\nattribute of the connection.\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/flat/793d990837ae5c06a558d58d62de9378ab525d83.camel%40vmware.com\n\n\n", "msg_date": "Wed, 22 Jun 2022 08:35:02 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> On Wed, Jun 22, 2022 at 8:10 AM Joe Conway <mail@joeconway.com> wrote:\n>> In case port->authn_id is NULL then the patch is returning the SESSION_USER for the SYSTEM_USER. Perhaps it should return NULL instead.\n\n> If the spec says that SYSTEM_USER \"represents the operating system\n> user\", but we don't actually know who that user was (authn_id is\n> NULL), then I think SYSTEM_USER should also be NULL so as not to\n> mislead auditors.\n\nYeah, that seems like a fundamental type mismatch. If we don't know\nthe OS user identifier, substituting a SQL role name is surely not\nthe right thing.\n\nI think a case could be made for ONLY returning non-null when authn_id\nrepresents some externally-verified identifier (OS user ID gotten via\npeer identification, Kerberos principal, etc).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jun 2022 11:52:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On 6/22/22 11:52, Tom Lane wrote:\n> Jacob Champion <jchampion@timescale.com> writes:\n>> On Wed, Jun 22, 2022 at 8:10 AM Joe Conway <mail@joeconway.com> wrote:\n>>> In case port->authn_id is NULL then the patch is returning the SESSION_USER for the SYSTEM_USER. 
Perhaps it should return NULL instead.\n> \n>> If the spec says that SYSTEM_USER \"represents the operating system\n>> user\", but we don't actually know who that user was (authn_id is\n>> NULL), then I think SYSTEM_USER should also be NULL so as not to\n>> mislead auditors.\n> \n> Yeah, that seems like a fundamental type mismatch. If we don't know\n> the OS user identifier, substituting a SQL role name is surely not\n> the right thing.\n\n+1 agreed\n\n> I think a case could be made for ONLY returning non-null when authn_id\n> represents some externally-verified identifier (OS user ID gotten via\n> peer identification, Kerberos principal, etc).\n\nBut -1 on that.\n\nI think any time we have a non-null authn_id we should expose it. Are \nthere examples of cases when we have authn_id but for some reason don't \ntrust the value of it?\n\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 22 Jun 2022 12:22:36 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On 6/22/22 11:35, Jacob Champion wrote:\n> On Wed, Jun 22, 2022 at 8:10 AM Joe Conway <mail@joeconway.com> wrote:\n>> --- a/src/backend/utils/init/miscinit.c\n>> +++ b/src/backend/utils/init/miscinit.c\n>> @@ -473,6 +473,7 @@ static Oid AuthenticatedUserId = InvalidOid;\n>> static Oid SessionUserId = InvalidOid;\n>> static Oid OuterUserId = InvalidOid;\n>> static Oid CurrentUserId = InvalidOid;\n>> +static const char *SystemUser = NULL;\n>>\n>> /* We also have to remember the superuser state of some of these levels */\n>> static bool AuthenticatedUserIsSuperuser = false;\n> \n> What's the rationale for introducing a new global for this? A downside\n> is that now there are two sources of truth, for a security-critical\n> attribute of the connection.\n\nWhy would you want to do it differently than \nSessionUserId/OuterUserId/CurrentUserId? 
It is analogous, no?\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 22 Jun 2022 12:26:46 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 6/22/22 11:52, Tom Lane wrote:\n>> I think a case could be made for ONLY returning non-null when authn_id\n>> represents some externally-verified identifier (OS user ID gotten via\n>> peer identification, Kerberos principal, etc).\n\n> But -1 on that.\n\n> I think any time we have a non-null authn_id we should expose it. Are \n> there examples of cases when we have authn_id but for some reason don't \n> trust the value of it?\n\nI'm more concerned about whether we have a consistent story about what\nSYSTEM_USER means (another way of saying \"what type is it\"). If it's\njust the same as SESSION_USER it doesn't seem like we've added much.\n\nMaybe, instead of just being the raw user identifier, it should be\nsomething like \"auth_method:user_identifier\" so that one can tell\nwhat the identifier actually is and how it was verified.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jun 2022 12:28:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On 6/22/22 12:28, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> On 6/22/22 11:52, Tom Lane wrote:\n>>> I think a case could be made for ONLY returning non-null when authn_id\n>>> represents some externally-verified identifier (OS user ID gotten via\n>>> peer identification, Kerberos principal, etc).\n> \n>> But -1 on that.\n> \n>> I think any time we have a non-null authn_id we should expose it. 
Are \n>> there examples of cases when we have authn_id but for some reason don't \n>> trust the value of it?\n> \n> I'm more concerned about whether we have a consistent story about what\n> SYSTEM_USER means (another way of saying \"what type is it\"). If it's\n> just the same as SESSION_USER it doesn't seem like we've added much.\n> \n> Maybe, instead of just being the raw user identifier, it should be\n> something like \"auth_method:user_identifier\" so that one can tell\n> what the identifier actually is and how it was verified.\n\nOh, that's an interesting thought -- I like that.\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 22 Jun 2022 12:32:38 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Wed, Jun 22, 2022 at 9:26 AM Joe Conway <mail@joeconway.com> wrote:\n> On 6/22/22 11:35, Jacob Champion wrote:\n> > On Wed, Jun 22, 2022 at 8:10 AM Joe Conway <mail@joeconway.com> wrote:\n> Why would you want to do it differently than\n> SessionUserId/OuterUserId/CurrentUserId? It is analogous, no?\n\nLike I said, now there are two different sources of truth, and\nadditional code to sync the two, and two different APIs to set what\nshould be a single write-once attribute. 
But if SystemUser is instead\nderived from authn_id, like what's just been proposed with\n`method:authn_id`, I think there's a better argument for separating\nthe two.\n\n--Jacob\n\n\n", "msg_date": "Wed, 22 Jun 2022 09:48:08 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Wed, Jun 22, 2022 at 9:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Joe Conway <mail@joeconway.com> writes:\n> > On 6/22/22 11:52, Tom Lane wrote:\n> >> I think a case could be made for ONLY returning non-null when authn_id\n> >> represents some externally-verified identifier (OS user ID gotten via\n> >> peer identification, Kerberos principal, etc).\n>\n> > But -1 on that.\n>\n> > I think any time we have a non-null authn_id we should expose it. Are\n> > there examples of cases when we have authn_id but for some reason don't\n> > trust the value of it?\n>\n> I'm more concerned about whether we have a consistent story about what\n> SYSTEM_USER means (another way of saying \"what type is it\"). If it's\n> just the same as SESSION_USER it doesn't seem like we've added much.\n>\n> Maybe, instead of just being the raw user identifier, it should be\n> something like \"auth_method:user_identifier\" so that one can tell\n> what the identifier actually is and how it was verified.\n>\n>\nI was thinking this was trying to make the following possible:\n\npsql -U postgres\n# set session authorization other_superuser;\n# set role other_role;\n# select system_user, session_user, current_user;\npostgres | other_superuser | other_role\n\nThough admittedly using \"system\" for that seems somehow wrong.\nconnection_user would make more sense. Then the system_user would be, if\napplicable, an external identifier that got matched with the assigned\nconnection_user. 
I can definitely see having the external identifier be a\nstructured value.\n\nDavid J.", "msg_date": "Wed, 22 Jun 2022 09:51:04 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 6/22/22 6:32 PM, Joe Conway wrote:\n> CAUTION: This email originated from outside of the organization. 
Do \n> not click links or open attachments unless you can confirm the sender \n> and know the content is safe.\n>\n>\n>\n> On 6/22/22 12:28, Tom Lane wrote:\n>> Joe Conway <mail@joeconway.com> writes:\n>>> On 6/22/22 11:52, Tom Lane wrote:\n>>>> I think a case could be made for ONLY returning non-null when authn_id\n>>>> represents some externally-verified identifier (OS user ID gotten via\n>>>> peer identification, Kerberos principal, etc).\n>>\n>>> But -1 on that.\n>>\n>>> I think any time we have a non-null authn_id we should expose it. Are\n>>> there examples of cases when we have authn_id but for some reason don't\n>>> trust the value of it?\n>>\n>> I'm more concerned about whether we have a consistent story about what\n>> SYSTEM_USER means (another way of saying \"what type is it\").  If it's\n>> just the same as SESSION_USER it doesn't seem like we've added much.\n>>\n>> Maybe, instead of just being the raw user identifier, it should be\n>> something like \"auth_method:user_identifier\" so that one can tell\n>> what the identifier actually is and how it was verified.\n>\n> Oh, that's an interesting thought -- I like that.\n>\nThanks Joe and Tom for your feedback.\n\nI like this idea too and that's also more aligned with what \nlog_connections set to on would report (aka the auth method).\n\nBaring any objections, I'll work on that idea.\n\nBertrand\n\n\n\n", "msg_date": "Thu, 23 Jun 2022 09:53:39 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 6/22/22 5:35 PM, Jacob Champion wrote:\n> On Wed, Jun 22, 2022 at 8:10 AM Joe Conway <mail@joeconway.com> wrote:\n>> On the contrary, I would argue that not having the identifier for the\n>> external \"user\" available is a security concern. 
Ideally you want to be\n>> able to trace actions inside Postgres to the actual user that invoked them.\n> If auditing is also the use case for SYSTEM_USER, you'll probably want\n> to review the arguments for making it available to parallel workers\n> that were made in the other thread [1].\n\nThanks Jacob for your feedback.\n\nI did some testing initially around the parallel workers and did not see \nany issues at that time.\n\nI just had another look and I agree that the parallel workers case needs \nto be addressed.\n\nI'll have a closer look to what you have done in [1].\n\nThanks\n\nBertrand\n\n[1]https://www.postgresql.org/message-id/flat/793d990837ae5c06a558d58d62de9378ab525d83.camel%40vmware.com\n\n\n\n", "msg_date": "Thu, 23 Jun 2022 10:06:43 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 6/23/22 10:06 AM, Drouvot, Bertrand wrote:\n> Hi,\n>\n> On 6/22/22 5:35 PM, Jacob Champion wrote:\n>> On Wed, Jun 22, 2022 at 8:10 AM Joe Conway <mail@joeconway.com> wrote:\n>>> On the contrary, I would argue that not having the identifier for the\n>>> external \"user\" available is a security concern. 
Ideally you want to be\n>>> able to trace actions inside Postgres to the actual user that \n>>> invoked them.\n>> If auditing is also the use case for SYSTEM_USER, you'll probably want\n>> to review the arguments for making it available to parallel workers\n>> that were made in the other thread [1].\n>\n> Thanks Jacob for your feedback.\n>\n> I did some testing initially around the parallel workers and did not \n> see any issues at that time.\n>\n> I just had another look and I agree that the parallel workers case \n> needs to be addressed.\n>\n> I'll have a closer look to what you have done in [1].\n>\n> Thanks\n>\n> Bertrand\n>\nPlease find attached patch version 2.\n\nIt does contain:\n\n- Tom's idea implementation (aka presenting the system_user as \nauth_method:authn_id)\n\n- A fix for the parallel workers issue mentioned by Jacob. The patch now \npropagates the SYSTEM_USER to the parallel workers.\n\n- Doc updates\n\n- Tap tests (some of them are coming from [1])\n\nLooking forward to your feedback,\n\nThanks\n\nBertrand\n\n[1] \nhttps://www.postgresql.org/message-id/flat/793d990837ae5c06a558d58d62de9378ab525d83.camel%40vmware.com", "msg_date": "Fri, 24 Jun 2022 11:49:23 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "\nOn 6/24/22 11:49 AM, Drouvot, Bertrand wrote:\n> Hi,\n>\n> On 6/23/22 10:06 AM, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>> On 6/22/22 5:35 PM, Jacob Champion wrote:\n>>> On Wed, Jun 22, 2022 at 8:10 AM Joe Conway <mail@joeconway.com> wrote:\n>>>> On the contrary, I would argue that not having the identifier for the\n>>>> external \"user\" available is a security concern. 
Ideally you want \n>>>> to be\n>>>> able to trace actions inside Postgres to the actual user that \n>>>> invoked them.\n>>> If auditing is also the use case for SYSTEM_USER, you'll probably want\n>>> to review the arguments for making it available to parallel workers\n>>> that were made in the other thread [1].\n>>\n>> Thanks Jacob for your feedback.\n>>\n>> I did some testing initially around the parallel workers and did not \n>> see any issues at that time.\n>>\n>> I just had another look and I agree that the parallel workers case \n>> needs to be addressed.\n>>\n>> I'll have a closer look to what you have done in [1].\n>>\n>> Thanks\n>>\n>> Bertrand\n>>\n> Please find attached patch version 2.\n>\n> It does contain:\n>\n> - Tom's idea implementation (aka presenting the system_user as \n> auth_method:authn_id)\n>\n> - A fix for the parallel workers issue mentioned by Jacob. The patch \n> now propagates the SYSTEM_USER to the parallel workers.\n>\n> - Doc updates\n>\n> - Tap tests (some of them are coming from [1])\n>\n> Looking forward to your feedback,\n>\n> Thanks\n>\n> Bertrand\n\nFWIW here is a link to the commitfest entry: \nhttps://commitfest.postgresql.org/38/3703/\n\nBertrand\n\n\n\n\n\n", "msg_date": "Fri, 24 Jun 2022 14:47:50 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 6/24/22 2:47 PM, Drouvot, Bertrand wrote:\n>\n> On 6/24/22 11:49 AM, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>> On 6/23/22 10:06 AM, Drouvot, Bertrand wrote:\n>>> Hi,\n>>>\n>>> On 6/22/22 5:35 PM, Jacob Champion wrote:\n>>>> On Wed, Jun 22, 2022 at 8:10 AM Joe Conway <mail@joeconway.com> wrote:\n>>>>> On the contrary, I would argue that not having the identifier for the\n>>>>> external \"user\" available is a security concern. 
Ideally you want \n>>>>> to be\n>>>>> able to trace actions inside Postgres to the actual user that \n>>>>> invoked them.\n>>>> If auditing is also the use case for SYSTEM_USER, you'll probably want\n>>>> to review the arguments for making it available to parallel workers\n>>>> that were made in the other thread [1].\n>>>\n>>> Thanks Jacob for your feedback.\n>>>\n>>> I did some testing initially around the parallel workers and did not \n>>> see any issues at that time.\n>>>\n>>> I just had another look and I agree that the parallel workers case \n>>> needs to be addressed.\n>>>\n>>> I'll have a closer look to what you have done in [1].\n>>>\n>>> Thanks\n>>>\n>>> Bertrand\n>>>\n>> Please find attached patch version 2.\n>>\n>> It does contain:\n>>\n>> - Tom's idea implementation (aka presenting the system_user as \n>> auth_method:authn_id)\n>>\n>> - A fix for the parallel workers issue mentioned by Jacob. The patch \n>> now propagates the SYSTEM_USER to the parallel workers.\n>>\n>> - Doc updates\n>>\n>> - Tap tests (some of them are coming from [1])\n>>\n>> Looking forward to your feedback,\n>>\n>> Thanks\n>>\n>> Bertrand\n>\n> FWIW here is a link to the commitfest entry: \n> https://commitfest.postgresql.org/38/3703/\n>\n> Bertrand\n>\nAttached a tiny rebase to make the CF bot CompilerWarnings happy.\n\nBertrand", "msg_date": "Sat, 25 Jun 2022 17:33:57 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On 2022-Jun-25, Drouvot, Bertrand wrote:\n\n> diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h\n> index 0af130fbc5..8d761512fd 100644\n> --- a/src/include/miscadmin.h\n> +++ b/src/include/miscadmin.h\n> @@ -364,6 +364,10 @@ extern void InitializeSessionUserIdStandalone(void);\n> extern void SetSessionAuthorization(Oid userid, bool is_superuser);\n> extern Oid\tGetCurrentRoleId(void);\n> extern void SetCurrentRoleId(Oid roleid, bool 
is_superuser);\n> +/* kluge to avoid including libpq/libpq-be.h here */\n> +typedef struct Port MyPort;\n> +extern void InitializeSystemUser(const MyPort *port);\n> +extern const char* GetSystemUser(void);\n\nThis typedef here looks quite suspicious. I think this should suffice:\n\n+/* kluge to avoid including libpq/libpq-be.h here */\n+struct Port;\n+extern void InitializeSystemUser(struct Port *port);\n\nI suspect that having a typedef called MyPort is going to wreak serious\nhavoc for pgindent.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 27 Jun 2022 19:32:50 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 6/27/22 7:32 PM, Alvaro Herrera wrote:\n> On 2022-Jun-25, Drouvot, Bertrand wrote:\n>\n>> diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h\n>> index 0af130fbc5..8d761512fd 100644\n>> --- a/src/include/miscadmin.h\n>> +++ b/src/include/miscadmin.h\n>> @@ -364,6 +364,10 @@ extern void InitializeSessionUserIdStandalone(void);\n>> extern void SetSessionAuthorization(Oid userid, bool is_superuser);\n>> extern Oid GetCurrentRoleId(void);\n>> extern void SetCurrentRoleId(Oid roleid, bool is_superuser);\n>> +/* kluge to avoid including libpq/libpq-be.h here */\n>> +typedef struct Port MyPort;\n>> +extern void InitializeSystemUser(const MyPort *port);\n>> +extern const char* GetSystemUser(void);\n> This typedef here looks quite suspicious. 
I think this should suffice:\n>\n> +/* kluge to avoid including libpq/libpq-be.h here */\n> +struct Port;\n> +extern void InitializeSystemUser(struct Port *port);\n>\n> I suspect that having a typedef called MyPort is going to wreak serious\n> havoc for pgindent.\n\nGood catch, thanks!\n\nAttached new version to fix it as suggested.\n\nRegards,\n\nBertrand", "msg_date": "Tue, 28 Jun 2022 09:18:05 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 6/28/22 9:18 AM, Drouvot, Bertrand wrote:\n>\n> Attached new version to fix it as suggested.\n>\nJust to update current and new readers (if any) of this thread.\n\nIt has been agreed that the work on this patch is on hold until the \nClientConnectionInfo related work is finished (see the discussion in [1]).\n\nHaving said that I'm attaching a new patch \n\"v2-0004-system_user-implementation.patch\" for the SYSTEM_USER.\n\nThis new patch currently does not apply on master (so the CF bot will \nfail and this is expected) but does currently apply on top of \n\"v2-0001-Allow-parallel-workers-to-read-authn_id.patch\" provided in [1].\n\nThe reason of it, is that it helps the testing for [1].\n\n\n[1]: https://commitfest.postgresql.org/39/3563/\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 12 Aug 2022 15:31:45 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Fri, Aug 12, 2022 at 6:32 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> It has been agreed that the work on this patch is on hold until the\n> ClientConnectionInfo related work is finished (see the discussion in [1]).\n>\n> Having said that I'm attaching a new patch\n> \"v2-0004-system_user-implementation.patch\" for the SYSTEM_USER.\n\n(Not a full review.) 
Now that the implementation has increased in\ncomplexity, the original tests for the parallel workers have become\nunderpowered. As a concrete example, I forgot to serialize auth_method\nduring my most recent rewrite, and the tests still passed.\n\nI think it'd be better to check the contents of SYSTEM_USER, where we\ncan, rather than only testing for existence. Something like the\nattached, maybe? And it would also be good to add a similar test to\nthe authentication suite, so that you don't have to have Kerberos\nenabled to fully test SYSTEM_USER.\n\n--Jacob", "msg_date": "Tue, 16 Aug 2022 09:52:11 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 8/16/22 6:52 PM, Jacob Champion wrote:\n> On Fri, Aug 12, 2022 at 6:32 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> It has been agreed that the work on this patch is on hold until the\n>> ClientConnectionInfo related work is finished (see the discussion in [1]).\n>>\n>> Having said that I'm attaching a new patch\n>> \"v2-0004-system_user-implementation.patch\" for the SYSTEM_USER.\n> (Not a full review.) Now that the implementation has increased in\n> complexity, the original tests for the parallel workers have become\n> underpowered. As a concrete example, I forgot to serialize auth_method\n> during my most recent rewrite, and the tests still passed.\n>\n> I think it'd be better to check the contents of SYSTEM_USER, where we\n> can, rather than only testing for existence. 
Something like the\n> attached, maybe?\n\nYeah, fully agree, thanks for pointing out!\n\nI've included your suggestion in v2-0005 attached (it's expected to see \nthe CF bot failing for the same reason as mentioned up-thread).\n\n> And it would also be good to add a similar test to\n> the authentication suite, so that you don't have to have Kerberos\n> enabled to fully test SYSTEM_USER.\n\nAgree, I'll look at what can be done here.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 17 Aug 2022 09:51:17 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 8/17/22 9:51 AM, Drouvot, Bertrand wrote:\n> On 8/16/22 6:52 PM, Jacob Champion wrote:\n>\n>> And it would also be good to add a similar test to\n>> the authentication suite, so that you don't have to have Kerberos\n>> enabled to fully test SYSTEM_USER.\n>\n> Agree, I'll look at what can be done here.\n>\nI added authentication/t/003_peer.pl in \nv2-0006-system_user-implementation.patch attached.\n\nIt does the peer authentication and SYSTEM_USER testing with and without \na user name map.\n\n$ make -C src/test/authentication check PROVE_TESTS=t/003_peer.pl \nPROVE_FLAGS=-v\n\nok 1 - users with peer authentication have the correct SYSTEM_USER\nok 2 - parallel workers return the correct SYSTEM_USER when peer \nauthentication is used\nok 3 - user name map is well defined and working\nok 4 - users with peer authentication and user name map have the correct \nSYSTEM_USER\nok 5 - parallel workers return the correct SYSTEM_USER when peer \nauthentication and user name map is used\n1..5\nok\nAll tests successful.\n\nThat way one could test the SYSTEM_USER behavior without the need to \nhave kerberos enabled.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 17 Aug 2022 16:48:42 +0200", 
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Wed, Aug 17, 2022 at 04:48:42PM +0200, Drouvot, Bertrand wrote:\n> That way one could test the SYSTEM_USER behavior without the need to have\n> kerberos enabled.\n\nI was looking at this patch and noticed that SYSTEM_USER returns a\n\"name\", meaning that the value would be automatically truncated at 63\ncharacters. We shouldn't imply that as authn_ids can be longer than\nthat, and this issue gets a bit worse once with the auth_method\nappended to the string.\n\n+if (!$use_unix_sockets)\n+{\n+ plan skip_all =>\n+ \"authentication tests cannot run without Unix-domain sockets\";\n+}\n\nAre you sure that !$use_unix_sockets is safe here? Could we have\nplatforms where we use our port's getpeereid() with $use_unix_sockets\nworks? That would cause the test to fail with ENOSYS. Hmm. Without\nbeing able to rely on HAVE_GETPEEREID, we could check for the error\ngenerated when the fallback implementation does not work, and skip the\nrest of the test.\n--\nMichael", "msg_date": "Wed, 24 Aug 2022 13:27:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 8/24/22 6:27 AM, Michael Paquier wrote:\n> On Wed, Aug 17, 2022 at 04:48:42PM +0200, Drouvot, Bertrand wrote:\n>> That way one could test the SYSTEM_USER behavior without the need to have\n>> kerberos enabled.\n> I was looking at this patch\n\nThanks for looking at it!\n\n> and noticed that SYSTEM_USER returns a\n> \"name\", meaning that the value would be automatically truncated at 63\n> characters. We shouldn't imply that as authn_ids can be longer than\n> that, and this issue gets a bit worse once with the auth_method\n> appended to the string.\n\nGood catch! 
I'll fix that in the next version.\n\nHmm, I think it would make sense to keep system_user() with his friends \ncurrent_user() and session_user().\n\nBut now that system_user() will not return a name anymore (but a text), \nI think name.c is no longer the right place, what do you think? (If so, \nwhere would you suggest?)\n\n>\n> +if (!$use_unix_sockets)\n> +{\n> + plan skip_all =>\n> + \"authentication tests cannot run without Unix-domain sockets\";\n> +}\n>\n> Are you sure that !$use_unix_sockets is safe here? Could we have\n> platforms where we use our port's getpeereid() with $use_unix_sockets\n> works? That would cause the test to fail with ENOSYS. Hmm. Without\n> being able to rely on HAVE_GETPEEREID, we could check for the error\n> generated when the fallback implementation does not work, and skip the\n> rest of the test.\n\nOh right, I did not think about that, thanks for the suggestion.\n\nI'll change this in the next version and simply skip the rest of the \ntest in case we get \"peer authentication is not supported on this platform\".\n\nRegards,\n\n-- \n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 24 Aug 2022 20:26:50 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 8/24/22 8:26 PM, Drouvot, Bertrand wrote:\n> Hi,\n>\n> On 8/24/22 6:27 AM, Michael Paquier wrote:\n>> On Wed, Aug 17, 2022 at 04:48:42PM +0200, Drouvot, Bertrand wrote:\n>>> That way one could test the SYSTEM_USER behavior without the need to \n>>> have\n>>> kerberos enabled.\n>> I was looking at this patch\n>\n> Thanks for looking at it!\n>\n>> and noticed that SYSTEM_USER returns a\n>> \"name\", meaning that the value would be automatically truncated at 63\n>> characters.  
We shouldn't imply that as authn_ids can be longer than\n>> that, and this issue gets a bit worse once with the auth_method\n>> appended to the string.\n>\n> Good catch! I'll fix that in the next version.\n>\n> Hmm, I think it would make sense to keep system_user() with his \n> friends current_user() and session_user().\n>\n> But now that system_user() will not return a name anymore (but a \n> text), I think name.c is no longer the right place, what do you think? \n> (If so, where would you suggest?)\n\nsystem_user() now returns a text and I moved it to miscinit.c in the new \nversion attached (I think it makes more sense now).\n\n>\n>>\n>> +if (!$use_unix_sockets)\n>> +{\n>> +   plan skip_all =>\n>> +     \"authentication tests cannot run without Unix-domain sockets\";\n>> +}\n>>\n>> Are you sure that !$use_unix_sockets is safe here?  Could we have\n>> platforms where we use our port's getpeereid() with $use_unix_sockets\n>> works?  That would cause the test to fail with ENOSYS.  Hmm. Without\n>> being able to rely on HAVE_GETPEEREID, we could check for the error\n>> generated when the fallback implementation does not work, and skip the\n>> rest of the test.\n>\n> Oh right, I did not think about that, thanks for the suggestion.\n>\n> I'll change this in the next version and simply skip the rest of the \n> test in case we get \"peer authentication is not supported on this \n> platform\".\n>\nNew version attached is also addressing Michael's remark regarding the \npeer authentication TAP test.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 25 Aug 2022 20:21:05 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Thu, Aug 25, 2022 at 08:21:05PM +0200, Drouvot, Bertrand wrote:\n> system_user() now returns a text and I moved it to miscinit.c in the new\n> version attached (I think it makes more 
sense now).\n\n+/* kluge to avoid including libpq/libpq-be.h here */\n+struct ClientConnectionInfo;\n+extern void InitializeSystemUser(struct ClientConnectionInfo conninfo);\n+extern const char* GetSystemUser(void);\n\nFWIW, I was also wondering about the need for all this initialization\nstanza and the extra SystemUser in TopMemoryContext. Now that we have\nMyClientConnectionInfo, I was thinking to just build the string in the\nSQL function as that's the only code path that needs to know about\nit. True that this approach saves some extra palloc() calls each time\nthe function is called.\n\n> New version attached is also addressing Michael's remark regarding the peer\n> authentication TAP test.\n\nThanks. I've wanted some basic tests for the peer authentication for\nsome time now, independently on this thread, so it would make sense to\nsplit that into a first patch and stress the buildfarm to see what\nhappens, then add these tests for SYSTEM_USER on top of the new test.\n--\nMichael", "msg_date": "Fri, 26 Aug 2022 10:02:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 8/26/22 3:02 AM, Michael Paquier wrote:\n> On Thu, Aug 25, 2022 at 08:21:05PM +0200, Drouvot, Bertrand wrote:\n>> system_user() now returns a text and I moved it to miscinit.c in the new\n>> version attached (I think it makes more sense now).\n> +/* kluge to avoid including libpq/libpq-be.h here */\n> +struct ClientConnectionInfo;\n> +extern void InitializeSystemUser(struct ClientConnectionInfo conninfo);\n> +extern const char* GetSystemUser(void);\n>\n> FWIW, I was also wondering about the need for all this initialization\n> stanza and the extra SystemUser in TopMemoryContext. 
Now that we have\n> MyClientConnectionInfo, I was thinking to just build the string in the\n> SQL function as that's the only code path that needs to know about\n> it.\n\nAgree that the extra SystemUser is not needed strictly speaking and that \nwe could build it each time the system_user function is called.\n\n> True that this approach saves some extra palloc() calls each time\n> the function is called.\n\nRight, with the current approach the SystemUser just needs to be \nconstructed one time.\n\nI also think that it's more consistent to have such a global variable \nwith his friends SessionUserId/OuterUserId/CurrentUserId (but at an \nextra memory cost in TopMemoryContext).\n\nLooks like there is pros and cons for both approach.\n\nI'm +1 for the current approach but I don't have a strong opinion about \nit so I'm also ok to change it the way you described if you think it's \nbetter.\n\n>> New version attached is also addressing Michael's remark regarding the peer\n>> authentication TAP test.\n> Thanks. 
I've wanted some basic tests for the peer authentication for\n> some time now, independently on this thread, so it would make sense to\n> split that into a first patch and stress the buildfarm to see what\n> happens, then add these tests for SYSTEM_USER on top of the new test.\n\nMakes fully sense, I've created a new thread [1] for this purpose, thanks!\n\nFor the moment I'm keeping the peer TAP test as it is in the current \nthread so that we can test the SYSTEM_USER behavior.\n\nI just realized that the previous patch version contained useless change \nin name.c: attached a new version so that name.c now remains untouched.\n\n\n[1]: \nhttps://www.postgresql.org/message-id/flat/aa60994b-1c66-ca7a-dab9-9a200dbac3d2%40amazon.com\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 26 Aug 2022 12:16:18 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Fri, Aug 26, 2022 at 10:02:26AM +0900, Michael Paquier wrote:\n> FWIW, I was also wondering about the need for all this initialization\n> stanza and the extra SystemUser in TopMemoryContext. Now that we have\n> MyClientConnectionInfo, I was thinking to just build the string in the\n> SQL function as that's the only code path that needs to know about\n> it. True that this approach saves some extra palloc() calls each time\n> the function is called.\n\nAt the end, fine by me to keep this approach as that's more\nconsistent. 
I have reviewed the patch, and a few things caught my\nattention:\n- I think that we'd better switch InitializeSystemUser() to have two\nconst char * as arguments for authn_id and an auth_method, so as there\nis no need to use tweaks with UserAuth or ClientConnectionInfo in\nmiscadmin.h to bypass an inclusion of libpq-be.h or hba.h.\n- The OID of the new function should be in the range 8000-9999, as\ntaught by unused_oids.\n- Environments where the code is built without krb5 support would skip\nthe test where SYSTEM_USER should be not NULL when authenticated, so I\nhave added a test for that with MD5 in src/test/authentication/.\n- Docs have been reworded, and I have applied an indentation.\n- No need to use 200k rows in the table used to force the parallel\nscan, as long as the costs are set.\n\nIt is a bit late here, so I may have missed something. For now, how\ndoes the attached look to you?\n--\nMichael", "msg_date": "Wed, 7 Sep 2022 17:48:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 9/7/22 10:48 AM, Michael Paquier wrote:\n> On Fri, Aug 26, 2022 at 10:02:26AM +0900, Michael Paquier wrote:\n>> FWIW, I was also wondering about the need for all this initialization\n>> stanza and the extra SystemUser in TopMemoryContext. Now that we have\n>> MyClientConnectionInfo, I was thinking to just build the string in the\n>> SQL function as that's the only code path that needs to know about\n>> it. True that this approach saves some extra palloc() calls each time\n>> the function is called.\n> At the end, fine by me to keep this approach as that's more\n> consistent. 
I have reviewed the patch,\n\nThanks for looking at it!\n\n> and a few things caught my\n> attention:\n> - I think that we'd better switch InitializeSystemUser() to have two\n> const char * as arguments for authn_id and an auth_method, so as there\n> is no need to use tweaks with UserAuth or ClientConnectionInfo in\n> miscadmin.h to bypass an inclusion of libpq-be.h or hba.h.\n\nGood point, thanks! And there is no need to pass the whole \nClientConnectionInfo (should we add more fields to it in the future).\n\n> - The OID of the new function should be in the range 8000-9999, as\n> taught by unused_oids.\n\nThanks for pointing out!\n\nMy reasoning was to use one available OID close to the ones used for \nsession_user and current_user.\n\n> - Environments where the code is built without krb5 support would skip\n> the test where SYSTEM_USER should be not NULL when authenticated, so I\n> have added a test for that with MD5 in src/test/authentication/.\n\nGood point, thanks for the new test (as that would also not be tested \n(once added) in the new peer TAP test [1] for platforms where peer \nauthentication is not supported).\n\n> - Docs have been reworded, and I have applied an indentation.\nThanks, looks good to me.\n> - No need to use 200k rows in the table used to force the parallel\n> scan, as long as the costs are set.\nRight.\n>\n> It is a bit late here, so I may have missed something. 
For now, how\n> does the attached look to you?\n\n+# Test SYSTEM_USER <> NULL with parallel workers.\n\nNit: What about "Test SYSTEM_USER get the correct value with parallel \nworkers" as that's what we are actually testing.\n\nExcept the Nit above, that looks all good to me.\n\n\n[1]: https://commitfest.postgresql.org/39/3845/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Wed, 7 Sep 2022 16:46:06 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On 9/7/22 07:46, Drouvot, Bertrand wrote:\n> Except the Nit above, that looks all good to me.\n\nA few additional comments:\n\n> + assigned a database role. 
It is represented as\n> + <literal>auth_method:identity</literal> or\n> + <literal>NULL</literal> if the user has not been authenticated (for\n> + example if <xref linkend=\"auth-trust\"/> has been used).\n> + </para></entry>\n\nThis is rendered as\n\n ... (for example if Section 21.4 has been used).\n\nwhich IMO isn't too helpful. Maybe a <link> would read better than an\n<xref>?\n\nAlso, this function's placement in the docs (with the System Catalog\nInformation Functions) seems wrong to me. I think it should go up above\nin the Session Information Functions, with current_user et al.\n\n> + /* Build system user as auth_method:authn_id */\n> + char *system_user;\n> + Size authname_len = strlen(auth_method);\n> + Size authn_id_len = strlen(authn_id);\n> +\n> + system_user = palloc0(authname_len + authn_id_len + 2);\n> + strcat(system_user, auth_method);\n> + strcat(system_user, \":\");\n> + strcat(system_user, authn_id);\n\nIf we're palloc'ing anyway, can this be replaced with a single psprintf()?\n\n> + /* Initialize SystemUser now that MyClientConnectionInfo is restored. 
*/\n> + InitializeSystemUser(MyClientConnectionInfo.authn_id,\n> + hba_authname(MyClientConnectionInfo.auth_method));\n\nIt makes me a little nervous to call hba_authname(auth_method) without\nchecking to see that auth_method is actually valid (which is only true\nif authn_id is not NULL).\n\nWe could pass the bare auth_method index, or update the documentation\nfor auth_method to state that it's guaranteed to be zero if authn_id is\nNULL (and then enforce that).\n\n> case SVFOP_CURRENT_USER:\n> case SVFOP_USER:\n> case SVFOP_SESSION_USER:\n> + case SVFOP_SYSTEM_USER:\n> case SVFOP_CURRENT_CATALOG:\n> case SVFOP_CURRENT_SCHEMA:\n> svf->type = NAMEOID;\n\nShould this be moved to use TEXTOID instead?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 7 Sep 2022 08:48:43 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Wed, Sep 07, 2022 at 08:48:43AM -0700, Jacob Champion wrote:\n> Also, this function's placement in the docs (with the System Catalog\n> Information Functions) seems wrong to me. I think it should go up above\n> in the Session Information Functions, with current_user et al.\n\nYeah, this had better use a <link>.\n\n>> + /* Initialize SystemUser now that MyClientConnectionInfo is restored. 
*/\n>> + InitializeSystemUser(MyClientConnectionInfo.authn_id,\n>> + hba_authname(MyClientConnectionInfo.auth_method));\n> \n> It makes me a little nervous to call hba_authname(auth_method) without\n> checking to see that auth_method is actually valid (which is only true\n> if authn_id is not NULL).\n\nYou have mentioned that a couple of months ago if I recall correctly,\nand we pass down an enum value.\n\n> We could pass the bare auth_method index, or update the documentation\n> for auth_method to state that it's guaranteed to be zero if authn_id is\n> NULL (and then enforce that).\n> \n> > case SVFOP_CURRENT_USER:\n> > case SVFOP_USER:\n> > case SVFOP_SESSION_USER:\n> > + case SVFOP_SYSTEM_USER:\n> > case SVFOP_CURRENT_CATALOG:\n> > case SVFOP_CURRENT_SCHEMA:\n> > svf->type = NAMEOID;\n> \n> Should this be moved to use TEXTOID instead?\n\nYeah, it should. There is actually a second and much deeper issue\nhere, in the shape of a collation problem. See the assertion failure\nin exprSetCollation(), because we expect SQLValueFunction nodes to\nreturn a name or a non-collatable type. However, for this case, we'd\nrequire a text to get rid of the 63-character limit, and that's\na collatable type. This reminds me of the recent thread to work on\ngetting rid of this limit for the name type..\n--\nMichael", "msg_date": "Thu, 8 Sep 2022 10:17:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Wed, Sep 7, 2022 at 6:17 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> + /* Initialize SystemUser now that MyClientConnectionInfo is restored. 
*/\n> >> + InitializeSystemUser(MyClientConnectionInfo.authn_id,\n> >> + hba_authname(MyClientConnectionInfo.auth_method));\n> >\n> > It makes me a little nervous to call hba_authname(auth_method) without\n> > checking to see that auth_method is actually valid (which is only true\n> > if authn_id is not NULL).\n>\n> You have mentioned that a couple of months ago if I recall correctly,\n> and we pass down an enum value.\n\nAh, sorry. Do you remember which thread?\n\nI am probably misinterpreting you, but I don't see why auth_method's\nbeing an enum helps. uaReject (and the \"reject\" string) is not a sane\nvalue to be using in SYSTEM_USER, and the more call stacks away we get\nfrom MyClientConnectionInfo, the easier it is to forget that that\nvalue is junk. As long as the code doesn't get more complicated, I\nsuppose there's no real harm being done, but it'd be cleaner not to\naccess auth_method at all if authn_id is NULL. I won't die on that\nhill, though.\n\n> There is actually a second and much deeper issue\n> here, in the shape of a collation problem.\n\nOh, none of that sounds fun. :/\n\n--Jacob\n\n\n", "msg_date": "Thu, 8 Sep 2022 10:37:01 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 9/7/22 5:48 PM, Jacob Champion wrote:\n> On 9/7/22 07:46, Drouvot, Bertrand wrote:\n>> Except the Nit above, that looks all good to me.\n> \n> A few additional comments:\n> \n>> + assigned a database role. It is represented as\n>> + <literal>auth_method:identity</literal> or\n>> + <literal>NULL</literal> if the user has not been authenticated (for\n>> + example if <xref linkend=\"auth-trust\"/> has been used).\n>> + </para></entry>\n> \n> This is rendered as\n> \n> ... (for example if Section 21.4 has been used).\n> \n> which IMO isn't too helpful. 
Maybe a <link> would read better than an\n> <xref>?\n\nThanks for looking at it!\nGood catch, V4 coming soon will make use of <link> instead.\n\n> \n> Also, this function's placement in the docs (with the System Catalog\n> Information Functions) seems wrong to me. I think it should go up above\n> in the Session Information Functions, with current_user et al.\n\nAgree, will move it to the Session Information Functions in V4.\n\n> \n>> + /* Build system user as auth_method:authn_id */\n>> + char *system_user;\n>> + Size authname_len = strlen(auth_method);\n>> + Size authn_id_len = strlen(authn_id);\n>> +\n>> + system_user = palloc0(authname_len + authn_id_len + 2);\n>> + strcat(system_user, auth_method);\n>> + strcat(system_user, \":\");\n>> + strcat(system_user, authn_id);\n> \n> If we're palloc'ing anyway, can this be replaced with a single psprintf()?\n\nFair point, V4 will make use of psprintf().\n\n> \n>> + /* Initialize SystemUser now that MyClientConnectionInfo is restored. */\n>> + InitializeSystemUser(MyClientConnectionInfo.authn_id,\n>> + hba_authname(MyClientConnectionInfo.auth_method));\n> \n> It makes me a little nervous to call hba_authname(auth_method) without\n> checking to see that auth_method is actually valid (which is only true\n> if authn_id is not NULL).\n\nWill add additional check for safety in V4.\n\n\n> \n> We could pass the bare auth_method index, or update the documentation\n> for auth_method to state that it's guaranteed to be zero if authn_id is\n> NULL (and then enforce that).\n> \n>> case SVFOP_CURRENT_USER:\n>> case SVFOP_USER:\n>> case SVFOP_SESSION_USER:\n>> + case SVFOP_SYSTEM_USER:\n>> case SVFOP_CURRENT_CATALOG:\n>> case SVFOP_CURRENT_SCHEMA:\n>> svf->type = NAMEOID;\n> \n> Should this be moved to use TEXTOID instead?\n\nGood catch, will do in V4.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 26 Sep 2022 15:09:40 
+0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 9/8/22 3:17 AM, Michael Paquier wrote:\n> On Wed, Sep 07, 2022 at 08:48:43AM -0700, Jacob Champion wrote:\n> \n>> We could pass the bare auth_method index, or update the documentation\n>> for auth_method to state that it's guaranteed to be zero if authn_id is\n>> NULL (and then enforce that).\n>>\n>>> case SVFOP_CURRENT_USER:\n>>> case SVFOP_USER:\n>>> case SVFOP_SESSION_USER:\n>>> + case SVFOP_SYSTEM_USER:\n>>> case SVFOP_CURRENT_CATALOG:\n>>> case SVFOP_CURRENT_SCHEMA:\n>>> svf->type = NAMEOID;\n>>\n>> Should this be moved to use TEXTOID instead?\n> \n> Yeah, it should. There is actually a second and much deeper issue\n> here, in the shape of a collation problem. See the assertion failure\n> in exprSetCollation(), because we expect SQLValueFunction nodes to\n> return a name or a non-collatable type. However, for this case, we'd\n> require a text to get rid of the 63-character limit, and that's\n> a collatable type. 
This reminds me of the recent thread to work on\n> getting rid of this limit for the name type..\n\nPlease find attached V4 taking care of Jacob's previous comments.\n\nAs far the assertion failure mentioned by Michael when moving the \nSVFOP_SYSTEM_USER from NAMEOID to TEXTOID: V4 is assuming that it is \nsafe to force the collation to C_COLLATION_OID for SQLValueFunction \nhaving a TEXT type, but I would be happy to also hear your thoughts \nabout it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 26 Sep 2022 15:29:39 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On 9/26/22 06:29, Drouvot, Bertrand wrote:\n> Please find attached V4 taking care of Jacob's previous comments.\n\n> +\t/*\n> +\t * InitializeSystemUser should already be called once we are sure that\n> +\t * authn_id is not NULL (means auth_method is actually valid).\n> +\t * But keep the test here also for safety.\n> +\t */\n> +\tif (authn_id)\n\nSince there are only internal clients to the API, I'd argue this makes\nmore sense as an Assert(authn_id != NULL), but I don't think it's a\ndealbreaker.\n\n> As far the assertion failure mentioned by Michael when moving the \n> SVFOP_SYSTEM_USER from NAMEOID to TEXTOID: V4 is assuming that it is \n> safe to force the collation to C_COLLATION_OID for SQLValueFunction \n> having a TEXT type, but I would be happy to also hear your thoughts \n> about it.\n\nUnfortunately I don't have much to add here; I don't know enough about\nthe underlying problems.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 27 Sep 2022 15:38:49 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Tue, Sep 27, 2022 at 03:38:49PM 
-0700, Jacob Champion wrote:\n> On 9/26/22 06:29, Drouvot, Bertrand wrote:\n> Since there are only internal clients to the API, I'd argue this makes\n> more sense as an Assert(authn_id != NULL), but I don't think it's a\n> dealbreaker.\n\nUsing an assert() looks like a good idea from here. If this is called\nwith a NULL authn, this could reflect a problem in the authentication\nlogic.\n\n>> As far the assertion failure mentioned by Michael when moving the \n>> SVFOP_SYSTEM_USER from NAMEOID to TEXTOID: V4 is assuming that it is \n>> safe to force the collation to C_COLLATION_OID for SQLValueFunction \n>> having a TEXT type, but I would be happy to also hear your thoughts \n>> about it.\n> \n> Unfortunately I don't have much to add here; I don't know enough about\n> the underlying problems.\n\nI have been looking at that, and after putting my hands on that this\ncomes down to the facility introduced in 40c24bf. So, I think that\nwe'd better use COERCE_SQL_SYNTAX so as there is no need to worry\nabout the shortcuts this patch is trying to use with the collation\nsetup. And there are a few tests for get_func_sql_syntax() in\ncreate_view.sql. Note that this makes the patch slightly shorter, and\nsimpler.\n\nThe docs still mentioned \"name\", and not \"text\".\n\nThis brings in a second point. 40c24bf has refrained from removing\nSQLValueFunction, but based the experience on this thread I see a\npretty good argument in doing the jump once and for all. This\ndeserves a separate discussion, though. 
I'll do that and create a new\nthread.\n--\nMichael", "msg_date": "Wed, 28 Sep 2022 12:28:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 9/28/22 5:28 AM, Michael Paquier wrote:\n> On Tue, Sep 27, 2022 at 03:38:49PM -0700, Jacob Champion wrote:\n>> On 9/26/22 06:29, Drouvot, Bertrand wrote:\n>> Since there are only internal clients to the API, I'd argue this makes\n>> more sense as an Assert(authn_id != NULL), but I don't think it's a\n>> dealbreaker.\n> \n> Using an assert() looks like a good idea from here. If this is called\n> with a NULL authn, this could reflect a problem in the authentication\n> logic.\n> \n\nAgree, thanks for pointing out.\n\n>>> As far the assertion failure mentioned by Michael when moving the\n>>> SVFOP_SYSTEM_USER from NAMEOID to TEXTOID: V4 is assuming that it is\n>>> safe to force the collation to C_COLLATION_OID for SQLValueFunction\n>>> having a TEXT type, but I would be happy to also hear your thoughts\n>>> about it.\n>>\n>> Unfortunately I don't have much to add here; I don't know enough about\n>> the underlying problems.\n> \n> I have been looking at that, and after putting my hands on that this\n> comes down to the facility introduced in 40c24bf. So, I think that\n> we'd better use COERCE_SQL_SYNTAX so as there is no need to worry\n> about the shortcuts this patch is trying to use with the collation\n> setup.\n\nNice!\n\n> And there are a few tests for get_func_sql_syntax() in\n> create_view.sql. Note that this makes the patch slightly shorter, and\n> simpler.\n>\n\nAgree that it does look simpler that way and that making use of \nCOERCE_SQL_SYNTAX does looks like a better approach. 
Nice catch!\n\n\n> The docs still mentioned \"name\", and not \"text\".\n> \n\nOups, thanks for pointing out.\n\nI had a look at v5 and it does look good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 12:58:48 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "On Wed, Sep 28, 2022 at 12:58:48PM +0200, Drouvot, Bertrand wrote:\n> I had a look at v5 and it does look good to me.\n\nOkay, cool. I have spent some time today doing a last pass over it\nand an extra round of tests. Things looked fine, so applied.\n--\nMichael", "msg_date": "Thu, 29 Sep 2022 15:12:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" }, { "msg_contents": "Hi,\n\nOn 9/29/22 8:12 AM, Michael Paquier wrote:\n> On Wed, Sep 28, 2022 at 12:58:48PM +0200, Drouvot, Bertrand wrote:\n>> I had a look at v5 and it does look good to me.\n> \n> Okay, cool. I have spent some time today doing a last pass over it\n> and an extra round of tests. Things looked fine, so applied.\n> --\n\nThanks for your precious help!\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Sep 2022 08:28:57 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SYSTEM_USER reserved word implementation" } ]
[ { "msg_contents": "ATRewriteTable() calls table_tuple_insert() with a bistate, to avoid clobbering\nand polluting the buffers.\n\nBut heap_insert() then calls \nheap_prepare_insert() >\nheap_toast_insert_or_update >\ntoast_tuple_externalize >\ntoast_save_datum >\nheap_insert(toastrel, toasttup, mycid, options, NULL /* without bistate:( */);\n\nI came up with this patch. I'm not sure but maybe it should be implemented at\nthe tableam layer and not inside heap. Maybe the BulkInsertState should have a\n2nd strategy buffer for toast tables.\n\nCREATE TABLE t(i int, a text, b text, c text,d text,e text,f text,g text);\nINSERT INTO t SELECT 0, array_agg(a),array_agg(a),array_agg(a),array_agg(a),array_agg(a),array_agg(a) FROM generate_series(1,999)n,repeat(n::text,99)a,generate_series(1,99)b GROUP BY b;\nINSERT INTO t SELECT * FROM t;\nINSERT INTO t SELECT * FROM t;\nINSERT INTO t SELECT * FROM t;\nINSERT INTO t SELECT * FROM t;\n\nALTER TABLE t ALTER i TYPE smallint;\nSELECT COUNT(1), relname, COUNT(1) FILTER(WHERE isdirty) FROM pg_buffercache b JOIN pg_class c ON c.oid=b.relfilenode GROUP BY 2 ORDER BY 1 DESC LIMIT 9;\n\nWithout this patch:\npostgres=# SELECT COUNT(1), relname, COUNT(1) FILTER(WHERE isdirty) FROM pg_buffercache b JOIN pg_class c ON c.oid=b.relfilenode GROUP BY 2 ORDER BY 1 DESC LIMIT 9;\n 10283 | pg_toast_55759 | 8967\n\nWith this patch:\n 1418 | pg_toast_16597 | 1418\n\n-- \nJustin", "msg_date": "Wed, 22 Jun 2022 09:38:41 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "ALTER TABLE uses a bistate but not for toast tables" }, { "msg_contents": "Hi,\n\nOn 6/22/22 4:38 PM, Justin Pryzby wrote:\n> ATRewriteTable() calls table_tuple_insert() with a bistate, to avoid clobbering\n> and polluting the buffers.\n>\n> But heap_insert() then calls\n> heap_prepare_insert() >\n> heap_toast_insert_or_update >\n> toast_tuple_externalize >\n> toast_save_datum >\n> heap_insert(toastrel, toasttup, mycid, options, NULL /* 
without bistate:( */);\n\nGood catch!\n\n> I came up with this patch.\n\n+       /* Release pin after main table, before switching to write to \ntoast table */\n+       if (bistate)\n+               ReleaseBulkInsertStatePin(bistate);\n\nI'm not sure we should release and reuse here the bistate of the main \ntable: it looks like that with the patch ReadBufferBI() on the main \nrelation wont have the desired block already pinned (then would need to \nperform a read).\n\nWhat do you think about creating earlier a new dedicated bistate for the \ntoast table?\n\n+       if (bistate)\n+       {\n+               table_finish_bulk_insert(toastrel, options); // XXX\n\nI think it's too early, as it looks to me that at this stage we may have \nnot finished the whole bulk insert yet.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Wed, 7 Sep 2022 10:48:39 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE uses a bistate but not for toast tables" }, { "msg_contents": "On Wed, Sep 07, 2022 at 10:48:39AM +0200, Drouvot, Bertrand wrote:\n> +       if (bistate)\n> +       {\n> +               table_finish_bulk_insert(toastrel, options); // XXX\n> \n> I think it's too early, as it looks to me that at this stage we may have not\n> finished the whole bulk insert yet.\n\nYeah, that feels fishy. Not sure what's the idea behind the XXX\ncomment, either. 
I have marked this patch as RwF, following the lack\nof reply.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 15:52:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE uses a bistate but not for toast tables" }, { "msg_contents": "On Wed, Sep 07, 2022 at 10:48:39AM +0200, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 6/22/22 4:38 PM, Justin Pryzby wrote:\n> > ATRewriteTable() calls table_tuple_insert() with a bistate, to avoid clobbering\n> > and polluting the buffers.\n> > \n> > But heap_insert() then calls\n> > heap_prepare_insert() >\n> > heap_toast_insert_or_update >\n> > toast_tuple_externalize >\n> > toast_save_datum >\n> > heap_insert(toastrel, toasttup, mycid, options, NULL /* without bistate:( */);\n> \n> What do you think about creating earlier a new dedicated bistate for the\n> toast table?\n\nYes, but I needed to think about what data structure to put it in...\n\nHere, I created a 2nd bistate for toast whenever creating a bistate for\nheap. That avoids the need to add arguments to tableam's\ntable_tuple_insert(), in addition to the 6 other functions in the call\nstack.\n\nI also updated rewriteheap.c to handle the same problem in CLUSTER:\n\npostgres=# DROP TABLE t; CREATE TABLE t AS SELECT i, repeat((5555555+i)::text, 123456)t FROM generate_series(1,9999)i;\npostgres=# VACUUM FULL VERBOSE t ; SELECT COUNT(1), datname, coalesce(c.relname,b.relfilenode::text), d.relname FROM pg_buffercache b LEFT JOIN pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) LEFT JOIN pg_class d ON d.reltoastrelid=c.oid LEFT JOIN pg_database db ON db.oid=b.reldatabase GROUP BY 2,3,4 ORDER BY 1 DESC LIMIT 22;\n\nUnpatched:\n 5000 | postgres | pg_toast_96188840 | t\n => 40MB of shared buffers\n\nPatched:\n 2048 | postgres | pg_toast_17097 | t\n\nNote that a similar problem seems to exist in COPY ... 
but I can't see\nhow to fix that one.\n\n-- \nJustin", "msg_date": "Sun, 27 Nov 2022 14:15:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE uses a bistate but not for toast tables" }, { "msg_contents": "Hi!\n\nFound this discussion for our experiments with TOAST, I'd have to check it\nunder [1].\nI'm not sure, what behavior is expected when the main table is unpinned,\nbulk insert\nto the TOAST table is in progress, and the second query with a heavy bulk\ninsert to\nthe same TOAST table comes in?\n\nThank you!\n\n[1]\nhttps://www.postgresql.org/message-id/flat/224711f9-83b7-a307-b17f-4457ab73aa0a@sigaev.ru\n\nOn Sun, Nov 27, 2022 at 11:15 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Sep 07, 2022 at 10:48:39AM +0200, Drouvot, Bertrand wrote:\n> > Hi,\n> >\n> > On 6/22/22 4:38 PM, Justin Pryzby wrote:\n> > > ATRewriteTable() calls table_tuple_insert() with a bistate, to avoid\n> clobbering\n> > > and polluting the buffers.\n> > >\n> > > But heap_insert() then calls\n> > > heap_prepare_insert() >\n> > > heap_toast_insert_or_update >\n> > > toast_tuple_externalize >\n> > > toast_save_datum >\n> > > heap_insert(toastrel, toasttup, mycid, options, NULL /* without\n> bistate:( */);\n> >\n> > What do you think about creating earlier a new dedicated bistate for the\n> > toast table?\n>\n> Yes, but I needed to think about what data structure to put it in...\n>\n> Here, I created a 2nd bistate for toast whenever creating a bistate for\n> heap. 
That avoids the need to add arguments to tableam's\n> table_tuple_insert(), in addition to the 6 other functions in the call\n> stack.\n>\n> I also updated rewriteheap.c to handle the same problem in CLUSTER:\n>\n> postgres=# DROP TABLE t; CREATE TABLE t AS SELECT i,\n> repeat((5555555+i)::text, 123456)t FROM generate_series(1,9999)i;\n> postgres=# VACUUM FULL VERBOSE t ; SELECT COUNT(1), datname,\n> coalesce(c.relname,b.relfilenode::text), d.relname FROM pg_buffercache b\n> LEFT JOIN pg_class c ON b.relfilenode=pg_relation_filenode(c.oid) LEFT JOIN\n> pg_class d ON d.reltoastrelid=c.oid LEFT JOIN pg_database db ON\n> db.oid=b.reldatabase GROUP BY 2,3,4 ORDER BY 1 DESC LIMIT 22;\n>\n> Unpatched:\n> 5000 | postgres | pg_toast_96188840 | t\n> => 40MB of shared buffers\n>\n> Patched:\n> 2048 | postgres | pg_toast_17097 | t\n>\n> Note that a similar problem seems to exist in COPY ... but I can't see\n> how to fix that one.\n>\n> --\n> Justin\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 13 Dec 2022 00:26:15 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE uses a bistate but not for toast tables" }, { "msg_contents": "Hi Justin,\n\nThis patch has gone stale quite some time ago; CFbot does not seem to\nhave any history of a successful apply attemps, nor do we have any\nsuccesful build history (which was introduced some time ago already).\n\nAre you planning on rebasing this patch?\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 7 Nov 2023 17:17:06 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE uses a bistate but not for toast tables" }, { "msg_contents": "@cfbot: rebased", "msg_date": "Thu, 16 Nov 2023 11:40:20 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE uses a bistate but not for toast tables" }, { "msg_contents": "@cfbot: rebased", "msg_date": "Mon, 15 Jul 2024 15:43:24 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE uses a bistate but not for toast tables" } ]
[ { "msg_contents": "Hi all,\r\n\r\nPostgreSQL currently maintains several data structures in the SLRU\r\ncache. The SLRU cache has scaling and sizing challenges because of its\r\nsimple implementation. The goal is to move these caches to the common\r\nbuffer cache to benefit from the stronger capabilities of the common\r\nbuffercache code. At AWS, we are building on the patch shared by Thomas\r\nMunro [1], which treats the SLRU pages as part of a pseudo-database\r\nof ID 9. We will refer to the pages belonging to SLRU components as\r\nBufferedObject pages going forward.\r\n\r\nThe current SLRU pages do not have any header, so there is a need to\r\ncreate a new page header format for these. Our investigations revealed\r\nthat we need to:\r\n\r\n1. track LSN to ensure durability and consistency of all pages (for redo\r\n   and full page write purposes)\r\n2. have a checksum (for page correctness verification).\r\n3. A flag to identify if the page is a relational or BufferedObject\r\n4. 
Track version information.\r\n\r\nWe are suggesting a minimal BufferedObject page header\r\nto be the following, overlapping with the key fields near the beginning\r\nof the regular PageHeaderData:\r\n\r\ntypedef struct BufferedObjectPageHeaderData\r\n{\r\n    PageXLogRecPtr pd_lsn;\r\n    uint16_t       pd_checksum;\r\n    uint16_t       pd_flags;\r\n    uint16_t       pd_pagesize_version;\r\n} BufferedObjectPageHeaderData;\r\n\r\nFor reference, the regular page header looks like the following:\r\ntypedef struct PageHeaderData\r\n{\r\n    PageXLogRecPtr    pd_lsn;\r\n    uint16_t    pd_checksum;\r\n    uint16_t    pd_flags;\r\n    LocationIndex   pd_lower;\r\n    LocationIndex   pd_upper;\r\n    LocationIndex   pd_special;\r\n    uint16_t           pd_pagesize_version;\r\n    TransactionId   pd_prune_xid;\r\n    ItemIdDataCommon  pd_linp[];\r\n} PageHeaderData;\r\n\r\nAfter careful review, we have trimmed out the heap and index specific\r\nfields from the suggested header that do not add any value to SLRU\r\ncomponents. We plan to use pd_lsn, pd_checksum, and pd_pagesize_version\r\nin the same way that they are in relational pages. These fields are\r\nneeded to ensure consistency, durability and page correctness.\r\n\r\nWe will use the 4th bit of pd_flags to identify a BufferedObject page.\r\nIf the bit is set then this denotes a BufferedObject page. Today, bits\r\n1 - 3 are used for determining if there are any free line pointers, if\r\nthe page is full, and if all tuples on the page are visible to\r\neveryone, respectively. We will use this information accordingly in the\r\nstorage manager to determine which callback functions to use for file\r\nI/O operations. 
This approach allows the buffercache to have a\r\nuniversal method to quickly determine what type of page it is dealing\r\nwith at any time.\r\n\r\nUsing the new BufferedObject page header will be space efficient but\r\nintroduces a significant change in the codebase to now track two types\r\nof page header data. During upgrade, all SLRU files that exist on the\r\nsystem must be converted to the new format with page header. This will\r\nrequire rewriting all the SLRU pages with the page header as part of\r\npg_upgrade.\r\n\r\nWe believe that this is the correct approach for the long run. We would\r\nlove feedback if there are additional items of data that should be\r\ntracked as well. Alternatively, we could re-use the existing page\r\nheader and the unused fields could be used as padding. This feels\r\nlike an unclean approach but would avoid having two page header types\r\nin the database.\r\n\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/flat/CA+hUKGKAYze99B-jk9NoMp-2BDqAgiRC4oJv+bFxghNgdieq8Q@mail.gmail.com\r\n\r\n\r\n\r\nDiscussed with: Joe Conway, Nathan Bossart, Shawn Debnath\r\n\r\n\r\nRishu Bagga\r\n\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n", "msg_date": "Wed, 22 Jun 2022 21:06:29 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\n\nOn 2022-06-22 21:06:29 +0000, Bagga, Rishu wrote:\n> 3. A flag to identify if the page is a relational or BufferedObject\n\nWhy is this needed in the page header?\n\n\n> Using the new BufferedObject page header will be space efficient but\n> introduces a significant change in the codebase to now track two types\n> of page header data. During upgrade, all SLRU files that exist on the\n> system must be converted to the new format with page header. 
This will\n> require rewriting all the SLRU pages with the page header as part of\n> pg_upgrade.\n\nHow are you proposing to deal with this in the \"key\" to \"offset in SLRU\"\nmapping? E.g. converting a xid to an offset in the pg_xact SLRU. I assume\nyou're thinking to deal with this by making the conversion math a bit more\ncomplicated?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Jun 2022 14:39:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "\"Bagga, Rishu\" <bagrishu@amazon.com> writes:\n> The current SLRU pages do not have any header, so there is a need to\n> create a new page header format for these. Our investigations revealed\n> that we need to:\n\n> 1. track LSN to ensure durability and consistency of all pages (for redo\n>    and full page write purposes)\n> 2. have a checksum (for page correctness verification).\n> 3. A flag to identify if the page is a relational or BufferedObject\n> 4. Track version information.\n\nIsn't this a nonstarter from the standpoint of pg_upgrade?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jun 2022 19:12:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\n\nOn 2022-06-22 19:12:14 -0400, Tom Lane wrote:\n> \"Bagga, Rishu\" <bagrishu@amazon.com> writes:\n> > The current SLRU pages do not have any header, so there is a need to\n> > create a new page header format for these. Our investigations revealed\n> > that we need to:\n> \n> > 1. track LSN to ensure durability and consistency of all pages (for redo\n> >    and full page write purposes)\n> > 2. have a checksum (for page correctness verification).\n> > 3. A flag to identify if the page is a relational or BufferedObject\n> > 4. 
Track version information.\n> \n> Isn't this a nonstarter from the standpoint of pg_upgrade?\n\nWe're rewriting some relation forks as part of pg_upgrade (visibility map\nIIRC?), so rewriting an SLRU is likely not prohibitive - there's much more of\na limit to the SLRU sizes than the number and aggregate size of relation\nforks.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Jun 2022 16:20:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi Andres,\r\nThanks for your response.\r\n\r\nTo answer your questions:\r\n\r\n>> 3. A flag to identify if the page is a relational or BufferedObject\r\n>Why is this needed in the page header?\r\n\r\nNow that we are dealing with two different types of page headers, we need to know how to interpret any given page. We need to use pd_flags to determine this.\r\n\r\n\r\n>How are you proposing to deal with this in the \"key\" to \"offset in >SLRU\"\r\n>mapping? E.g. converting a xid to an offset in the pg_xact SLRU. I >assume\r\n>you're thinking to deal with this by making the conversion math a bit >more\r\n>complicated?\r\n\r\nYou’re right; we would have to account for this in the conversion math between the ‘key’ and ‘offset’. The change to the macros would be as follows:\r\n\r\n#define MULTIXACT_OFFSETS_PER_PAGE ((BLCKSZ - SizeOfBufferedObjectPageHeaderData) / sizeof(MultiXactOffset))\r\n\r\nAdditionally, we need to account for the size of the page header when reading and writing multixacts in memory based off of the entryno.\r\n\r\nRishu Bagga\r\n\r\nAmazon Web Services (AWS)\r\n\r\n\r\nOn 6/22/22, 2:40 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n\r\n Hi,\r\n\r\n On 2022-06-22 21:06:29 +0000, Bagga, Rishu wrote:\r\n > 3. 
A flag to identify if the page is a relational or BufferedObject\r\n\r\n Why is this needed in the page header?\r\n\r\n\r\n > Using the new BufferedObject page header will be space efficient but\r\n > introduces a significant change in the codebase to now track two types\r\n > of page header data. During upgrade, all SLRU files that exist on the\r\n > system must be converted to the new format with page header. This will\r\n > require rewriting all the SLRU pages with the page header as part of\r\n > pg_upgrade.\r\n\r\n How are you proposing to deal with this in the \"key\" to \"offset in SLRU\"\r\n mapping? E.g. converting a xid to an offset in the pg_xact SLRU. I assume\r\n you're thinking to deal with this by making the conversion math a bit more\r\n complicated?\r\n\r\n Greetings,\r\n\r\n Andres Freund\r\n\r\n\r\n\r\n", "msg_date": "Thu, 23 Jun 2022 20:25:21 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Wed, Jun 22, 2022 at 5:06 PM Bagga, Rishu <bagrishu@amazon.com> wrote:\n> We are suggesting a minimal BufferedObject page header\n> to be the following, overlapping with the key fields near the beginning\n> of the regular PageHeaderData:\n>\n> typedef struct BufferedObjectPageHeaderData\n> {\n> PageXLogRecPtr pd_lsn;\n> uint16_t pd_checksum;\n> uint16_t pd_flags;\n> uint16_t pd_pagesize_version;\n> } BufferedObjectPageHeaderData;\n>\n> For reference, the regular page header looks like the following:\n> typedef struct PageHeaderData\n> {\n> PageXLogRecPtr pd_lsn;\n> uint16_t pd_checksum;\n> uint16_t pd_flags;\n> LocationIndex pd_lower;\n> LocationIndex pd_upper;\n> LocationIndex pd_special;\n> uint16_t pd_pagesize_version;\n> TransactionId pd_prune_xid;\n> ItemIdDataCommon pd_linp[];\n> } PageHeaderData;\n>\n> After careful review, we have trimmed out the heap and index specific\n> fields from the suggested header that do not 
add any value to SLRU\n> components. We plan to use pd_lsn, pd_checksum, and pd_pagesize_version\n> in the same way that they are in relational pages. These fields are\n> needed to ensure consistency, durability and page correctness\n\nI think that it's not worth introducing a new page header format to\nsave 10 bytes per page. Keeping things on the same format is likely to\nsave more than the minor waste of space costs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Jun 2022 16:27:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\n\nOn 2022-06-23 20:25:21 +0000, Bagga, Rishu wrote:\n> >> 3. A flag to identify if the page is a relational or BufferedObject\n> >Why is this needed in the page header?\n> \n> Now that we are dealing with two different type of page headers, we need to\n> know how to interpret any given page. We need to use pd_flags to determine\n> this.\n\nWhen do we need to do so? We should never need to figure out whether an\non-disk block is for an SLRU or something else, without also knowing which\nrelation / SLRU it is in.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jun 2022 14:21:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi Andres,\r\n\r\n\r\n>When do we need to do so? We should never need to figure out whether an\r\n>on-disk block is for an SLRU or something else, without also knowing >which\r\n>relation / SLRU it is in.\r\n\r\nYou are correct that we wouldn’t need to rely on the pd_flag bit to determine page type for any access to a page where we come top down following the hierarchy. 
However, for the purpose of debugging “from the bottom up” it would be critical to know what type of page is being read in a system with multiple page header types.\r\n\r\nOn 6/23/22, 2:22 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n\r\n\r\n Hi,\r\n\r\n On 2022-06-23 20:25:21 +0000, Bagga, Rishu wrote:\r\n > >> 3. A flag to identify if the page is a relational or BufferedObject\r\n > >Why is this needed in the page header?\r\n >\r\n > Now that we are dealing with two different type of page headers, we need to\r\n > know how to interpret any given page. We need to use pd_flags to determine\r\n > this.\r\n\r\n When do we need to do so? We should never need to figure out whether an\r\n on-disk block is for an SLRU or something else, without also knowing which\r\n relation / SLRU it is in.\r\n\r\n Greetings,\r\n\r\n Andres Freund\r\n\r\n", "msg_date": "Fri, 24 Jun 2022 00:39:41 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\n\nOn 2022-06-24 00:39:41 +0000, Bagga, Rishu wrote:\n> >When do we need to do so? We should never need to figure out whether an\n> >on-disk block is for an SLRU or something else, without also knowing >which\n> >relation / SLRU it is in.\n> \n> You are correct that we wouldn’t need to rely on the pd_flag bit to\n> determine page type for any access to a page where we come top down\n> following the hierarchy. However, for the purpose of debugging “from the\n> bottom up” it would be critical to know what type of page is being read in a\n> system with multiple page header types.\n\nThat doesn't seem to justify using a bit on the page. 
Wouldn't it suffice to\nadd such information to the BufferDesc?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jun 2022 18:06:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Thu, Jun 23, 2022 at 06:06:48PM -0700, Andres Freund wrote:\n\n> > You are correct that we wouldn’t need to rely on the pd_flag bit to\n> > determine page type for any access to a page where we come top down\n> > following the hierarchy. However, for the purpose of debugging “from the\n> > bottom up” it would be critical to know what type of page is being read in a\n> > system with multiple page header types.\n> \n> That doesn't seem to justify using a bit on the page. Wouldn't it suffice to\n> add such information to the BufferDesc?\n\nThe goal for the bit in pd_flags is to help identify what page header \nshould be present if one were to be looking at the physical page outside \nof the database. One example that comes to mind is pg_upgrade. There \nare other use cases where physical consistency checks could be applied, \nagain outside of a running database.\n\nOn Thu, Jun 23, 2022 at 04:27:33PM -0400, Robert Haas wrote:\n\n> I think that it's not worth introducing a new page header format to\n> save 10 bytes per page. Keeping things on the same format is likely to\n> save more than the minor waste of space costs.\n\nYeah, I think we are open to both approaches, though we believe it would \nbe cleaner to get started with a targeted page header for the new code. 
\nBut do understand having to understand/translate/deal with two page \nheader types might not be worth the savings in space.\n\nIf we stick with the current page header, of course, changes to pd_flag \nwon't be necessary anymore.\n\nStepping back, it seems like folks are okay with introducing a page \nheader to current SLRU components and that we are leaning towards \nsticking with the default one for now. We can proceed with this \napproach, and if needed, change it later if more folks chime in.\n\nCheers.\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n", "msg_date": "Fri, 24 Jun 2022 22:19:33 +0000", "msg_from": "Shawn Debnath <clocksweep@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\n\nOn 2022-06-24 22:19:33 +0000, Shawn Debnath wrote:\n> On Thu, Jun 23, 2022 at 06:06:48PM -0700, Andres Freund wrote:\n>\n> > > You are correct that we wouldn’t need to rely on the pd_flag bit to\n> > > determine page type for any access to a page where we come top down\n> > > following the hierarchy. However, for the purpose of debugging “from the\n> > > bottom up” it would be critical to know what type of page is being read in a\n> > > system with multiple page header types.\n> >\n> > That doesn't seem to justify using a bit on the page. Wouldn't it suffice to\n> > add such information to the BufferDesc?\n>\n> The goal for the bit in pd_flags is to help identify what page header\n> should be present if one were to be looking at the physical page outside\n> of the database. One example that comes to mind is pg_upgrade. There\n> are other use cases where physical consistency checks could be applied,\n> again outside of a running database.\n\nOutside the database you'll know the path to the file, which will tell you\nit's not another kind of relation.\n\nThis really makes no sense to me. 
We don't have page flags indicating whether\na page is a heap, btree, visibility, fsm whatever kind of page either. On a\ngreen field, it'd make sense to have such information in a metapage at the\nstart of each relation - but not on each page.\n\n\n> On Thu, Jun 23, 2022 at 04:27:33PM -0400, Robert Haas wrote:\n>\n> > I think that it's not worth introducing a new page header format to\n> > save 10 bytes per page. Keeping things on the same format is likely to\n> > save more than the minor waste of space costs.\n>\n> Yeah, I think we are open to both approaches, though we believe it would\n> be cleaner to get started with a targeted page header for the new code.\n> But do understand having to understand/translate/deal with two page\n> header types might not be worth the savings in space.\n\nNot sure if that changes anything, but it's maybe worth noting that we already\nhave some types of pages that don't use the full page header (at least\nfreespace.c, c.f. buffer_std argument to MarkBufferDirtyHint()). I can see an\nargument for shrinking the \"uniform\" part of the page header, and pushing more\nthings down into AMs. But I think that'd need to change the existing code, not\njust introduce something new for new code.\n\n\n> Stepping back, it seems like folks are okay with introducing a page\n> header to current SLRU components and that we are leaning towards\n> sticking with the default one for now. We can proceed with this\n> approach, and if needed, change it later if more folks chime in.\n\nI think we're clearly going to have to do that at some point not too far\naway. There's just too many capabilities that are made harder by not having\nthat information for SLRU pages. That's not to say that it's a prerequisite to\nmoving SLRUs into the buffer pool (using a hack like Thomas did until the page\nheader is introduced). Both are complicated enough projects on their own. 
I\nalso could see adding the page header before moving SLRUs in the buffer pool,\nthere isn't really a hard dependency.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jun 2022 15:45:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Fri, Jun 24, 2022 at 03:45:34PM -0700, Andres Freund wrote:\n\n> Outside the database you'll know the path to the file, which will tell you\n> it's not another kind of relation.\n> \n> This really makes no sense to me. We don't have page flags indicating whether\n> a page is a heap, btree, visibility, fms whatever kind of page either. On a\n> green field, it'd make sense to have such information in a metapage at the\n> start of each relation - but not on each page.\n\nAlright, I agree with the metapage having the necessary info. In\nthis case, we can rely on the hierarchy to determine the type of header.\nGiven we do not have an usecase requiring the flag, we should not\nconsume it at this point.\n\n\n> > On Thu, Jun 23, 2022 at 04:27:33PM -0400, Robert Haas wrote:\n> >\n> > > I think that it's not worth introducing a new page header format to\n> > > save 10 bytes per page. Keeping things on the same format is likely to\n> > > save more than the minor waste of space costs.\n> >\n> > Yeah, I think we are open to both approaches, though we believe it would\n> > be cleaner to get started with a targeted page header for the new code.\n> > But do understand having to understand/translate/deal with two page\n> > header types might not be worth the savings in space.\n> \n> Not sure if that changes anything, but it's maybe worth noting that we already\n> have some types of pages that don't use the full page header (at least\n> freespace.c, c.f. buffer_std argument to MarkBufferDirtyHint()). 
I can see an\n> argument for shrinking the \"uniform\" part of the page header, and pushing more\n> things down into AMs. But I think that'd need to change the existing code, not\n> just introduce something new for new code.\n\nWe did think through a universal page header concept that included just\nthe pd_lsn, pd_checksum, pd_flags and pulling in pd_pagesize_version and other\nfields as the non-uniform members for SLRU. Unfortunately, there is a gap of\n48 bits after pd_flags which makes it challenging with today's header. I am\nleaning towards the standard page header for now and revisiting the universal/uniform\npage header and AM changes in a separate effort. The push down to AM is an\ninteresting concept and should be worthwhile following up on.\n\n\n> > Stepping back, it seems like folks are okay with introducing a page\n> > header to current SLRU components and that we are leaning towards\n> > sticking with the default one for now. We can proceed with this\n> > approach, and if needed, change it later if more folks chime in.\n> \n> I think we're clearly going to have to do that at some point not too far\n> away. There's just too many capabilities that are made harder by not having\n> that information for SLRU pages. That's not to say that it's a prerequisite to\n> moving SLRUs into the buffer pool (using a hack like Thomas did until the page\n> header is introduced). Both are complicated enough projects on their own. I\n> also could see adding the page header before moving SLRUs in the buffer pool,\n> there isn't really a hard dependency.\n\nTo be honest, given the nature of changes, I would prefer to have it\ndone in one major version release than have it be split into multiple\nefforts. The value add really comes in from the consistency checks that\ncan be done and which are non-existent for SLRU today. 
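For concreteness, that gap can be checked from the field offsets with a stand-in struct mirroring PageHeaderData's field order (uint64_t standing in for the 8-byte PageXLogRecPtr, uint16_t for LocationIndex; this is illustrative only, not code from the patch):

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in mirroring the field order of PageHeaderData in bufpage.h. */
typedef struct
{
    uint64_t pd_lsn;                 /* bytes 0-7  */
    uint16_t pd_checksum;            /* bytes 8-9  */
    uint16_t pd_flags;               /* bytes 10-11 */
    uint16_t pd_lower;               /* bytes 12-13 */
    uint16_t pd_upper;               /* bytes 14-15 */
    uint16_t pd_special;             /* bytes 16-17 */
    uint16_t pd_pagesize_version;    /* bytes 18-19 */
} PageHeaderStandIn;

/* Bytes between the end of pd_flags and pd_pagesize_version: the
 * "gap of 48 bits" that a uniform header prefix would have to span. */
enum
{
    PD_FLAGS_TO_VERSION_GAP =
        offsetof(PageHeaderStandIn, pd_pagesize_version)
        - (offsetof(PageHeaderStandIn, pd_flags) + sizeof(uint16_t))
};
```

The gap works out to 6 bytes, i.e. the 48 bits mentioned above.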
\n\n\n\n\n", "msg_date": "Fri, 1 Jul 2022 00:23:41 +0000", "msg_from": "Shawn Debnath <clocksweep@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi, \r\n\r\nWe have been working on adding page headers to the SLRU pages, as part of the migration for SLRU to buffer cache. We’ve incorporated Thomas Munro’s patch and Heikki’s Storage manager changes[1] and have a patch for early feedback. \r\n\r\nAs part of our changes we have:\r\n\r\n1. Added page headers to the following\r\n \r\n *Commit_TS\r\n \t*CLOG\r\n \t*MultiXact\r\n \t*Subtrans\r\n \t*Serial (predicate.c)\r\n \t*Notify (async.c)\r\n\r\nFor commit_ts, clog and MultiXact, the PageXLogRecPtr field is populated with the LSN returned during the creation of a new page; as there is no WAL record for the rest, PageXLogRecPtr is set to “InvalidXlogRecPtr”.\r\n\r\nThere is one failing assert in predicate.c for SerialPagePrecedes with the page header changes; we are looking into this issue.\r\n\r\nThe page_version is set to PG_METAPAGE_LAYOUT_VERSION (which is 1)\r\n\r\n\r\n2. Change block number passed into ReadSlruBuffer from relative to absolute, and account for SLRU’s 256kb segment size in md.c.\r\n\r\n\r\n\r\nThe changes pass the regression tests. We are still working on handling the upgrade scenario and should have a patch out for that soon.\r\n\r\nAttached is the patch with all changes (Heikki and Munro’s patch and page headers) consolidated \r\n\r\n\r\nThanks,\r\nRishu Bagga, Amazon Web Services (AWS)\r\n\r\n[1] https://www.postgresql.org/message-id/128709bc-992c-b57a-7174-098433b7faa4@iki.fi\r\n\r\n[2] https://www.postgresql.org/message-id/CA+hUKG+02ZF-vjtUG4pH8bx+2Dn=eMh8GsT6jasiXZPgVxUXLw@mail.gmail.com\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nOn 9/27/22, 6:54 PM, \"Bagga, Rishu\" <bagrishu@amazon.com> wrote:\r\n\r\n Hi all,\r\n\r\n PostgreSQL currently maintains several data structures in the SLRU\r\n cache. 
The SLRU cache has scaling and sizing challenges because of it’s\r\n simple implementation. The goal is to move these caches to the common\r\n buffer cache to benefit from the stronger capabilities of the common\r\n buffercache code. At AWS, we are building on the patch shared by Thomas\r\n Munro [1], which treats the SLRU pages as part of a pseudo-databatabe\r\n of ID 9. We will refer to the pages belonging to SLRU components as\r\n BufferedObject pages going forward.\r\n\r\n The current SLRU pages do not have any header, so there is a need to\r\n create a new page header format for these. Our investigations revealed\r\n that we need to:\r\n\r\n 1. track LSN to ensure durability and consistency of all pages (for redo\r\n and full page write purposes)\r\n 2. have a checksum (for page correctness verification).\r\n 3. A flag to identify if the page is a relational or BufferedObject\r\n 4. Track version information.\r\n\r\n We are suggesting a minimal BufferedObject page header\r\n to be the following, overlapping with the key fields near the beginning\r\n of the regular PageHeaderData:\r\n\r\n typedef struct BufferedObjectPageHeaderData\r\n {\r\n PageXLogRecPtr pd_lsn;\r\n uint16_t pd_checksum;\r\n uint16_t pd_flags;\r\n uint16_t pd_pagesize_version;\r\n } BufferedObjectPageHeaderData;\r\n\r\n For reference, the regular page header looks like the following:\r\n typedef struct PageHeaderData\r\n {\r\n PageXLogRecPtr pd_lsn;\r\n uint16_t pd_checksum;\r\n uint16_t pd_flags;\r\n LocationIndex pd_lower;\r\n LocationIndex pd_upper;\r\n LocationIndex pd_special;\r\n uint16_t pd_pagesize_version;\r\n TransactionId pd_prune_xid;\r\n ItemIdDataCommon pd_linp[];\r\n } PageHeaderData;\r\n\r\n After careful review, we have trimmed out the heap and index specific\r\n fields from the suggested header that do not add any value to SLRU\r\n components. We plan to use pd_lsn, pd_checksum, and pd_pagesize_version\r\n in the same way that they are in relational pages. 
These fields are\r\n needed to ensure consistency, durability and page correctness.\r\n\r\n We will use the 4th bit of pd_flags to identify a BufferedObject page.\r\n If the bit is set then this denotes a BufferedObject page. Today, bits\r\n 1 - 3 are used for determining if there are any free line pointers, if\r\n the page is full, and if all tuples on the page are visible to\r\n everyone, respectively. We will use this information accordingly in the\r\n storage manager to determine which callback functions to use for file\r\n I/O operations. This approach allows the buffercache to have an\r\n universal method to quickly determine what type of page it is dealing\r\n with at any time.\r\n\r\n Using the new BufferedObject page header will be space efficient but\r\n introduces a significant change in the codebase to now track two types\r\n of page header data. During upgrade, all SLRU files that exist on the\r\n system must be converted to the new format with page header. This will\r\n require rewriting all the SLRU pages with the page header as part of\r\n pg_upgrade.\r\n\r\n We believe that this is the correct approach for the long run. We would\r\n love feedback if there are additional items of data that should be\r\n tracked as well. Alternatively, we could re-use the existing page\r\n header and the unused fields could be used as a padding. 
This feels\r\n like an unclean approach but would avoid having two page header types\r\n in the database.\r\n\r\n\r\n\r\n [1] - https://www.postgresql.org/message-id/flat/CA+hUKGKAYze99B-jk9NoMp-2BDqAgiRC4oJv+bFxghNgdieq8Q@mail.gmail.com\r\n\r\n\r\n\r\n Discussed with: Joe Conway, Nathan Bossart, Shawn Debnath\r\n\r\n\r\n Rishu Bagga\r\n\r\n Amazon Web Services (AWS)", "msg_date": "Wed, 28 Sep 2022 01:57:34 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Wed, 28 Sep 2022 at 10:57, Bagga, Rishu <bagrishu@amazon.com> wrote:\n>\n> Hi,\n>\n> We have been working on adding page headers to the SLRU pages, as part of the migration for SLRU to buffer cache. We’ve incorporated Thomas Munro’s patch and Heikki’s Storage manager changes[1] and have a patch for early feedback.\n>\n> As part of our changes we have:\n>\n> 1. Added page headers to the following\n>\n> *Commit_TS\n> *CLOG\n> *MultiXact\n> *Subtrans\n> *Serial (predicate.c)\n> *Notify (async.c)\n>\n> For commit_ts, clog and MultiXact, the PageXLogRecPtr field is populated with the LSN returned during the creation of a new page; as there is no WAL record for the rest, PageXLogRecPtr is set to “InvalidXlogRecPtr”.\n>\n> There is one failing assert in predicate.c for SerialPagePrecedes with the page header changes; we are looking into this issue.\n>\n> The page_version is set to PG_METAPAGE_LAYOUT_VERSION (which is 1)\n>\n>\n> 2. Change block number passed into ReadSlruBuffer from relative to absolute, and account for SLRU’s 256kb segment size in md.c.\n>\n>\n>\n> The changes pass the regression tests. We are still working on handling the upgrade scenario and should have a patch out for that soon.\n>\n> Attached is the patch with all changes (Heikki and Munro’s patch and page headers) consolidated\n\nHi\n\ncfbot reports the patch no longer applies. 
As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch.\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Fri, 4 Nov 2022 09:44:51 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi Heikki and Thomas,\r\n\r\n\r\nI’ve reworked your patches for moving SLRUs to the buffer cache to add page headers to the SLRUs. Additionally, I’ve rebased this patch on top of the latest commit.\r\n\r\n\r\nChanges in this patch include:\r\n\r\n1. page headers on SLRU pages\r\n2. pg_upgrade support to add page headers to existing on-disk SLRU pages\r\n3. a new ReadBuffer mode RBM_TRIM for TrimCLOG and TrimMultiXact\r\n4. Removing concept of External LSNs introduced in Heikki’s patch, as page headers can now store LSNs normally for SLRUs.\r\n5. Addressed Serial SLRU asserts in previous patch\r\n\r\nWe still need to fix asserts triggered from memory allocation in the critical section in Munro’s patch in RecordNewMultiXact. \r\n\r\nCurrently, in GetNewMultiXact we enter the critical section, and end only after we finish our write, after doing RecordNewMultiXact in MultiXactIdCreateFromMembers. Now that we’re making ReadBuffer calls in RecordNewMultiXact, we allocate memory in the critical section, but this isn’t allowed.\r\n\r\nFor now, to avoid triggering asserts, I moved the end of the critical section before doing ReadBuffer calls, but this could cause potential data corruption for multixacts in the case ReadBuffer fails. 
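For reference, the convention elsewhere in the tree is to do everything that can allocate memory or throw an error before entering the critical section, roughly like this (a simplified sketch of the usual ordering, not the actual multixact.c code):

```
buf = ReadBuffer(...);                  /* may allocate or ERROR - fine here */
LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);

START_CRIT_SECTION();
/* ... modify the page, MarkBufferDirty(buf), emit WAL ... */
END_CRIT_SECTION();

UnlockReleaseBuffer(buf);
```

The difficulty for multixacts is knowing which buffers to pin up front, without holding MultiXactGenLock across the whole operation.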
\r\n\r\nA potential fix for this issue is to hold on to MultiXactGenLock until we successfully read and write to the buffer, to ensure no but this would cause ReadBuffer to become a bottleneck as no other backends could access the MultiXact state data.\r\n\r\nWe should figure out a way to allow ReadBuffer calls in critical sections specifically for multixacts, as the current behavior is to panic when multixact data write operations fail. \r\n\r\nI would appreciate your thoughts on how we could proceed here.\r\n\r\n\r\nP.S, Ian, thanks for reminding me to rebase the patch!\r\n\r\nSincerely,\r\n\r\nRishu Bagga, \r\n\r\nAmazon Web Services (AWS)", "msg_date": "Tue, 22 Nov 2022 06:02:10 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\r\n\r\n\r\n\r\n\r\n\r\nRebased and updated a new patch addressing the critical section issue in\r\nRecordNewMultliXact.In GetNewMultiXactId, we now make our ReadBuffer\r\ncalls before starting the critical section, but while holding the\r\nMultiXactGenLock, so we always fetch the correct buffers. 
We store them\r\nin an array that is accessed later in RecordNewMultiXact.\r\nThis way we can keep the existing functionality of only holding the MultiXactGenLock while reading in buffers, but can let go when we are writing,\r\nto preserve the existing concurrency paradigm.\r\n\r\n\r\n\r\n\r\n\r\nLet me know your thoughts on this approach.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nSincerely,\r\n\r\n\r\n\r\nRishu Bagga, Amazon Web Services (AWS)", "msg_date": "Thu, 15 Dec 2022 23:16:45 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Fri, 16 Dec 2022 at 04:47, Bagga, Rishu <bagrishu@amazon.com> wrote:\n> Rebased and updated a new patch addressing the critical section issue in\n> RecordNewMultliXact.In GetNewMultiXactId, we now make our ReadBuffer\n> calls before starting the critical section, but while holding the\n> MultiXactGenLock, so we always fetch the correct buffers. 
We store them\n> in an array that is accessed later in RecordNewMultiXact.\n> This way we can keep the existing functionality of only holding the MultiXactGenLock while reading in buffers, but can let go when we are writing,\n> to preserve the existing concurrency paradigm.\n> Let me know your thoughts on this approach.\n>\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n92957ed98c5c565362ce665266132a7f08f6b0c0 ===\n=== applying patch ./slru_to_buffer_cache_with_page_headers_v3.patch\n...\npatching file src/include/catalog/catversion.h\nHunk #1 FAILED at 57.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/include/catalog/catversion.h.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3514.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 3 Jan 2023 18:35:21 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": ">On 1/3/23, 5:06 AM, \"vignesh C\" <vignesh21@gmail.com> wrote:\r\n>>On Fri, 16 Dec 2022 at 04:47, Bagga, Rishu <bagrishu@amazon.com> \r\n>>wrote:\r\n>>Rebased and updated a new patch addressing the critical section issue in\r\n>>RecordNewMultliXact.In GetNewMultiXactId, we now make our ReadBuffer\r\n>>calls before starting the critical section, but while holding the\r\n>>MultiXactGenLock, so we always fetch the correct buffers. 
We store \r\n>>them in an array that is accessed later in RecordNewMultiXact.\r\n>>This way we can keep the existing functionality of only holding the \r\n>>MultiXactGenLock while reading in buffers, but can let go when we are \r\n>>writing, to preserve the existing concurrency paradigm.\r\n>>Let me know your thoughts on this approach.\r\n\r\n\r\n> The patch does not apply on top of HEAD as in [1], please post a \r\n>rebased patch:\r\n>=== Applying patches on top of PostgreSQL commit ID\r\n>92957ed98c5c565362ce665266132a7f08f6b0c0 ===\r\n>=== applying patch ./slru_to_buffer_cache_with_page_headers_v3.patch\r\n...\r\n>patching file src/include/catalog/catversion.h\r\n>Hunk #1 FAILED at 57.\r\n>1 out of 1 hunk FAILED -- saving rejects to file\r\n>src/include/catalog/catversion.h.rej\r\n\r\n>[1] - http://cfbot.cputube.org/patch_41_3514.log\r\n\r\n>Regards,\r\n>Vignesh\r\n\r\nHi all,\r\n\r\nRebased the patch, and fixed a bug I introduced in the previous patch in \r\nTrimCLOG. \r\n\r\nWe ran a quick set of pgbench tests and observed no regressions. Here \r\nare the numbers:\r\n\r\n3 trials, with scale 10,000, 350 clients, 350 threads, over 30 minutes:\r\n\r\nMedian TPS:\r\n\r\nControl

Trial 1: 58331.0\r\nTrial 2: 57191.0\r\nTrial 3: 57101.3\r\n\r\nAverage of Medians: 57541.1\r\n\r\nSLRUs to BufferCache + Page Headers:\r\n\r\nTrial 1: 62605.0\r\nTrial 2: 62891.2\r\nTrial 3: 59906.3\r\n\r\nAverage of Medians: 61800.8\r\n\r\nMachine Specs:\r\n\r\nDriver\r\n\r\nInstance: m5d.metal\r\nArchitecture x86_64\r\nCPUs: 96\r\nRAM: 384 GiB\r\nOS: Amazon Linux 2\r\n\r\n\r\nServer\r\n\r\nInstance: r5dn.metal\r\nArchitecture x86_64\r\nCPUs: 64\r\nRAM: 500GiB\r\nOS: Amazon Linux 2\r\n\r\n\r\nLooking forward to your feedback on this.\r\n\r\nSincerely,\r\nRishu Bagga, Amazon Web Services (AWS)\r\n\r\n\r\n", "msg_date": "Tue, 10 Jan 2023 19:05:21 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "> Hi all,\r\n\r\n> Rebased the patch, and fixed a bug I introduced in the previous patch \r\n> in \r\n> TrimCLOG. \r\n\r\n> Looking forward to your feedback on this.\r\n\r\n\r\nHi all, \r\n\r\nRebased patch as per latest community changes since last email. \r\n\r\n\r\nSincerely,\r\n\r\nRishu Bagga, Amazon Web Services (AWS)", "msg_date": "Mon, 6 Feb 2023 19:12:47 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\n\n> From 098f37c0a17fc32a94bff43817414e01fcfb234f Mon Sep 17 00:00:00 2001\n> From: Rishu Bagga <bagrishu@amazon.com>\n> Date: Thu, 15 Sep 2022 00:55:25 +0000\n> Subject: [PATCH] slru to buffercache + page headers + upgrade\n> \n> ---\n> contrib/amcheck/verify_nbtree.c | 2 +-\n> [...]\n> 65 files changed, 2176 insertions(+), 3258 deletions(-)\n\nUnfortunately a quite large patch, with this little explanation, is hard to\nreview. I could read through the entire thread to try to figure out what this\nis doing, but I really shouldn't have to.\n\nYou're changing quite fundamental APIs across the tree. 
Why is that required\nfor the topic at hand? Why is it worth doing that, given it'll break loads of\nextensions?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Feb 2023 12:30:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\n\nOn 2023-02-06 19:12:47 +0000, Bagga, Rishu wrote:\n> Rebased patch as per latest community changes since last email. \n\nThis version doesn't actually build.\n\nhttps://cirrus-ci.com/task/4512310190931968\n\n[19:43:20.131] FAILED: src/test/modules/test_slru/test_slru.so.p/test_slru.c.o \n[19:43:20.131] ccache cc -Isrc/test/modules/test_slru/test_slru.so.p -Isrc/include -I../src/include -Isrc/include/catalog -Isrc/include/nodes -Isrc/include/utils -Isrc/include/storage -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g -fno-strict-aliasing -fwrapv -fexcess-precision=standard -D_GNU_SOURCE -Wmissing-prototypes -Wpointer-arith -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -Wdeclaration-after-statement -Wno-format-truncation -Wno-stringop-truncation -fPIC -pthread -fvisibility=hidden -MD -MQ src/test/modules/test_slru/test_slru.so.p/test_slru.c.o -MF src/test/modules/test_slru/test_slru.so.p/test_slru.c.o.d -o src/test/modules/test_slru/test_slru.so.p/test_slru.c.o -c ../src/test/modules/test_slru/test_slru.c\n[19:43:20.131] ../src/test/modules/test_slru/test_slru.c:47:8: error: unknown type name ‘SlruCtlData’\n[19:43:20.131] 47 | static SlruCtlData TestSlruCtlData;\n[19:43:20.131] | ^~~~~~~~~~~\n[19:43:20.131] ../src/test/modules/test_slru/test_slru.c:57:19: error: unknown type name ‘SlruCtl’\n[19:43:20.131] 57 | test_slru_scan_cb(SlruCtl ctl, char *filename, int segpage, void *data)\n[19:43:20.131] | ^~~~~~~\n\n...\n\nAndres\n\n\n", "msg_date": "Tue, 7 Feb 2023 12:20:19 
-0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "> Unfortunately a quite large patch, with this little explanation, is\r\n> hard to review. I could read through the entire thread to try to \r\n> figure out what this is doing, but I really shouldn't have to.\r\n \r\n> You're changing quite fundamental APIs across the tree. Why is that \r\n> required for the topic at hand? Why is it worth doing that, given \r\n> it'll break loads of extensions?\r\n \r\nHi Andres, \r\n\r\nThanks for your response.\r\n\r\nTo summarize, our underlying effort is to move the SLRUs to the buffer \r\ncache. We were working with Thomas Munro off a patch he introduced here\r\n[1]. Munro’s patch moves SLRUs to the buffer cache by introducing the\r\npseudo db id 9 to denote SLRU pages, but maintains the current “raw”\r\ndata format of SLRU pages. The addition of page headers in our patch\r\nresolves this issue [2] Munro mentions in this email [3]. \r\n\r\nHeikki Linnakangas then introduced patch on top of Munro’s patch that \r\nmodularizes the storage manager, allowing SLRUs to use it [4]. Instead\r\nof using db id 9, SLRUs use spcOid 9, and each SLRU is its own relation.\r\nHere, Heikki simplifies the storage manager by having each struct be \r\nresponsible for just one fork of a relation; thus increasing\r\nextensibility of the smgr API, including for SLRUs. [5] We integrated\r\nour changes introducing page headers for SLRU pages, and upgrade logic\r\nto Heikki’s latest patch. \r\n \r\n> > Rebased patch as per latest community changes since last email.\r\n\r\n> This version doesn't actually build.\r\n> https://cirrus-ci.com/task/4512310190931968\r\n> [19:43:20.131] FAILED: \r\n> src/test/modules/test_slru/test_slru.so.p/test_slru.c.o\r\n\r\n\r\nAs for the build failures, I was using make install to build, and make\r\ncheck for regression tests. 
The build failures in this link are from\r\nunit tests dependent on custom SLRUs, which would no longer apply. I’ve\r\nattached another patch that removes these tests.\r\n\r\n[1] https://www.postgresql.org/message\r\n-id/CA%2BhUKGKCkbtOutcz5M8Z%3D0pgAkwdiR57Lxk7803rGsgiBNST6w%40mail.gmail\r\n.com\r\n\r\n[2] “I needed to disable checksums and in-page LSNs, since SLRU pages\r\nhold raw data with no header. We'd probably eventually want regular\r\n(standard? formatted?) pages (the real work here may be implementing\r\nFPI for SLRUs so that checksums don't break your database on torn\r\nwrites). ” \r\n\r\n[3] https://www.postgresql.org/message-id/CA%2BhUKGKAYze99B-jk9NoMp-\r\n2BDqAgiRC4oJv%2BbFxghNgdieq8Q%40mail.gmail.com\r\n\r\n[4] https://www.postgresql.org/message-id/flat/128709bc-992c-b57a-7174-\r\n098433b7faa4%40iki.fi#a78d6250327795e95b02e9305e2d153e\r\n\r\n[5] “I think things would be more clear if we \r\nunbundled the forks at the SMGR level, so that we would have a separate \r\nSMgrRelation struct for each fork. And let's rename it to SMgrFile to \r\nmake the role more clear. I think that would reduce the confusion when \r\nwe start using it for SLRUs; an SLRU is not a relation, after all. md.c \r\nwould still segment each logical file into 1 GB segments, but it would \r\nnot need to deal with forks.”", "msg_date": "Wed, 8 Feb 2023 20:04:52 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\n\nOn 2023-02-08 20:04:52 +0000, Bagga, Rishu wrote:\n> To summarize, our underlying effort is to move the SLRUs to the buffer\n> cache. We were working with Thomas Munro off a patch he introduced here\n> [1]. Munro’s patch moves SLRUs to the buffer cache by introducing the\n> pseudo db id 9 to denote SLRU pages, but maintains the current “raw”\n> data format of SLRU pages. 
The addition of page headers in our patch\n> resolves this issue [2] Munro mentions in this email [3].\n>\n> Heikki Linnakangas then introduced patch on top of Munro’s patch that\n> modularizes the storage manager, allowing SLRUs to use it [4]. Instead\n> of using db id 9, SLRUs use spcOid 9, and each SLRU is its own relation.\n> Here, Heikki simplifies the storage manager by having each struct be\n> responsible for just one fork of a relation; thus increasing\n> extensibility of the smgr API, including for SLRUs. [5] We integrated\n> our changes introducing page headers for SLRU pages, and upgrade logic\n> to Heikki’s latest patch.\n\nThat doesn't explain the bulk of the changes here. Why e.g. does any of the\nabove require RelationGetSmgr() to handle the fork as well? Why do we need\nsmgrtruncate_multi()? And why does all of this happens as one patch?\n\nAs is, with a lot of changes mushed together, without distinct explanations\nfor why is what done, this patch is essentially unreviewable. It'll not make\nprogress in this form.\n\nIt doesn't help that much to reference prior discussions in the email I'm\nresponding to - the patches need to be mostly understandable on their own,\nwithout reading several threads. And there needs to be explanations in the\ncode as well, otherwise we'll have no chance to understand any of this in a\nfew years.\n\n\n> > > Rebased patch as per latest community changes since last email.\n>\n> > This version doesn't actually build.\n> > https://cirrus-ci.com/task/4512310190931968\n> > [19:43:20.131] FAILED:\n> > src/test/modules/test_slru/test_slru.so.p/test_slru.c.o\n>\n>\n> As for the build failures, I was using make install to build, and make\n> check for regression tests.\n\nThere's a *lot* more tests than the main regression tests. You need to make\nsure that they all continue to work. 
Enable tap tests and use check-world.\n\n\n> The build failures in this link are from unit tests dependent on custom\n> SLRUs, which would no longer apply. I’ve attached another patch that removes\n> these tests.\n\nI think you'd need to fix those tests, rather than just removing them.\n\nI suspect we're going to continue to want SLRU specific stats, but your patch\nalso seems to remove those.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Feb 2023 12:26:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On 08/02/2023 22:26, Andres Freund wrote:\n> On 2023-02-08 20:04:52 +0000, Bagga, Rishu wrote:\n>> To summarize, our underlying effort is to move the SLRUs to the buffer\n>> cache. We were working with Thomas Munro off a patch he introduced here\n>> [1]. Munro’s patch moves SLRUs to the buffer cache by introducing the\n>> pseudo db id 9 to denote SLRU pages, but maintains the current “raw”\n>> data format of SLRU pages. The addition of page headers in our patch\n>> resolves this issue [2] Munro mentions in this email [3].\n>>\n>> Heikki Linnakangas then introduced patch on top of Munro’s patch that\n>> modularizes the storage manager, allowing SLRUs to use it [4]. Instead\n>> of using db id 9, SLRUs use spcOid 9, and each SLRU is its own relation.\n>> Here, Heikki simplifies the storage manager by having each struct be\n>> responsible for just one fork of a relation; thus increasing\n>> extensibility of the smgr API, including for SLRUs. [5] We integrated\n>> our changes introducing page headers for SLRU pages, and upgrade logic\n>> to Heikki’s latest patch.\n> \n> That doesn't explain the bulk of the changes here. Why e.g. does any of the\n> above require RelationGetSmgr() to handle the fork as well? Why do we need\n> smgrtruncate_multi()? 
And why does all of this happens as one patch?\n> \n> As is, with a lot of changes mushed together, without distinct explanations\n> for why is what done, this patch is essentially unreviewable. It'll not make\n> progress in this form.\n> \n> It doesn't help that much to reference prior discussions in the email I'm\n> responding to - the patches need to be mostly understandable on their own,\n> without reading several threads. And there needs to be explanations in the\n> code as well, otherwise we'll have no chance to understand any of this in a\n> few years.\n\nAgreed. I rebased this over my rebased patch set from the other thread \nat \nhttps://www.postgresql.org/message-id/02825393-615a-ac81-0f05-f3cc2e6f875f%40iki.fi. \nAttached is a new patch with only the changes relative to that patch set.\n\nThis is still messy, but now I can see what the point is: make the \nSLRUs, which are tracked in the main buffer pool thanks to the other \npatches, use the standard page header.\n\nI'm not sure if I like that or not. I think we should clean up and \nfinish the other patches that this builds on first, and then decide if \nwe want to use the standard page header for the SLRUs or not. And if we \ndecide that we want the SLRU pages to have a page header, clean this up \nor rewrite it from scratch.\n\n- Heikki", "msg_date": "Mon, 27 Feb 2023 15:56:33 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Mon, Feb 27, 2023 at 8:56 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I'm not sure if I like that or not. I think we should clean up and\n> finish the other patches that this builds on first, and then decide if\n> we want to use the standard page header for the SLRUs or not. 
And if we\n> decide that we want the SLRU pages to have a page header, clean this up\n> or rewrite it from scratch.\n\nI'm not entirely sure either, but I think the idea has some potential.\nIf SLRU pages have headers, that means that they have LSNs, and\nperhaps then we could use those LSNs to figure out when they're safe\nto write to disk, instead of ad-hoc mechanisms. See SlruSharedData's\ngroup_lsn field.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Feb 2023 11:08:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Feb 27, 2023 at 8:56 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > I'm not sure if I like that or not. I think we should clean up and\n> > finish the other patches that this builds on first, and then decide if\n> > we want to use the standard page header for the SLRUs or not. And if we\n> > decide that we want the SLRU pages to have a page header, clean this up\n> > or rewrite it from scratch.\n> \n> I'm not entirely sure either, but I think the idea has some potential.\n> If SLRU pages have headers, that means that they have LSNs, and\n> perhaps then we could use those LSNs to figure out when they're safe\n> to write to disk, instead of ad-hoc mechanisms. See SlruSharedData's\n> group_lsn field.\n\nI agree that it's got potential and seems like the right direction to go\nin. 
That would also allow for checksums for SLRUs and possibly support\nfor encryption which leverages the LSN and for a dynamic page feature\narea which could allow for an extended checksum or perhaps authenticated\nencryption with additional authenticated data.\n\nThanks,\n\nStephen", "msg_date": "Mon, 27 Feb 2023 13:24:38 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": ">I think you'd need to fix those tests, rather than just removing them.\r\n>I suspect we're going to continue to want SLRU specific stats, but \r\n>your patch also seems to remove those.\r\n\r\n>This is still messy, but now I can see what the point is: make the \r\n>SLRUs, which are tracked in the main buffer pool thanks to the other \r\n>patches, use the standard page header.\r\n\r\n>I'm not sure if I like that or not. I think we should clean up and \r\n>finish the other patches that this builds on first, and then decide if \r\n>we want to use the standard page header for the SLRUs or not. And if \r\n>we decide that we want the SLRU pages to have a page header, clean \r\n>this up or rewrite it from scratch.\r\n\r\n>I'm not entirely sure either, but I think the idea has some potential.\r\n>If SLRU pages have headers, that means that they have LSNs, and\r\n>perhaps then we could use those LSNs to figure out when they're safe\r\n>to write to disk, instead of ad-hoc mechanisms. See SlruSharedData's\r\n>group_lsn field.\r\n\r\n>I agree that it's got potential and seems like the right direction to \r\n>go in. That would also allow for checksums for SLRUs and possibly \r\n>support for encryption which leverages the LSN and for a dynamic page \r\n>feature area which could allow for an extended checksum or perhaps \r\n>authenticated encryption with additional authenticated data.\r\n\r\n\r\n\r\nHi all, \r\n\r\nThank you for the feedback on the last patch. 
I have prepared a new set \r\nof patches here that aim to address these concerns; cleaning the patch \r\nup, and breaking them into smaller parts. Each functional change is \r\nbroken into its own patch. To keep the change as minimal as possible, I \r\nhave removed the tangential changes from Heikki Linnakangas’ patch \r\nmodifying the storage manager, and kept the changes limited to moving \r\nSLRU components to buffer cache, and page header changes. \r\n\r\nThe first patch is the original patch that moves some of the SLRUs to \r\nthe buffercache, which Thomas Munro posted in [1], rebased to the latest \r\ncommit.\r\n\r\nThe second patch is a patch which fixes problems with trying to allocate \r\nmemory in critical sections in the commit log, and multixacts in Munro’s \r\npatch. In Munro’s patch - there are three places where we may need to \r\nallocate memory in a Critical Section, that I have addressed.\r\n\r\n1. When recording a transaction status, we may need to allocate memory \r\nfor a buffer pin to bring the clog page into the buffer cache. I added a \r\ncall to ResourceOwnerEnlargeBuffers before entering the critical section \r\nto resolve this issue.\r\n\r\n2. When recording a transaction status, we may need to allocate memory \r\nfor an entry in the storage manager hash table for the commit log in \r\nsmgropen. I added a function “VerifyClogLocatorInHashTable” which forces \r\nan smgropen call that does this if needed. This function is called \r\nbefore entering the Critical Section.\r\n\r\n3. When recording a multixact, we enter a critical section while writing \r\nthe multixact members. Now that the multixact pages are stored in the \r\nbuffer cache, we may need to allocate memory here to retrieve a buffer \r\npage. I modified GetNewMultiXactId to also prefetch all the buffers we \r\nwill need before we enter critical section, so that we do not need to \r\nmake ReadBuffer calls while in the multixact critical section. 
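The pattern shared by all three fixes above — reserve everything a critical section might need before entering it, so that nothing inside it can fail on allocation — can be sketched in a self-contained way. The following is illustrative C only, not PostgreSQL source: `WorkState`, `reserve_buffers`, and `record_members` are invented names standing in for the pre-critical-section `ReadBuffer`/`ResourceOwnerEnlargeBuffers` calls and the no-allocation section they protect.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Illustrative sketch only -- not PostgreSQL source code.  It models the
 * pattern above: acquire anything that could allocate memory (buffer pins,
 * hash table entries) before the critical section starts, so the section
 * itself can never fail on allocation.
 */

#define MAX_PINS 8

typedef struct WorkState
{
	char	   *pins[MAX_PINS];	/* stand-ins for pre-read, pinned buffers */
	int			npins;
	int			in_critical_section;
} WorkState;

/* Analogue of the pre-critical-section ReadBuffer / ResourceOwner calls */
static void
reserve_buffers(WorkState *st, int needed)
{
	assert(!st->in_critical_section);	/* allocation is allowed only here */
	for (int i = 0; i < needed && st->npins < MAX_PINS; i++)
		st->pins[st->npins++] = calloc(1, 64);
}

/* Analogue of the critical section: touches only pre-reserved resources */
static void
record_members(WorkState *st, const char *payload)
{
	st->in_critical_section = 1;
	for (int i = 0; i < st->npins; i++)
		strncpy(st->pins[i], payload, 63);	/* no allocation in here */
	st->in_critical_section = 0;
}
```

The guard flag enforces the same invariant the real code relies on: all buffer-fetching and allocation work happens while it is still legal to fail, and the critical section only writes into what was already reserved.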
\r\n\r\n\r\nThe third patch brings back the SLRU control structure, to keep it \r\nas an extensible feature for now, and renames the handler for the \r\ncomponents we are moving into the buffercache to NREL (short for non \r\nrelational). nrel.c is essentially a copy of Munro’s modified slru.c, \r\nand I have restored the original slru.c. This allows for existing \r\nextensions utilizing SLRUs to keep working, and the test_slru unit tests \r\nto pass, as well as introducing a more accurate name for the handling of \r\ncomponents (CLOG, Multixact Offsets/Members, Async, Serial, Subtrans) \r\nthat are no longer handled by an SLRU, but are still non relational \r\ncomponents. To address Andres’s concern - I modified the slru stats test \r\ncode to still track all these current components and maintain the \r\nbehavior, and confirmed as those tests pass as well.\r\n\r\n\r\nThe fourth patch adds the page headers to these Non Relational (NREL) \r\ncomponents, and provides the upgrade story to rewrite the clog and \r\nmultixact files with page headers across upgrades.\r\n\r\nWith the changes from all four patches, they pass all tests with make \r\ninstallcheck-world, as well as test_slru.\r\n\r\n\r\nI hope these patches are easier to read and review, and would appreciate \r\nany feedback. 
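To make the fourth patch's upgrade rewrite concrete, here is a simplified, self-contained model — not the actual patch, and `MockPageHeader` is an invented stand-in for a standard page header, not PostgreSQL's real layout. It shows why adding a header forces a physical rewrite: each page loses header-sized payload capacity, so the old raw bytes must be reflowed into new pages.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Simplified model only -- not the actual patch.  A raw, headerless 8 KB
 * page is rewritten into a page that begins with a header, so each new
 * page carries fewer payload bytes and the remainder must spill into the
 * next output page.  The header fields are invented stand-ins.
 */

#define PAGE_SIZE 8192

typedef struct MockPageHeader
{
	uint64_t	lsn;			/* page LSN: lets writeback obey WAL-before-data */
	uint16_t	checksum;		/* would be computed over the whole page */
	uint16_t	pd_lower;
	uint16_t	pd_upper;
} MockPageHeader;

#define PAYLOAD_PER_PAGE (PAGE_SIZE - sizeof(MockPageHeader))

/*
 * Rewrite one output page: prepend a header, copy as much raw payload as
 * fits, and report how many raw bytes were consumed so the caller can
 * carry the unconsumed tail over into the next output page.
 */
static size_t
rewrite_page(const uint8_t *raw, size_t rawlen, uint8_t out[PAGE_SIZE],
			 uint64_t lsn)
{
	MockPageHeader hdr = {0};
	size_t		n = rawlen < PAYLOAD_PER_PAGE ? rawlen : PAYLOAD_PER_PAGE;

	hdr.lsn = lsn;
	hdr.pd_lower = sizeof(hdr);
	hdr.pd_upper = PAGE_SIZE;
	memset(out, 0, PAGE_SIZE);
	memcpy(out, &hdr, sizeof(hdr));
	memcpy(out + sizeof(hdr), raw, n);
	return n;					/* raw bytes consumed; tail spills onward */
}
```

Rewriting a whole CLOG or multixact segment is then a loop feeding the unconsumed tail of the raw byte stream into successive calls, which is why the files have to be physically rewritten during upgrade rather than copied as-is.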
\r\n\r\n\r\n[1] https://www.postgresql.org/message-id/flat/CA+hUKGKAYze99B-jk9NoMp-2BDqAgiRC4oJv+bFxghNgdieq8Q@mail.gmail.com\r\n\r\n\r\n\r\nRishu Bagga,\r\n\r\nAmazon Web Services (AWS)", "msg_date": "Wed, 5 Jul 2023 22:05:13 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Greetings,\n\n[snipped quoted bits]\n\nWould really be helpful to keep who the author of each quoted snipper\nwas when you quote them; dropping that makes it look like one person\nwrote all of them and that's confusing.\n\n* Bagga, Rishu (bagrishu@amazon.com) wrote:\n> The third patch brings back the the SLRU control structure, to keep it \n> as an extensible feature for now, and renames the handler for the \n> components we are moving into the buffercache to NREL (short for non \n> relational). nrel.c is essentially a copy of Munro’s modified slru.c, \n> and I have restored the original slru.c. This allows for existing \n> extensions utilizing SLRUs to keep working, and the test_slru unit tests \n> to pass, as well as introducing a more accurate name for the handling of \n> components (CLOG, Multixact Offsets/Members, Async, Serial, Subtrans) \n> that are no longer handled by an SLRU, but are still non relational \n> components. To address Andres’s concern - I modified the slru stats test \n> code to still track all these current components and maintain the \n> behavior, and confirmed as those tests pass as well.\n\nHaven't really looked over the patches yet but I wanted to push back on\nthis a bit- you're suggesting that we'd continue to maintain and update\nslru.c for the benefit of extensions which use it while none of the core\ncode uses it? For how long? For my 2c, at least, I'd rather we tell\nextension authors that they need to update their code instead. 
There's\nreasons why we're moving the SLRUs into the main buffer pool and having\npage headers for them and using the existing page code to read/write\nthem and extension authors should be eager to gain those advantages too.\nNot sure how much concern to place on extensions that aren't willing to\nadjust to changes like these.\n\n> The fourth patch adds the page headers to these Non Relational (NREL) \n> components, and provides the upgrade story to rewrite the clog and \n> multixact files with page headers across upgrades.\n\nNice.\n\n> With the changes from all four patches, they pass all tests with make \n> installcheck-world, as well as test_slru.\n\nAwesome, will try to take a look soon.\n\nThanks,\n\nStephen", "msg_date": "Mon, 17 Jul 2023 16:19:31 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "* Frost, Stephen (sfrowt(at)snowman(dot)net) wrote:\r\n\r\n> Haven't really looked over the patches yet but I wanted to push back \r\n> on this a bit- you're suggesting that we'd continue to maintain and \r\n> update slru.c for the benefit of extensions which use it while none of \r\n> the core code uses it? For how long? For my 2c, at least, I'd rather \r\n> we tell extension authors that they need to update their code instead. \r\n> There's reasons why we're moving the SLRUs into the main buffer pool \r\n> and having page headers for them and using the existing page code to \r\n> read/write them and extension authors should be eager to gain those \r\n> advantages too. Not sure how much concern to place on extensions that\r\n> aren't willing to adjust to changes like these.\r\n\r\n\r\nHi Stephen,\r\n\r\nThanks for your response. I proposed this version of the patch with the\r\nidea to make the changes gradual, and to minimize disruption of existing\r\nfunctionality, with the idea of eventually deprecating the SLRUs. 
If the\r\ncommunity is okay with completely removing the extensible SLRU\r\nmechanism, we don't have any objection to it either.\r\n\r\n\r\nOn another note, I have also attached an updated version of the last\r\npatch-set which is applicable on the latest commits.\r\n\r\nSincerely, \r\n\r\nRishu Bagga, Amazon Web Services (AWS)", "msg_date": "Fri, 18 Aug 2023 08:12:41 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Fri, Aug 18, 2023 at 08:12:41AM +0000, Bagga, Rishu wrote:\n> * Frost, Stephen (sfrowt(at)snowman(dot)net) wrote:\n>> Haven't really looked over the patches yet but I wanted to push back \n>> on this a bit- you're suggesting that we'd continue to maintain and \n>> update slru.c for the benefit of extensions which use it while none of \n>> the core code uses it? For how long? For my 2c, at least, I'd rather \n>> we tell extension authors that they need to update their code instead. \n>> There's reasons why we're moving the SLRUs into the main buffer pool \n>> and having page headers for them and using the existing page code to \n>> read/write them and extension authors should be eager to gain those \n>> advantages too. Not sure how much concern to place on extensions that\n>> aren't willing to adjust to changes like these.\n> \n> Thanks for your response. I proposed this version of the patch with the\n> idea to make the changes gradual, and to minimize disruption of existing\n> functionality, with the idea of eventually deprecating the SLRUs. If the\n> community is okay with completely removing the extensible SLRU\n> mechanism, we don't have any objection to it either.\n\nI think I agree with Stephen. 
We routinely make changes that require\nupdates to extensions, and I doubt anyone is terribly wild about\nmaintaining two SLRU systems for several years.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Aug 2023 09:15:23 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Fri, Aug 18, 2023 at 12:15 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> I think I agree with Stephen. We routinely make changes that require\n> updates to extensions, and I doubt anyone is terribly wild about\n> maintaining two SLRU systems for several years.\n\nYeah, maintaining two systems doesn't sound like a good idea.\n\nHowever, this would be a big change. I'm not sure how we validate a\nchange of this magnitude. There are both correctness and performance\nconsiderations. I saw there had been a few performance results on the\nthread from Thomas that spawned this thread; but I guess we'd want to\ndo more research. One question is: how do you decide how many buffers\nto use for each SLRU, and how many to leave available for regular\ndata?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Aug 2023 16:23:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Aug 18, 2023 at 12:15 PM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n> > I think I agree with Stephen. We routinely make changes that require\n> > updates to extensions, and I doubt anyone is terribly wild about\n> > maintaining two SLRU systems for several years.\n> \n> Yeah, maintaining two systems doesn't sound like a good idea.\n> \n> However, this would be a big change. 
I'm not sure how we validate a\n> change of this magnitude. There are both correctness and performance\n> considerations. I saw there had been a few performance results on the\n> thread from Thomas that spawned this thread; but I guess we'd want to\n> do more research. One question is: how do you decide how many buffers\n> to use for each SLRU, and how many to leave available for regular\n> data?\n\nAgreed that we'd certainly want to make sure it's all correct and to do\nperformance testing but in terms of how many buffers... isn't much of\nthe point of this that we have data in memory because it's being used\nand if it's not then we evict it? That is, I wouldn't think we'd have\nset parts of the buffer pool for SLRUs vs. regular data but would let\nthe actual demand drive what pages are in the cache and I thought that\nwas part of this proposal and part of the reasoning behind making this\nchange.\n\nThere's certainly an argument to be made that our current cache\nmanagement strategy doesn't account very well for the value of pages\n(eg: btree root pages vs. random heap pages, perhaps) and that'd\ncertainly be a good thing to improve on, but that's independent of this.\nIf it's not, then that's certainly moving the goal posts a very long way\nin terms of this effort.\n\nThanks,\n\nStephen", "msg_date": "Thu, 24 Aug 2023 15:28:44 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Hi,\n\n> Agreed that we'd certainly want to make sure it's all correct and to do\n> performance testing but in terms of how many buffers... isn't much of\n> the point of this that we have data in memory because it's being used\n> and if it's not then we evict it? That is, I wouldn't think we'd have\n> set parts of the buffer pool for SLRUs vs. 
regular data but would let\n> the actual demand drive what pages are in the cache and I thought that\n> was part of this proposal and part of the reasoning behind making this\n> change.\n>\n> There's certainly an argument to be made that our current cache\n> management strategy doesn't account very well for the value of pages\n> (eg: btree root pages vs. random heap pages, perhaps) and that'd\n> certainly be a good thing to improve on, but that's independent of this.\n> If it's not, then that's certainly moving the goal posts a very long way\n> in terms of this effort.\n\nDuring the triage of the patches submitted for the September CF [1] I\nnoticed that the corresponding CF entry [2] has two threads linked.\nOnly the first thread was used by CF bot [3], also Heikki and Thomas\nwere listed as the authors. The patch from the first thread rotted and\nwas not updated for some time which resulted in marking the patch as\nRwF for now [4]\n\nIt looks like the patch in *this* thread was never registered on the\ncommitfest and/or tested by CF bot, unless I'm missing something.\nUnfortunately it's a bit late to register it for the September CF\nespecially considering the fact that it doesn't apply at the moment.\n\nThis being said, please consider submitting the patch for the upcoming\nCF. Also, please consider joining the efforts and having one thread\nwith a single patchset rather than different threads with different\ncompeting patches. This will simplify the work of the reviewers a lot.\n\nPersonally I would suggest taking one step back and agree on a\nparticular RFC first and then continue working on a single patchset\naccording to this RFC. 
We did it in the past in similar cases and this\napproach proved to be productive.\n\n[1]: https://postgr.es/m/0737f444-59bb-ac1d-2753-873c40da0840%40eisentraut.org\n[2]: https://commitfest.postgresql.org/44/3514/\n[3]: http://cfbot.cputube.org/\n[4]: https://postgr.es/m/CAJ7c6TN%3D1EF1bTA6w8W%3D0e_Bj%2B-jsiHK0ap1uC_ZUGjwu%2B4JGw%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 4 Sep 2023 18:57:37 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Thu, Aug 24, 2023 at 3:28 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Agreed that we'd certainly want to make sure it's all correct and to do\n> performance testing but in terms of how many buffers... isn't much of\n> the point of this that we have data in memory because it's being used\n> and if it's not then we evict it? That is, I wouldn't think we'd have\n> set parts of the buffer pool for SLRUs vs. regular data but would let\n> the actual demand drive what pages are in the cache and I thought that\n> was part of this proposal and part of the reasoning behind making this\n> change.\n\nI think that it's not quite that simple. In the regular buffer pool,\naccess to pages is controlled by buffer pins and buffer content locks,\nbut these mechanisms don't exist in the same way in the SLRU code. But\nbuffer pins drive usage counts which drive eviction decisions. So if\nyou move SLRU data into the main buffer pool, you either need to keep\nthe current locking regime and use some new logic to decide how much\nof shared_buffers to bequeath to the SLRU pools, OR you need to make\nSLRU access use buffer pins and buffer content locks. 
If you do the\nlatter, I think you substantially increase the cost of an uncontended\nSLRU buffer access, because you now need to pin the buffer, and\nthen take and release the content lock, and then release the pin;\nwhereas today you can do all that by just taking and releasing the\nSLRU's lwlock. That's more atomic operations, and hence more costly, I\nthink. But even if not, it could perform terribly if SLRU buffers\nbecome more vulnerable to eviction than they are at present, or\nalternatively if they take over too much of the buffer pool and force\nother important data out.\n\nSLRUs are a performance hotspot, so even relatively minor changes to\ntheir performance characteristics can, I believe, have pretty\nnoticeable effects on performance overall.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Sep 2023 11:20:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Alekseev, Aleksander (aleksander@timescale.com) wrote:\r\n\r\n> It looks like the patch in *this* thread was never registered on the\r\n> commitfest and/or tested by CF bot, unless I'm missing something.\r\n> Unfortunately it's a bit late to register it for the September CF\r\n> especially considering the fact that it doesn't apply at the moment.\r\n\r\n> This being said, please consider submitting the patch for the upcoming\r\n> CF. Also, please consider joining the efforts and having one thread\r\n> with a single patchset rather than different threads with different\r\n> competing patches. This will simplify the work of the reviewers a lot.\r\n\r\n\r\nHi Aleksander,\r\n\r\nThank you for letting me know about this.
I’ll follow up on the original \r\nthread within the next couple weeks with a new and updated patch for the \r\nnext commitfest.\r\n\r\n\r\nSincerely,\r\n\r\nRishu Bagga, Amazon Web Services (AWS)\r\n\r\n\r\n", "msg_date": "Thu, 7 Sep 2023 08:03:16 +0000", "msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>", "msg_from_op": true, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Aug 24, 2023 at 3:28 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Agreed that we'd certainly want to make sure it's all correct and to do\n> > performance testing but in terms of how many buffers... isn't much of\n> > the point of this that we have data in memory because it's being used\n> > and if it's not then we evict it? That is, I wouldn't think we'd have\n> > set parts of the buffer pool for SLRUs vs. regular data but would let\n> > the actual demand drive what pages are in the cache and I thought that\n> > was part of this proposal and part of the reasoning behind making this\n> > change.\n> \n> I think that it's not quite that simple. In the regular buffer pool,\n> access to pages is controlled by buffer pins and buffer content locks,\n> but these mechanisms don't exist in the same way in the SLRU code. But\n> buffer pins drive usage counts which drive eviction decisions. So if\n> you move SLRU data into the main buffer pool, you either need to keep\n> the current locking regime and use some new logic to decide how much\n> of shared_buffers to bequeath to the SLRU pools, OR you need to make\n> SLRU access use buffer pins and buffer content locks. If you do the\n> latter, I think you substantially increase the cost of an uncontended\n> SLRU buffer access, because you now need to pin the buffer, and and\n> then take and release the content lock, and then release the pin;\n> whereas today you can do all that by just taking and release the\n> SLRU's lwlock. 
That's more atomic operations, and hence more costly, I\n> think. But even if not, it could perform terribly if SLRU buffers\n> become more vulnerable to eviction than they are at present, or\n> alternatively if they take over too much of the buffer pool and force\n> other important data out.\n\nAn SLRU buffer access does also update the cur_lru_count for the SLRU,\nalong with the per-page page_lru_count, but those are 32bit and we don't\nenforce that they're done in order, so presumably those are less\nexpensive than the pinning and usage count updates.\n\nThis thread started with the issue that our current SLRUs are relatively\nsmall though and simply increasing their size would lead to issues as\nwe're just doing simple things like a linear search through them all at\ntimes, or at least that's more-or-less what I understood from [1]. More\ndetails on the specific 'scaling and sizing challenges' would be nice to\nhave. The current patches were at least claimed to improve performance\nwhile also using ReadBuffer_common [2]. Having an idea as to what is\nspecifically leading to that would be interesting though with all these\nchanges likely non-trivial. pgbench may not be the best way to measure\nthis, but it's still interesting to see an improvement like that.\n\nCertainly one concern about using the regular buffer pool is that\nfinding a victim page can be expensive and having that as part of an\nSLRU access could be pretty painful. Though we also have that problem\nelsewhere too.\n\nIf we're going to effectively segregate the buffer pool into SLRU parts\nvs. everything else and then use the existing strategies for SLRUs and\nhave that be different from what everything else is using ... then\nthat doesn't seem like it's really changing things. 
What would be the\npoint of moving the SLRUs into the main buffer pool then?\n\n> SLRUs are a performance hotspot, so even relatively minor changes to\n> their performance characteristics can, I believe, have pretty\n> noticeable effects on performance overall.\n\nAgreed, we certainly need to have a plan for how to performance test\nthis change and should try to come up with some 'worst case' tests.\n\nThanks,\n\nStephen\n\n[1]: https://postgr.es/m/EFAAC0BE-27E9-4186-B925-79B7C696D5AC%40amazon.com\n[2]: https://postgr.es/m/A09EAE0D-0D3F-4A34-ADE9-8AC1DCBE7D57%40amazon.com", "msg_date": "Fri, 8 Sep 2023 08:56:48 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" }, { "msg_contents": "On Fri, Sep 8, 2023 at 8:56 AM Stephen Frost <sfrost@snowman.net> wrote:\n> If we're going to effectively segregate the buffer pool into SLRU parts\n> vs. everything else and then use the existing strategies for SLRUs and\n> have that be different from what everything else is using ... then\n> that doesn't seem like it's really changing things. What would be the\n> point of moving the SLRUs into the main buffer pool then?\n\nI think that integrating the SLRUs into the same memory allocation as\nshared_buffers could potentially be pretty helpful even if the buffers\nare managed differently. For instance, each SLRU could evaluate (using\nsome algorithm) whether it needs more or fewer buffers than it has\ncurrently and either steal more buffers from shared_buffers or give\nsome back. 
That does seem less elegant than having everything running\nthrough a single system, but getting to the point where everything\nruns through a single system may be difficult, and such an\nintermediate state could have a lot of advantages with, perhaps, less\nrisk of breaking stuff.\n\nAs far as I am aware, the optimal amount of memory for any particular\nSLRU is almost always quite small compared to shared_buffers, but it\ncan be quite large compared to what we allocate currently. To do\nanything about that, we clearly need to fix the algorithms to scale\nbetter. But even once we've done that, I don't think we want to\nallocate the largest amount of clog buffers that anyone could ever\nneed in all instances, and the same for every other SLRU. That seems\nwasteful. We could expose a separate tunable for each one, but that is\na lot for users to tune correctly, and it implies using the same value\nfor the whole lifetime of the server. Letting the value float around\ndynamically would make a lot of sense especially for things like\npg_multixact -- if there's a lot of multixacts, grab some more\nbuffers, if there aren't, release some or even all of them. The same\nkind of thing can happen with other SLRUs -- e.g. as the oldest\nrunning xact gets older, you need more subxact buffers; when\nlong-running transactions end, you need fewer.\n\nAgain, I'm not sure what the right thing to do here actually is, just\nthat I wouldn't be too quick to reject a partial integration into\nshared_buffers.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Sep 2023 11:09:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SLRUs in the main buffer pool - Page Header definitions" } ]
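The pin-and-usage-count interaction described in this thread can be illustrated with a toy model. The sketch below is a simplified clock-sweep along the lines discussed (pins bump a saturating usage count; the sweep decrements counts until it finds a victim), not PostgreSQL's actual buffer manager; the class and names here are invented for illustration only.

```python
# Toy clock-sweep buffer pool illustrating how buffer pins drive
# usage counts, which in turn drive eviction decisions.  This is a
# simplified sketch, not PostgreSQL's real buffer manager code.

class ToyBufferPool:
    MAX_USAGE = 5  # a saturating cap, like PostgreSQL's BM_MAX_USAGE_COUNT

    def __init__(self, nbuffers):
        self.pages = [None] * nbuffers  # page tag held in each slot
        self.usage = [0] * nbuffers     # per-slot usage count
        self.hand = 0                   # clock hand position

    def pin(self, slot):
        # Pinning a buffer bumps its usage count (saturating).
        self.usage[slot] = min(self.usage[slot] + 1, self.MAX_USAGE)

    def evict(self):
        # Sweep the clock hand, decrementing usage counts, until a
        # slot with usage 0 is found; that slot becomes the victim.
        while True:
            slot = self.hand
            self.hand = (self.hand + 1) % len(self.pages)
            if self.usage[slot] == 0:
                return slot
            self.usage[slot] -= 1

pool = ToyBufferPool(4)
for slot, tag in enumerate(["clog-0", "heap-1", "heap-2", "heap-3"]):
    pool.pages[slot] = tag

# A hot SLRU-like page is pinned repeatedly; the others only once.
for _ in range(5):
    pool.pin(0)
for slot in (1, 2, 3):
    pool.pin(slot)

victim = pool.evict()
print(pool.pages[victim])  # heap-1: the hot page survives the sweep
```

The point of the toy is the one made in the thread: if SLRU accesses take buffer pins, frequently-hit SLRU pages accumulate usage counts and resist eviction, but each access pays for the extra pin/unpin bookkeeping.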
[ { "msg_contents": "Hi all,\n(Thomas and Andres in CC.)\n\nAndres complained a couple of days ago about the quantity of logs\nthat can be fed into the TAP tests running pg_regress:\nhttps://www.postgresql.org/message-id/20220603195318.qk4voicqfdhlsnoh@alap3.anarazel.de\n\nThis concerns the TAP tests of pg_upgrade, as well as\n027_stream_regress.pl, where a crash while running the tests would\nshow a couple of megs worth of regression.diffs. Most of the output\ngenerated does not make much sense to have in this case, and the\nparallel schedule makes it harder to spot the exact query involved\nin the crash (if that's a query) while the logs of the backend should\nbe enough to spot what's the problem with the PIDs tracked.\n\nOne idea I got to limit the useless output generated is to check the\nstatus of the cluster after running the regression test suite as\nrestart_after_crash is disabled by default in Cluster.pm, and avoid any \nfollow-up logic in these tests if the cluster is not running anymore,\nas of the attached.\n\nThoughts?\n--\nMichael", "msg_date": "Thu, 23 Jun 2022 14:30:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Reducing logs produced by TAP tests running pg_regress on crash" }, { "msg_contents": "On Thu, Jun 23, 2022 at 02:30:13PM +0900, Michael Paquier wrote:\n> One idea I got to limit the useless output generated is to check the\n> status of the cluster after running the regression test suite as\n> restart_after_crash is disabled by default in Cluster.pm, and avoid any \n> follow-up logic in these tests if the cluster is not running anymore,\n> as of the attached.\n\nSo, this is still an open item..\n\nThomas, any objections about this one? Checking for the status of the\nnode after completing pg_regress still sounds like a good idea to me,\nbecause as restart_after_crash is disabled we would generate a ton of\nlogs coming from regression.diffs for nothing.
On top of that the\nparallel connections make it harder to find which query failed, and the\nlogs of the backend provide enough context already on a hard crash.\n--\nMichael", "msg_date": "Fri, 22 Jul 2022 10:09:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Reducing logs produced by TAP tests running pg_regress on crash" }, { "msg_contents": "On Fri, Jul 22, 2022 at 1:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Jun 23, 2022 at 02:30:13PM +0900, Michael Paquier wrote:\n> > One idea I got to limit the useless output generated is to check the\n> > status of the cluster after running the regression test suite as\n> > restart_after_crash is disabled by default in Cluster.pm, and avoid any\n> > follow-up logic in these tests if the cluster is not running anymore,\n> > as of the attached.\n>\n> So, this is still an open item..\n>\n> Thomas, any objections about this one? Checking for the status of the\n> node after completing pg_regress still sounds like a good idea to me,\n> because as restart_after_crash is disabled we would generate a ton of\n> logs coming from regression.diffs for nothing. On top of that the\n> parallel connections make it harder to find which query failed, and the\n> logs of the backend provide enough context already on a hard crash.\n\nWhat if the clue we need to understand why it crashed was in the\nregression diffs that we didn't dump?\n\nI wonder if we should move the noise suppression check closer to\npg_regress, so that it works also for the \"main\" pg_regress run, not\nonly the one in this new TAP test.
As discussed in this thread,\ninconclusively:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGL7hxqbadkto7e1FCOLQhuHg%3DwVn_PDZd6fDMbQrrZisA%40mail.gmail.com\n\n\n", "msg_date": "Fri, 22 Jul 2022 13:18:34 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reducing logs produced by TAP tests running pg_regress on crash" }, { "msg_contents": "On Fri, Jul 22, 2022 at 01:18:34PM +1200, Thomas Munro wrote:\n> I wonder if we should move the noise suppression check closer to\n> pg_regress, so that it works also for the \"main\" pg_regress run, not\n> only the one in this new TAP test. As discussed in this thread,\n> inconclusively:\n\nYes, perhaps. We could reduce the amount of junk generated in\nregression.diffs in a more centralized way with this approach, and\nI agree that this should use restart_after_crash=off as well as\nsomething to prevent more tests to run if we are not able to connect, \nas you mentioned there. At least this would reduce the spam down to\ntests running in parallel to the session that crashed.\n\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGL7hxqbadkto7e1FCOLQhuHg%3DwVn_PDZd6fDMbQrrZisA%40mail.gmail.com\n\nAh, I forgot about this recent thread. Let's just move the discussion\nthere. Thanks!\n\nI am planning to remove this open item from the list and mark it as\n\"won't fix\", as this is a much older issue.\n--\nMichael", "msg_date": "Sat, 23 Jul 2022 12:07:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Reducing logs produced by TAP tests running pg_regress on crash" } ]
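The guard being weighed in this thread (only dump regression.diffs when the cluster survived the pg_regress run) can be sketched as control flow. The real tests are Perl TAP code using PostgreSQL's test infrastructure; this Python sketch and its helper names are invented purely to illustrate the decision being discussed, not actual test code.

```python
# Sketch of the guard discussed above: after a pg_regress failure,
# only dump regression.diffs when the cluster is still alive.  With
# restart_after_crash disabled, a crash means the diffs are mostly
# noise from every parallel session, while the backend log already
# identifies the crashed PID.  All names here are illustrative.

def report_regress_failure(node_is_running, diffs_path, read_file):
    """Decide what to log after pg_regress reports a failure."""
    if not node_is_running:
        # Hard crash: suppress the (huge) diffs and point at the log.
        return "cluster crashed during the tests; see the backend log"
    # Ordinary test failure: the diffs are the interesting part.
    return read_file(diffs_path)

# Ordinary failure: show the diffs.
ordinary = report_regress_failure(True, "regression.diffs",
                                  lambda path: "- expected\n+ got")
# Crash: suppress them.
crashed = report_regress_failure(False, "regression.diffs",
                                 lambda path: "- expected\n+ got")
print(ordinary)
print(crashed)
```

Thomas's counterpoint in the thread applies directly to the first branch: if the clue to the crash was in the diffs, suppressing them loses it, which is why moving the check closer to pg_regress itself was floated instead.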
[ { "msg_contents": "I have been trying to find active interest for volunteer development of a GPL license style PostgreSQL extension (or default PostgreSQL additions), complete and available, on Sourceforge, PGXN and the public internet, that will support the following:\n\nHigh Precision Arithmetic and Elementary Functions Support\nIn PostgreSQL v14 or beyond.\nHPPM: High Precision PostgreSQL Mathematics.\n\n-The introduction of Integer Z, or Rational Mixed Decimal Q, numbers support in 64 bit PostgreSQL. Via HPZ, and HPQ, original types. In this specification, they are collectively referred to as HPX types. These two types can be spelled in capital letters, or lower case. They are:\n\nHPZ, HPQ or hpz, hpq.\nHPZ(n), HPQ(n), hpz(n), or hpq(n).\n\nHPX types can be declared like TEXT (with no associated value) or with an associated value, like varchar(n). If HPX variables are declared with no associated value, the associated value for that variable is 20.\n\nThere should be range and multirange types corresponding to both HPX types:\n\nHPZRANGE, hpzrange, HPZMULTIRANGE, hpzmultirange.\nHPQRANGE, hpqrange, HPQMULTIRANGE, hpqmultirange.\n\n-The extension could be based on another 3rd party library, and will be written in C, in either case. There is already some support for this kind of mathematics, in terms of its logic, and optimisation, publicly available, in C, as Free Open Source Software. That can be researched and apprehended for this extension and for all the relevant PostgreSQL OS platforms that it (HPPM) is designed for.\n\n-Real numbers are comprised of the values of Integer, non-recurring Rational Numbers and recurring, and/or Irrational Numbers. Recurring numbers can be appropriately truncated, ultimately via another positive Integer value, always at least 1, to obtain a limited and approximating value. The approximating value can really be seen as a finite Rational number, possibly with Integer or Decimal parts, or both.
These numbers may be positive, negative, or zero; they are scalar values that always exist on the one dimensional number line.\n\n-HPX 'associated values' really are a number of relevant positive Integer figures (Precision), that will get stored with each HPX type variable or column type. These are specified at type or variable declaration, before further use. Or the total defaulting precision amount is applied, being 20, as already shown.\n\nPrecision can be accessed and changed by means of coded values being changed. Precision is always apprehended before calculation begins. Precision is used to control number operations, and value output, when any numeric manipulation or processing occurs, since things that may go towards an infinity need to be stopped before then, to be useful.\n\nIf an HPX value is data on its own, without any set precision, it has the corresponding precision amount. If it is inserted into a table column with a different precision, then that precision is applied, be it larger, equal, or less, with no error being thrown.\n\nIf an HPX value, in a PostgreSQL code expression, is sent straight into a RETURN statement or a SELECT statement, without assignment or precision alteration, or is just specified in a comparison expression, then that datum will contain the highest precision value out of any of the others in its expression, by checking the largest one found as the expression is considered, from left to right, within its sub expression.\n\nIf such a precision value cannot be arrived at, since nothing has been specified, because the value is irrational or infinitely recurring, then the default precision value, for truncation to be applied by, will be 20, here, of course.\n\n-This whole system will uphold any precision, certainly ones within a very large range limit, controlled by the already available type for large positive integers, the BIGINT.
It can thereby enumerate digits within the range of\n(+/-)1 to (+/-)9,223,372,036,854,775,807. This is at least between one and positive nine quintillion digit places. More than enough for the speed and scope of today, or maybe tomorrow, and the Desktop PC, as either a client or a server.\n\nNaturally, evaluation will slow down, or not conclude in useful time frames, before those limits, presently. That phenomenon can be allowed, and left to the programmer to deal with or curtail.\n\nA TEXT variable or table column, or even another HPX or numeric type (if the data of the value is in range) can be used to store digit data alone. Mixed Decimals can be broken down into integers and dealt with from there using operators, and reassembled into mixed integers again, if absolutely necessary, although this will be slower and inefficient internally.\n\n--At the point of PostgreSQL code input and execution:\n\nselect pi(1001) as pi;\n\n--HPX types can be declared like TEXT or\n--like varchar(n),\n--Within a table creation command:\n\ncreate table example_table\n(\nid BIGSERIAL PRIMARY KEY,\na HPZ,\nb HPQ(50)\n);\n\nALTER TABLE example_table ALTER COLUMN b TYPE HPQ(30);\n\nINSERT INTO example_table(a,b) VALUES(0, 0.1);\nINSERT INTO example_table(a,b) VALUES(100,1.1);\nINSERT INTO example_table(a,b) VALUES(200,2.2);\nINSERT INTO example_table(a,b) VALUES(300,3.3);\nINSERT INTO example_table(a,b) VALUES(400,4.4);\nINSERT INTO example_table(a,b) VALUES(500,5.5);\nINSERT INTO example_table(a,b) VALUES(600,6.6);\nINSERT INTO example_table(a,b) VALUES(700,7.7);\nINSERT INTO example_table(a,b) VALUES(800,8.8);\nINSERT INTO example_table(a,b) VALUES(900,9.9);\n\n--Or as variables, in some function:\n\ncreate or replace function example_function()\nreturns void\nlanguage plpgsql\nas\n$$\ndeclare\na HPQ;\nb HPQ(2);\nc HPQ(3);\nbegin\na = 0.1;\nb = 0.1;\nc = a*b;\nreturn;\nend;\n$$;\n\n--Range and Multirange Types or Functions or
Operators.\n\nselect hpzrange(1,3) && hpzrange(3,20) AS Intersecting;\n\nselect hpqrange(1.5,3) && hpqrange(3,25.5) AS Mixed_Number_Range;\n\n-Value assignment to a typed variable by =.\n\n-Operators. Base 10 Arithmetic and comparisons support on Base 10 HPZ and HPQ, with casting:\n\n::,=,!=,<>,>,<,>=,<=,+,-,*,/,%,^\n\nThese include full division and integer only division (from type inference, between two HPZ integers, only), with no remainder, and a remainder only calculating operator (for all type circumstances), within all range possibilities of the involved two values under a particular operation.\n\n########################################################################################################################\nThere should be the property of value inversion equality.\nConsider the following source code fragment as an example:\n\na = 1;\nb = 7;\nc = a/b;\noutput(c);\nd = c*b;\noutput(d);\noutput(a == d); //true\n########################################################################################################################\n-REIFIED SUPPORT with broader syntax and operations and phenomena within PostgreSQL. Range and Multirange types, HPX integration with Tables, the between keyword, Array types, Indexing, Variables and related phenomena, the Record type, direct compatibility with the Aggregate and Window functions, and Partitions are all parts of a larger subset that should re-interact with HPZ or HPQ successfully.
HPX types should also be integrated with Range Types, Multirange Types, Operators, their Functions and their Window Functions.\n########################################################################################################################\n\n-Ease of installation support. Particularly for Windows and Linux. *.exe, *.msi or *.rpm, *.deb, *.bin, *.sh installer suffixes for a PostgreSQL installation from one file, each. Installation, Activation and Use instructions should be included, necessary for successful use, and for the uninitiated. The extension should literally just install and be applicable, with no loading command necessary (if possible). For every time the PostgreSQL database process is run, by default.\n\n-Mathematical and Operational functions support:\n\ncast(HPZ as HPQ) returns HPQ;\ncast(HPQ as HPZ) returns HPZ;\ncast(TEXT as HPZ) returns HPZ;\ncast(TEXT as HPQ) returns HPQ;\ncast(HPQ as TEXT) returns TEXT;\ncast(HPZ as TEXT) returns TEXT;\ncast(HPZ as SMALLINT) returns SMALLINT;\ncast(SMALLINT as HPZ) returns HPZ;\ncast(HPZ as INTEGER) returns INTEGER;\ncast(INTEGER as HPZ) returns HPZ;\ncast(HPZ as BIGINT) returns BIGINT;\ncast(BIGINT as HPZ) returns HPZ;\ncast(HPQ as REAL) returns REAL;\ncast(REAL as HPQ) returns HPQ;\ncast(DOUBLE PRECISION as HPQ) returns HPQ;\ncast(HPQ as DOUBLE PRECISION) returns DOUBLE PRECISION;\ncast(HPQ as DECIMAL) returns DECIMAL;\ncast(DECIMAL as HPQ) returns HPQ;\ncast(HPQ as NUMERIC) returns NUMERIC;\ncast(NUMERIC as HPQ) returns HPQ;\n\nsign(HPQ input) returns HPZ;\nabs(HPQ input) returns HPQ;\nceil(HPQ input) returns HPZ;\nfloor(HPQ input) returns HPZ;\nround(HPQ input) returns HPZ;\nfactorial(HPZ input) returns HPZ;\nnCr(HPZ objects, HPZ selectionSize) returns HPZ;\nnPr(HPZ objects, HPZ selectionSize) returns HPZ;\n\nreciprocal(HPQ input) returns HPQ;\npi(BIGINT precision) returns HPQ;\ne(BIGINT precision) returns
HPQ;\npower(HPQ base, HPQ exponent) returns HPQ;\nsqrt(HPQ input) returns HPQ;\nnroot(HPZ theroot, HPQ input) returns HPQ;\nlog10(HPQ input) returns HPQ;\nln(HPQ input) returns HPQ;\nlog2(HPQ input) returns HPQ;\n\nradtodeg(HPQ input) returns HPQ;\ndegtorad(HPQ input) returns HPQ;\nsind(HPQ input) returns HPQ;\ncosd(HPQ input) returns HPQ;\ntand(HPQ input) returns HPQ;\nasind(HPQ input) returns HPQ;\nacosd(HPQ input) returns HPQ;\natand(HPQ input) returns HPQ;\nsinr(HPQ input) returns HPQ;\ncosr(HPQ input) returns HPQ;\ntanr(HPQ input) returns HPQ;\nasinr(HPQ input) returns HPQ;\nacosr(HPQ input) returns HPQ;\natanr(HPQ input) returns HPQ;\n\n-Informative articles on all these things exist at:\n\nPostgreSQL v14 Database Documentation:\nhttps://www.postgresql.org/docs/14/index.html\nComparison Operators: https://en.wikipedia.org/wiki/Relational_operator\nFloor and Ceiling Functions: https://en.wikipedia.org/wiki/Floor_and_ceiling_functions\nArithmetic Operations: https://en.wikipedia.org/wiki/Arithmetic\nInteger Division: https://en.wikipedia.org/wiki/Division_(mathematics)#Of_integers\nModulus Operation: https://en.wikipedia.org/wiki/Modulo_operation\nRounding (Commercial Rounding): https://en.wikipedia.org/wiki/Rounding\nFactorial Operation: https://en.wikipedia.org/wiki/Factorial\nDegrees: https://en.wikipedia.org/wiki/Degree_(angle)\nRadians: https://en.wikipedia.org/wiki/Radian\nElementary Functions: https://en.wikipedia.org/wiki/Elementary_function\nTrigonometry key values and the Unit Circle:\nhttps://courses.lumenlearning.com/boundless-algebra/chapter/trigonometric-functions-and-the-unit-circle/\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------\nEnd of Specification.\n\nIs there anyone involved in pgsql-hackers@lists.postgresql.org who might be able to volunteer to create\na PostgreSQL extension, or include these as default additions to PostgreSQL, as a
volunteer effort?\n\nYours Sincerely,\n\nSergio Minervini\n\nS.M.\n\nSent with [Proton Mail](https://proton.me/) secure email.\nI have been trying to find active interest for volunteer\r\ndevelopment of a GPL license style PostgreSQL extension (or default PostgreSQL additions), complete and\r\navailable, on Sourceforge, PGXN and the public internet, that will\r\nsupport the following:                                                                                                                                                                               High Precision Arithmetic and Elementary Functions SupportIn PostgreSQL v14 or beyond.HPPM: High Precision PostgreSQL Mathematics.-The\r\n introduction of Integer Z, or Rational Mixed Decimal Q, numbers support\r\n in 64 bit PostgreSQL. Via HPZ, and HPQ, original types. In this\r\nspecification, they are collectively referred to as HPX types. These two\r\n types can be spelled in capital letters, or lower case. They are:HPZ, HPQ or hpz, hpq.HPZ(n), HPQ(n), hpz(n), or hpq(n).HPX\r\n types can be declared like TEXT (with no associated value) or with an\r\nassociated value, like varchar(n).  IF HPX variables are declared with\r\nno associated value, the associated value for that variable is 20.There should be range and multirange types corresponding to both HPX types: HPZRANGE, hpzrange, HPZMULTIRANGE, hpzmultirange. HPQRANGE, hpqrange, HPQMULTIRANGE, hpqmultirange.-The\r\n extension could be based on another 3rd party library, and will be\r\nwritten in C, in either case. There is already some support for this\r\nkind of mathematics, in terms of its logic, and optimisation, publicly\r\navailable, in C, as Free Open Source Software. That can be researched\r\nand apprehended for this extension and for all the relevant PostgreSQL\r\nOS platforms that it (HPPM) is designed for.-Real\r\n numbers are comprised of the values of Integer, non-recurring Rational\r\nNumbers and recurring, and/or Irrational Numbers. 
Recurring numbers can\r\nbe appropriately truncated, ultimately via another positive Integer\r\nvalue, always at least 1, to obtain a limited and approximating value.\r\n The approximating value can really be seen as a finite Rational number,\r\n possibly with Integer or Decimal parts, or both. These numbers may be\r\npositive or negative, or zero, scalar values, and always do exist on the\r\n one dimensional number line. These numbers may be positive, negative,\r\nor zero exactly.-HPX 'associated values'\r\nreally are a number of relevant positive Integer figures (Precision),\r\nthat will get stored with each HPX type variable or column type. These\r\nare specified at type or variable declaration, before further use. Or\r\nthe total defaulting precision amount is applied, being 20, as already\r\nshown. Precision can be accessed and changed\r\nby means of coded values being changed.  Precision is always apprehended\r\n before calculation begins.  Precision is used to control number\r\noperations, and value output, when any numeric manipulation or\r\nprocessing occurs, since things that may go towards an infinity need to\r\nbe stopped before then, to be useful.If an HPX\r\n value is data on its own, without any set precision, it has the\r\ncorresponding precision amount. If it is inserted into a table column\r\nwith a different precision, then that precision is applied, be it\r\nlarger, equal, or less, resulting in no behaviour and an error being\r\nthrown.  
If an HPX value, in a PostgreSQL code\r\n expression, is sent straight into a RETURN statement or a SELECT\r\nstatement, without assignment or precision alteration, or is just\r\nspecified in a comparison expression, then that datum will contain the\r\nhighest precision value out of any of the others in its expression, by\r\nchecking the largest one found as the expression is considered, from\r\nleft to right, within its sub expression.If\r\nsuch a precision value cannot be arrived at, since nothing has been\r\nspecified, because the value is irrational or infinitely reccuring, then\r\n the default precision value, for truncation to be applied by, will be\r\n20, here, of course.-This whole system will\r\nuphold any precision, certainly ones within a very large range limit,\r\ncontrolled by the already available type for large positive integers,\r\nthe BIGINT. It can thereby enumerate digits within the range of (+/-)1\r\n to (+/-)9,223,372,036,854,775,807. This is at least between one and\r\npositive nine quintilion digit places.  More than enough for the speed\r\nand scope of today, or maybe tomorrow, and the Desktop PC, as either a\r\nclient or a server.Naturally, evaluation will\r\nslow down, or not conclude in useful time frames, before those limits,\r\npresently.  That phenomenon can be allowed, and left to the programmer\r\nto deal with or curtail. A TEXT variable or\r\ntable column, or even another HPX or numeric type (if the data of the\r\nvalue is in range) can be used to store digit data alone. 
Mixed Decimals\r\n can be broken down into integers and dealt with from there using\r\noperators, and reassembled into mixed integers again, if absolutely\r\nnecessary, although this will be slower and inefficient internally.--At the point of PostgreSQL code input and execution:select pi(1001) as pi;--HPX types can be declared like TEXT or--like varchar(n),--Within a table creation command:create table example_table(id BIGSERIAL PRIMARY KEY,a HPZ,b HPQ(50));ALTER TABLE example_table ALTER COLUMN b TYPE HPQ(30);INSERT INTO example_table(a,b) VALUES(0,  0.1);INSERT INTO example_table(a,b) VALUES(100,1.1);INSERT INTO example_table(a,b) VALUES(200,2.2);INSERT INTO example_table(a,b) VALUES(300,3.3);INSERT INTO example_table(a,b) VALUES(400,4.4);INSERT INTO example_table(a,b) VALUES(500,5.5);INSERT INTO example_table(a,b) VALUES(600,6.6);INSERT INTO example_table(a,b) VALUES(700,7.7);INSERT INTO example_table(a,b) VALUES(800,8.8);INSERT INTO example_table(a,b) VALUES(900,9.9);--Or as variables, in some function:create or replace function example_function()returns voidlanguage plpgsqlas$$declarea HPQ; b HPQ(2);c HPQ(3);begina = 0.1;b = 0.1;c = a*b;return;end;$$--Range and Multirange Types or Functions or Operators.select hpzrange(1,3) && hpzrange(3,20)  AS Intersecting;select hpqrange(1.5,3) && hpqrange(3,25.5) AS Mixed_Number_Range;-Value assignment to a typed variable by =.-Operators.  Base 10 Arithmetic and comparisons support on Base 10 HPZ and HPQ, with casting: ::,=,!=,<>,>,<,>=,<=,+,-,*,/,%,^These\r\n include full division and integer only division (from type inference,\r\nbetween two HPZ integers, only), with no remainder, and a remainder only\r\n calculating operator (for all type circumstances), within all range\r\npossibilities of the involved two values under a particular operation. 
########################################There should be the property of value inversion equality.Consider the following source code fragment as an example:a = 1;      b = 7;       c = a/b;       output(c);       d = c*b;       output(d);output(a == d); //true########################################-REIFIED\r\n SUPPORT with broader syntax and operations and phenomena within\r\nPostgreSQL.  Range and Multirange types, HPX integration with Tables,\r\nthe between keyword, Array types, Indexing, Variables and related\r\nphenomena, the Record type, direct compatibility with the Aggregate and\r\nWindow functions, and Partitions are all parts of a larger subset that\r\nshould re-interact with HPZ or HPQ successfully.  HPX types should also\r\nbe integrated with Range Types, Multirange Types, Operators, their\r\nFunctions and their Window Functions.  ########################################-Ease\r\n of installation support. Particularly for Windows and Linux. *.exe,\r\n*.msi or *.rpm, *.deb, *.bin, *.sh installer file extensions for a PostgreSQL\r\ninstallation from one file, each.  Installation, Activation and Use\r\ninstructions should be included, as necessary for successful use and for the\r\nuninitiated. The extension should literally just install and be\r\napplicable, with no loading command necessary (if possible), for every\r\ntime the PostgreSQL database process is run, by default. 
-Mathematical and Operational functions support:cast(HPZ as HPQ)  returns HPQ;cast(HPQ as HPZ)  returns HPZ;cast(TEXT as HPZ) returns HPZ;cast(TEXT as HPQ) returns HPQ;cast(HPQ as TEXT) returns TEXT;cast(HPZ as TEXT) returns TEXT;cast(HPZ as SMALLINT) returns SMALLINT;cast(SMALLINT as HPZ) returns HPZ;cast(HPZ as INTEGER)  returns INTEGER;cast(INTEGER as HPZ)  returns HPZ;cast(HPZ as BIGINT)   returns BIGINT;cast(BIGINT as HPZ)   returns HPZ;cast(HPQ as REAL)     returns REAL;cast(REAL as HPQ)     returns HPQ;cast(DOUBLE PRECISION as HPQ) returns HPQ;cast(HPQ as DOUBLE PRECISION) returns DOUBLE PRECISION;cast(HPQ as DECIMAL)  returns DECIMAL;cast(DECIMAL as HPQ)  returns HPQ;cast(HPQ as NUMERIC)  returns NUMERIC;cast(NUMERIC as HPQ)  returns HPQ;sign(HPQ input)  returns HPZ;abs(HPQ input)   returns HPZ;ceil(HPQ input)  returns HPZ;floor(HPQ input) returns HPZ;round(HPQ input) returns HPZ;factorial(HPZ input) returns HPZ;nCr(HPZ objects, HPZ selectionSize) returns HPZ;nPr(HPZ objects, HPZ selectionSize) returns HPZ;reciprocal(HPQ input) returns HPQ;pi(BIGINT precision)  returns HPQ;e(BIGINT precision)   returns HPQ;power(HPQ base, HPQ exponent) returns HPQ;sqrt(HPQ input) returns HPQ;nroot(HPZ theroot, HPQ input) returns HPQ;log10(HPQ input) returns HPQ;ln(HPQ input)    returns HPQ;log2(HPQ input)  returns HPQ;radtodeg(HPQ input)returns HPQ;degtorad(HPQ input)returns HPQ;sind(HPQ input)    returns HPQ;cosd(HPQ input)    returns HPQ;tand(HPQ input)    returns HPQ;asind(HPQ input)   returns HPQ;acosd(HPQ input)   returns HPQ;atand(HPQ input)   returns HPQ;sinr(HPQ input)    returns HPQ;cosr(HPQ input)    returns HPQ;tanr(HPQ input)    returns HPQ;asinr(HPQ input)   returns HPQ;acosr(HPQ input)   returns HPQ;atanr(HPQ input)   returns HPQ;-Informative articles on all these things exist at:PostgreSQL v14 Database Documentation:https://www.postgresql.org/docs/14/index.htmlComparison Operators: https://en.wikipedia.org/wiki/Relational_operatorFloor and Ceiling Functions: 
https://en.wikipedia.org/wiki/Floor_and_ceiling_functionsArithmetic Operations: https://en.wikipedia.org/wiki/ArithmeticInteger Division: https://en.wikipedia.org/wiki/Division_(mathematics)#Of_integersModulus Operation: https://en.wikipedia.org/wiki/Modulo_operationRounding (Commercial Rounding): https://en.wikipedia.org/wiki/RoundingFactorial Operation: https://en.wikipedia.org/wiki/FactorialDegrees: https://en.wikipedia.org/wiki/Degree_(angle)Radians: https://en.wikipedia.org/wiki/RadianElementary Functions: https://en.wikipedia.org/wiki/Elementary_functionTrigonometry key values and the Unit Circle:https://courses.lumenlearning.com/boundless-algebra/chapter/trigonometric-functions-and-the-unit-circle/---------------------------------------------------------------------------------------------------------------------------------------------------------End of Specification.Is there anyone involved in pgsql-hackers@lists.postgresql.org who might be able to volunteer to create a PostgreSQL extension, or include these as default additions to PostgreSQL, as a volunteer effort?Yours Sincerely,Sergio MinerviniS.M.\n\n\n\n\r\n Sent with Proton Mail secure email.", "msg_date": "Thu, 23 Jun 2022 06:42:01 +0000", "msg_from": "\"sminervini.prism\" <sminervini.prism@protonmail.com>", "msg_from_op": true, "msg_subject": "Query about free Volunteer Development for a PostgreSQL extension\n development." } ]
[ { "msg_contents": "PSA trivial patch fixing a harmless #define typo.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 23 Jun 2022 17:35:37 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Fix typo in pg_publication.c" }, { "msg_contents": "On Thu, Jun 23, 2022 at 05:35:37PM +1000, Peter Smith wrote:\n> PSA trivial patch fixing a harmless #define typo.\n\nThanks, done.\n--\nMichael", "msg_date": "Thu, 23 Jun 2022 16:43:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix typo in pg_publication.c" } ]
[ { "msg_contents": "Hackers,\n\nSome PostgreSQL extensions need to sort their pieces of data. Then it is\nworth re-using our tuplesort. But despite our tuplesort having\nextensibility, it's hidden inside tuplesort.c. There are at least a\ncouple of examples of how extensions deal with that.\n\n1. RUM table access method: https://github.com/postgrespro/rum\nRUM repository contains a copy of tuplesort.c for each major\nPostgreSQL release. A reliable solution, but this is not how things\nare intended to work, right?\n2. OrioleDB table access method: https://github.com/orioledb/orioledb\nOrioleDB runs on patched PostgreSQL. It contains a patch, which just\nexposes all the guts of tuplesort.c in tuplesort.h\nhttps://github.com/orioledb/postgres/commit/d42755f52c\n\nI think we need a proper way to let extensions re-use our core\ntuplesort facility. The attached patchset is intended to do this the\nright way. Patches don't revise all the comments and lack code\nbeautification. The intention behind publishing this revision is to\nverify the direction and get some feedback for further work.\n\n0001-Remove-Tuplesortstate.copytup-v1.patch\nIt's unclear to me how we split functionality between\nthe Tuplesortstate.copytup() function and the tuplesort_put*() functions. For\ninstance, copytup_index() and copytup_datum() return an error while\ntuplesort_putindextuplevalues() and tuplesort_putdatum() do their\nwork. The patch removes Tuplesortstate.copytup() altogether, moving\nits work into tuplesort_put*().\n\n0002-Tuplesortstate.getdatum1-method-v1.patch\n0003-Put-abbreviation-logic-into-puttuple_common-v1.patch\nThe tuplesort_put*() functions contain a common part related to dealing\nwith abbreviation. Patch 0002 extracts the logic of getting the value of\nSortTuple.datum1 into the Tuplesortstate.getdatum1() function. 
Thanks to\nthis new interface function, 0003 puts abbreviation logic into\nputtuple_common().\n\n0004-Move-freeing-memory-away-from-writetup-v1.patch\nAssuming that SortTuple.tuple is always just a single chunk of memory,\nwe can move memory counting logic away from Tuplesortstate.writetup().\nThis makes Tuplesortstate.getdatum1() easier to implement without\nknowledge of tuplesort.c guts.\n\n0005-Reorganize-data-structures-v1.patch\nThis commit splits the \"public\" part of Tuplesortstate into\nTuplesortOps, which is intended to be exposed outside tuplesort.c.\n\n0006-Split-tuplesortops.c-v1.patch\nThis patch finally splits tuplesortops.c from tuplesort.c. tuplesort.c\nkeeps the generic routines for tuple sorting, while tuplesortops.c\nprovides the implementation for particular tuple formats.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Thu, 23 Jun 2022 11:50:42 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Custom tuplesorts for extensions" }, { "msg_contents": "I've bumped into this case in RUM extension. The need to build it with\ntuplesort changes in different PG versions led me to reluctantly include\ndifferent tuplesort.c versions into the extension code. So I totally\nsupport the intention of this patch and I'm planning to invest some time to\nreview it.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 23 Jun 2022 14:09:13 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": ">\n> Some PostgreSQL extensions need to sort their pieces of data. Then it\n> worth to re-use our tuplesort. But despite our tuplesort having\n> extensibility, it's hidden inside tuplesort.c. There are at least a\n> couple of examples of how extensions deal with that.\n>\n> 1. RUM table access method: https://github.com/postgrespro/rum\n> RUM repository contains a copy of tuplesort.c for each major\n> PostgreSQL release. A reliable solution, but this is not how things\n> are intended to work, right?\n> 2. OrioleDB table access method: https://github.com/orioledb/orioledb\n> OrioleDB runs on patches PostgreSQL. It contains a patch, which just\n> exposes all the guts of tuplesort.c to the tuplesort.h\n> https://github.com/orioledb/postgres/commit/d42755f52c\n>\n> I think we need a proper way to let extension re-use our core\n> tuplesort facility. The attached patchset is intended to do this the\n> right way. Patches don't revise all the comments and lack code\n> beautification. The intention behind publishing this revision is to\n> verify the direction and get some feedback for further work.\n>\n\nI still have one doubt about the thing: the compatibility with previous PG\nversions requires me to support code paths that I already added into RUM\nextension. I won't be able to drop it from the extension for quite a long time in\nthe future. It could be avoided if we backpatch this, which seems doubtful\nto me given the volume of code changes.\n\nIf we just change this thing since say v16 this will only help to\nextensions that don't support earlier PG versions. 
I still consider the\nchange beneficial but wonder whether you have some view on how it should be\nmanaged in existing extensions to benefit them?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 23 Jun 2022 15:26:04 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi!\n\nI've reviewed the patchset and noticed some minor issues:\n- extra semicolon in macro (lead to warnings)\n- comparison of var isWorker should be done in different way\n\nHere is an upgraded version of the patchset.\n\nOverall, I consider this patchset useful. Any opinions?\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Thu, 23 Jun 2022 15:12:27 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi, Pavel!\n\nThank you for your feedback.\n\nOn Thu, Jun 23, 2022 at 2:26 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>> Some PostgreSQL extensions need to sort their pieces of data. Then it\n>> worth to re-use our tuplesort. But despite our tuplesort having\n>> extensibility, it's hidden inside tuplesort.c. There are at least a\n>> couple of examples of how extensions deal with that.\n>>\n>> 1. RUM table access method: https://github.com/postgrespro/rum\n>> RUM repository contains a copy of tuplesort.c for each major\n>> PostgreSQL release. A reliable solution, but this is not how things\n>> are intended to work, right?\n>> 2. OrioleDB table access method: https://github.com/orioledb/orioledb\n>> OrioleDB runs on patches PostgreSQL. It contains a patch, which just\n>> exposes all the guts of tuplesort.c to the tuplesort.h\n>> https://github.com/orioledb/postgres/commit/d42755f52c\n>>\n>> I think we need a proper way to let extension re-use our core\n>> tuplesort facility. The attached patchset is intended to do this the\n>> right way. 
Patches don't revise all the comments and lack code\n>> beautification. The intention behind publishing this revision is to\n>> verify the direction and get some feedback for further work.\n>\n>\n> I still have one doubt about the thing: the compatibility with previous PG versions requires me to support code paths that I already added into RUM extension. I won't be able to drop it from extension for quite long time in the future. It could be avoided if we backpatch this, which seems doubtful to me provided the volume of code changes.\n>\n> If we just change this thing since say v16 this will only help to extensions that doesn't support earlier PG versions. I still consider the change beneficial but wonder do you have some view on how should it be managed in existing extensions to benefit them?\n\nI don't think there is a way to help extensions with earlier PG\nversions. This is a significant patchset, which shouldn't be a subject\nfor backpatch. The existing extensions will benefit by simplification\nof maintenance for PG 16+ releases. I think that's all we can do.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 23 Jun 2022 16:14:59 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi, Maxim!\n\nOn Thu, Jun 23, 2022 at 3:12 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n> I've reviewed the patchset and noticed some minor issues:\n> - extra semicolon in macro (lead to warnings)\n> - comparison of var isWorker should be done in different way\n>\n> Here is an upgraded version of the patchset.\n\nThank you for fixing this.\n\n> Overall, I consider this patchset useful. 
Any opinions?\n\nThank you.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 23 Jun 2022 16:19:54 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi!\n\nOverall, the patch looks good; let's mark it as ready for committer, shall we?\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 6 Jul 2022 16:00:04 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "On Thu, 23 Jun 2022 at 14:12, Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> Hi!\n>\n> I've reviewed the patchset and noticed some minor issues:\n> - extra semicolon in macro (lead to warnings)\n> - comparison of var isWorker should be done in different way\n>\n> Here is an upgraded version of the patchset.\n>\n> Overall, I consider this patchset useful. Any opinions?\n\nAll of the patches are currently missing descriptive commit messages,\nwhich I think is critical for getting this committed. As for per-patch\ncomments:\n\n0001: This patch removes copytup, but it is not quite clear why -\nplease describe the reasoning in the commit message.\n\n0002: getdatum1 needs comments on what it does. 
Something like \"Public\ninterface of tuplesort operators, containing data directly accessible\nto users of tuplesort\" should suffice, but feel free to update the\nwording.\n\n> + void *arg;\n> +};\n\nThis field could use a comment on what it is used for, and how to use it.\n\n> +struct Tuplesortstate\n> +{\n> + TuplesortOps ops;\n\nThis field needs a comment too.\n\n0006: Needs a commit message, but otherwise seems fine.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 6 Jul 2022 16:41:44 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi,\n\nI think this needs to be evaluated for performance...\n\nI agree with the nearby comment that the commits need a bit of justification\nat least to review them.\n\n\nOn 2022-06-23 15:12:27 +0300, Maxim Orlov wrote:\n> From 03b78cdade3b86a0e97723721fa1d0bd64d0c7df Mon Sep 17 00:00:00 2001\n> From: Alexander Korotkov <akorotkov@postgresql.org>\n> Date: Tue, 21 Jun 2022 13:28:27 +0300\n> Subject: [PATCH v2 1/6] Remove Tuplesortstate.copytup\n\nYea. I was recently complaining about the pointlessness of copytup.\n\n\n> From 1d78e271b22d7c6a1557defbe15ea5039ff28510 Mon Sep 17 00:00:00 2001\n> From: Alexander Korotkov <akorotkov@postgresql.org>\n> Date: Tue, 21 Jun 2022 14:03:13 +0300\n> Subject: [PATCH v2 2/6] Tuplesortstate.getdatum1 method\n\nHm. This adds a bunch of indirect function calls where there previously\nweren't any.\n\n\n> From 494d46dcf938e5f824a498e38ce77782751208e1 Mon Sep 17 00:00:00 2001\n> From: Alexander Korotkov <akorotkov@postgresql.org>\n> Date: Tue, 21 Jun 2022 14:13:56 +0300\n> Subject: [PATCH v2 3/6] Put abbreviation logic into puttuple_common()\n\nThere's definitely a lot of redundancy removed... But the list of branches in\nputtuple_common() grew. 
Perhaps we instead can add a few flags to\nputttuple_common() that determine whether abbreviation should happen, so that\nwe only do the work necessary for the \"type\" of sort?\n\n\n\n> From ee2dd46b07d62e13ed66b5a38272fb5667c943f3 Mon Sep 17 00:00:00 2001\n> From: Alexander Korotkov <akorotkov@postgresql.org>\n> Date: Wed, 22 Jun 2022 00:14:51 +0300\n> Subject: [PATCH v2 4/6] Move freeing memory away from writetup()\n\nSeems to do more than just moving freeing around?\n\n> @@ -1973,8 +1963,13 @@ tuplesort_putdatum(Tuplesortstate *state, Datum val, bool isNull)\n> static void\n> puttuple_common(Tuplesortstate *state, SortTuple *tuple)\n> {\n> +\tMemoryContext oldcontext = MemoryContextSwitchTo(state->sortcontext);\n> +\n> \tAssert(!LEADER(state));\n>\n> +\tif (tuple->tuple != NULL)\n> +\t\tUSEMEM(state, GetMemoryChunkSpace(tuple->tuple));\n> +\n\nAdding even more branches into common code...\n\n\n\n> From 3a0e1fa7c7e4da46a86f7d5b9dd0392549f3b460 Mon Sep 17 00:00:00 2001\n> From: Alexander Korotkov <akorotkov@postgresql.org>\n> Date: Wed, 22 Jun 2022 18:11:26 +0300\n> Subject: [PATCH v2 5/6] Reorganize data structures\n\nHard to know what this is trying to achieve.\n\n> -struct Tuplesortstate\n> +struct TuplesortOps\n> {\n> -\tTupSortStatus status;\t\t/* enumerated value as shown above */\n> -\tint\t\t\tnKeys;\t\t\t/* number of columns in sort key */\n> -\tint\t\t\tsortopt;\t\t/* Bitmask of flags used to setup sort */\n> -\tbool\t\tbounded;\t\t/* did caller specify a maximum number of\n> -\t\t\t\t\t\t\t\t * tuples to return? */\n> -\tbool\t\tboundUsed;\t\t/* true if we made use of a bounded heap */\n> -\tint\t\t\tbound;\t\t\t/* if bounded, the maximum number of tuples */\n> -\tbool\t\ttuples;\t\t\t/* Can SortTuple.tuple ever be set? 
*/\n> -\tint64\t\tavailMem;\t\t/* remaining memory available, in bytes */\n> -\tint64\t\tallowedMem;\t\t/* total memory allowed, in bytes */\n> -\tint\t\t\tmaxTapes;\t\t/* max number of input tapes to merge in each\n> -\t\t\t\t\t\t\t\t * pass */\n> -\tint64\t\tmaxSpace;\t\t/* maximum amount of space occupied among sort\n> -\t\t\t\t\t\t\t\t * of groups, either in-memory or on-disk */\n> -\tbool\t\tisMaxSpaceDisk; /* true when maxSpace is value for on-disk\n> -\t\t\t\t\t\t\t\t * space, false when it's value for in-memory\n> -\t\t\t\t\t\t\t\t * space */\n> -\tTupSortStatus maxSpaceStatus;\t/* sort status when maxSpace was reached */\n> \tMemoryContext maincontext;\t/* memory context for tuple sort metadata that\n> \t\t\t\t\t\t\t\t * persists across multiple batches */\n> \tMemoryContext sortcontext;\t/* memory context holding most sort data */\n> \tMemoryContext tuplecontext; /* sub-context of sortcontext for tuple data */\n> -\tLogicalTapeSet *tapeset;\t/* logtape.c object for tapes in a temp file */\n>\n> \t/*\n> \t * These function pointers decouple the routines that must know what kind\n\nTo me it seems odd to have memory contexts and similar things in a\ndatastructure calls *Ops.\n\n\n> From b06bcb5f3666f0541dfcc27c9c8462af2b5ec9e0 Mon Sep 17 00:00:00 2001\n> From: Alexander Korotkov <akorotkov@postgresql.org>\n> Date: Wed, 22 Jun 2022 21:48:05 +0300\n> Subject: [PATCH v2 6/6] Split tuplesortops.c\n\nI strongly suspect this will cause a slowdown. 
There was potential for\ncross-function optimization that's now removed.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 6 Jul 2022 08:01:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi!\n\nOn Wed, Jul 6, 2022 at 6:01 PM Andres Freund <andres@anarazel.de> wrote:\n> I think this needs to be evaluated for performance...\n\nSurely, it does.\n\n> I agree with the nearby comment that the commits need a bit of justification\n> at least to review them.\n\nWill do this.\n> > From 1d78e271b22d7c6a1557defbe15ea5039ff28510 Mon Sep 17 00:00:00 2001\n> > From: Alexander Korotkov <akorotkov@postgresql.org>\n> > Date: Tue, 21 Jun 2022 14:03:13 +0300\n> > Subject: [PATCH v2 2/6] Tuplesortstate.getdatum1 method\n>\n> Hm. This adds a bunch of indirect function calls where there previously\n> weren't any.\n\nYep. I think it is worth changing this function to deal with many\nSortTuples at once.\n\n> > From 494d46dcf938e5f824a498e38ce77782751208e1 Mon Sep 17 00:00:00 2001\n> > From: Alexander Korotkov <akorotkov@postgresql.org>\n> > Date: Tue, 21 Jun 2022 14:13:56 +0300\n> > Subject: [PATCH v2 3/6] Put abbreviation logic into puttuple_common()\n>\n> There's definitely a lot of redundancy removed... But the list of branches in\n> puttuple_common() grew. Perhaps we instead can add a few flags to\n> puttuple_common() that determine whether abbreviation should happen, so that\n> we only do the work necessary for the \"type\" of sort?\n\nGood point, will refactor that.\n\n> > From ee2dd46b07d62e13ed66b5a38272fb5667c943f3 Mon Sep 17 00:00:00 2001\n> > From: Alexander Korotkov <akorotkov@postgresql.org>\n> > Date: Wed, 22 Jun 2022 00:14:51 +0300\n> > Subject: [PATCH v2 4/6] Move freeing memory away from writetup\n>\n> Seems to do more than just moving freeing around?\n\nYes, it also moves memory accounting from tuplesort_put*() to\nputtuple_common(). 
Will revise this.\n\n> > @@ -1973,8 +1963,13 @@ tuplesort_putdatum(Tuplesortstate *state, Datum val, bool isNull)\n> > static void\n> > puttuple_common(Tuplesortstate *state, SortTuple *tuple)\n> > {\n> > + MemoryContext oldcontext = MemoryContextSwitchTo(state->sortcontext);\n> > +\n> > Assert(!LEADER(state));\n> >\n> > + if (tuple->tuple != NULL)\n> > + USEMEM(state, GetMemoryChunkSpace(tuple->tuple));\n> > +\n>\n> Adding even more branches into common code...\n\nI will see how to reduce branching here.\n\n> > From 3a0e1fa7c7e4da46a86f7d5b9dd0392549f3b460 Mon Sep 17 00:00:00 2001\n> > From: Alexander Korotkov <akorotkov@postgresql.org>\n> > Date: Wed, 22 Jun 2022 18:11:26 +0300\n> > Subject: [PATCH v2 5/6] Reorganize data structures\n>\n> Hard to know what this is trying to achieve.\n\nSplit the public interface part out of Tuplesortstate.\n\n> > -struct Tuplesortstate\n> > +struct TuplesortOps\n> > {\n> > - TupSortStatus status; /* enumerated value as shown above */\n> > - int nKeys; /* number of columns in sort key */\n> > - int sortopt; /* Bitmask of flags used to setup sort */\n> > - bool bounded; /* did caller specify a maximum number of\n> > - * tuples to return? */\n> > - bool boundUsed; /* true if we made use of a bounded heap */\n> > - int bound; /* if bounded, the maximum number of tuples */\n> > - bool tuples; /* Can SortTuple.tuple ever be set? 
*/\n> > - int64 availMem; /* remaining memory available, in bytes */\n> > - int64 allowedMem; /* total memory allowed, in bytes */\n> > - int maxTapes; /* max number of input tapes to merge in each\n> > - * pass */\n> > - int64 maxSpace; /* maximum amount of space occupied among sort\n> > - * of groups, either in-memory or on-disk */\n> > - bool isMaxSpaceDisk; /* true when maxSpace is value for on-disk\n> > - * space, false when it's value for in-memory\n> > - * space */\n> > - TupSortStatus maxSpaceStatus; /* sort status when maxSpace was reached */\n> > MemoryContext maincontext; /* memory context for tuple sort metadata that\n> > * persists across multiple batches */\n> > MemoryContext sortcontext; /* memory context holding most sort data */\n> > MemoryContext tuplecontext; /* sub-context of sortcontext for tuple data */\n> > - LogicalTapeSet *tapeset; /* logtape.c object for tapes in a temp file */\n> >\n> > /*\n> > * These function pointers decouple the routines that must know what kind\n>\n> To me it seems odd to have memory contexts and similar things in a\n> datastructure calls *Ops.\n\nYep, it worth renaming TuplesortOps into TuplesortPublic or something.\n\n> > From b06bcb5f3666f0541dfcc27c9c8462af2b5ec9e0 Mon Sep 17 00:00:00 2001\n> > From: Alexander Korotkov <akorotkov@postgresql.org>\n> > Date: Wed, 22 Jun 2022 21:48:05 +0300\n> > Subject: [PATCH v2 6/6] Split tuplesortops.c\n>\n> I strongly suspect this will cause a slowdown. There was potential for\n> cross-function optimization that's now removed.\n\nI wonder how can cross-function optimizations bypass function\npointers. 
Is it possible?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 7 Jul 2022 10:54:30 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi, Matthias!\n\nThe revised patchset is attached.\n\nOn Wed, Jul 6, 2022 at 5:41 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> All of the patches are currently missing descriptive commit messages,\n> which I think is critical for getting this committed. As for per-patch\n> comments:\n>\n> 0001: This patch removes copytup, but it is not quite clear why -\n> please describe the reasoning in the commit message.\n\nBecause the split of logic between the Tuplesortstate.copytup() function and\nthe tuplesort_put*() functions is unclear. It doesn't look like we need\nan abstraction here, while all the work could be done in\ntuplesort_put*().\n\n> 0002: getdatum1 needs comments on what it does. 
From the name, it\n> would return the datum1 from a sorttuple, but that's not what it does;\n> a better name would be putdatum1 or populatedatum1.\n>\n> 0003: in the various tuplesort_put*tuple[values] functions, the datum1\n> field is manually extracted. Shouldn't we use the getdatum1 functions\n> from 0002 instead? We could use either them directly to allow\n> inlining, or use the indirection through tuplesortstate.\n\ngetdatum1() was a bad name. Actually it restores original datum1\nduring rollback of abbreviations. I've replaced it with\nremoveabbrev(), which seems a better name to me and processes many SortTuples\nduring one call.\n\n> 0004: Needs a commit message, but otherwise seems fine.\n\nCommit message is added.\n\n> 0005:\n> > +struct TuplesortOps\n>\n> This struct has no comment on what it is. Something like \"Public\n> interface of tuplesort operators, containing data directly accessable\n> to users of tuplesort\" should suffice, but feel free to update the\n> wording.\n>\n> > + void *arg;\n> > +};\n>\n> This field could use a comment on what it is used for, and how to use it.\n>\n> > +struct Tuplesortstate\n> > +{\n> > + TuplesortOps ops;\n>\n> This field needs a comment too.\n>\n> 0006: Needs a commit message, but otherwise seems fine.\n\nTuplesortOps was renamed to TuplesortPublic. Comments and commit\nmessages are added.\n\nThere are some places, which potentially could cause a slowdown. I'm\ngoing to make some experiments with that.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 12 Jul 2022 11:22:53 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "On Tue, Jul 12, 2022 at 3:23 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n> There are some places, which potentially could cause a slowdown. 
I'm\n> going to make some experiments with that.\n\nI haven't looked at the patches, so I don't know of a specific place to\nlook for a slowdown, but I thought it might help to perform the same query\ntests as my most recent test for evaluating qsort variants (some\ndescription in [1]), and here is the spreadsheet. Overall, the differences\nlook like noise. A few cases with unabbreviatable text look a bit faster\nwith the patch. I'm not sure if that's a real difference, but in any case I\ndon't see a slowdown anywhere.\n\n[1]\nhttps://www.postgresql.org/message-id/CAFBsxsHeTACMP1JVQ%2Bm35-v2NkmEqsJMHLhEfWk4sTB5aw_jkQ%40mail.gmail.com\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Jul 2022 10:44:26 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi, John!\n\nOn Thu, Jul 21, 2022 at 6:44 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> On Tue, Jul 12, 2022 at 3:23 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > There are some places, which potentially could cause a slowdown. I'm\n> > going to make some experiments with that.\n>\n> I haven't looked at the patches, so I don't know of a specific place to look for a slowdown, but I thought it might help to perform the same query tests as my most recent test for evaluating qsort variants (some description in [1]), and here is the spreadsheet. Overall, the differences look like noise. A few cases with unabbreviatable text look a bit faster with the patch. 
I'm not sure if that's a real difference, but in any case I don't see a slowdown anywhere.\n>\n> [1] https://www.postgresql.org/message-id/CAFBsxsHeTACMP1JVQ%2Bm35-v2NkmEqsJMHLhEfWk4sTB5aw_jkQ%40mail.gmail.com\n\nGreat, thank you very much for the feedback!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 21 Jul 2022 17:40:19 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "I've looked through the updated patch. Overall it looks good enough.\n\nSome minor things:\n\n- PARALLEL_SORT macro is based on coordinate struct instead of state\nstruct. In some calls(i.e. from _bt_spools_heapscan) coordinate could\nappear to be NULL, which can be a segfault on items dereference inside the\nmacro.\n\n- state->worker and coordinate->isWorker a little bit differ in semantics\ni.e.:\n..............................................worker............... leader\nstate -> worker........................ >=0.....................-1\ncoordinate ->isWorker............. 1..........................0\n\n- in tuplesort_begin_index_btree I suppose it should be base->nKeys instead\nof state->nKeys\n\n- Cfbot reports gcc warnings due to mixed code and declarations. So I used\nthis to beautify code in tuplesortvariants.c a little. (This is added as a\nseparate patch 0007)\n\nAll these things are corrected/done in a new version 3 of a patchset (PFA).\nFor me, the patchset seems like a long-needed thing to support PostgreSQL\nextensibility. 
Overall corrections in v3 are minor, so I'd like to mark the\npatch as RfC if there are no objections.\n\n-- \nBest regards,\nPavel Borisov\nSupabase.", "msg_date": "Fri, 22 Jul 2022 19:56:20 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi, Pavel!\n\nThank you for your review and corrections.\n\nOn Fri, Jul 22, 2022 at 6:57 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> I've looked through the updated patch. Overall it looks good enough.\n>\n> Some minor things:\n>\n> - PARALLEL_SORT macro is based on coordinate struct instead of state struct. In some calls(i.e. from _bt_spools_heapscan) coordinate could appear to be NULL, which can be a segfault on items dereference inside the macro.\n>\n> - state->worker and coordinate->isWorker a little bit differ in semantics i.e.:\n> ..............................................worker............... leader\n> state -> worker........................ >=0.....................-1\n> coordinate ->isWorker............. 1..........................0\n>\n> - in tuplesort_begin_index_btree I suppose it should be base->nKeys instead of state->nKeys\n\nPerfect, thank you!\n\n> - Cfbot reports gcc warnings due to mixed code and declarations. So I used this to beautify code in tuplesortvariants.c a little. (This is added as a separate patch 0007)\n\nIt appears that warnings were caused by the extra semicolon in\nTuplesortstateGetPublic() macro. I've removed that semicolon, and I\ndon't think we need a beautification patch. Also, please note that\nthere is no point to add indentation, which doesn't survive pgindent.\n\n> All these things are corrected/done in a new version 3 of a patchset (PFA). For me, the patchset seems like a long-needed thing to support PostgreSQL extensibility. Overall corrections in v3 are minor, so I'd like to mark the patch as RfC if there are no objections.\n\nThank you. 
I've also revised the comments in the top of tuplesort.c\nand tuplesortvariants.c. The revised patchset is attached.\n\nAlso, my OrioleDB colleagues Ilya Kobets and Tatsiana Yaumenenka run\ntests to check if the patchset causes a performance regression. The\nscript and results are present in the \"tuplesort_patch_test.zip\"\narchive. The final comparison is given in the result/final_table.txt.\nIn short, they repeat each test 10 times and there is no difference\nexceeding the random variation.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 24 Jul 2022 15:24:42 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "On Sun, Jul 24, 2022 at 3:24 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Also, my OrioleDB colleagues Ilya Kobets and Tatsiana Yaumenenka run\n> tests to check if the patchset causes a performance regression. The\n> script and results are present in the \"tuplesort_patch_test.zip\"\n> archive. The final comparison is given in the result/final_table.txt.\n> In short, they repeat each test 10 times and there is no difference\n> exceeding the random variation.\n\nI see the last revision passed cfbot without warnings. I've added the\nmeta information to commit messages. Also, I've re-run through the\nthread and it seems all the comments are addressed. 
I'm going to push\nthis if there are no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 25 Jul 2022 00:52:49 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Note that 0001+0002 (without the others) incurs warnings:\n\n$ time { make -j4 clean; make -j4; } >/dev/null\ntuplesort.c:1883:9: warning: unused variable 'i' [-Wunused-variable]\ntuplesort.c:1955:10: warning: unused variable 'i' [-Wunused-variable]\ntuplesort.c:2026:9: warning: unused variable 'i' [-Wunused-variable]\ntuplesort.c:2103:10: warning: unused variable 'i' [-Wunused-variable]\n\n(I wondered in the past if cfbot should try to test for clean builds of subsets\nof patchsets, and it came up recently with the JSON patches.)\n\nAlso, this comment has some bad indentation:\n\n* Set state to be consistent with never trying abbreviation.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 24 Jul 2022 18:23:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "Hi Justin!\n\nOn Mon, Jul 25, 2022 at 2:23 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Note that 0001+0002 (without the others) incurs warnings:\n>\n> $ time { make -j4 clean; make -j4; } >/dev/null\n> tuplesort.c:1883:9: warning: unused variable 'i' [-Wunused-variable]\n> tuplesort.c:1955:10: warning: unused variable 'i' [-Wunused-variable]\n> tuplesort.c:2026:9: warning: unused variable 'i' [-Wunused-variable]\n> tuplesort.c:2103:10: warning: unused variable 'i' [-Wunused-variable]\n>\n> (I wondered in the past if cfbot should try to test for clean builds of subsets\n> of patchsets, and it came up recently with the JSON patches.)\n>\n> Also, this comment has some bad indentation:\n>\n> * Set state to be consistent with never trying abbreviation.\n\nThank you for caching this. 
Fixed in the revision attached.\n\nTesting subsets of patchsets in cfbot looks like a good idea to me.\nHowever, I'm not sure if we always require subsets to be consistent.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 25 Jul 2022 12:30:24 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": ">\n> Thank you for caching this. Fixed in the revision attached.\n>\n> Testing subsets of patchsets in cfbot looks like a good idea to me.\n> However, I'm not sure if we always require subsets to be consistent.\n>\n\nHi, hackers!\n\nI've looked through a new v6 of a patchset and find it ok. When applied\n0001+0002 only I don't see warnings anymore. Build and tests are successful\nand Cfbot also looks good. I've marked the patch as RfC.\n\n-- \nBest regards,\nPavel Borisov", "msg_date": "Mon, 25 Jul 2022 17:00:49 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Custom tuplesorts for extensions" }, { "msg_contents": "On Mon, Jul 25, 2022 at 4:01 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>> Thank you for caching this. Fixed in the revision attached.\n>>\n>> Testing subsets of patchsets in cfbot looks like a good idea to me.\n>> However, I'm not sure if we always require subsets to be consistent.\n>\n>\n> Hi, hackers!\n>\n> I've looked through a new v6 of a patchset and find it ok. When applied 0001+0002 only I don't see warnings anymore. 
Build and tests are successful and Cfbot also looks good. I've marked the patch as RfC.\n\nThank you, pushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 27 Jul 2022 08:36:56 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Custom tuplesorts for extensions" } ]
[ { "msg_contents": "Hi,\n\nI noticed BF member wrasse failed in 028_row_filter.pl.\n\n# Failed test 'check publish_via_partition_root behavior'\n# at t/028_row_filter.pl line 669.\n# got: ''\n# expected: '1|100\n# ...\n\nLog:\n2022-06-23 11:27:42.387 CEST [20589:3] 028_row_filter.pl LOG: statement: ALTER SUBSCRIPTION tap_sub REFRESH PUBLICATION WITH (copy_data = true)\n2022-06-23 11:27:42.470 CEST [20589:4] 028_row_filter.pl LOG: disconnection: session time: 0:00:00.098 user=nm database=postgres host=[local]\n2022-06-23 11:27:42.611 CEST [20593:1] LOG: logical replication table synchronization worker for subscription \"tap_sub\", table \"tab_rowfilter_partitioned\" has started\n...\n2022-06-23 11:27:43.197 CEST [20610:3] 028_row_filter.pl LOG: statement: SELECT a, b FROM tab_rowfilter_partitioned ORDER BY 1, 2\n...\n2022-06-23 11:27:43.689 CEST [20593:2] LOG: logical replication table synchronization worker for subscription \"tap_sub\", table \"tab_rowfilter_partitioned\" has finished \n\n From the Log, I can see it query the target table before the table sync is\nover. So, I think the reason is that we didn't wait for table sync to\nfinish after refreshing the publication. Sorry for not catching that\nealier. 
Here is a patch to fix it.\n\n\nBest regards,\nHou zj", "msg_date": "Thu, 23 Jun 2022 11:28:32 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix instability in subscription regression test" }, { "msg_contents": "On Thu, Jun 23, 2022 at 8:28 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Hi,\n>\n> I noticed BF member wrasse failed in 028_row_filter.pl.\n>\n> # Failed test 'check publish_via_partition_root behavior'\n> # at t/028_row_filter.pl line 669.\n> # got: ''\n> # expected: '1|100\n> # ...\n>\n> Log:\n> 2022-06-23 11:27:42.387 CEST [20589:3] 028_row_filter.pl LOG: statement: ALTER SUBSCRIPTION tap_sub REFRESH PUBLICATION WITH (copy_data = true)\n> 2022-06-23 11:27:42.470 CEST [20589:4] 028_row_filter.pl LOG: disconnection: session time: 0:00:00.098 user=nm database=postgres host=[local]\n> 2022-06-23 11:27:42.611 CEST [20593:1] LOG: logical replication table synchronization worker for subscription \"tap_sub\", table \"tab_rowfilter_partitioned\" has started\n> ...\n> 2022-06-23 11:27:43.197 CEST [20610:3] 028_row_filter.pl LOG: statement: SELECT a, b FROM tab_rowfilter_partitioned ORDER BY 1, 2\n> ...\n> 2022-06-23 11:27:43.689 CEST [20593:2] LOG: logical replication table synchronization worker for subscription \"tap_sub\", table \"tab_rowfilter_partitioned\" has finished\n>\n> From the Log, I can see it query the target table before the table sync is\n> over. So, I think the reason is that we didn't wait for table sync to\n> finish after refreshing the publication. Sorry for not catching that\n> ealier. 
Here is a patch to fix it.\n\n+1\n\nThe patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 24 Jun 2022 11:15:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix instability in subscription regression test" }, { "msg_contents": "On Thu, Jun 23, 2022 at 4:58 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Hi,\n>\n> I noticed BF member wrasse failed in 028_row_filter.pl.\n>\n> # Failed test 'check publish_via_partition_root behavior'\n> # at t/028_row_filter.pl line 669.\n> # got: ''\n> # expected: '1|100\n> # ...\n>\n> Log:\n> 2022-06-23 11:27:42.387 CEST [20589:3] 028_row_filter.pl LOG: statement: ALTER SUBSCRIPTION tap_sub REFRESH PUBLICATION WITH (copy_data = true)\n> 2022-06-23 11:27:42.470 CEST [20589:4] 028_row_filter.pl LOG: disconnection: session time: 0:00:00.098 user=nm database=postgres host=[local]\n> 2022-06-23 11:27:42.611 CEST [20593:1] LOG: logical replication table synchronization worker for subscription \"tap_sub\", table \"tab_rowfilter_partitioned\" has started\n> ...\n> 2022-06-23 11:27:43.197 CEST [20610:3] 028_row_filter.pl LOG: statement: SELECT a, b FROM tab_rowfilter_partitioned ORDER BY 1, 2\n> ...\n> 2022-06-23 11:27:43.689 CEST [20593:2] LOG: logical replication table synchronization worker for subscription \"tap_sub\", table \"tab_rowfilter_partitioned\" has finished\n>\n> From the Log, I can see it query the target table before the table sync is\n> over. So, I think the reason is that we didn't wait for table sync to\n> finish after refreshing the publication. Sorry for not catching that\n> ealier. Here is a patch to fix it.\n>\n\n+# wait for initial table synchronization to finish\n+$node_subscriber->poll_query_until('postgres', $synced_query)\n\nWe could probably slightly change the comment to say: \"wait for table\nsync to finish\". 
Normally, we use initial table sync after CREATE\nSUBSCRIPTION. This is a minor nitpick and I can take care of it before\ncommitting unless you think otherwise.\n\nYour analysis and patch look good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 24 Jun 2022 07:57:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix instability in subscription regression test" }, { "msg_contents": "\r\n\r\n> -----Original Message-----\r\n> From: Amit Kapila <amit.kapila16@gmail.com>\r\n> Sent: Friday, June 24, 2022 10:28 AM\r\n> To: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>\r\n> Cc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\n> Subject: Re: Fix instability in subscription regression test\r\n> \r\n> On Thu, Jun 23, 2022 at 4:58 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Hi,\r\n> >\r\n> > I noticed BF member wrasse failed in 028_row_filter.pl.\r\n> >\r\n> > # Failed test 'check publish_via_partition_root behavior'\r\n> > # at t/028_row_filter.pl line 669.\r\n> > # got: ''\r\n> > # expected: '1|100\r\n> > # ...\r\n> >\r\n> > Log:\r\n> > 2022-06-23 11:27:42.387 CEST [20589:3] 028_row_filter.pl LOG:\r\n> > statement: ALTER SUBSCRIPTION tap_sub REFRESH PUBLICATION WITH\r\n> > (copy_data = true)\r\n> > 2022-06-23 11:27:42.470 CEST [20589:4] 028_row_filter.pl LOG:\r\n> > disconnection: session time: 0:00:00.098 user=nm database=postgres\r\n> > host=[local]\r\n> > 2022-06-23 11:27:42.611 CEST [20593:1] LOG: logical replication table\r\n> > synchronization worker for subscription \"tap_sub\", table\r\n> \"tab_rowfilter_partitioned\" has started ...\r\n> > 2022-06-23 11:27:43.197 CEST [20610:3] 028_row_filter.pl LOG:\r\n> > statement: SELECT a, b FROM tab_rowfilter_partitioned ORDER BY 1, 2 ...\r\n> > 2022-06-23 11:27:43.689 CEST [20593:2] LOG: logical replication table\r\n> > synchronization worker for subscription \"tap_sub\", table\r\n> > 
\"tab_rowfilter_partitioned\" has finished\r\n> >\r\n> > From the Log, I can see it query the target table before the table\r\n> > sync is over. So, I think the reason is that we didn't wait for table\r\n> > sync to finish after refreshing the publication. Sorry for not\r\n> > catching that ealier. Here is a patch to fix it.\r\n> >\r\n> \r\n> +# wait for initial table synchronization to finish\r\n> +$node_subscriber->poll_query_until('postgres', $synced_query)\r\n> \r\n> We could probably slightly change the comment to say: \"wait for table sync to\r\n> finish\". Normally, we use initial table sync after CREATE SUBSCRIPTION. This is a\r\n> minor nitpick and I can take care of it before committing unless you think\r\n> otherwise.\r\n\r\nThanks for reviewing, the suggestion looks good to me.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 24 Jun 2022 02:37:36 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix instability in subscription regression test" }, { "msg_contents": "On Fri, Jun 24, 2022 at 8:07 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> >\n> > +# wait for initial table synchronization to finish\n> > +$node_subscriber->poll_query_until('postgres', $synced_query)\n> >\n> > We could probably slightly change the comment to say: \"wait for table sync to\n> > finish\". Normally, we use initial table sync after CREATE SUBSCRIPTION. This is a\n> > minor nitpick and I can take care of it before committing unless you think\n> > otherwise.\n>\n> Thanks for reviewing, the suggestion looks good to me.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 24 Jun 2022 14:35:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix instability in subscription regression test" } ]
[ { "msg_contents": "Hi hackers,\nAlong with major TOAST improvement with Pluggable TOAST we see the really\nimportant smaller one - moving compression functionality when dealing with\noversized Tuples into Toast, because Toast is meant to deal with how\noversized Tuple is stored and it is logical to make it responsible for\ncompression too.\nCurrently it is done in heap_toast_insert_or_update() (file heaptoast.c)\nbefore the attribute is TOASTed, we suggest the TOAST should decide if the\nattribute must be compressed before stored externally or not for . Also, it\nallows us to make Toasters completely responsible for TOASTed data storage\n- how and where these data are stored.\nAny advice or suggestion would be welcome.\n\n--\nBest regards,\nNikita Malakhov.", "msg_date": "Thu, 23 Jun 2022 15:31:14 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": true, "msg_subject": "TOAST - moving Compression into Toast for oversized Tuples" } ]
[ { "msg_contents": "Some places in walsender.c and basebackup_copy.c open-code the sending \nof RowDescription and DataRow protocol messages. But there are already \nmore compact and robust solutions available for this, using \nDestRemoteSimple and associated machinery, already in use in walsender.c.\n\nThe attached patches 0001 and 0002 are tiny bug fixes I found during this.\n\nPatches 0003 and 0004 are the main refactorings. They should probably \nbe combined into one patch eventually, but this way the treatment of \nRowDescription and DataRow is presented separately.", "msg_date": "Thu, 23 Jun 2022 16:36:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "refactor some protocol message sending in walsender and basebackup" }, { "msg_contents": "On Thu, Jun 23, 2022 at 04:36:36PM +0200, Peter Eisentraut wrote:\n> Some places in walsender.c and basebackup_copy.c open-code the sending of\n> RowDescription and DataRow protocol messages. But there are already more\n> compact and robust solutions available for this, using DestRemoteSimple and\n> associated machinery, already in use in walsender.c.\n> \n> The attached patches 0001 and 0002 are tiny bug fixes I found during this.\n> \n> Patches 0003 and 0004 are the main refactorings. 
They should probably be\n> combined into one patch eventually, but this way the treatment of\n> RowDescription and DataRow is presented separately.\n\nAll 4 patches look reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 14:36:46 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactor some protocol message sending in walsender and\n basebackup" }, { "msg_contents": "On 01.07.22 23:36, Nathan Bossart wrote:\n> On Thu, Jun 23, 2022 at 04:36:36PM +0200, Peter Eisentraut wrote:\n>> Some places in walsender.c and basebackup_copy.c open-code the sending of\n>> RowDescription and DataRow protocol messages. But there are already more\n>> compact and robust solutions available for this, using DestRemoteSimple and\n>> associated machinery, already in use in walsender.c.\n>>\n>> The attached patches 0001 and 0002 are tiny bug fixes I found during this.\n>>\n>> Patches 0003 and 0004 are the main refactorings. They should probably be\n>> combined into one patch eventually, but this way the treatment of\n>> RowDescription and DataRow is presented separately.\n> \n> All 4 patches look reasonable to me.\n\nAll committed now, thanks.\n\n(I cleaned up the 0004 patch a bit more; there was some junk left in the \nposted patch.)\n\n\n", "msg_date": "Wed, 6 Jul 2022 08:51:37 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: refactor some protocol message sending in walsender and\n basebackup" } ]
[ { "msg_contents": "Moved from the pgsql-bugs mailing list [1].\n\nOn 6/23/22 07:03, Masahiko Sawada wrote:\n > Hi,\n >\n > On Sat, Jun 4, 2022 at 4:03 AM Andrey Lepikhov\n > <a.lepikhov@postgrespro.ru> wrote:\n >>\n >> According to subj you can try to create many tables (induced by the case\n >> of partitioned table) with long prefix - see 6727v.sql for reproduction.\n >> But now it's impossible because of logic of the makeUniqueTypeName()\n >> routine.\n >> You get the error:\n >> ERROR: could not form array type name for type ...\n >>\n >> It is very corner case, of course. But solution is easy and short. So,\n >> why not to fix? - See the patch in attachment.\n >\n > While this seems to be a good improvement, I think it's not a bug.\n > Probably we cannot backpatch it as it will end up having type names\n > defined by different naming rules. I'd suggest discussing it on\n > -hackers.\nDone.\n\n > Regarding the patch, I think we can merge makeUniqueTypeName() to\n > makeArrayTypeName() as there is no caller of makeUniqueTypeName() who\n > pass tryOriginal = true.\nI partially agree with you. But I have one reason to leave \nmakeUniqueTypeName() separated:\nIt may be used in other codes with auto generated types. For example, I \nthink, the DefineRelation routine should choose composite type instead \nof using the same name as the table.\n\n > Also looking at other ChooseXXXName()\n > functions, we don't care about integer overflow. 
Is it better to make\n it consistent with them?\nDone.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/121e286f-3796-c9d7-9eab-6fb8e0b9c701%40postgrespro.ru\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Fri, 24 Jun 2022 10:12:41 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Postgres do not allow to create many tables with more than 63-symbols\n prefix" }, { "msg_contents": "Hi,\n\nThanks for working on this.\n\n >> According to subj you can try to create many tables (induced by the case\n> >> of partitioned table) with long prefix - see 6727v.sql for\n> reproduction.\n> >> But now it's impossible because of logic of the makeUniqueTypeName()\n> >> routine.\n> >> You get the error:\n> >> ERROR: could not form array type name for type ...\n> >>\n> >> It is very corner case, of course. But solution is easy and short. So,\n> >> why not to fix? - See the patch in attachment.\n> >\n> > While this seems to be a good improvement, I think it's not a bug.\n> > Probably we cannot backpatch it as it will end up having type names\n> > defined by different naming rules. I'd suggest discussing it on\n> > -hackers.\n> Done.\n>\n\nOn Citus extension, we hit a similar issue while creating partitions (over\nmultiple transactions in parallel). You can see some more discussions on\nthe related Github issue #5334\n<https://github.com/citusdata/citus/issues/5354>. We basically discuss this\nbehavior on the issue.\n\nI tested this patch with the mentioned issue, and as expected the issue is\nresolved.\n\nAlso, in general, the patch looks reasonable, following the approach\nthat ChooseRelationName()\nimplements makes sense to me as well.\n\nOnder KALACI\nDeveloping the Citus extension @Microsoft
", "msg_date": "Fri, 24 Jun 2022 10:09:06 +0200", "msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres do not allow to create many tables with more than\n 63-symbols prefix" }, { "msg_contents": "On Fri, Jun 24, 2022 at 2:12 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> Moved from the pgsql-bugs mailing list [1].\n>\n> On 6/23/22 07:03, Masahiko Sawada wrote:\n> > Hi,\n> >\n> > On Sat, Jun 4, 2022 at 4:03 AM Andrey Lepikhov\n> > <a.lepikhov@postgrespro.ru> wrote:\n> >>\n> >> According to subj you can try to create many tables (induced by the case\n> >> of partitioned table) with long prefix - see 6727v.sql for reproduction.\n> >> But now it's impossible because of logic of the makeUniqueTypeName()\n> >> routine.\n> >> You get the error:\n> >> ERROR: could not form array type name for type ...\n> >>\n> >> It is very corner case, of course. But solution is easy and short. So,\n> >> why not to fix? - See the patch in attachment.\n> >\n> > While this seems to be a good improvement, I think it's not a bug.\n> > Probably we cannot backpatch it as it will end up having type names\n> > defined by different naming rules. I'd suggest discussing it on\n> > -hackers.\n> Done.\n\nThank for updating the patch. Please register this item to the next CF\nif not yet.\n\n>\n> > Regarding the patch, I think we can merge makeUniqueTypeName() to\n> > makeArrayTypeName() as there is no caller of makeUniqueTypeName() who\n> > pass tryOriginal = true.\n> I partially agree with you. But I have one reason to leave\n> makeUniqueTypeName() separated:\n> It may be used in other codes with auto generated types. 
For example, I\n> think, the DefineRelation routine should choose composite type instead\n> of using the same name as the table.\n\nOkay.\n\nI have one comment on v2 patch:\n\n + for(;;)\n {\n - dest[i - 1] = '_';\n - strlcpy(dest + i, typeName, NAMEDATALEN - i);\n - if (namelen + i >= NAMEDATALEN)\n - truncate_identifier(dest, NAMEDATALEN, false);\n -\n if (!SearchSysCacheExists2(TYPENAMENSP,\n - CStringGetDatum(dest),\n + CStringGetDatum(type_name),\n ObjectIdGetDatum(typeNamespace)))\n - return pstrdup(dest);\n + return type_name;\n +\n + /* Previous attempt was failed. Prepare a new one. */\n + pfree(type_name);\n + snprintf(suffix, sizeof(suffix), \"%d\", ++pass);\n + type_name = makeObjectName(\"\", typeName, suffix);\n }\n\n return NULL;\n\nI think it's better to break from the loop instead of returning from\nthere. That way, we won't need \"return NULL\".\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 27 Jun 2022 10:38:50 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres do not allow to create many tables with more than\n 63-symbols prefix" }, { "msg_contents": "On 6/27/22 06:38, Masahiko Sawada wrote:\n> On Fri, Jun 24, 2022 at 2:12 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> On 6/23/22 07:03, Masahiko Sawada wrote:\n>> > On Sat, Jun 4, 2022 at 4:03 AM Andrey Lepikhov\n>> > <a.lepikhov@postgrespro.ru> wrote:\n>> >> It is very corner case, of course. But solution is easy and short. So,\n>> >> why not to fix? - See the patch in attachment.\n>> >\n>> > While this seems to be a good improvement, I think it's not a bug.\n>> > Probably we cannot backpatch it as it will end up having type names\n>> > defined by different naming rules. I'd suggest discussing it on\n>> > -hackers.\n>> Done.\n> \n> Thank for updating the patch. 
Please register this item to the next CF\n> if not yet.\nDone [1].\n\n>> > Regarding the patch, I think we can merge makeUniqueTypeName() to\n>> > makeArrayTypeName() as there is no caller of makeUniqueTypeName() who\n>> > pass tryOriginal = true.\n>> I partially agree with you. But I have one reason to leave\n>> makeUniqueTypeName() separated:\n>> It may be used in other codes with auto generated types. For example, I\n>> think, the DefineRelation routine should choose composite type instead\n>> of using the same name as the table.\n> \n> Okay.\n> \n> I have one comment on v2 patch:\n> \n> + for(;;)\n> {\n> - dest[i - 1] = '_';\n> - strlcpy(dest + i, typeName, NAMEDATALEN - i);\n> - if (namelen + i >= NAMEDATALEN)\n> - truncate_identifier(dest, NAMEDATALEN, false);\n> -\n> if (!SearchSysCacheExists2(TYPENAMENSP,\n> - CStringGetDatum(dest),\n> + CStringGetDatum(type_name),\n> ObjectIdGetDatum(typeNamespace)))\n> - return pstrdup(dest);\n> + return type_name;\n> +\n> + /* Previous attempt was failed. Prepare a new one. */\n> + pfree(type_name);\n> + snprintf(suffix, sizeof(suffix), \"%d\", ++pass);\n> + type_name = makeObjectName(\"\", typeName, suffix);\n> }\n> \n> return NULL;\n> \n> I think it's better to break from the loop instead of returning from\n> there. That way, we won't need \"return NULL\".\nAgree. 
Done.\n\n[1] https://commitfest.postgresql.org/38/3712/\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Mon, 27 Jun 2022 07:57:29 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres do not allow to create many tables with more than\n 63-symbols prefix" }, { "msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> On 6/27/22 06:38, Masahiko Sawada wrote:\n>>>> Regarding the patch, I think we can merge makeUniqueTypeName() to\n>>>> makeArrayTypeName() as there is no caller of makeUniqueTypeName() who\n>>>> pass tryOriginal = true.\n\n>>> I partially agree with you. But I have one reason to leave\n>>> makeUniqueTypeName() separated:\n>>> It may be used in other codes with auto generated types. For example, I\n>>> think, the DefineRelation routine should choose composite type instead\n>>> of using the same name as the table.\n\nNo, this is an absolutely horrid idea. The rule that \"_foo\" means \"array\nof foo\" is quite well known to a lot of client code, and as long as they\ndon't use any type names approaching NAMEDATALEN, it's solid. So we must\nnot build backend code that uses \"_foo\"-like type names for any other\npurpose than arrays.\n\nI suspect in fact that the reason we ended up with this orphaned logic\nis that somebody pointed out this problem somewhere along the development\nof multiranges, whereupon makeMultirangeTypeName was changed to not\nuse the shared code --- but the breakup of makeArrayTypeName wasn't\nundone altogether. It should have been, because it just tempts other\npeople to make the same wrong choice.\n\nPushed with re-merging of the code into makeArrayTypeName and some\nwork on the comments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Jul 2022 15:47:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres do not allow to create many tables with more than\n 63-symbols prefix" } ]
[ { "msg_contents": "\n<sigh>\n\nJust a note/reminder that \"seawasp\" has been unhappy for some days now \nbecause of yet another change in the unstable API provided by LLVM:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2022-06-23%2023%3A18%3A17\n\n llvmjit.c:1115:50: error: use of undeclared identifier 'LLVMJITCSymbolMapPair'\n LLVMOrcCSymbolMapPairs symbols = palloc0(sizeof(LLVMJITCSymbolMapPair) * LookupSetSize);\n\n llvmjit.c:1233:81: error: too few arguments to function call, expected 3, have 2\n ref_gen = LLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL);\n\nThe question is: should pg 15 try to be clang 15 ready? I'm afraid yes, as \nLLVM does 2 releases per year, so clang 15 should come out this Fall, \ntogether with pg 15. Possibly other changes will come before the \nreleases:-/\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 24 Jun 2022 10:35:27 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Future Postgres 15 and Clang 15" }, { "msg_contents": "On Fri, Jun 24, 2022 at 8:35 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Just a note/reminder that \"seawasp\" has been unhappy for some days now\n> because of yet another change in the unstable API provided by LLVM:\n\nHi Fabien,\n\nYeah, I've started on the changes needed for opaque pointers (that's\nthe change that's been generating warnings since LLVM14, and now\naborts in LLVM15), but I haven't figured out all the puzzles yet. 
I\nwill have another go at this this weekend and then post what I've got,\nto show where I'm stuck.\n\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2022-06-23%2023%3A18%3A17\n>\n> llvmjit.c:1115:50: error: use of undeclared identifier 'LLVMJITCSymbolMapPair'\n> LLVMOrcCSymbolMapPairs symbols = palloc0(sizeof(LLVMJITCSymbolMapPair) * LookupSetSize);\n>\n> llvmjit.c:1233:81: error: too few arguments to function call, expected 3, have 2\n> ref_gen = LLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL);\n\nAh yes, I hadn't seen that one yet. That function grew a \"Dispose\"\nargument, which we can just pass NULL for, at a guess:\n\nhttps://github.com/llvm/llvm-project/commit/14b7c108a2bf46541efc3a5c9cbd589b3afc18e6\n\n> The question is: should pg 15 try to be clang 15 ready? I'm afraid yes, as\n> LLVM does 2 releases per year, so clang 15 should come out this Fall,\n> together with pg 15. Possibly other changes will come before the\n> releases:-/\n\nOK let's try to get a patch ready first and then see what we can do.\nI'm more worried about code that compiles OK but then crashes or gives\nwrong query results (like the one for 9b4e4cfe) than I am about code\nthat doesn't compile at all (meaning no one can actually ship it!). I\nthink the way our two projects' release cycles work, there will\noccasionally be short periods where we can't use their very latest\nrelease, but we can try to avoid that...\n\n\n", "msg_date": "Sat, 25 Jun 2022 09:27:28 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Future Postgres 15 and Clang 15" }, { "msg_contents": "Hello Thomas,\n\n>> llvmjit.c:1233:81: error: too few arguments to function call, expected 3, have 2\n>> ref_gen = LLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL);\n>\n> Ah yes, I hadn't seen that one yet. 
That function grew a \"Dispose\"\n> argument, which we can just pass NULL for, at a guess:\n>\n> https://github.com/llvm/llvm-project/commit/14b7c108a2bf46541efc3a5c9cbd589b3afc18e6\n\nI agree with the guess. Whether the NULL induced semantics is the right \none is much less obvious…\n\n>> The question is: should pg 15 try to be clang 15 ready? I'm afraid yes, as\n>> LLVM does 2 releases per year, so clang 15 should come out this Fall,\n>> together with pg 15. Possibly other changes will come before the\n>> releases:-/\n>\n> OK let's try to get a patch ready first and then see what we can do.\n> I'm more worried about code that compiles OK but then crashes or gives\n> wrong query results (like the one for 9b4e4cfe) than I am about code\n> that doesn't compile at all (meaning no one can actually ship it!). I\n> think the way our two projects' release cycles work, there will\n> occasionally be short periods where we can't use their very latest\n> release, but we can try to avoid that...\n\nYep. The part which would worry me is the code complexity and kludges \ninduced by trying to support a moving API. Maybe careful header-handled \nmacros can do the trick (eg for an added parameter as above), but I'm \nafraid it cannot always be that simple.\n\n-- \nFabien.", "msg_date": "Sat, 25 Jun 2022 08:24:30 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Future Postgres 15 and Clang 15" } ]
[ { "msg_contents": "When parsing JSON, strings need to be converted from the JSON string\nformat to a C-style string. A simple copy of the buffer does not suffice\nbecause of the various escape sequences that JSON supports. Because\nof this, our JSON parser wrote characters into the C-style string buffer\none at a time.\n\nHowever, this is only necessary for those escape sequences that map to\nanother character. This patch changes the behaviour for non-escaped\ncharacters. These are now copied in batches instead of one character at\na time.\n\nTo test performance of this change I used COPY BINARY from a JSONB table\ninto another, containing fairly JSONB values of ~15kB. The JSONB values\nare a JSON object with a single level. They contain a few small keys and\nvalues, but one very big value that's a stringified JSON blob. So this\nJSON blob contains a relatively high number of escape characters, to\nescape all the \" characters. This change improves performance for\nthis workload on my machine by ~18% (going from 1m24s to 1m09s).\n\n@Andres, there was indeed some low hanging fruit. \n@John Naylor, SSE2 indeed sounds like another nice improvement. I'll leave \nthat to you.", "msg_date": "Fri, 24 Jun 2022 08:47:09 +0000", "msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "Hi,\n\nOn 2022-06-24 08:47:09 +0000, Jelte Fennema wrote:\n> To test performance of this change I used COPY BINARY from a JSONB table\n> into another, containing fairly JSONB values of ~15kB.\n\nThis will have a lot of other costs included (DML is expensive). I'd suggest\nstoring the json in a text column and casting it to json[b], with a filter\nontop of the json[b] result that cheaply filters it away. 
That should end up\nspending nearly all the time somewhere around json parsing.\n\nIt's useful for things like this to include a way for others to use the same\nbenchmark...\n\nI tried your patch with:\n\nDROP TABLE IF EXISTS json_as_text;\nCREATE TABLE json_as_text AS SELECT (SELECT json_agg(row_to_json(pd)) as t FROM pg_description pd) FROM generate_series(1, 100);\nVACUUM FREEZE json_as_text;\n\nSELECT 1 FROM json_as_text WHERE jsonb_typeof(t::jsonb) = 'not me';\n\nWhich the patch improves from 846ms to 754ms (best of three). A bit smaller\nthan your improvement, but still nice.\n\n\nI think your patch doesn't quite go far enough - we still end up looping for\neach character, have the added complication of needing to flush the\n\"buffer\". I'd be surprised if a \"dedicated\" loop to see until where the string\nlast isn't faster. That then obviously could be SIMDified.\n\n\nSeparately, it seems pretty awful efficiency / code density wise to have the\nNULL checks for ->strval all over. Might be worth forcing json_lex() and\njson_lex_string() to be inlined, with a constant parameter deciding whether\n->strval is expected. That'd likely be enough to get the compiler specialize\nthe code for us.\n\n\nMight also be worth to maintain ->strval using appendBinaryStringInfoNT().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jun 2022 17:18:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "Hi,\n\nOn 2022-06-24 17:18:10 -0700, Andres Freund wrote:\n> On 2022-06-24 08:47:09 +0000, Jelte Fennema wrote:\n> > To test performance of this change I used COPY BINARY from a JSONB table\n> > into another, containing fairly JSONB values of ~15kB.\n> \n> This will have a lot of other costs included (DML is expensive). 
I'd suggest\n> storing the json in a text column and casting it to json[b], with a filter\n> ontop of the json[b] result that cheaply filters it away. That should end up\n> spending nearly all the time somewhere around json parsing.\n> \n> It's useful for things like this to include a way for others to use the same\n> benchmark...\n> \n> I tried your patch with:\n> \n> DROP TABLE IF EXISTS json_as_text;\n> CREATE TABLE json_as_text AS SELECT (SELECT json_agg(row_to_json(pd)) as t FROM pg_description pd) FROM generate_series(1, 100);\n> VACUUM FREEZE json_as_text;\n> \n> SELECT 1 FROM json_as_text WHERE jsonb_typeof(t::jsonb) = 'not me';\n> \n> Which the patch improves from 846ms to 754ms (best of three). A bit smaller\n> than your improvement, but still nice.\n> \n> \n> I think your patch doesn't quite go far enough - we still end up looping for\n> each character, have the added complication of needing to flush the\n> \"buffer\". I'd be surprised if a \"dedicated\" loop to see until where the string\n> last isn't faster. That then obviously could be SIMDified.\n\nA naive implementation (attached) of that gets me down to 706ms.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 24 Jun 2022 18:05:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Sat, Jun 25, 2022 at 8:05 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > I tried your patch with:\n> >\n> > DROP TABLE IF EXISTS json_as_text;\n> > CREATE TABLE json_as_text AS SELECT (SELECT json_agg(row_to_json(pd)) as t FROM pg_description pd) FROM generate_series(1, 100);\n> > VACUUM FREEZE json_as_text;\n> >\n> > SELECT 1 FROM json_as_text WHERE jsonb_typeof(t::jsonb) = 'not me';\n> >\n> > Which the patch improves from 846ms to 754ms (best of three). 
A bit smaller\n> > than your improvement, but still nice.\n> >\n> >\n> > I think your patch doesn't quite go far enough - we still end up looping for\n> > each character, have the added complication of needing to flush the\n> > \"buffer\". I'd be surprised if a \"dedicated\" loop to see until where the string\n> > last isn't faster. That then obviously could be SIMDified.\n>\n> A naive implementation (attached) of that gets me down to 706ms.\n\nTaking this a step further, I modified json_lex and json_lex_string to\nuse a const end pointer instead of maintaining the length (0001). The\nobserved speedup is small enough that it might not be real, but the\ncode is simpler this way, and it makes 0002 and 0003 easier to reason\nabout. Then I modified your patch to do the same (0002). Hackish SSE2\nsupport is in 0003.\n\nTo exercise the SIMD code a bit, I added a second test:\n\nDROP TABLE IF EXISTS long_json_as_text;\nCREATE TABLE long_json_as_text AS\nwith long as (\n select repeat(description, 10) from pg_description pd\n)\nSELECT (select json_agg(row_to_json(long)) as t from long) from\ngenerate_series(1, 100);\nVACUUM FREEZE long_json_as_text;\n\nSELECT 1 FROM long_json_as_text WHERE jsonb_typeof(t::jsonb) = 'not me';\n\nWith this, I get (turbo disabled, min of 3):\n\nshort test:\nmaster: 769ms\n0001: 751ms\n0002: 703ms\n0003: 701ms\n\nlong test;\nmaster: 939ms\n0001: 883ms\n0002: 537ms\n0003: 439ms\n\nI think 0001/2 are mostly in committable shape.\n\nWith 0003, I'd want to make the portability check a bit nicer and more\ncentralized. I'm thinking of modifying the CRC check to report that\nthe host cpu/compiler understands SSE4.2 x86 intrinsics, and then the\ncompile time SSE2 check can piggyback on top of that without a runtime\ncheck. This is conceptually easy but a bit of work to not look like a\nhack (which probably means the ARM CRC check should look more generic\nsomehow...). 
The regression tests will likely need some work as well.\n\n> Separately, it seems pretty awful efficiency / code density wise to have the\n> NULL checks for ->strval all over. Might be worth forcing json_lex() and\n> json_lex_string() to be inlined, with a constant parameter deciding whether\n> ->strval is expected. That'd likely be enough to get the compiler specialize\n> the code for us.\n\nI had a look at this but it's a bit more invasive than I want to\ndevote time to at this point.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 Jul 2022 12:10:20 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "Hi,\n\nOn 2022-07-06 12:10:20 +0700, John Naylor wrote:\n> diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c\n> index eeedc0645a..ad4858c623 100644\n> --- a/src/common/jsonapi.c\n> +++ b/src/common/jsonapi.c\n> @@ -851,10 +851,26 @@ json_lex_string(JsonLexContext *lex)\n> \t\t}\n> \t\telse if (lex->strval != NULL)\n> \t\t{\n> +\t\t\t/* start lookahead at next byte */\n> +\t\t\tchar\t *p = s + 1;\n> +\n> \t\t\tif (hi_surrogate != -1)\n> \t\t\t\treturn JSON_UNICODE_LOW_SURROGATE;\n> \n> -\t\t\tappendStringInfoChar(lex->strval, *s);\n> +\t\t\twhile (p < end)\n> +\t\t\t{\n> +\t\t\t\tif (*p == '\\\\' || *p == '\"' || (unsigned char) *p < 32)\n> +\t\t\t\t\tbreak;\n> +\t\t\t\tp++;\n> +\t\t\t}\n> +\n> +\t\t\tappendBinaryStringInfo(lex->strval, s, p - s);\n> +\n> +\t\t\t/*\n> +\t\t\t * s will be incremented at the top of the loop, so set it to just\n> +\t\t\t * behind our lookahead position\n> +\t\t\t */\n> +\t\t\ts = p - 1;\n> \t\t}\n> \t}\n> \n> -- \n> 2.36.1\n\nI think before committing something along those lines we should make the\nrelevant bits also be applicable when ->strval is NULL, as several functions\nuse that (notably json_in IIRC). 
Afaics we'd just need to move the strval\ncheck to be around the appendBinaryStringInfo(). And it should simplify the\nfunction, because some of the relevant code is duplicated outside as well...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Jul 2022 22:18:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Wed, Jul 6, 2022 at 12:18 PM Andres Freund <andres@anarazel.de> wrote:\n\n> I think before committing something along those lines we should make the\n> relevant bits also be applicable when ->strval is NULL, as several functions\n> use that (notably json_in IIRC). Afaics we'd just need to move the strval\n> check to be around the appendBinaryStringInfo().\n\nThat makes sense and is easy.\n\n> And it should simplify the\n> function, because some of the relevant code is duplicated outside as well...\n\nNot sure how far to take this, but I put the returnable paths inside\nthe \"other\" path, so only backslash will go back to the top.\n\nBoth the above changes are split into a new 0003 patch for easier\nreview, but in the end will likely be squashed with 0002.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 Jul 2022 15:58:44 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "I've pushed 0001 (although the email seems to have been swallowed\nagain), and pending additional comments on 0002 and 0003 I'll squash\nand push those next week. 
0004 needs some thought on integrating with\nsymbols we discover during configure.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Jul 2022 15:06:54 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Fri, Jul 8, 2022 at 3:06 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> I've pushed 0001 (although the email seems to have been swallowed\n> again), and pending additional comments on 0002 and 0003 I'll squash\n> and push those next week.\n\nThis is done.\n\n> 0004 needs some thought on integrating with\n> symbols we discover during configure.\n\nStill needs thought.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Jul 2022 11:32:51 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "Hi,\n\nOn 2022-07-06 15:58:44 +0700, John Naylor wrote:\n> From 82e13b6bebd85a152ededcfd75495c0c0f642354 Mon Sep 17 00:00:00 2001\n> From: John Naylor <john.naylor@postgresql.org>\n> Date: Wed, 6 Jul 2022 15:50:09 +0700\n> Subject: [PATCH v4 4/4] Use vectorized lookahead in json_lex_string on x86\n> \n> ---\n> src/common/jsonapi.c | 48 ++++++++++++++++++++++++++++++++++++++++++++\n> 1 file changed, 48 insertions(+)\n> \n> diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c\n> index 81e176ad8d..44e8ed2b2f 100644\n> --- a/src/common/jsonapi.c\n> +++ b/src/common/jsonapi.c\n> @@ -24,6 +24,12 @@\n> #include \"miscadmin.h\"\n> #endif\n> \n> +/* WIP: put somewhere sensible and consider removing CRC from the names */\n> +#if (defined (__x86_64__) || defined(_M_AMD64)) && (defined(USE_SSE42_CRC32C) || defined(USE_SSE42_CRC32C_WITH_RUNTIME_CHECK))\n> +#include <nmmintrin.h>\n> +#define USE_SSE2\n> +#endif\n> +\n> /*\n> * The context 
of the parser is maintained by the recursive descent\n> * mechanism, but is passed explicitly to the error reporting routine\n> @@ -842,12 +848,54 @@ json_lex_string(JsonLexContext *lex)\n> \t\t}\n> \t\telse\n> \t\t{\n> +#ifdef USE_SSE2\n> +\t\t\t__m128i\t\tblock,\n> +\t\t\t\t\t\thas_backslash,\n> +\t\t\t\t\t\thas_doublequote,\n> +\t\t\t\t\t\tcontrol,\n> +\t\t\t\t\t\thas_control,\n> +\t\t\t\t\t\terror_cum = _mm_setzero_si128();\n> +\t\t\tconst\t\t__m128i backslash = _mm_set1_epi8('\\\\');\n> +\t\t\tconst\t\t__m128i doublequote = _mm_set1_epi8('\"');\n> +\t\t\tconst\t\t__m128i max_control = _mm_set1_epi8(0x1F);\n> +#endif\n> \t\t\t/* start lookahead at current byte */\n> \t\t\tchar\t *p = s;\n> \n> \t\t\tif (hi_surrogate != -1)\n> \t\t\t\treturn JSON_UNICODE_LOW_SURROGATE;\n> \n> +#ifdef USE_SSE2\n> +\t\t\twhile (p < end - sizeof(__m128i))\n> +\t\t\t{\n> +\t\t\t\tblock = _mm_loadu_si128((const __m128i *) p);\n> +\n> +\t\t\t\t/* direct comparison to quotes and backslashes */\n> +\t\t\t\thas_backslash = _mm_cmpeq_epi8(block, backslash);\n> +\t\t\t\thas_doublequote = _mm_cmpeq_epi8(block, doublequote);\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t * use saturation arithmetic to check for <= highest control\n> +\t\t\t\t * char\n> +\t\t\t\t */\n> +\t\t\t\tcontrol = _mm_subs_epu8(block, max_control);\n> +\t\t\t\thas_control = _mm_cmpeq_epi8(control, _mm_setzero_si128());\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t * set bits in error_cum where the corresponding lanes in has_*\n> +\t\t\t\t * are set\n> +\t\t\t\t */\n> +\t\t\t\terror_cum = _mm_or_si128(error_cum, has_backslash);\n> +\t\t\t\terror_cum = _mm_or_si128(error_cum, has_doublequote);\n> +\t\t\t\terror_cum = _mm_or_si128(error_cum, has_control);\n> +\n> +\t\t\t\tif (_mm_movemask_epi8(error_cum))\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tp += sizeof(__m128i);\n> +\t\t\t}\n> +#endif\t\t\t\t\t\t\t/* USE_SSE2 */\n> +\n> \t\t\twhile (p < end)\n> \t\t\t{\n> \t\t\t\tif (*p == '\\\\' || *p == '\"')\n> -- \n> 2.36.1\n> \n\nI wonder if we can't 
abstract this at least a bit better. If we go that route\na bit further, then add another arch, this code will be pretty much\nunreadable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 08:41:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wonder if we can't abstract this at least a bit better. If we go that route\n> a bit further, then add another arch, this code will be pretty much\n> unreadable.\n\nIMO, it's pretty unreadable *now*, for lack of comments about what it's\ndoing and why.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 11:53:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "Hi,\n\nOn 2022-07-11 11:53:26 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I wonder if we can't abstract this at least a bit better. If we go that route\n> > a bit further, then add another arch, this code will be pretty much\n> > unreadable.\n>\n> IMO, it's pretty unreadable *now*, for lack of comments about what it's\n> doing and why.\n\nYea, that could at least be addressed by adding comments. But even with a\nbunch of comments, it'd still be pretty hard to read once the events above\nhave happened (and they seem kind of inevitable).\n\nI wonder if we can add a somewhat more general function for scanning until\nsome characters are found using SIMD? 
There's plenty other places that could\nbe useful.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 09:07:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Mon, Jul 11, 2022 at 11:07 PM Andres Freund <andres@anarazel.de> wrote:\n\n> I wonder if we can add a somewhat more general function for scanning until\n> some characters are found using SIMD? There's plenty other places that\ncould\n> be useful.\n\nIn simple cases, we could possibly abstract the entire loop. With this\nparticular case, I imagine the most approachable way to write the loop\nwould be a bit more low-level:\n\nwhile (p < end - VECTOR_WIDTH &&\n !vector_has_byte(p, '\\\\') &&\n !vector_has_byte(p, '\"') &&\n vector_min_byte(p, 0x20))\n p += VECTOR_WIDTH\n\nI wonder if we'd lose a bit of efficiency here by not accumulating set bits\nfrom the three conditions, but it's worth trying.\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Tue, 12 Jul 2022 13:57:48 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" },
{ "msg_contents": "\nOn 2022-06-24 Fr 20:18, Andres Freund wrote:\n> Hi,\n>\n> On 2022-06-24 08:47:09 +0000, Jelte Fennema wrote:\n>> To test performance of this change I used COPY BINARY from a JSONB table\n>> into another, containing fairly JSONB values of ~15kB.\n> This will have a lot of other costs included (DML is expensive). I'd suggest\n> storing the json in a text column and casting it to json[b], with a filter\n> ontop of the json[b] result that cheaply filters it away. That should end up\n> spending nearly all the time somewhere around json parsing.\n>\n> It's useful for things like this to include a way for others to use the same\n> benchmark...\n>\n> I tried your patch with:\n>\n> DROP TABLE IF EXISTS json_as_text;\n> CREATE TABLE json_as_text AS SELECT (SELECT json_agg(row_to_json(pd)) as t FROM pg_description pd) FROM generate_series(1, 100);\n> VACUUM FREEZE json_as_text;\n>\n> SELECT 1 FROM json_as_text WHERE jsonb_typeof(t::jsonb) = 'not me';\n\n\nI've been doing some other work related to json parsing and John\nreferred me to this. But it's actually not the best test for pure json\nparsing - casting to jsonb involves some extra work besides pure\nparsing. 
Instead I've been using this query with the same table, which\nshould be almost all json parsing:\n\n\nselect 1 from json_as_text where t::json is null;\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Jul 2022 11:03:43 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "I wrote\n\n> On Mon, Jul 11, 2022 at 11:07 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > I wonder if we can add a somewhat more general function for scanning until\n> > some characters are found using SIMD? There's plenty other places that could\n> > be useful.\n>\n> In simple cases, we could possibly abstract the entire loop. With this particular case, I imagine the most approachable way to write the loop would be a bit more low-level:\n>\n> while (p < end - VECTOR_WIDTH &&\n> !vector_has_byte(p, '\\\\') &&\n> !vector_has_byte(p, '\"') &&\n> vector_min_byte(p, 0x20))\n> p += VECTOR_WIDTH\n>\n> I wonder if we'd lose a bit of efficiency here by not accumulating set bits from the three conditions, but it's worth trying.\n\nThe attached implements the above, more or less, using new pg_lfind8()\nand pg_lfind8_le(), which in turn are based on helper functions that\nact on a single vector. The pg_lfind* functions have regression tests,\nbut I haven't done the same for json yet. 
I went the extra step to use\nbit-twiddling for non-SSE builds using uint64 as a \"vector\", which\nstill gives a pretty good boost (test below, min of 3):\n\nmaster:\n356ms\n\nv5:\n259ms\n\nv5 disable SSE:\n288ms\n\nIt still needs a bit of polishing and testing, but I think it's a good\nworkout for abstracting SIMD out of the way.\n\n-------------\ntest:\n\nDROP TABLE IF EXISTS long_json_as_text;\nCREATE TABLE long_json_as_text AS\nwith long as (\n select repeat(description, 11)\n from pg_description\n)\nselect (select json_agg(row_to_json(long))::text as t from long) from\ngenerate_series(1, 100);\nVACUUM FREEZE long_json_as_text;\n\nselect 1 from long_json_as_text where t::json is null; -- from Andrew upthread\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 15 Aug 2022 20:33:21 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Mon, Aug 15, 2022 at 08:33:21PM +0700, John Naylor wrote:\n> The attached implements the above, more or less, using new pg_lfind8()\n> and pg_lfind8_le(), which in turn are based on helper functions that\n> act on a single vector. The pg_lfind* functions have regression tests,\n> but I haven't done the same for json yet. 
I went the extra step to use\n> bit-twiddling for non-SSE builds using uint64 as a \"vector\", which\n> still gives a pretty good boost (test below, min of 3):\n\nLooks pretty reasonable to me.\n\n> +#ifdef USE_SSE2\n> +\t\tchunk = _mm_loadu_si128((const __m128i *) &base[i]);\n> +#else\n> +\t\tmemcpy(&chunk, &base[i], sizeof(chunk));\n> +#endif\t\t\t\t\t\t\t/* USE_SSE2 */\n\n> +#ifdef USE_SSE2\n> +\t\tchunk = _mm_loadu_si128((const __m128i *) &base[i]);\n> +#else\n> +\t\tmemcpy(&chunk, &base[i], sizeof(chunk));\n> +#endif\t\t\t\t\t\t\t/* USE_SSE2 */\n\nPerhaps there should be a macro or inline function for loading a vector so\nthat these USE_SSE2 checks can be abstracted away, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 Aug 2022 14:23:04 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Tue, Aug 16, 2022 at 4:23 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Aug 15, 2022 at 08:33:21PM +0700, John Naylor wrote:\n> > +#ifdef USE_SSE2\n> > + chunk = _mm_loadu_si128((const __m128i *) &base[i]);\n> > +#else\n> > + memcpy(&chunk, &base[i], sizeof(chunk));\n> > +#endif /* USE_SSE2 */\n>\n> Perhaps there should be a macro or inline function for loading a vector so\n> that these USE_SSE2 checks can be abstracted away, too.\n\nThis is done. 
Also:\n- a complete overhaul of the pg_lfind8* tests\n- using a typedef for the vector type\n- some refactoring, name changes and other cleanups (a few of these\ncould also be applied to the 32-byte element path, but that is left\nfor future work)\n\nTODO: json-specific tests of the new path\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 19 Aug 2022 15:11:36 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Fri, Aug 19, 2022 at 03:11:36PM +0700, John Naylor wrote:\n> This is done. Also:\n> - a complete overhaul of the pg_lfind8* tests\n> - using a typedef for the vector type\n> - some refactoring, name changes and other cleanups (a few of these\n> could also be applied to the 32-byte element path, but that is left\n> for future work)\n> \n> TODO: json-specific tests of the new path\n\nThis looks pretty good to me. Should we rename vector_broadcast() and\nvector_has_zero() to indicate that they are working with bytes (e.g.,\nvector_broadcast_byte())? We might be able to use vector_broadcast_int()\nin the 32-bit functions, and your other vector functions already have a\n_byte suffix.\n\nIn general, the approach you've taken seems like a decent readability\nimprovement. I'd be happy to try my hand at adjusting the 32-bit path and\nadding ARM versions of all this stuff.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 Aug 2022 13:42:15 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Fri, Aug 19, 2022 at 01:42:15PM -0700, Nathan Bossart wrote:\n> On Fri, Aug 19, 2022 at 03:11:36PM +0700, John Naylor wrote:\n>> This is done. 
Also:\n>> - a complete overhaul of the pg_lfind8* tests\n>> - using a typedef for the vector type\n>> - some refactoring, name changes and other cleanups (a few of these\n>> could also be applied to the 32-byte element path, but that is left\n>> for future work)\n>> \n>> TODO: json-specific tests of the new path\n> \n> This looks pretty good to me. Should we rename vector_broadcast() and\n> vector_has_zero() to indicate that they are working with bytes (e.g.,\n> vector_broadcast_byte())? We might be able to use vector_broadcast_int()\n> in the 32-bit functions, and your other vector functions already have a\n> _byte suffix.\n> \n> In general, the approach you've taken seems like a decent readability\n> improvement. I'd be happy to try my hand at adjusting the 32-bit path and\n> adding ARM versions of all this stuff.\n\nI spent some more time looking at this one, and I had a few ideas that I\nthought I'd share. 0001 is your v6 patch with a few additional changes,\nincluding simplying the assertions for readability, splitting out the\nVector type into Vector8 and Vector32 (needed for ARM), and adjusting\npg_lfind32() to use the new tools in simd.h. 0002 adds ARM versions of\neverything, which obsoletes the other thread I started [0]. 
This is still\na little rough around the edges (e.g., this should probably be more than 2\npatches), but I think it helps demonstrate a more comprehensive design than\nwhat I've proposed in the pg_lfind32-for-ARM thread [0].\n\nApologies if I'm stepping on your toes a bit here.\n\n[0] https://postgr.es/m/20220819200829.GA395728%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 20 Aug 2022 22:47:33 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Sun, Aug 21, 2022 at 12:47 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> I spent some more time looking at this one, and I had a few ideas that I\n> thought I'd share. 0001 is your v6 patch with a few additional changes,\n> including simplying the assertions for readability, splitting out the\n> Vector type into Vector8 and Vector32 (needed for ARM), and adjusting\n> pg_lfind32() to use the new tools in simd.h. 0002 adds ARM versions of\n> everything, which obsoletes the other thread I started [0]. This is still\n> a little rough around the edges (e.g., this should probably be more than 2\n> patches), but I think it helps demonstrate a more comprehensive design than\n> what I've proposed in the pg_lfind32-for-ARM thread [0].\n>\n> Apologies if I'm stepping on your toes a bit here.\n\nNot at all! However, the 32-bit-element changes are irrelevant for\njson, and make review more difficult. I would suggest keeping those in\nthe other thread starting with whatever refactoring is needed. I can\nalways rebase over that.\n\nNot a full review, but on a brief look:\n\n- I like the idea of simplifying the assertions, but I can't get\nbehind using platform lfind to do it, since it has a different API,\nrequires new functions we don't need, and possibly has portability\nissues. 
A simple for-loop is better for assertions.\n- A runtime elog is not appropriate for a compile time check -- use\n#error instead.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Aug 2022 09:35:34 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Mon, Aug 22, 2022 at 09:35:34AM +0700, John Naylor wrote:\n> Not at all! However, the 32-bit-element changes are irrelevant for\n> json, and make review more difficult. I would suggest keeping those in\n> the other thread starting with whatever refactoring is needed. I can\n> always rebase over that.\n\nYeah, I'll remove those to keep this thread focused.\n\n> - I like the idea of simplifying the assertions, but I can't get\n> behind using platform lfind to do it, since it has a different API,\n> requires new functions we don't need, and possibly has portability\n> issues. A simple for-loop is better for assertions.\n\nMy main goal with this was improving readability, which is likely possible\nwithout lfind(). I'll see what I can do.\n\n> - A runtime elog is not appropriate for a compile time check -- use\n> #error instead.\n\nWill do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Aug 2022 14:22:29 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Mon, Aug 22, 2022 at 02:22:29PM -0700, Nathan Bossart wrote:\n> On Mon, Aug 22, 2022 at 09:35:34AM +0700, John Naylor wrote:\n>> Not at all! However, the 32-bit-element changes are irrelevant for\n>> json, and make review more difficult. I would suggest keeping those in\n>> the other thread starting with whatever refactoring is needed. 
I can\n>> always rebase over that.\n> \n> Yeah, I'll remove those to keep this thread focused.\n\nHere's a new version of the patch with the 32-bit changes and calls to\nlfind() removed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 22 Aug 2022 20:32:36 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Tue, Aug 23, 2022 at 10:32 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Mon, Aug 22, 2022 at 02:22:29PM -0700, Nathan Bossart wrote:\n> > On Mon, Aug 22, 2022 at 09:35:34AM +0700, John Naylor wrote:\n> >> Not at all! However, the 32-bit-element changes are irrelevant for\n> >> json, and make review more difficult. I would suggest keeping those in\n> >> the other thread starting with whatever refactoring is needed. I can\n> >> always rebase over that.\n> >\n> > Yeah, I'll remove those to keep this thread focused.\n>\n> Here's a new version of the patch with the 32-bit changes and calls to\n> lfind() removed.\n\nLGTM overall. My plan is to split out the json piece, adding tests for\nthat, and commit the infrastructure for it fairly soon. Possible\nbikeshedding: Functions like vector8_eq() might be misunderstood as\ncomparing two vectors, but here we are comparing each lane with a\nscalar. I wonder if vector8_eq_scalar() et al might be more clear.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Aug 2022 13:03:03 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Tue, Aug 23, 2022 at 01:03:03PM +0700, John Naylor wrote:\n> On Tue, Aug 23, 2022 at 10:32 AM Nathan Bossart\n>> Here's a new version of the patch with the 32-bit changes and calls to\n>> lfind() removed.\n> \n> LGTM overall. 
My plan is to split out the json piece, adding tests for\n> that, and commit the infrastructure for it fairly soon. Possible\n> bikeshedding: Functions like vector8_eq() might be misunderstood as\n> comparing two vectors, but here we are comparing each lane with a\n> scalar. I wonder if vector8_eq_scalar() et al might be more clear.\n\nGood point. I had used vector32_veq() to denote vector comparison, which\nwould extend to something like vector8_seq(). But that doesn't seem\ndescriptive enough. It might be worth considering vector8_contains() or\nvector8_has() as well. I don't really have an opinion, but if I had to\npick something, I guess I'd choose vector8_contains().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 23 Aug 2022 10:15:46 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Wed, Aug 24, 2022 at 12:15 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Tue, Aug 23, 2022 at 01:03:03PM +0700, John Naylor wrote:\n> > On Tue, Aug 23, 2022 at 10:32 AM Nathan Bossart\n> >> Here's a new version of the patch with the 32-bit changes and calls to\n> >> lfind() removed.\n> >\n> > LGTM overall. My plan is to split out the json piece, adding tests for\n> > that, and commit the infrastructure for it fairly soon. Possible\n> > bikeshedding: Functions like vector8_eq() might be misunderstood as\n> > comparing two vectors, but here we are comparing each lane with a\n> > scalar. I wonder if vector8_eq_scalar() et al might be more clear.\n>\n> Good point. I had used vector32_veq() to denote vector comparison, which\n> would extend to something like vector8_seq(). But that doesn't seem\n> descriptive enough. It might be worth considering vector8_contains() or\n> vector8_has() as well. 
I don't really have an opinion, but if I had to\n> pick something, I guess I'd choose vector8_contains().\n\nIt seems \"scalar\" would be a bad choice since it already means\n(confusingly) operating on the least significant element of a vector.\nI'm thinking of *_has and *_has_le, matching the already existing in\nthe earlier patch *_has_zero.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Aug 2022 11:59:25 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Wed, Aug 24, 2022 at 11:59:25AM +0700, John Naylor wrote:\n> It seems \"scalar\" would be a bad choice since it already means\n> (confusingly) operating on the least significant element of a vector.\n> I'm thinking of *_has and *_has_le, matching the already existing in\n> the earlier patch *_has_zero.\n\nThat seems reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Aug 2022 09:56:03 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Wed, Aug 24, 2022 at 11:56 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Wed, Aug 24, 2022 at 11:59:25AM +0700, John Naylor wrote:\n> > It seems \"scalar\" would be a bad choice since it already means\n> > (confusingly) operating on the least significant element of a vector.\n> > I'm thinking of *_has and *_has_le, matching the already existing in\n> > the earlier patch *_has_zero.\n>\n> That seems reasonable to me.\n\nOkay, done that way, also in v9:\n- a convenience macro in the test suite which is handy now and can be\nused for 32-bit element tests if we like\n- more tests\n- pgindent and some additional comment smithing\n- split out the json piece for a later commit\n- For the 
following comment, pgindent will put spaced operands on a\nseparate line which is not great for readability. and our other\nreference to the Stanford bithacks page keeps the in-page link, and I\nsee no reason to exclude it -- if it goes missing, the whole page will\nstill load. So I put back those two details.\n\n+ * To find bytes <= c, we can use bitwise operations to find\nbytes < c+1,\n+ * but it only works if c+1 <= 128 and if the highest bit in v\nis not set.\n+ * Adapted from\n+ * https://graphics.stanford.edu/~seander/bithacks.html#HasLessInWord\n\nI think I'll go ahead and commit 0001 in a couple days pending further comments.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 25 Aug 2022 13:35:45 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Thu, Aug 25, 2022 at 01:35:45PM +0700, John Naylor wrote:\n> - For the following comment, pgindent will put spaced operands on a\n> separate line which is not great for readability. and our other\n> reference to the Stanford bithacks page keeps the in-page link, and I\n> see no reason to exclude it -- if it goes missing, the whole page will\n> still load. 
So I put back those two details.\n> \n> + * To find bytes <= c, we can use bitwise operations to find\n> bytes < c+1,\n> + * but it only works if c+1 <= 128 and if the highest bit in v\n> is not set.\n> + * Adapted from\n> + * https://graphics.stanford.edu/~seander/bithacks.html#HasLessInWord\n\nThis was just unnecessary fiddling on my part, sorry about that.\n\n> +test_lfind8_internal(uint8 key)\n> +{\n> +\tuint8\t\tcharbuf[LEN_WITH_TAIL(Vector8)];\n> +\tconst int\tlen_no_tail = LEN_NO_TAIL(Vector8);\n> +\tconst int\tlen_with_tail = LEN_WITH_TAIL(Vector8);\n> +\n> +\tmemset(charbuf, 0xFF, len_with_tail);\n> +\t/* search tail to test one-byte-at-a-time path */\n> +\tcharbuf[len_with_tail - 1] = key;\n> +\tif (key > 0x00 && pg_lfind8(key - 1, charbuf, len_with_tail))\n> +\t\telog(ERROR, \"pg_lfind8() found nonexistent element <= '0x%x'\", key - 1);\n> +\tif (key < 0xFF && !pg_lfind8(key, charbuf, len_with_tail))\n> +\t\telog(ERROR, \"pg_lfind8() did not find existing element <= '0x%x'\", key);\n> +\tif (key < 0xFE && pg_lfind8(key + 1, charbuf, len_with_tail))\n> +\t\telog(ERROR, \"pg_lfind8() found nonexistent element <= '0x%x'\", key + 1);\n> +\n> +\tmemset(charbuf, 0xFF, len_with_tail);\n> +\t/* search with vector operations */\n> +\tcharbuf[len_no_tail - 1] = key;\n> +\tif (key > 0x00 && pg_lfind8(key - 1, charbuf, len_no_tail))\n> +\t\telog(ERROR, \"pg_lfind8() found nonexistent element <= '0x%x'\", key - 1);\n> +\tif (key < 0xFF && !pg_lfind8(key, charbuf, len_no_tail))\n> +\t\telog(ERROR, \"pg_lfind8() did not find existing element <= '0x%x'\", key);\n> +\tif (key < 0xFE && pg_lfind8(key + 1, charbuf, len_no_tail))\n> +\t\telog(ERROR, \"pg_lfind8() found nonexistent element <= '0x%x'\", key + 1);\n> +}\n\nnitpick: Shouldn't the elog() calls use \"==\" instead of \"<=\" for this one?\n\nOtherwise, 0001 looks good to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Aug 2022 20:14:37 -0700", "msg_from": "Nathan 
Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Fri, Aug 26, 2022 at 10:14 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> > +test_lfind8_internal(uint8 key)\n[...]\n> > + elog(ERROR, \"pg_lfind8() found nonexistent element <= '0x%x'\", key + 1);\n> > +}\n>\n> nitpick: Shouldn't the elog() calls use \"==\" instead of \"<=\" for this one?\n\nGood catch, will fix.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Aug 2022 10:48:30 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Thu, Aug 25, 2022 at 1:35 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> I think I'll go ahead and commit 0001 in a couple days pending further comments.\n\nPushed with Nathan's correction and some cosmetic rearrangements.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Aug 2022 15:03:48 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Tue, Aug 23, 2022 at 1:03 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> LGTM overall. My plan is to split out the json piece, adding tests for\n> that, and commit the infrastructure for it fairly soon.\n\nHere's the final piece. I debated how many tests to add and decided it\nwas probably enough to add one each for checking quotes and\nbackslashes in the fast path. There is one cosmetic change in the\ncode: Before, the vectorized less-equal check compared to 0x1F, but\nthe byte-wise path did so with < 32. I made them both \"less-equal 31\"\nfor consistency. 
I'll commit this by the end of the week unless anyone\nhas a better idea about testing.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 31 Aug 2022 10:50:39 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Wed, Aug 31, 2022 at 10:50:39AM +0700, John Naylor wrote:\n> Here's the final piece. I debated how many tests to add and decided it\n> was probably enough to add one each for checking quotes and\n> backslashes in the fast path. There is one cosmetic change in the\n> code: Before, the vectorized less-equal check compared to 0x1F, but\n> the byte-wise path did so with < 32. I made them both \"less-equal 31\"\n> for consistency. I'll commit this by the end of the week unless anyone\n> has a better idea about testing.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 30 Aug 2022 21:17:12 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "On Wed, Aug 31, 2022 at 11:17 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Wed, Aug 31, 2022 at 10:50:39AM +0700, John Naylor wrote:\n> > Here's the final piece. I debated how many tests to add and decided it\n> > was probably enough to add one each for checking quotes and\n> > backslashes in the fast path. There is one cosmetic change in the\n> > code: Before, the vectorized less-equal check compared to 0x1F, but\n> > the byte-wise path did so with < 32. I made them both \"less-equal 31\"\n> > for consistency. 
I'll commit this by the end of the week unless anyone\n> > has a better idea about testing.\n>\n> LGTM\n\nPushed, thanks for looking!\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Sep 2022 09:49:52 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" } ]
[ { "msg_contents": "Hi,\n\nI'm trying to have a setup where there is a primary, standby and\npg_receivewal (which acts as a server that maintains the entire WAL).\nQuorum is any one of standby and pg_receivewal. In case of primary crash,\nwhen I promote standby (timeline switch from 5 to 6) and restart\npg_receivewal to connect to the promoted standby, I get an error saying\n\"pg_receivewal: could not send replication command \"START_REPLICATION\":\nERROR: requested starting point 16/4C000000 on timeline 5 is not in this\nserver's history. This server's history forked from timeline 5 at\n16/4BFFF268\".\n\npg_receivewal latest lsn is 16/4BFFF268 with the timeline id being 5.\n\nJust wondering why is the pg_receivewal requesting the new primary with the\nstarting point as 16/4C000000, even though the latest lsn is 16/4BFFF268.\n\nIs that because of the following code snippet in pg_receivewal by any\nchance?\n\n/*\n* Move the starting pointer to the start of the next segment, if the\n* highest one we saw was completed. Otherwise start streaming from\n* the beginning of the .partial segment.\n*/\nif (!high_ispartial)\nhigh_segno++;\n\nIf it is because of the above code, Can we let the pg_receivewal request\nthe new primary to provide WAL from forked lsn (by asking primary what the\nforked lsn and the corresponding timeline are)?\n\nThanks,\nRKN\n\nHi,I'm trying to have a setup where there is a primary, standby and pg_receivewal (which acts as a server that maintains the entire WAL). Quorum is any one of standby and pg_receivewal. In case of primary crash, when I promote standby (timeline switch from 5 to 6) and restart pg_receivewal to connect to the promoted standby, I get an error saying \"pg_receivewal: could not send replication command \"START_REPLICATION\": ERROR:  requested starting point 16/4C000000 on timeline 5 is not in this server's history. 
This server's history forked from timeline 5 at 16/4BFFF268\".pg_receivewal latest lsn is 16/4BFFF268 with the timeline id being 5.Just wondering why is the pg_receivewal requesting the new primary with the starting point as 16/4C000000, even though the latest lsn is 16/4BFFF268.Is that because of the following code snippet in pg_receivewal by any chance?\t\t/*\t\t * Move the starting pointer to the start of the next segment, if the\t\t * highest one we saw was completed. Otherwise start streaming from\t\t * the beginning of the .partial segment.\t\t */\t\tif (!high_ispartial)\t\t\thigh_segno++; If it is because of the above code, Can we let the pg_receivewal request the new primary to provide WAL from forked lsn (by asking primary what the forked lsn and the corresponding timeline are)? Thanks,RKN", "msg_date": "Fri, 24 Jun 2022 17:53:59 +0530", "msg_from": "RKN Sai Krishna <rknsaiforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_receivewal unable to connect to promoted standby" } ]
[ { "msg_contents": "Hi,\n\n\nI have a made a small deductive database on top of PostgreSQL for educational/research purposes. In this setting, due to certain VIEW-constructions, queries often end up being self-joins on primary keys, e.g.:\n\n\nSELECT t1.id, t2.val\n\nFROM t AS t1 JOIN t AS t2 USING (id);\n\n\nwhere t(id) is a primary key. This query is equivalent to the much more efficient:\n\n\nSELECT id, val\n\nFROM t AS t1;\n\n\nHowever, PostgreSQL currently does not seem to implement this simplification. Therefore, I have looked into writing an extension that performs this, but I am struggling a bit with finding out when this simplification should be done, i.e. which hook I should implement.\n\n\nThe simplification is not too different from those done in prep/prepjoin.c, i.e. doing the simplification on the query-tree directly. However, I think I then would need to implement a planner_hook, as it is the only hook giving me direct access to the query-tree. But I need to perform my simplification after view-definitions have been expanded into the query, and after the transformations in prepjoin.c (but before the rest of planning). But there seems to be no easy way to inject a function there, as this is buried deep in the middle of the planner-function.\n\n\nI therefore looked into using a set_join_pathlist_hook, and try to do the simplification at path-level. I.e., doing something like:\n\n\nstatic void self_join_optimize_hook(PlannerInfo *root, RelOptInfo* joinrel, RelOptInfo* outerrel, RelOptInfo* innerrel, JoinType jointype, JoinPathExtraData* extra)\n\n{\n\n if (is_selfjoin_on_pk(root, joinrel, extra)) {\n\n ListCell *p;\n\n foreach(p, innerrel->pathlist) {\n\n add_path(joinrel, (Path *) p);\n\n }\n\n }\n\n}\n\n\nThat is, if joinrel is a (binary) self-join on a primary key, the paths for evaluating the join is the same as the paths for evaluating the innerrel, However, this does not work, as the rest of the query may require values from the other table (e.g. 
t2 in the example above). I therefore need to replace all mentions of t2 with t1, but is this possible at a path-level?\n\n\nIf not, does anyone have a an idea on how this can be done in a different way? Thanks!\n\n\n\nKind regards,\n\n\nLeif Harald Karlsen\n\nSenior Lecturer\n\nDepartment of Informatics\n\nUniversity of Oslo", "msg_date": "Fri, 24 Jun 2022 13:58:43 +0000", "msg_from": "Leif Harald Karlsen <leifhka@ifi.uio.no>", "msg_from_op": true, "msg_subject": "Implement hook for self-join simplification" }, { "msg_contents": "On 24/6/2022 18:58, Leif Harald Karlsen wrote:\n> I have a made a small deductive database on top of PostgreSQL for \n> educational/research purposes. In this setting, due to certain \n> VIEW-constructions, queries often end up being self-joins on primary \n> keys, e.g.:\n> SELECT t1.id, t2.val\n> FROM t AS t1 JOIN t AS t2 USING (id);\n> \n> where t(id) is a primary key. This query is equivalent to the much more \n> efficient:\n> SELECT id, val FROM t AS t1;\n> \n> However, PostgreSQL currently does not seem to implement this \n> simplification. Therefore, I have looked into writing an extension that \n> performs this, but I am struggling a bit with finding out when this \n> simplification should be done, i.e. 
which hook I should implement.\nIt is true, but you can use a proposed patch that adds such \nfunctionality [1].\n\nI tried to reproduce your case:\nCREATE TABLE t(id int PRIMARY KEY, val text);\nexplain verbose\nSELECT t1.id, t2.val FROM t AS t1 JOIN t AS t2 USING (id);\n\nWith this patch you will get a plan:\n Seq Scan on public.t t2\n Output: t2.id, t2.val\n Filter: (t2.id IS NOT NULL)\n\nThe approach, implemented in this patch looks better because removes \nself-joins on earlier stage than the path generation stage. Feel free to \nuse it in your research.\n\n[1] \nhttps://www.postgresql.org/message-id/a1d6290c-44e0-0dfc-3fca-66a68b3109ef@postgrespro.ru\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 24 Jun 2022 22:27:50 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Implement hook for self-join simplification" }, { "msg_contents": "Hi Andrey,\n\n\nThank you for the quick answer, and for the pointer to the patch! This looks like just the thing I need!\n\n\nOn a more general note: What would, in general, be the best way to implement such optimizations? Is there a good way to do this as an extension, or is a patch the preferred way?\n\nKind regards,\nLeif Harald Karlsen\nSenior Lecturer\nDepartment of Informatics\nUniversity of Oslo\n________________________________\nFrom: Andrey Lepikhov <a.lepikhov@postgrespro.ru>\nSent: 24 June 2022 19:27:50\nTo: Leif Harald Karlsen; pgsql-hackers@lists.postgresql.org\nSubject: Re: Implement hook for self-join simplification\n\nOn 24/6/2022 18:58, Leif Harald Karlsen wrote:\n> I have a made a small deductive database on top of PostgreSQL for\n> educational/research purposes. In this setting, due to certain\n> VIEW-constructions, queries often end up being self-joins on primary\n> keys, e.g.:\n> SELECT t1.id, t2.val\n> FROM t AS t1 JOIN t AS t2 USING (id);\n>\n> where t(id) is a primary key. 
This query is equivalent to the much more\n> efficient:\n> SELECT id, val FROM t AS t1;\n>\n> However, PostgreSQL currently does not seem to implement this\n> simplification. Therefore, I have looked into writing an extension that\n> performs this, but I am struggling a bit with finding out when this\n> simplification should be done, i.e. which hook I should implement.\nIt is true, but you can use a proposed patch that adds such\nfunctionality [1].\n\nI tried to reproduce your case:\nCREATE TABLE t(id int PRIMARY KEY, val text);\nexplain verbose\nSELECT t1.id, t2.val FROM t AS t1 JOIN t AS t2 USING (id);\n\nWith this patch you will get a plan:\n Seq Scan on public.t t2\n Output: t2.id, t2.val\n Filter: (t2.id IS NOT NULL)\n\nThe approach, implemented in this patch looks better because removes\nself-joins on earlier stage than the path generation stage. Feel free to\nuse it in your research.\n\n[1]\nhttps://www.postgresql.org/message-id/a1d6290c-44e0-0dfc-3fca-66a68b3109ef@postgrespro.ru\n\n--\nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Fri, 24 Jun 2022 18:43:16 +0000", "msg_from": "Leif Harald Karlsen <leifhka@ifi.uio.no>", "msg_from_op": true, "msg_subject": "Re: Implement hook for self-join simplification" }, { "msg_contents": "On 24/6/2022 23:43, Leif Harald Karlsen wrote:\n> Thank you for the quick answer, and for the pointer to the patch! This \n> looks like just the thing I need! \n> On a more general note: What would, in general, be the best way to \n> implement such optimizations? 
Is there a good way to do this as an \n> extension, or is a patch the preferred way?\nAccording to my experience, it depends on your needs.\nFor example, the self-join-removal feature, or my current project - \nflattening of nested subqueries - is much better to implement as a \npatch, because you can do it as early as possible and can generalize \nparts of the core code and thus reduce the size of your code a lot.\nBut if you want to use your code with many PG versions, ones already \nrunning in production, or you are just doing research without an immediate \npractical result - your choice is an extension.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Sat, 25 Jun 2022 10:42:46 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Implement hook for self-join simplification" } ]
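A standalone illustration of the equivalence at the center of the thread above may help: a self-join of a table with itself on a unique, non-null key returns exactly one row per base row, so it collapses to a single scan. The following is a toy relational model in Python, not PostgreSQL planner code; `join_on_id` is a hypothetical helper:

```python
# Toy model of the equivalence discussed above: a self-join on a
# unique, non-null key ("primary key") yields the same rows as a
# plain scan, because every row matches exactly one partner: itself.

def join_on_id(t1, t2):
    """Nested-loop inner join on the 'id' attribute (hypothetical helper)."""
    return [
        {"id": r1["id"], "val": r2["val"]}
        for r1 in t1
        for r2 in t2
        if r1["id"] == r2["id"]
    ]

t = [{"id": 1, "val": "a"}, {"id": 2, "val": "b"}, {"id": 3, "val": "c"}]

# SELECT t1.id, t2.val FROM t AS t1 JOIN t AS t2 USING (id);
joined = join_on_id(t, t)

# SELECT id, val FROM t AS t1;
scan = [{"id": r["id"], "val": r["val"]} for r in t]

assert joined == scan
```

Uniqueness and non-nullness are what make this safe: a repeated `id` would multiply rows, and a NULL `id` would drop them, which is consistent with the `Filter: (t2.id IS NOT NULL)` kept by the plan shown in the thread.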
[ { "msg_contents": "Hi,\n\nI noticed that in many places we check or assert the OIDs of our\nbuilt-in AMs. E.g. in planning, we will only do row compares in\nindexes that have BTREE_AM_OID, and we can only sort tuples for\nbtamhandler-based index AMs if their OID is BTREE_AM_OID. That\nseems like an artificial limitation to me.\n\nAlthough it makes sense to ensure that we don't accidentally call such\nfunctions from the 'wrong location', it does mean that a user cannot\nmanually install the preinstalled access methods and get a working\nindex AM, because the internal code is checking the OID of the\nsupplied relation's access method, which will not match the expected\nvalue when manually installed.\n\nIs this expected? Would we accept patches that remove or reduce the\nimpact of these artificial limitations?\n\nKind regards,\n\nMatthias van de Meent\n\nPS. I noticed this when checking the sortsupport code for some active\npatches, and after playing around a bit while trying to run PG\nextensions that are not system-registered. 
The attached test case\n> fails, where I'd expect it to succeed, or at least I expected it not\n> to fail at \"This AM's OID has an unexpected value\".\n\nI just realised that the failure of the specific mentioned test case\nwas unrelated to the issue at hand, as it correctly shows that you\ncan't use int8-opclasses for int4 columns.\n\nThe attached fixes that test case by creating a table with a bigint\ncolumn instead, so that the test correctly, but unexpectedly, outputs\n\"ERROR: unexpected non-btree AM: NNNNN\".\n\n- Matthias", "msg_date": "Fri, 24 Jun 2022 16:25:40 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Pre-installed index access methods cannot be manually installed." }, { "msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> Although it makes sense to ensure that we don't accidentally call such\n> functions from the 'wrong location', it does mean that a user cannot\n> manually install the preinstalled access methods and get a working\n> index AM, because the internal code is checking the OID of the\n> supplied relation's access method, which will not match the expected\n> value when manually installed.\n\nThis seems like a straw argument, considering that there would be\ndozens of other problems in the way of removing or replacing any\nbuilt-in index AMs.\n\n> Is this expected? Would we accept patches that remove or reduce the\n> impact of these artificial limitations?\n\nSeems like a significant waste of effort to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jun 2022 10:43:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pre-installed index access methods cannot be manually installed." } ]
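The limitation described in the thread above reduces to a question of how an access method's identity is tested: by a hard-coded OID, or by its behavior (its handler). A minimal sketch of that distinction, as a toy Python model rather than the actual catalog code (the OID values here are illustrative assumptions):

```python
# Toy model: identity-by-OID rejects a manually installed copy of the
# same access method, while identity-by-handler accepts it.

BTREE_AM_OID = 403  # illustrative constant for the built-in btree AM

class IndexAM:
    def __init__(self, oid, handler):
        self.oid = oid          # catalog OID assigned at creation time
        self.handler = handler  # name of the handler function

def is_btree_by_oid(am):
    # mirrors checks that fail with "unexpected non-btree AM: NNNNN"
    return am.oid == BTREE_AM_OID

def is_btree_by_handler(am):
    return am.handler == "btamhandler"

builtin = IndexAM(BTREE_AM_OID, "btamhandler")
manual = IndexAM(16384, "btamhandler")  # user-created, gets a fresh OID

assert is_btree_by_oid(builtin) and not is_btree_by_oid(manual)
assert is_btree_by_handler(builtin) and is_btree_by_handler(manual)
```

A manually installed AM necessarily gets a new OID, so any OID-based check fails even though the behavior is identical — which is the artificial limitation the opening message complains about.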
[ { "msg_contents": "Hi,\nLooking at the patch,\n\n+           if (copyable_characters_length)\n+           {\n+               /* flush copyable characters */\n+               appendBinaryStringInfo(\n+                                      lex->strval,\n+                                      s - copyable_characters_length,\n+                                      copyable_characters_length);\n+\n+           }\n            break;\n\nI wonder why copyable_characters_length is not reset after flushing.\n\nCheers", "msg_date": "Fri, 24 Jun 2022 13:58:08 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "> +           if (copyable_characters_length)\n> +           {\n> +               /* flush copyable characters */\n> +               appendBinaryStringInfo(\n> +                                      lex->strval,\n> +                                      s - copyable_characters_length,\n> +                                      copyable_characters_length);\n> +\n> +           }\n>            break;\n> \n> I wonder why copyable_characters_length is not reset after flushing.\n\nIt breaks from the loop right after. So copyable_characters_length isn't used \nagain and thus resetting is not necessary. 
But I agree this could use a comment.\n\n", "msg_date": "Fri, 24 Jun 2022 21:48:15 +0000", "msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" }, { "msg_contents": "> I wonder why copyable_characters_length is not reset after flushing.\n\nIt's not necessary because of the break statement right after. But this part \nof the code was refactored away in John's improved patch that's actually \nmerged: \nhttps://github.com/postgres/postgres/commit/3838fa269c15706df2b85ce2d6af8aacd5611655 \n\n\n", "msg_date": "Mon, 29 Aug 2022 10:47:28 +0000", "msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying" } ]
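The batching idea behind the patch discussed above can be sketched in isolation: instead of appending characters one at a time, count the length of the current run of plainly-copyable characters and flush the whole run in a single append when a special character or the closing quote is reached. This is a simplified Python model, not the actual lexer, and its escape handling is deliberately naive:

```python
def copy_string_body(s):
    """Copy a quoted-string body, flushing copyable characters in batches."""
    out = []
    run = 0  # length of the current batch of copyable characters
    i = 0
    while i < len(s):
        c = s[i]
        if c == '"':
            if run:
                # Flush the batch. No reset of 'run' is needed here
                # because we stop immediately afterwards -- the point
                # raised and answered in the thread above.
                out.append(s[i - run:i])
            break  # end of string
        if c == '\\':
            if run:
                out.append(s[i - run:i])  # flush before the escape
                run = 0
            out.append(s[i + 1])  # naive escape handling for this sketch
            i += 2
            continue
        run += 1  # plain character: extend the batch instead of copying it
        i += 1
    return "".join(out)

assert copy_string_body('hello" trailing') == "hello"
assert copy_string_body('a\\"b"') == 'a"b'
```

The win comes from replacing many one-byte appends with one bulk append per run, which is what `appendBinaryStringInfo` does in the quoted patch.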
[ { "msg_contents": "Commit 64919aaab made pull_up_simple_subquery set rte->subquery = NULL\nafter doing the deed, so that we don't waste cycles copying a\nnow-useless subquery tree around. I discovered today while\nworking on another patch that if you invoke query_tree_mutator\nor range_table_mutator on the whole Query after that point,\nrange_table_mutator dumps core, because it's expecting subquery\nlinks to never be NULL. There's apparently noplace in our core\ncode that does that today, but I'm a bit surprised we've not heard\ncomplaints from anyone else. I propose to do this to harden it:\n\ndiff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c\nindex 876f84dd39..8d58265010 100644\n--- a/src/backend/nodes/nodeFuncs.c\n+++ b/src/backend/nodes/nodeFuncs.c\n@@ -3788,7 +3788,9 @@ range_table_mutator(List *rtable,\n /* we don't bother to copy eref, aliases, etc; OK? */\n break;\n case RTE_SUBQUERY:\n- if (!(flags & QTW_IGNORE_RT_SUBQUERIES))\n+ /* In the planner, subquery is null if it's been flattened */\n+ if (!(flags & QTW_IGNORE_RT_SUBQUERIES) &&\n+ rte->subquery != NULL)\n {\n CHECKFLATCOPY(newrte->subquery, rte->subquery, Query);\n MUTATE(newrte->subquery, newrte->subquery, Query *);\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jun 2022 17:44:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Core dump in range_table_mutator()" }, { "msg_contents": "On Fri, 24 Jun 2022 at 22:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Commit 64919aaab made pull_up_simple_subquery set rte->subquery = NULL\n> after doing the deed, so that we don't waste cycles copying a\n> now-useless subquery tree around. I discovered today while\n> working on another patch that if you invoke query_tree_mutator\n> or range_table_mutator on the whole Query after that point,\n> range_table_mutator dumps core, because it's expecting subquery\n> links to never be NULL. 
There's apparently noplace in our core\n> code that does that today, but I'm a bit surprised we've not heard\n> complaints from anyone else. I propose to do this to harden it:\n>\n\nMakes sense.\n\nNot directly related to that change ... I think it would be easier to\nfollow if the CHECKFLATCOPY() was replaced with a separate Assert()\nand FLATCOPY() (I had to go and remind myself what CHECKFLATCOPY()\ndid).\n\nDoing that would allow CHECKFLATCOPY() to be deleted, since this is\nthe only place that uses it -- every other case knows the node type is\ncorrect before doing a FLATCOPY().\n\nWell almost. The preceding FLATCOPY() of the containing RangeTblEntry\ndoesn't check the node type, but that could be fixed by using\nlfirst_node() instead of lfirst() at the start of the loop, which\nwould be neater.\n\nRegards,\nDean\n\n\n", "msg_date": "Sat, 25 Jun 2022 03:10:02 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Core dump in range_table_mutator()" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> Not directly related to that change ... I think it would be easier to\n> follow if the CHECKFLATCOPY() was replaced with a separate Assert()\n> and FLATCOPY() (I had to go and remind myself what CHECKFLATCOPY()\n> did).\n> Doing that would allow CHECKFLATCOPY() to be deleted, since this is\n> the only place that uses it -- every other case knows the node type is\n> correct before doing a FLATCOPY().\n\nWell, if we want to clean this up a bit rather than just doing the\nminimum safe fix ... 
I spent some time wondering why we were bothering with the\nFLATCOPY step at all, rather than just mutating the Query* pointer.\nI think the reason is to not fail if the QTW_DONT_COPY_QUERY flag is\nset, but maybe we should clear that flag when recursing?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jun 2022 23:39:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Core dump in range_table_mutator()" }, { "msg_contents": "On Sat, 25 Jun 2022 at 04:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Well, if we want to clean this up a bit rather than just doing the\n> minimum safe fix ... I spent some time wondering why we were bothering with the\n> FLATCOPY step at all, rather than just mutating the Query* pointer.\n> I think the reason is to not fail if the QTW_DONT_COPY_QUERY flag is\n> set, but maybe we should clear that flag when recursing?\n>\n\nHmm, interesting, but we don't actually pass on that flag when\nrecursing anyway. Since it is the mutator routine's responsibility to\nmake a possibly-modified copy of its input node, if it wants to\nrecurse into the subquery, it should always call query_tree_mutator()\nwith QTW_DONT_COPY_QUERY unset, and range_table_mutator() should never\nneed to FLATCOPY() the subquery.\n\nBut then, in the interests of further tidying up, why does\nrange_table_mutator() call copyObject() on the subquery if\nQTW_IGNORE_RT_SUBQUERIES is set? If QTW_IGNORE_RT_SUBQUERIES isn't\nset, the mutator routine will either copy and modify the subquery, or\nit will return the original unmodified subquery node via\nexpression_tree_mutator(), without copying it. 
So then if\nQTW_IGNORE_RT_SUBQUERIES is set, why not also just return the original\nunmodified subquery node?\n\nSo then the RTE_SUBQUERY case in range_table_mutator() would only have to do:\n\n case RTE_SUBQUERY:\n if (!(flags & QTW_IGNORE_RT_SUBQUERIES))\n MUTATE(newrte->subquery, newrte->subquery, Query *);\n break;\n\nwhich wouldn't fall over if the subquery were NULL.\n\nRegards,\nDean\n\n\n", "msg_date": "Sat, 25 Jun 2022 11:20:09 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Core dump in range_table_mutator()" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Sat, 25 Jun 2022 at 04:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, if we want to clean this up a bit rather than just doing the\n>> minimum safe fix ... I spent some time why we were bothering with the\n>> FLATCOPY step at all, rather than just mutating the Query* pointer.\n>> I think the reason is to not fail if the QTW_DONT_COPY_QUERY flag is\n>> set, but maybe we should clear that flag when recursing?\n\n> Hmm, interesting, but we don't actually pass on that flag when\n> recursing anyway. Since it is the mutator routine's responsibility to\n> make a possibly-modified copy of its input node, if it wants to\n> recurse into the subquery, it should always call query_tree_mutator()\n> with QTW_DONT_COPY_QUERY unset, and range_table_mutator() should never\n> need to FLATCOPY() the subquery.\n\nActually, QTW_DONT_COPY_QUERY is dead code AFAICS: we don't use it\nanywhere, and Debian Code Search doesn't know of any outside users\neither. Removing it might be something to do in v16. (I think\nit's a bit late for unnecessary API changes in v15.)\n\n> But then, in the interests of further tidying up, why does\n> range_table_mutator() call copyObject() on the subquery if\n> QTW_IGNORE_RT_SUBQUERIES is set?\n\nI thought about that for a bit, but all of the QTW_IGNORE flags\nwork like that, and I'm hesitant to change it. 
There may be\ncode that assumes it can modify those trees in-place afterwards.\n\nCommitted with just the change to use straight MUTATE, making\nthis case exactly like the other places with QTW_IGNORE options.\nThanks for the discussion!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 Jun 2022 09:06:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Core dump in range_table_mutator()" } ]
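The shape of the fix committed in the thread above — a tree walker over range-table entries must tolerate a subquery link that was nulled out when the subquery was flattened into its parent — can be modeled with a toy walker. This is a Python stand-in for illustration, not the real nodeFuncs.c code:

```python
RTE_SUBQUERY = "subquery"  # stand-in for the RTEKind enum value

def range_table_mutator(rtable, mutator):
    """Toy analogue: flat-copy each RTE, then mutate its subquery if any."""
    result = []
    for rte in rtable:
        newrte = dict(rte)  # analogue of FLATCOPY
        if newrte.get("kind") == RTE_SUBQUERY:
            # The guard added by the fix: subquery is None (NULL) if the
            # subquery has already been pulled up into the parent query.
            if newrte["subquery"] is not None:
                newrte["subquery"] = mutator(newrte["subquery"])
        result.append(newrte)
    return result

rtable = [
    {"kind": RTE_SUBQUERY, "subquery": {"tag": "Query"}},
    {"kind": RTE_SUBQUERY, "subquery": None},  # flattened by pull-up
]
mutated = range_table_mutator(rtable, lambda q: {**q, "visited": True})
assert mutated[0]["subquery"] == {"tag": "Query", "visited": True}
assert mutated[1]["subquery"] is None  # no crash on the NULL link
```

Without the `is not None` guard, the second entry would crash the walker — the analogue of the core dump reported at the start of the thread.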
[ { "msg_contents": "Hi Pgsql-Hackers\n\nAs part of ongoing work on PostgreSQL security hardening we have\nadded a capability to disable all file system access (COPY TO/FROM\n[PROGRAM] <filename>, pg_*file*() functions, lo_*() functions\naccessing files, etc) in a way that can not be re-enabled without\nalready having access to the file system. That is via a flag which can\nbe set only in postgresql.conf or on the command line.\n\nCurrently the file system access is controlled via being a SUPERUSER\nor having the pg_read_server_files, pg_write_server_files and\npg_execute_server_program roles. The problem with this approach is\nthat it will not stop an attacker who has managed to become the\nPostgreSQL SUPERUSER from breaking out of the server to reading and\nwriting files and running programs in the surrounding container, VM or\nOS.\n\nIf we had had this then for example the infamous 2020 PgCrypto worm\n[1] would have been much less likely to succeed.\n\nSo here are a few questions to get discussion started.\n\n1) would it be enough to just disable WRITING to the filesystem (COPY\n... TO ..., COPY TO ... PROGRAM ...) or are some reading functions\nalso potentially exploitable or at least making an attacker's life easier\n?\n2) should configuration be all-or-nothing or more fine-tunable (maybe\na comma-separated list of allowed features) ?\n3) should this be back-patched (we can provide patches for all\nsupported PgSQL versions)\n4) We should likely start with this flag off, but any paranoid (read -\ngood and security conscious) DBA can turn it on.\n5) Which file access functions should be in the unsafe list -\npg_current_logfile is likely safe as is pg_relation_filenode, but\npg_ls_dir likely is not. 
some subversions might be ok again, like\npg_ls_waldir ?\n6) or should we control it via disabling the pg_*_server_* roles for\ndifferent take of configuration from 5) ?\n\nAs I said, we are happy to provide patches (and docs, etc) for all the\nPostgreSQL versions we decide to support here.\n\nBest Regards\nHannu\n\n\n-----\n[1] https://www.securityweek.com/pgminer-crypto-mining-botnet-abuses-postgresql-distribution\n\n\n", "msg_date": "Sat, 25 Jun 2022 00:08:13 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Hardening PostgreSQL via (optional) ban on local file system access" }, { "msg_contents": "On Fri, Jun 24, 2022 at 3:08 PM Hannu Krosing <hannuk@google.com> wrote:\n\n>\n> 1) would it be enough to just disable WRITING to the filesystem (COPY\n> ... TO ..., COPY TO ... PROGRAM ...) or are some reading functions\n> also potentially exploitable or at least making attackers life easier\n> ?\n>\n\nI would protect read paths as well as write ones.\n\nThough ISTM reading would need to be more fine-grained - raw filesystem\nreads and system reads (i.e., something like get_raw_page(...))\n\n2) should configuration be all-or-nothing or more fine-tunable (maybe\n> a comma-separated list of allowed features) ?\n>\n\nFirst pass, all-or-nothing, focus on architecture and identification.\nIdeally we can then easily go in and figure out specific capabilities that\nneed to be enumerated should we desire. 
Or, as noted below, figure out how\nto do a DBA administered whitelist.\n\n3) should this be back-patched (we can provide batches for all\n> supported PgSQL versions)\n>\n\nI would love to in theory, but to do this right I suspect that the amount\nof desirable refactoring would make doing so prohibitive.\n\n\n> 4) We should likely start with this flag off, but any paranoid (read -\n> good and security conscious) DBA can turn it on.\n>\n\nIn the end the vast majority of our users will have the decision as to the\ndefault state of this decided for them by their distribution or service\nprovider. I'm fine with having build-from-source users get the more\npermissive default.\n\n5) Which file access functions should be in the unsafe list -\n> pg_current_logfile is likely safe as is pg_relation_filenode, but\n> pg_ls_dir likely is not. some subversions might be ok again, like\n> pg_ls_waldir ?\n> 6) or should we control it via disabling the pg_*_server_* roles for\n> different take of configuration from 5) ?\n>\n\nI would suggest neither: we should funnel user-initiated access to the\nfilesystem, for read and write, through its own API that will simply\nprevent all writes (and maybe reads) based upon this flag. If we can\nsomehow enforce that a C coded extension also use this API we should do so,\nbut IIUC that is not possible.\nThis basically puts things in a \"default deny\" mode. I do think that we\nneed something that permits a filesystem user to then say \"except these\"\n(i.e., a whitelist). The thing added to the whitelist should be available\nin the PostgreSQL log file when the API rejects the attempt to access the\nfilesystem. Unfortunately, at the moment I hit a brick wall when thinking\nexactly how that could be accomplished. At least, in a minimal/zero trust\ntype of setup. 
Having the API access include the module, function, and\nversion making the request and having a semantic versioning based whitelist\n(like, e.g., npm) would work sufficiently well in a \"the only ones that get\ninstalled on the server are trusted to play by the rules\" setup.\n\nProbably over-engineering it like I have a tendency to do, but some food\nfor thought nonetheless.\n\nDavid J.", "msg_date": "Fri, 24 Jun 2022 15:58:50 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Sat, Jun 25, 2022 at 12:08:13AM +0200, Hannu Krosing wrote:\n> As part of ongoing work on PostgreSQL security hardening we have\n> added a capability to disable all file system access (COPY TO/FROM\n> [PROGRAM] <filename>, pg_*file*() functions, lo_*() functions\n> accessing files, etc) in a way that can not be re-enabled without\n> already having access to the file system. 
That is via a flag which can\n> be set only in postgresql.conf or on the command line.\n> \n> Currently the file system access is controlled via being a SUPREUSER\n> or having the pg_read_server_files, pg_write_server_files and\n> pg_execute_server_program roles. The problem with this approach is\n> that it will not stop an attacker who has managed to become the\n> PostgreSQL SUPERUSER from breaking out of the server to reading and\n> writing files and running programs in the surrounding container, VM or\n> OS.\n\nThere was some recent discussion in this area you might be interested in\n[0].\n\n[0] https://postgr.es/m/20220520225619.GA876272%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 24 Jun 2022 16:06:47 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "Hi,\n\nOn 2022-06-25 00:08:13 +0200, Hannu Krosing wrote:\n> Currently the file system access is controlled via being a SUPREUSER\n> or having the pg_read_server_files, pg_write_server_files and\n> pg_execute_server_program roles. The problem with this approach is\n> that it will not stop an attacker who has managed to become the\n> PostgreSQL SUPERUSER from breaking out of the server to reading and\n> writing files and running programs in the surrounding container, VM or\n> OS.\n\nIf a user has superuser rights, they automatically can execute arbitrary\ncode. It's that simple. Removing roles isn't going to change that. Our code\ndoesn't protect against C functions mismatching their SQL level\ndefinitions. 
With that you can do a lot of things.\n\n\n> If we had had this then for example the infamous 2020 PgCrypto worm\n> [1] would have been much less likely to succeed.\n\nMeh.\n\n\n> So here are a few questions to get discussion started.\n> \n> 1) would it be enough to just disable WRITING to the filesystem (COPY\n> ... TO ..., COPY TO ... PROGRAM ...) or are some reading functions\n> also potentially exploitable or at least making attackers life easier\n> ?\n> 2) should configuration be all-or-nothing or more fine-tunable (maybe\n> a comma-separated list of allowed features) ?\n> 3) should this be back-patched (we can provide batches for all\n> supported PgSQL versions)\n\nErr, what?\n\n> 4) We should likely start with this flag off, but any paranoid (read -\n> good and security conscious) DBA can turn it on.\n> 5) Which file access functions should be in the unsafe list -\n> pg_current_logfile is likely safe as is pg_relation_filenode, but\n> pg_ls_dir likely is not. some subversions might be ok again, like\n> pg_ls_waldir ?\n> 6) or should we control it via disabling the pg_*_server_* roles for\n> different take of configuration from 5) ?\n> \n> As I said, we are happy to provide patches (and docs, etc) for all the\n> PostgreSQL versions we decide to support here.\n\nI don't see anything here that provides a meaningful increase in security.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jun 2022 16:13:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Sat, Jun 25, 2022 at 1:13 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-06-25 00:08:13 +0200, Hannu Krosing wrote:\n> > Currently the file system access is controlled via being a SUPREUSER\n> > or having the pg_read_server_files, pg_write_server_files and\n> > pg_execute_server_program roles. 
The problem with this approach is\n> > that it will not stop an attacker who has managed to become the\n> > PostgreSQL SUPERUSER from breaking out of the server to reading and\n> > writing files and running programs in the surrounding container, VM or\n> > OS.\n>\n> If a user has superuser rights, they automatically can execute arbitrary\n> code. It's that simple. Removing roles isn't going to change that. Our code\n> doesn't protect against C functions mismatching their SQL level\n> definitions. With that you can do a lot of things.\n\nAre you claiming that one can manipulate PostgreSQL to do any file\nwrites directly by manipulating pg_proc to call the functions \"in a\nwrong way\" ?\n\nMy impression was that this was largely fixed via disabling the old\ndirect file calling convention, but then again I did not pay much\nattention at that time :)\n\nSo your suggestion would be to also include disabling access to at\nleast pg_proc for creating C and internal functions and possibly some\nother system tables to remove this threat ?\n\nCheers\nHannu\n\n\n", "msg_date": "Sat, 25 Jun 2022 01:23:36 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Sat, Jun 25, 2022 at 1:23 AM Hannu Krosing <hannuk@google.com> wrote:\n\n> My impression was that this was largely fixed via disabling the old\n> direct file calling convention, but then again I did not pay much\n> attention at that time :)\n\nI meant of course direct FUNCTION calling convention (Version 0\nCalling Conventions)\n\n-- Hannu\n\n\n", "msg_date": "Sat, 25 Jun 2022 01:27:06 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Fri, Jun 24, 2022 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-06-25 00:08:13 
+0200, Hannu Krosing wrote:\n> > Currently the file system access is controlled via being a SUPREUSER\n> > or having the pg_read_server_files, pg_write_server_files and\n> > pg_execute_server_program roles. The problem with this approach is\n> > that it will not stop an attacker who has managed to become the\n> > PostgreSQL  SUPERUSER from  breaking out of the server to reading and\n> > writing files and running programs in the surrounding container, VM or\n> > OS.\n>\n> If a user has superuser rights, they automatically can execute arbitrary\n> code. It's that simple. Removing roles isn't going to change that. Our code\n> doesn't protect against C functions mismatching their SQL level\n> definitions. With that you can do a lot of things.\n>\n>\nUsing only psql connected by the postgres role, without touching the\nfilesystem to bootstrap your attack, how would this be done? If you\nspecify CREATE FUNCTION ... LANGUAGE c you have to supply filename\nreferences, not a code body and you won't have been able to put that code\non the server.\n\nWe should be capable of having the core server be inescapable to the\nfilesystem for a superuser logged in remotely. All such access they can do\nwith the filesystem would be mediated by controlled code/APIs.\n\nC-based extensions would be an issue without a solution that does provide\nan inescapable sandbox aside from going through our API. Which I suspect\nis basically impossible given our forked process driven execution model.\n\nDavid J.\n", "msg_date": "Fri, 24 Jun 2022 16:29:38 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Fri, Jun 24, 2022 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-06-25 00:08:13 +0200, Hannu Krosing wrote:\n\n> > 3) should this be back-patched (we can provide batches for all\n> > supported PgSQL versions)\n>\n> Err, what?\n\nTranslation: Backpatching these changes to any stable versions will\nnot be acceptable (per the project versioning policy [1]), since these\nchanges would be considered new feature. 
These changes can break\ninstallations, if released in a minor version.\n\n[1]: https://www.postgresql.org/support/versioning/\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Fri, 24 Jun 2022 16:46:28 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "My understanding was that unless activated by admin these changes\nwould change nothing.\n\nAnd they would be (borderline :) ) security fixes\n\nAnd the versioning policy link actually does not say anything about\nnot adding features to older versions (I know this is the policy, just\npointing out the info in not on that page).\n\nOn Sat, Jun 25, 2022 at 1:46 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> On Fri, Jun 24, 2022 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-06-25 00:08:13 +0200, Hannu Krosing wrote:\n>\n> > > 3) should this be back-patched (we can provide batches for all\n> > > supported PgSQL versions)\n> >\n> > Err, what?\n>\n> Translation: Backpatching these changes to any stable versions will\n> not be acceptable (per the project versioning policy [1]), since these\n> changes would be considered new feature. 
These changes can break\n> installations, if released in a minor version.\n>\n> [1]: https://www.postgresql.org/support/versioning/\n>\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n\n\n", "msg_date": "Sat, 25 Jun 2022 01:59:35 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Friday, June 24, 2022, Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> On Fri, Jun 24, 2022 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-06-25 00:08:13 +0200, Hannu Krosing wrote:\n>\n> > > 3) should this be back-patched (we can provide batches for all\n> > > supported PgSQL versions)\n> >\n> > Err, what?\n>\n> Translation: Backpatching these changes to any stable versions will\n> not be acceptable (per the project versioning policy [1]), since these\n> changes would be considered new feature. These changes can break\n> installations, if released in a minor version.\n>\n>\nNo longer having the public schema in the search_path was a feature that\ngot back-patched, with known bad consequences, without any way for the DBA\nto voice their opinion on the matter. This proposal seems similar enough\nto at least ask the question, with full DBA control and no known bad\nconsequences.\n\nDavid J.\n", "msg_date": "Fri, 24 Jun 2022 17:08:17 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "The old versions should definitely not have it turned on by default. I\nprobably was not as clear as I thought in bringing out that point.\n\nFor the upcoming ones the distributors may want to turn it on for\nsome more security-conscious (\"enterprise\") distributions.\n\n-- Hannu\n\nOn Sat, Jun 25, 2022 at 2:08 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n>\n>\n> On Friday, June 24, 2022, Gurjeet Singh <gurjeet@singh.im> wrote:\n>>\n>> On Fri, Jun 24, 2022 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n>> > On 2022-06-25 00:08:13 +0200, Hannu Krosing wrote:\n>>\n>> > > 3) should this be back-patched (we can provide batches for all\n>> > > supported PgSQL versions)\n>> >\n>> > Err, what?\n>>\n>> Translation: Backpatching these changes to any stable versions will\n>> not be acceptable (per the project versioning policy [1]), since these\n>> changes would be considered new feature. These changes can break\n>> installations, if released in a minor version.\n>>\n>\n> No longer having the public schema in the search_path was a feature that got back-patched, with known bad consequences, without any way for the DBA to voice their opinion on the matter. 
This proposal seems similar enough to at least ask the question, with full DBA control and no known bad consequences.\n>\n> David J.\n>\n\n\n", "msg_date": "Sat, 25 Jun 2022 02:17:55 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "(fixed your top-posting)\n\nOn Fri, Jun 24, 2022 at 4:59 PM Hannu Krosing <hannuk@google.com> wrote:\n> On Sat, Jun 25, 2022 at 1:46 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> >\n> > On Fri, Jun 24, 2022 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2022-06-25 00:08:13 +0200, Hannu Krosing wrote:\n> >\n> > > > 3) should this be back-patched (we can provide batches for all\n> > > > supported PgSQL versions)\n> > >\n> > > Err, what?\n> >\n> > Translation: Backpatching these changes to any stable versions will\n> > not be acceptable (per the project versioning policy [1]), since these\n> > changes would be considered new feature. These changes can break\n> > installations, if released in a minor version.\n> >\n> > [1]: https://www.postgresql.org/support/versioning/\n>\n> My understanding was that unless activated by admin these changes\n> would change nothing.\n>\n> And they would be (borderline :) ) security fixes\n>\n> And the versioning policy link actually does not say anything about\n> not adding features to older versions (I know this is the policy, just\n> pointing out the info in not on that page).\n\nI wanted to be sure before I mentioned it, and also because I've been\naway from the community for a few years [1], so I too searched the\npage for any relevant mentions of the word \"feature\" on that page.\nWhile you're correct that the policy does not address/prohibit\naddition of new features in minor releases, but the following line\nfrom the policy comes very close to saying it, without actually saying\nit.\n\n> ... 
PostgreSQL minor releases fix only frequently-encountered bugs, security issues, and data corruption problems to reduce the risk associated with upgrading ...\n\nLike I recently heard a \"wise one\" recently say: \"oh those [Postgres]\ndocs are totally unclear[,] but they're technically correct\".\n\nBTW, the \"Translation\" bit was for folks new to, or not familiar with,\ncommunity and its lingo; I'm sure you already knew what Andres meant\n:-)\n\n[1]: I'll milk the \"I've been away from the community for a few years\"\nexcuse for as long as possible ;-)\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Fri, 24 Jun 2022 17:26:41 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "Hi,\n\nOn 2022-06-25 01:23:36 +0200, Hannu Krosing wrote:\n> Are you claiming that one can manipulate PostgreSQL to do any file\n> writes directly by manipulating pg_proc to call the functions \"in a\n> wrong way\" ?\n\nYes.\n\n\n> My impression was that this was largely fixed via disabling the old\n> direct file calling convention, but then again I did not pay much\n> attention at that time :)\n\nIt got a tad harder, that's all.\n\n\n> So your suggestion would be to also include disabling access to at\n> least pg_proc for creating C and internal functions and possibly some\n> other system tables to remove this threat ?\n\nNo. I seriously doubt that pursuing this makes sense. Fundamentally, if you\nfound a way to escalate to superuser, you're superuser. Superuser can create\nextensions etc. That's game over. Done.\n\nYou can of course make postgres drop a few privileges, to make it harder to\nturn escalation-to-superuser into wider access to the whole system. 
That could\nvery well make sense - but of course there's quite a few things that postgres\nneeds to do to work, so there's significant limits to what you can do.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jun 2022 18:09:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "(please don't top-post. Surely you've been around this community long\nenough to know that)\n\n\nOn Sat, Jun 25, 2022 at 1:59 AM Hannu Krosing <hannuk@google.com> wrote:\n\n> My understanding was that unless activated by admin these changes\n> would change nothing.\n>\n\nThat is assuming you can do this with changing just a couple of lines of\ncode. Which you will not be able to do. The risk of back patching something\nlike that even if off by default is *way* too large.\n\n\nAnd they would be (borderline :) ) security fixes\n>\n\nNo, they would not. Not anymore than adding a new authentication method for\nexample could be considered a security fix.\n\n\n\nAnd the versioning policy link actually does not say anything about\n> not adding features to older versions (I know this is the policy, just\n> pointing out the info in not on that page).\n>\n\nYes it does:\n\nThe PostgreSQL Global Development Group releases a new major version\ncontaining new features about once a year. 
Each major version receives bug\nfixes and, if need be, security fixes that are released at least once every\nthree months in what we call a \"minor release.\"\n\nAnd slightly further down:\n\nWhile upgrading will always contain some level of risk, PostgreSQL minor\nreleases fix only frequently-encountered bugs, security issues, and data\ncorruption problems to reduce the risk associated with upgrading.\n\n\nSo unless you claim this is a frequently encountered bug (it's not -- it's\nacting exactly as intended), security issue (same) or data corruption\n(unrelated), it should not go in a minor version. It's very clear.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n", "msg_date": "Sat, 25 Jun 2022 17:43:30 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Sat, Jun 25, 2022 at 1:13 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> On 2022-06-25 00:08:13 +0200, Hannu Krosing wrote:\n> > Currently the file system access is controlled via being a SUPREUSER\n> > or having the pg_read_server_files, pg_write_server_files and\n> > pg_execute_server_program roles. The problem with this approach is\n> > that it will not stop an attacker who has managed to become the\n> > PostgreSQL  SUPERUSER from  breaking out of the server to reading and\n> > writing files and running programs in the surrounding container, VM or\n> > OS.\n>\n> If a user has superuser rights, they automatically can execute arbitrary\n> code. It's that simple. Removing roles isn't going to change that. Our code\n> doesn't protect against C functions mismatching their SQL level\n> definitions. 
With that you can do a lot of things.\n>\n\nYeah, trying to close this hole is a *very* large architectural change.\n\nI think a much better use of time is to further reduce the *need* to use\nsuperuser to the point that the vast majority of installations can run\nwithout having it. For example the addition of the pg_monitor role has made\na huge difference to the number of things needing superuser access. As does\nthe \"grantable gucs\" feature in 15, for example. Enumerating what remaining things\ncan be done safely without such access and working on turning that into\ngrantable permissions or roles will be a much safer way to work on the\nunderlying problem (which definitely is a real one), and as a bonus that\ngives a more granular control over things even *beyond* just the file\nsystem access.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n", "msg_date": "Sat, 25 Jun 2022 17:47:23 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "\n\n> On 25 Jun 2022, at 03:08, Hannu Krosing <hannuk@google.com> wrote:\n> \n> Currently the file system access is controlled via being a SUPREUSER\n\nMy 2 cents. Ongoing work on making superuser access unneeded seems much more relevant to me.\nIMO superuser == full OS access available from postgres process. I think there's an uncountable set of ways to affect the OS from superuser.\nE.g. you can create a TOAST value compressed by pglz that allows you to look a few kilobytes before the detoasted datum. 
Or make an archive_command = 'gcc my shell code'.\nIt's not even funny to invent things that you can hack as a superuser.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 25 Jun 2022 22:17:28 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "What are your ideas of applying a change similar to the above to actually\nbeing a superuser ?\n\nThat is adding a check for \"superuser being currently available\" to\nfunction superuser() in\n./src/backend/utils/misc/superuser.c ?\n\nIt could be as simple as a flag that can be set only at startup for\nmaximum speed - though the places needing a superuser check are never in\nthe speed-critical path as far as I have seen.\n\nOr it could be more complex and dynamic, like having a file similar to\npg_hba.conf that defines the ability to run code as superuser based on\na set of attributes, each of which could be * meaning \"any\"\n\nThese could be\n * database (to limit superuser to only certain databases)\n * session user (to allow only some users to become superuser,\nincluding via SECURITY DEFINER functions)\n * backend pid (this would allow a kind of 2-factor authentication -\nconnect, then use that session's pid to add a row to the\npg_ok_to_be_sup.conf file, then continue with your superuser-y stuff)\n * valid-until - this lets one allow superuser for a limited period\nwithout fear of forgetting to disable it\n\nThis approach would have the benefit of being very low-code while\ndelivering the extra protection of needing pre-existing access to the\nfilesystem to enable/disable it.\n\nFor easiest usability the pg_ok_to_be_sup.conf file should be outside\npg_reload_conf() - either just read each time the superuser() check is\nrun, or watched via inotify and reloaded each time it changes.\n\nCheers,\nHannu\n\nP.S: - thanks Magnus for the \"please don't top-post\" notice - I also\nneed to 
remember to check if all the quoted mail history is left in\nwhen I just write a reply without touching any of it. I hope it does\nthe right thing and leaves it out, but it just might under some\nconditions bluntly append it anyway just in case :)\n\n\n", "msg_date": "Sat, 25 Jun 2022 22:54:39 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "Having a superuser.conf file could also be used to solve another issue\n- currently, if you remove the SUPERUSER attribute from all users your\nonly option to get it back would be to run in single-user mode. With\na conf file one could add an \"always\" option there which makes any\nmatching user superuser to fix whatever needs fixing as superuser.\n\nCheers\nHannu\n\nOn Sat, Jun 25, 2022 at 10:54 PM Hannu Krosing <hannuk@google.com> wrote:\n>\n> What are your ideas of applying a change similar to the above to actually\n> being a superuser ?\n>\n> That is adding a check for \"superuser being currently available\" to\n> function superuser() in\n> ./src/backend/utils/misc/superuser.c ?\n>\n> It could be as simple as a flag that can be set only at startup for\n> maximum speed - though the places needing a superuser check are never in\n> the speed-critical path as far as I have seen.\n>\n> Or it could be more complex and dynamic, like having a file similar to\n> pg_hba.conf that defines the ability to run code as superuser based on\n> a set of attributes, each of which could be * meaning \"any\"\n>\n> These could be\n> * database (to limit superuser to only certain databases)\n> * session user (to allow only some users to become superuser,\n> including via SECURITY DEFINER functions)\n> * backend pid (this would allow a kind of 2-factor authentication -\n> connect, then use that session's pid to add a row to the\n> pg_ok_to_be_sup.conf file, then continue with your superuser-y stuff)\n> * valid-until - this 
lets one allow superuser for a limited period\n> without fear of forgetting top disable it\n>\n> This approach would have the the benefit of being very low-code while\n> delivering the extra protection of needing pre-existing access to\n> filesystem to enable/disable .\n>\n> For easiest usability the pg_ok_to_be_sup.conf file should be outside\n> pg_reload_conf() - either just read each time the superuser() check is\n> run, or watched via inotify and reloaded each time it changes.\n>\n> Cheers,\n> Hannu\n>\n> P.S: - thanks Magnus for the \"please don't top-post\" notice - I also\n> need to remember to check if all the quoted mail history is left in\n> when I just write a replay without touching any of it. I hope it does\n> the right thing and leaves it out, but it just might unders some\n> conditions bluntly append it anyway just in case :)\n\n\n", "msg_date": "Sat, 25 Jun 2022 23:11:00 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Sat, 2022-06-25 at 00:08 +0200, Hannu Krosing wrote:\n> Hi Pgsql-Hackers\n> \n> As part of ongoing work on PostgreSQL security hardening we have\n> added a capability to disable all file system access (COPY TO/FROM\n> [PROGRAM] <filename>, pg_*file*() functions, lo_*() functions\n> accessing files, etc) in a way that can not be re-enabled without\n> already having access to the file system. That is via a flag which\n> can\n> be set only in postgresql.conf or on the command line.\n\nHow much of this can be done as a special extension already?\n\nFor instance, a ProcessUtility_hook can prevent superuser from\nexecuting COPY TO/FROM PROGRAM.\n\nAs others point out, that would still leave a lot of surface area for\nattacks, e.g. by manipulating the catalog. 
But it could be a starting\nplace to make attacks \"harder\", without core postgres needing to make\nsecurity promises that will be hard to keep.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 27 Jun 2022 13:37:49 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "My current thinking is (based on more insights from Andres) that we\nshould also have a startup flag to disable superuser altogether to\navoid bypasses via direct manipulation of pg_proc.\n\nExperience shows that 99% of the time one can run PostgreSQL just fine\nwithout a superuser, so having a superuser available all the time is\nkind of like leaving a loaded gun on the kitchen table because you\nsometimes need to go hunting.\n\nI am especially waiting for Andres' feedback on the viability of this approach.\n\n\nCheers\nHannu\n\nOn Mon, Jun 27, 2022 at 10:37 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sat, 2022-06-25 at 00:08 +0200, Hannu Krosing wrote:\n> > Hi Pgsql-Hackers\n> >\n> > As part of ongoing work on PostgreSQL security hardening we have\n> > added a capability to disable all file system access (COPY TO/FROM\n> > [PROGRAM] <filename>, pg_*file*() functions, lo_*() functions\n> > accessing files, etc) in a way that can not be re-enabled without\n> > already having access to the file system. That is via a flag which\n> > can\n> > be set only in postgresql.conf or on the command line.\n>\n> How much of this can be done as a special extension already?\n>\n> For instance, a ProcessUtility_hook can prevent superuser from\n> executing COPY TO/FROM PROGRAM.\n>\n> As others point out, that would still leave a lot of surface area for\n> attacks, e.g. by manipulating the catalog. 
But it could be a starting\n> place to make attacks \"harder\", without core postgres needing to make\n> security promises that will be hard to keep.\n>\n> Regards,\n> Jeff Davis\n>\n>\n\n\n", "msg_date": "Mon, 27 Jun 2022 23:36:53 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Mon, 2022-06-27 at 23:36 +0200, Hannu Krosing wrote:\n> My current thinking is (based on more insights from Andres) that we\n> should also have a startup flag to disable superuser altogether to\n> avoid bypasses via direct manipulation of pg_proc.\n\nWhat do you mean by \"disable superuser altogether\"? What about SECURITY\nDEFINER functions, or extension install scripts, or background workers\ncreated by extensions?\n\nDo you mean prevent logging in as superuser and prevent SET ROLE to\nsuperuser? What if a user *becomes* superuser in the middle of a\nsession?\n\nIf we go down this road, I wonder if we should reconsider the idea of\nchanging superuser status of an existing role. Changing superuser\nstatus already creates some weirdness. We could go so far as to say you\ncan only create/drop superusers via a tool or config file when the\nserver is shut down.\n\nI don't think I've ever used more than a couple superusers, and I don't\nthink I've had a good reason to change superuser status of an existing\nuser before, except as a hack to have non-superuser-owned\nsubscriptions. There are probably better solutions to that problem. 
[CC\nMark as he may be interested in this discussion.]\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 28 Jun 2022 11:01:26 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Mon, Jun 27, 2022 at 5:37 PM Hannu Krosing <hannuk@google.com> wrote:\n> My current thinking is (based on more insights from Andres) that we\n> should also have a startup flag to disable superuser altogether to\n> avoid bypasses via direct manipulation of pg_proc.\n>\n> Experience shows that 99% of the time one can run PostgreSQL just fine\n> without a superuser, so having a superuser available all the time is\n> kind of like leaving a loaded gun on the kitchen table because you\n> sometimes need to go hunting.\n>\n> I am especially waiting for Andres' feedback on viability this approach.\n\nWell, I'm not Andres but I don't think not having a superuser at all\nis in any way a viable approach. It's necessary to be able to\nadminister the database system, and the bootstrap superuser can't be\nremoved outright in any case because it owns a ton of objects.\n\nThere are basically two ways of trying to solve this problem. On the\none hand we could try to create a mode in which the privileges of the\nsuperuser are restricted enough that the superuser can't break out to\nthe operating system. The list of things that would need to be blocked\nis, I think, more extensive than any list you've given so far. The\nother is to stick with the idea of an unrestricted superuser but come\nup with ways of giving a controlled subset of the superuser's\nprivileges to a non-superuser. 
I believe this is the more promising\napproach, and there have been multiple discussion threads about it in\nthe last six months.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Jun 2022 14:30:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "I was not after *completely* removing it, but just having an option\nwhich makes the superuser() function always return false.\n\nFor known cases of needing a superuser there would be a way to enable\nit , perhaps via a sentinel file or pg_hba-like configuration file.\n\nAnd as first cut I would advocate for disabling SECURITY DEFINER\nfunctions for simplicity and robustness of defense. A short term\nsolution for these would be to re-write them in C (or Go or Rust or\nany other compiled language) as C functions have full access anyway.\nBut C functions fall in the same category as other defenses discussed\nat the start of this thread - namely to use them, you already need\naccess to the file system.\n\nRunning production databases without superuser available is not as\nimpossible as you may think - Cloud SQL version of PostgreSQL has\nbeen in use with great success for years without exposing a real\nsuperuser to end users (there are some places where\n`cloudsqlsuperuser` gives you partial superuser'y abilities).\n\nLetting user turn off the superuser access when no known need for it\nexists (which is 99.9% in must use cases) would improve secondary\ndefenses noticeably.\n\nIt would also be a good start to figuring out the set of roles into\nwhich one can decompose superuser access in longer run\n\n--\nHannu\n\n\nOn Tue, Jun 28, 2022 at 8:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jun 27, 2022 at 5:37 PM Hannu Krosing <hannuk@google.com> wrote:\n> > My current thinking is (based on more insights from Andres) that we\n> > should 
also have a startup flag to disable superuser altogether to\n> > avoid bypasses via direct manipulation of pg_proc.\n> >\n> > Experience shows that 99% of the time one can run PostgreSQL just fine\n> > without a superuser, so having a superuser available all the time is\n> > kind of like leaving a loaded gun on the kitchen table because you\n> > sometimes need to go hunting.\n> >\n> > I am especially waiting for Andres' feedback on viability this approach.\n>\n> Well, I'm not Andres but I don't think not having a superuser at all\n> is in any way a viable approach. It's necessary to be able to\n> administer the database system, and the bootstrap superuser can't be\n> removed outright in any case because it owns a ton of objects.\n>\n> There are basically two ways of trying to solve this problem. On the\n> one hand we could try to create a mode in which the privileges of the\n> superuser are restricted enough that the superuser can't break out to\n> the operating system. The list of things that would need to be blocked\n> is, I think, more extensive than any list you've given so far. The\n> other is to stick with the idea of an unrestricted superuser but come\n> up with ways of giving a controlled subset of the superuser's\n> privileges to a non-superuser. 
I believe this is the more promising\n> approach, and there have been multiple discussion threads about it in\n> the last six months.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Jun 2022 23:18:56 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "Hi,\n\nOn 2022-06-27 23:36:53 +0200, Hannu Krosing wrote:\n> My current thinking is (based on more insights from Andres) that we\n> should also have a startup flag to disable superuser altogether to\n> avoid bypasses via direct manipulation of pg_proc.\n\nTo me that makes no sense whatsoever. You're not going to be able to create\nextensions etc anymore.\n\n\n> Experience shows that 99% of the time one can run PostgreSQL just fine\n> without a superuser\n\nIME that's not at all true. It might not be needed interactively, but that's\nnot all the same as not being needed at all.\n\n\nIMO this whole thread is largely poking at the wrong side of the issue. A\nsuperuser is a superuser is a superuser. There's reasons superusers exist,\nbecause lots of operations are fundamentally not safe. IMO removing superuser\nor making superuser not be a superuser is a fool's errand - time is much\nbetter spent reducing the number of tasks that need superuser.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 28 Jun 2022 16:27:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Tue, 2022-06-28 at 23:18 +0200, Hannu Krosing wrote:\n> I was not after *completely* removing it, but just having an option\n> which makes the superuser() function always return false.\n\nDid you test that? 
I'm guessing that would cause lots of problems,\ne.g., installing extensions.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 28 Jun 2022 22:17:17 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Tue, 2022-06-28 at 16:27 -0700, Andres Freund wrote:\n> > Experience shows that 99% of the time one can run PostgreSQL just fine\n> > without a superuser\n> \n> IME that's not at all true. It might not be needed interactively, but that's\n> not all the same as not being needed at all.\n\nI also disagree with that. Not having a superuser is one of the pain\npoints with using a hosted database: no untrusted procedural languages,\nno untrusted extensions (unless someone hacked up PostgreSQL or provided\na workaround akin to a SECURITY DEFINER function), etc.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 29 Jun 2022 08:51:10 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "Hi,\n\nOn 2022-06-29 08:51:10 +0200, Laurenz Albe wrote:\n> On Tue, 2022-06-28 at 16:27 -0700, Andres Freund wrote:\n> > > Experience shows that 99% of the time one can run PostgreSQL just fine\n> > > without a superuser\n> > \n> > IME that's not at all true. It might not be needed interactively, but that's\n> > not all the same as not being needed at all.\n> \n> I also disagree with that. Not having a superuser is one of the pain\n> points with using a hosted database: no untrusted procedural languages,\n> no untrusted extensions (unless someone hacked up PostgreSQL or provided\n> a workaround akin to a SECURITY DEFINER function), etc.\n\nI'm not sure what exactly you're disagreeing with? 
I'm not saying that\nsuperuser isn't needed interactively in general, just that there are\nreasonably common scenarios in which that's the case.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Jun 2022 00:05:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "The idea is to allow superuser, but only in case you *already* have\naccess to the file system.\nYou could think of it as a two factor authentication for using superuser.\n\n\nSo in the simplest implementation it would be\n\ntouch $PGDATA/allow_superuser\n\npsql\nhannuk=# CREATE EXTENSION ...\n\nrm $PGDATA/allow_superuser\n\n\nand in more sophisticated implementation it could be\n\nterminal 1:\npsql\nhannuk=# select pg_backend_pid();\n pg_backend_pid\n----------------\n 1749025\n(1 row)\n\nterminal 2:\necho 1749025 > $PGDATA/allow_superuser\n\nback to terminal 1 still connected to backend with pid 1749025:\n$ CREATE EXTENSION ...\n\n.. and then clean up the sentinel file after, or just make it valid\nfor N minutes from creation\n\n\nCheers,\nHannu Krosing\n\nOn Wed, Jun 29, 2022 at 8:51 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Tue, 2022-06-28 at 16:27 -0700, Andres Freund wrote:\n> > > Experience shows that 99% of the time one can run PostgreSQL just fine\n> > > without a superuser\n> >\n> > IME that's not at all true. It might not be needed interactively, but that's\n> > not all the same as not being needed at all.\n>\n> I also disagree with that. 
Not having a superuser is one of the pain\n> points with using a hosted database: no untrusted procedural languages,\n> no untrusted extensions (unless someone hacked up PostgreSQL or provided\n> a workaround akin to a SECURITY DEFINER function), etc.\n>\n> Yours,\n> Laurenz Albe\n\n\n", "msg_date": "Wed, 29 Jun 2022 09:45:59 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Wed, 2022-06-29 at 00:05 -0700, Andres Freund wrote:\n> On 2022-06-29 08:51:10 +0200, Laurenz Albe wrote:\n> > On Tue, 2022-06-28 at 16:27 -0700, Andres Freund wrote:\n> > > > Experience shows that 99% of the time one can run PostgreSQL just fine\n> > > > without a superuser\n> > > \n> > > IME that's not at all true. It might not be needed interactively, but that's\n> > > not all the same as not being needed at all.\n> > \n> > I also disagree with that.  Not having a superuser is one of the pain\n> > points with using a hosted database: no untrusted procedural languages,\n> > no untrusted extensions (unless someone hacked up PostgreSQL or provided\n> > a workaround akin to a SECURITY DEFINER function), etc.\n> \n> I'm not sure what exactly you're disagreeing with? I'm not saying that\n> superuser isn't needed interactively in general, just that there are\n> reasonably common scenarios in which that's the case.\n\nI was unclear, sorry. 
I agreed with you that you can't do without superuser\nand disagreed with the claim that 99% of the time nobody needs superuser\naccess.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 29 Jun 2022 12:48:57 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "Ah, I see.\n\nMy counterclaim was that there are lots of use cases where one would\nwant to be extra sure that *only* they, as the owner of the database,\ncan access the database as a superuser and not someone else. Even if\nthere is some obscure way for that \"someone else\" to either connect as\na superuser or to escalate privileges to superuser.\n\nAnd what I propose would be a means to achieve that at the expense of\nextra steps when starting to act as a superuser.\n\nIn a nutshell this would be equivalent for two factor authentication\nfor acting as a superuser -\n 1) you must be able to log in as a user with superuser attribute\n 2) you must present proof that you can access the underlying file system\n\nCheers,\nHannu Krosing\n\n\nOn Wed, Jun 29, 2022 at 12:48 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Wed, 2022-06-29 at 00:05 -0700, Andres Freund wrote:\n> > On 2022-06-29 08:51:10 +0200, Laurenz Albe wrote:\n> > > On Tue, 2022-06-28 at 16:27 -0700, Andres Freund wrote:\n> > > > > Experience shows that 99% of the time one can run PostgreSQL just fine\n> > > > > without a superuser\n> > > >\n> > > > IME that's not at all true. It might not be needed interactively, but that's\n> > > > not all the same as not being needed at all.\n> > >\n> > > I also disagree with that. 
Not having a superuser is one of the pain\n> > > points with using a hosted database: no untrusted procedural languages,\n> > > no untrusted extensions (unless someone hacked up PostgreSQL or provided\n> > > a workaround akin to a SECURITY DEFINER function), etc.\n> >\n> > I'm not sure what exactly you're disagreeing with? I'm not saying that\n> > superuser isn't needed interactively in general, just that there are\n> > reasonably common scenarios in which that's the case.\n>\n> I was unclear, sorry. I agreed with you that you can't do without superuser\n> and disagreed with the claim that 99% of the time nobody needs superuser\n> access.\n>\n> Yours,\n> Laurenz Albe\n\n\n", "msg_date": "Wed, 29 Jun 2022 13:19:25 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Wed, Jun 29, 2022 at 3:46 AM Hannu Krosing <hannuk@google.com> wrote:\n> terminal 1:\n> psql\n> hannuk=# select pg_backend_pid();\n> pg_backend_pid\n> ----------------\n> 1749025\n> (1 row)\n>\n> terminal 2:\n> echo 1749025 > $PGDATA/allow_superuser\n>\n> back to terminal 1 still connected to backend with pid 1749025:\n> $ CREATE EXTENSION ...\n>\n> .. and then clean up the sentinel file after, or just make it valid\n> for N minutes from creation\n\nI don't think this would be very convenient in most scenarios, and I\nthink it would also be difficult to implement correctly. I don't think\nyou can get by with just having superuser() return false sometimes\ndespite pg_authid.rolsuper being true. 
There's a lot of subtle\nassumptions in the code to the effect that the properties of a session\nare basically stable unless some SQL is executed which changes things.\nI think if we start injecting hacks like this it may seem to work in\nlight testing but we'll never get to the end of the bug reports.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Jun 2022 11:52:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Thu, Jun 30, 2022 at 11:52:20AM -0400, Robert Haas wrote:\n> I don't think this would be very convenient in most scenarios, and I\n> think it would also be difficult to implement correctly. I don't think\n> you can get by with just having superuser() return false sometimes\n> despite pg_authid.rolsuper being true. There's a lot of subtle\n> assumptions in the code to the effect that the properties of a session\n> are basically stable unless some SQL is executed which changes things.\n> I think if we start injecting hacks like this it may seem to work in\n> light testing but we'll never get to the end of the bug reports.\n\nYeah, seems it would have to be specified per-session, but how would you\nspecify a specific session before the session starts?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Thu, 30 Jun 2022 13:25:41 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Thu, Jun 30, 2022 at 7:25 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Jun 30, 2022 at 11:52:20AM -0400, Robert Haas wrote:\n> > I don't think this would be very convenient in most scenarios,\n\nThis is the eternal problem with security - more security always\nincludes more inconvenience.\n\nUnlocking your door when coming home is more inconvenient than not\nunlocking it, and the least inconvenient thing would be not having a\ndoor at all.\nImagine coming to your door with a heavy shopping bag in each hand -\nat this moment almost anyone would prefer the door not being there :)\n\nThis one would be for cases where you want best multi-layer\nprotections also against unknown threats and are ready to trade some\nconvenience for security. Also it would not be that bad once you use\nautomated deployment pipelines which just need an extra line to unlock\nsuperuser for deployment.\n\n> > and I\n> > think it would also be difficult to implement correctly. I don't think\n> > you can get by with just having superuser() return false sometimes\n> > despite pg_authid.rolsuper being true. There's a lot of subtle\n> > assumptions in the code to the effect that the properties of a session\n> > are basically stable unless some SQL is executed which changes things.\n\nA good barrier SQL for this could be SET ROLE=... 
.\nMaybe just have a mode where a superuser can not log in _or_ SET ROLE\nunless this is explicitly allowed in pg_superuser.conf\n\n> > I think if we start injecting hacks like this it may seem to work in\n> > light testing but we'll never get to the end of the bug reports.\n\nIn this case it looks like each of these bug reports would mean an\navoided security breach which for me looks preferable.\n\nAgain, this would be all optional, opt-in, DBA-needs-to-set-it-up\nfeature for the professionally paranoid and not something we suddenly\nforce on people who would like to run all their databases using\nuser=postgres database=postgres with trust specified in the\npg_hba.conf \"because the firewall takes care of security\" :)\n\n> Yeah, seems it would have to be specified per-session, but how would you\n> specify a specific session before the session starts?\n\nOne often recommended way to do superuser'y things in a secure\nproduction database is to have a non-privileged NOINHERIT user for\nlogging in and then do\nSET ROLE=<superuserrole>;\nwhen needed, similar to using su/sudo in shell. 
This practice both\nreduces the attack surface and also provides auditability by knowing\nwho logged in for superuser work.\n\nIn this case one could easily get the pid and do the needed extra\nsetup before escalating privileges to superuser.\n\n---\nHannu\n\n\n", "msg_date": "Fri, 1 Jul 2022 11:14:59 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "And thanks to Robert and Bruce for bringing up good points about\npotential pitfalls!\n\nI think we do have a good discussion going on here :)\n\n---\nHannu\n\nOn Fri, Jul 1, 2022 at 11:14 AM Hannu Krosing <hannuk@google.com> wrote:\n>\n> On Thu, Jun 30, 2022 at 7:25 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Thu, Jun 30, 2022 at 11:52:20AM -0400, Robert Haas wrote:\n> > > I don't think this would be very convenient in most scenarios,\n>\n> This is the eternal problem with security - more security always\n> includes more inconvenience.\n>\n> Unlocking your door when coming home is more inconvenient than not\n> unlocking it, and the least inconvenient thing would be not having a\n> door at all.\n> Imagine coming to your door with a heavy shopping bag in each hand -\n> at this moment almost anyone would prefer the door not being there :)\n>\n> This one would be for cases where you want best multi-layer\n> protections also against unknown threats and are ready to trade some\n> convenience for security. Also it would not be that bad once you use\n> automated deployment pipelines which just need an extra line to unlock\n> superuser for deployment.\n>\n> > > and I\n> > > think it would also be difficult to implement correctly. I don't think\n> > > you can get by with just having superuser() return false sometimes\n> > > despite pg_authid.rolsuper being true. 
There's a lot of subtle\n> > > assumptions in the code to the effect that the properties of a session\n> > > are basically stable unless some SQL is executed which changes things.\n>\n> A good barrier SQL for this could be SET ROLE=... .\n> Maybe just have a mode where a superuser can not log in _or_ SET ROLE\n> unless this is explicitly allowed in pg_superuser.conf\n>\n> > > I think if we start injecting hacks like this it may seem to work in\n> > > light testing but we'll never get to the end of the bug reports.\n>\n> In this case it looks like each of these bug reports would mean an\n> avoided security breach which for me looks preferable.\n>\n> Again, this would be all optional, opt-in, DBA-needs-to-set-it-up\n> feature for the professionally paranoid and not something we suddenly\n> force on people who would like to run all their databases using\n> user=postgres database=postgres with trust specified in the\n> pg_hba.conf \"because the firewall takes care of security\" :)\n>\n> > Yeah, seems it would have to be specified per-session, but how would you\n> > specify a specific session before the session starts?\n>\n> One often recommended way to do superuser'y things in a secure\n> production database is to have a non-privileged NOINHERIT user for\n> logging in and then do\n> SET ROLE=<superuserrole>;\n> when needed, similar to using su/sudo in shell. 
This practice both\n> reduces the attack surface and also provides auditability by knowing\n> who logged in for superuser work.\n>\n> In this case one could easily get the pid and do the needed extra\n> setup before escalating privileges to superuser.\n>\n> ---\n> Hannu\n\n\n", "msg_date": "Fri, 1 Jul 2022 11:17:45 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On 7/1/22 05:14, Hannu Krosing wrote:\n> On Thu, Jun 30, 2022 at 7:25 PM Bruce Momjian <bruce@momjian.us> wrote:\n>> On Thu, Jun 30, 2022 at 11:52:20AM -0400, Robert Haas wrote:\n>> > I don't think this would be very convenient in most scenarios,\n> \n> This is the eternal problem with security - more security always\n> includes more inconvenience.\n\nyep\n\n> This one would be for cases where you want best multi-layer\n> protections also against unknown threats and are ready to trade some\n> convenience for security. Also it would not be that bad once you use\n> automated deployment pipelines which just need an extra line to unlock\n> superuser for deployment.\n\n+1\n\n>>> and I think it would also be difficult to implement correctly. I\n>>> don't think you can get by with just having superuser() return\n>>> false sometimes despite pg_authid.rolsuper being true. There's a\n>>> lot of subtle assumptions in the code to the effect that the\n>>> properties of a session are basically stable unless some SQL is\n>>> executed which changes things.\n> A good barrier SQL for this could be SET ROLE=... .\n> Maybe just have a mode where a superuser can not log in _or_ SET ROLE\n> unless this is explicitly allowed in pg_superuser.conf\n\nAgreed.\n\nIn fact in a recent discussion with Joshua Brindle (CC'd) he wished for \na way that we could designate the current session \"tainted\". 
For example \nif role joe with membership in postgres should always be logging in from \n192.168.42.0/24 when performing admin duties as postgres, but logs in \nfrom elsewhere their session should be marked tainted and escalating to \npostgres should be denied.\n\n>> > I think if we start injecting hacks like this it may seem to work in\n>> > light testing but we'll never get to the end of the bug reports.\n> \n> In this case it looks like each of these bug reports would mean an\n> avoided security breach which for me looks preferable.\n> \n> Again, this would be all optional, opt-in, DBA-needs-to-set-it-up\n> feature for the professionally paranoid and not something we suddenly\n> force on people who would like to run all their databases using\n> user=postgres database=postgres with trust specified in the\n> pg_hba.conf \"because the firewall takes care of security\" :)\n> \n>> Yeah, seems it would have to be specified per-session, but how would you\n>> specify a specific session before the session starts?\n> \n> One often recommended way to do superuser'y things in a secure\n> production database is to have a non-privileged NOINHERIT user for\n> logging in and then do\n> SET ROLE=<superuserrole>;\n> when needed, similar to using su/sudo in shell. 
This practice both\n> reduces the attack surface and also provides auditability by knowing\n> who logged in for superuser work.\n\n+many\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 08:46:05 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" }, { "msg_contents": "On Fri, Jul 1, 2022 at 5:15 AM Hannu Krosing <hannuk@google.com> wrote:\n> This is the eternal problem with security - more security always\n> includes more inconvenience.\n\nBut the same amount of security can be more or less inconvenient, and\nI don't think your proposal does very well there. More inconvenience\ndoesn't mean more security.\n\nI actually think this whole line of attack is probably a dead end. My\npreferred approach is to find ways of delegating a larger subset of\nsuperuser privileges to non-superusers, or to prevent people from\nassuming the superuser role in the first place. Trying to restrict\nwhat superusers can do seems like a much more difficult path, and I\nthink it might be a dead end. But if such an approach has any hope of\nsuccess, I think it's going to have to try to create a situation where\nmost of the administration that you need to do can be done most of the\ntime with some sort of restricted superuser privileges, and only in\nextreme scenarios do you need to change the cluster state to allow\nfull superuser access. There's no such nuance in your proposal. It's\njust a great big switch that makes superuser mean either nothing, or\nall the things it means today. 
I don't think that's really a\nmeaningful step forward.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 09:32:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hardening PostgreSQL via (optional) ban on local file system\n access" } ]
[ { "msg_contents": "Now that I've gotten your attention..\n\nI expect pg_upgrade to fail if I run it twice in a row.\n\nIt would be reasonable if it were to fail during the \"--check\" phase,\nmaybe by failing like this:\n| New cluster database \"...\" is not empty: found relation \"...\"\n\nBut that fails to happen if the cluster has neither tables nor matviews, in\nwhich case, it passes the check phase and then fails like this:\n\nPerforming Upgrade\n------------------\nAnalyzing all rows in the new cluster ok\nFreezing all rows in the new cluster ok\nDeleting files from new pg_xact ok\nCopying old pg_clog to new server ok\nSetting oldest XID for new cluster ok\nSetting next transaction ID and epoch for new cluster ok\nDeleting files from new pg_multixact/offsets ok\nCopying old pg_multixact/offsets to new server ok\nDeleting files from new pg_multixact/members ok\nCopying old pg_multixact/members to new server ok\nSetting next multixact ID and offset for new cluster ok\nResetting WAL archives ok\nSetting frozenxid and minmxid counters in new cluster connection to server on socket \"/home/pryzbyj/src/postgres/.s.PGSQL.50432\" failed: FATAL: could not open relation with OID 2610\nFailure, exiting\n\nI'll concede that a cluster which has no tables sounds a lot like a toy, but I\nfind it disturbing that nothing prevents running the process twice, up to the\npoint that it's evidently corrupted the catalog.\n\nWhile looking at this, I noticed that starting postgres --single immediately\nafter initdb allows creating objects with OIDs below FirstNormalObjectId\n(thereby defeating pg_upgrade's check). 
I'm not familiar with the behavioral\ndifferences of single user mode, and couldn't find anything in the\ndocumentation.\n\n\n", "msg_date": "Sat, 25 Jun 2022 11:04:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg_upgrade allows itself to be run twice" }, { "msg_contents": "On Sat, Jun 25, 2022 at 11:04:37AM -0500, Justin Pryzby wrote:\n> I expect pg_upgrade to fail if I run it twice in a row.\n\nYep.\n\n> It would be reasonable if it were to fail during the \"--check\" phase,\n> maybe by failing like this:\n> | New cluster database \"...\" is not empty: found relation \"...\"\n\nSo, we get a complaint that the new cluster is not empty after one\npg_upgrade run with a new command of pg_upgrade, with or without\n--check. This happens in check_new_cluster(), where we'd fatal if a\nrelation uses a namespace different than pg_catalog.\n\n> But that fails to happen if the cluster has neither tables nor matviews, in\n> which case, it passes the check phase and then fails like this:\n\nIndeed, as of get_rel_infos(). \n\n> I'll concede that a cluster which has no tables sounds a lot like a toy, but I\n> find it disturbing that nothing prevents running the process twice, up to the\n> point that it's evidently corrupted the catalog.\n\nI have heard of cases where instances were only used with a set of\nforeign tables, for example. Not sure that this is spread enough to\nworry about, but this would fail as much as your case.\n\n> While looking at this, I noticed that starting postgres --single immediately\n> after initdb allows creating objects with OIDs below FirstNormalObjectId\n> (thereby defeating pg_upgrade's check). 
I'm not familiar with the behavioral\n> differences of single user mode, and couldn't find anything in the\n> documentation.\n\nThis one comes from NextOID in the control data file after a fresh\ninitdb, and GetNewObjectId() would enforce that in a postmaster\nenvironment to be FirstNormalObjectId when assigning the first user\nOID. Would you imply an extra step at the end of initdb to update the\ncontrol data file of the new cluster to reflect FirstNormalObjectId?\n--\nMichael", "msg_date": "Wed, 29 Jun 2022 13:17:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade allows itself to be run twice" }, { "msg_contents": "On Wed, Jun 29, 2022 at 01:17:33PM +0900, Michael Paquier wrote:\n> On Sat, Jun 25, 2022 at 11:04:37AM -0500, Justin Pryzby wrote:\n> \n> > I'll concede that a cluster which has no tables sounds a lot like a toy, but I\n> > find it disturbing that nothing prevents running the process twice, up to the\n> > point that it's evidently corrupted the catalog.\n> \n> I have heard of cases where instances were only used with a set of\n> foreign tables, for example. Not sure that this is spread enough to\n> worry about, but this would fail as much as your case.\n\nforeign tables have OIDs too.\n\nts=# SELECT * FROM pg_class WHERE relkind ='f';\noid | 1554544611\n\n> > While looking at this, I noticed that starting postgres --single immediately\n> > after initdb allows creating objects with OIDs below FirstNormalObjectId\n> > (thereby defeating pg_upgrade's check). I'm not familiar with the behavioral\n> > differences of single user mode, and couldn't find anything in the\n> > documentation.\n> \n> This one comes from NextOID in the control data file after a fresh\n> initdb, and GetNewObjectId() would enforce that in a postmaster\n> environment to be FirstNormalObjectId when assigning the first user\n> OID. 
Would you imply an extra step at the end of initdb to update the\n> control data file of the new cluster to reflect FirstNormalObjectId?\n\nI added a call to reset xlog, similar to what's in pg_upgrade.\nUnfortunately, I don't see an easy way to silence it.\n\n-- \nJustin", "msg_date": "Thu, 7 Jul 2022 01:22:55 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade allows itself to be run twice" }, { "msg_contents": "rebased and updated\n\nRobert thought that it might be reasonable for someone to initdb, and\nthen connect and make some modifications, and then pg_upgrade.\nhttps://www.postgresql.org/message-id/CA%2BTgmoYwaXh_wRRa2CqL4XpM4r6YEbq1%2Bec%3D%2B8b7Dr7-T_tT%2BQ%40mail.gmail.com\n\nBut the DBs are dropped by pg_upgrade, so that seems to serve no\npurpose, except for shared relations (and global objects?). In the case\nof shared relations, it seems unsafe (even though my test didn't cause\nan immediate explosion).\n\nSo rather than continuing to allow arbitrary changes between initdb and\npg_upgrade, I propose to prohibit all changes. I'd consider allowing an\n\"inclusive\" list of allowable changes, if someone were to propose such a\nthing - but since DBs are dropped, I'm not sure what it could include.", "msg_date": "Mon, 5 Sep 2022 12:03:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade allows itself to be run twice" }, { "msg_contents": "On 07.07.22 08:22, Justin Pryzby wrote:\n>> This one comes from NextOID in the control data file after a fresh\n>> initdb, and GetNewObjectId() would enforce that in a postmaster\n>> environment to be FirstNormalObjectId when assigning the first user\n>> OID. 
Would you imply an extra step at the end of initdb to update the\n>> control data file of the new cluster to reflect FirstNormalObjectId?\n> I added a call to reset xlog, similar to what's in pg_upgrade.\n> Unfortunately, I don't see an easy way to silence it.\n\nI think it would be better to update the control file directly instead \nof going through pg_resetwal. (See \nsrc/include/common/controldata_utils.h for the required functions.)\n\nHowever, I don't know whether we need to add special provisions that \nguard against people using postgres --single in complicated ways. Many \nconsider the single-user mode deprecated outside of initdb use.\n\n\n\n", "msg_date": "Tue, 1 Nov 2022 13:54:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade allows itself to be run twice" }, { "msg_contents": "On Tue, Nov 01, 2022 at 01:54:35PM +0100, Peter Eisentraut wrote:\n> On 07.07.22 08:22, Justin Pryzby wrote:\n> > > This one comes from NextOID in the control data file after a fresh\n> > > initdb, and GetNewObjectId() would enforce that in a postmaster\n> > > environment to be FirstNormalObjectId when assigning the first user\n> > > OID. Would you imply an extra step at the end of initdb to update the\n> > > control data file of the new cluster to reflect FirstNormalObjectId?\n> > I added a call to reset xlog, similar to what's in pg_upgrade.\n> > Unfortunately, I don't see an easy way to silence it.\n> \n> I think it would be better to update the control file directly instead of\n> going through pg_resetwal. (See src/include/common/controldata_utils.h for\n> the required functions.)\n> \n> However, I don't know whether we need to add special provisions that guard\n> against people using postgres --single in complicated ways. Many consider\n> the single-user mode deprecated outside of initdb use.\n\nThanks for looking.\n\nOne other thing I noticed (by accident!) 
is that pg_upgrade doesn't\nprevent itself from trying to upgrade a cluster on top of itself:\n\n| $ /usr/pgsql-15/bin/initdb -D pg15.dat -N\n| $ /usr/pgsql-15/bin/pg_upgrade -D pg15.dat -d pg15.dat -b /usr/pgsql-15/bin\n| Performing Upgrade\n| ------------------\n| Analyzing all rows in the new cluster ok\n| Freezing all rows in the new cluster ok\n| Deleting files from new pg_xact ok\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n| Copying old pg_xact to new server\n| *failure*\n| \n| Consult the last few lines of \"pg15.dat/pg_upgrade_output.d/20221101T055916.486/log/pg_upgrade_utility.log\" for\n>>\n| command: cp -Rf \"pg15.dat/pg_xact\" \"pg15.dat/pg_xact\" >> \"pg15.dat/pg_upgrade_output.d/20221101T055916.486/log/pg_upgrade_utility.log\" 2>&1\n| cp: cannot stat 'pg15.dat/pg_xact': No such file or directory\n\nThis may be of little concern since it's upgrading a version to itself, which\nonly applies to developers.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 1 Nov 2022 08:07:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade allows itself to be run twice" }, { "msg_contents": "On 01.11.22 14:07, Justin Pryzby wrote:\n> On Tue, Nov 01, 2022 at 01:54:35PM +0100, Peter Eisentraut wrote:\n>> On 07.07.22 08:22, Justin Pryzby wrote:\n>>>> This one comes from NextOID in the control data file after a fresh\n>>>> initdb, and GetNewObjectId() would enforce that in a postmaster\n>>>> environment to be FirstNormalObjectId when assigning the first user\n>>>> OID. Would you imply an extra step at the end of initdb to update the\n>>>> control data file of the new cluster to reflect FirstNormalObjectId?\n>>> I added a call to reset xlog, similar to what's in pg_upgrade.\n>>> Unfortunately, I don't see an easy way to silence it.\n>>\n>> I think it would be better to update the control file directly instead of\n>> going through pg_resetwal. 
(See src/include/common/controldata_utils.h for\n>> the required functions.)\n>>\n>> However, I don't know whether we need to add special provisions that guard\n>> against people using postgres --single in complicated ways. Many consider\n>> the single-user mode deprecated outside of initdb use.\n> \n> Thanks for looking.\n\nI think the above is a \"returned with feedback\" at this point.\n\n> One other thing I noticed (by accident!) is that pg_upgrade doesn't\n> prevent itself from trying to upgrade a cluster on top of itself:\n> \n> | $ /usr/pgsql-15/bin/initdb -D pg15.dat -N\n> | $ /usr/pgsql-15/bin/pg_upgrade -D pg15.dat -d pg15.dat -b /usr/pgsql-15/bin\n> | Performing Upgrade\n> | ------------------\n> | Analyzing all rows in the new cluster ok\n> | Freezing all rows in the new cluster ok\n> | Deleting files from new pg_xact ok\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> | Copying old pg_xact to new server\n> | *failure*\n> |\n> | Consult the last few lines of \"pg15.dat/pg_upgrade_output.d/20221101T055916.486/log/pg_upgrade_utility.log\" for\n>>>\n> | command: cp -Rf \"pg15.dat/pg_xact\" \"pg15.dat/pg_xact\" >> \"pg15.dat/pg_upgrade_output.d/20221101T055916.486/log/pg_upgrade_utility.log\" 2>&1\n> | cp: cannot stat 'pg15.dat/pg_xact': No such file or directory\n> \n> This may be of little concern since it's upgrading a version to itself, which\n> only applies to developers.\n\nI think this would be worth addressing nonetheless, for robustness. 
For \ncomparison, \"cp\" and \"mv\" will error if you give source and destination \nthat are the same file.\n\n\n\n", "msg_date": "Thu, 1 Dec 2022 10:30:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade allows itself to be run twice" }, { "msg_contents": "On Thu, Dec 01, 2022 at 10:30:16AM +0100, Peter Eisentraut wrote:\n> On 01.11.22 14:07, Justin Pryzby wrote:\n> > On Tue, Nov 01, 2022 at 01:54:35PM +0100, Peter Eisentraut wrote:\n> > > On 07.07.22 08:22, Justin Pryzby wrote:\n> > > > > This one comes from NextOID in the control data file after a fresh\n> > > > > initdb, and GetNewObjectId() would enforce that in a postmaster\n> > > > > environment to be FirstNormalObjectId when assigning the first user\n> > > > > OID. Would you imply an extra step at the end of initdb to update the\n> > > > > control data file of the new cluster to reflect FirstNormalObjectId?\n> > > > I added a call to reset xlog, similar to what's in pg_upgrade.\n> > > > Unfortunately, I don't see an easy way to silence it.\n> > > \n> > > I think it would be better to update the control file directly instead of\n> > > going through pg_resetwal. (See src/include/common/controldata_utils.h for\n> > > the required functions.)\n> > > \n> > > However, I don't know whether we need to add special provisions that guard\n> > > against people using postgres --single in complicated ways. Many consider\n> > > the single-user mode deprecated outside of initdb use.\n> > \n> > Thanks for looking.\n\nTo be clear, I don't think it's worth doing anything special just to\navoid creating special OIDs if someone runs postgres --single\nimmediately after initdb.\n\nHowever, setting FirstNormalOid in initdb itself (rather than in the\nnext invocation of postgres, if it isn't in single-user-mode) was the\nmechanism by which to avoid the original problem that pg_upgrade can be\nrun twice, if there are no non-system relations. 
That still seems\ndesirable to fix somehow.\n\n> I think the above is a \"returned with feedback\" at this point.\n> \n> > One other thing I noticed (by accident!) is that pg_upgrade doesn't\n> > prevent itself from trying to upgrade a cluster on top of itself:\n\n> > | command: cp -Rf \"pg15.dat/pg_xact\" \"pg15.dat/pg_xact\" >> \"pg15.dat/pg_upgrade_output.d/20221101T055916.486/log/pg_upgrade_utility.log\" 2>&1\n> > | cp: cannot stat 'pg15.dat/pg_xact': No such file or directory\n> > \n> > This may be of little concern since it's upgrading a version to itself, which\n> > only applies to developers.\n> \n> I think this would be worth addressing nonetheless, for robustness. For\n> comparison, \"cp\" and \"mv\" will error if you give source and destination that\n> are the same file.\n\nI addressed this in 002.\n\n-- \nJustin", "msg_date": "Fri, 16 Dec 2022 07:38:09 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade allows itself to be run twice" }, { "msg_contents": "On Fri, Dec 16, 2022 at 07:38:09AM -0600, Justin Pryzby wrote:\n> However, setting FirstNormalOid in initdb itself (rather than in the\n> next invocation of postgres, if it isn't in single-user-mode) was the\n> mechanism by which to avoid the original problem that pg_upgrade can be\n> run twice, if there are no non-system relations. That still seems\n> desirable to fix somehow.\n\n+ if (new_cluster.controldata.chkpnt_nxtoid != FirstNormalObjectId)\n+ pg_fatal(\"New cluster is not pristine: OIDs have been assigned since initdb (%u != %u)\\n\",\n+ new_cluster.controldata.chkpnt_nxtoid, FirstNormalObjectId);\n\nOn wraparound FirstNormalObjectId would be the first value we use for\nnextOid. Okay, that's very unlikely going to happen, still, strictly\nspeaking, that could be incorrect.\n\n>> I think this would be worth addressing nonetheless, for robustness. 
For\n>> comparison, \"cp\" and \"mv\" will error if you give source and destination that\n>> are the same file.\n> \n> I addressed this in 002.\n\n+ if (strcmp(make_absolute_path(old_cluster.pgdata),\n+ make_absolute_path(new_cluster.pgdata)) == 0)\n+ pg_fatal(\"cannot upgrade a cluster on top of itself\");\n\nShouldn't this be done after adjust_data_dir(), which is the point\nwhere we'll know the actual data folders we are working on by querying\npostgres -C data_directory?\n--\nMichael", "msg_date": "Mon, 19 Dec 2022 15:53:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade allows itself to be run twice" } ]
[ { "msg_contents": "JSON/SQL jsonpath\n\nFor example, a jsonpath string with deliberate typo 'like_regexp' \n(instead of 'like_regex'):\n\nselect js\nfrom (values (jsonb '{}')) as f(js)\nwhere js @? '$ ? (@ like_regexp \"^xxx\")';\n\nERROR: syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\nLINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@ li...\n ^\n\nBoth 'IDENT_P' and 'at or near \" \"' seem pretty useless.\n\nPerhaps some improvement can be thought of?\n\nSimilar messages in release 14 seem to use 'invalid token', which is better:\n\nselect js\nfrom (values (jsonb '{\"a\":\"b\"}')) as f(js)\nwhere js @? '$ ? (@.a .= \"b\")';\nERROR: syntax error, unexpected invalid token at or near \"=\" of \njsonpath input\n\nthanks,\nErik Rijkers\n\n\n\n", "msg_date": "Sun, 26 Jun 2022 17:44:22 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "JSON/SQL: jsonpath: incomprehensible error message" }, { "msg_contents": "On 2022-06-26 Su 11:44, Erik Rijkers wrote:\n> JSON/SQL jsonpath\n>\n> For example, a jsonpath string with deliberate typo 'like_regexp'\n> (instead of 'like_regex'):\n>\n> select js\n> from (values (jsonb '{}')) as f(js)\n> where js @? '$ ? (@ like_regexp \"^xxx\")';\n>\n> ERROR:  syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\n> LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@\n> li...\n>                                                              ^\n>\n> Both  'IDENT_P'  and  'at or near \" \"'  seem pretty useless.\n>\n> Perhaps some improvement can be thought of?\n>\n> Similar messages in release 14 seem to use 'invalid token', which is\n> better:\n>\n> select js\n> from (values (jsonb '{\"a\":\"b\"}')) as f(js)\n> where js @? '$ ? 
(@.a .= \"b\")';\n> ERROR:  syntax error, unexpected invalid token at or near \"=\" of\n> jsonpath input\n>\n>\n\nYeah :-(\n\nThis apparently goes back to the original jsonpath commit 72b6460336e.\nThere are similar error messages in the back branch regression tests:\n\nandrew@ub20:pgl $ grep -r IDENT_P pg_*/src/test/regress/expected/\npg_12/src/test/regress/expected/jsonpath.out:ERROR:  syntax error, unexpected IDENT_P at end of jsonpath input\npg_13/src/test/regress/expected/jsonpath.out:ERROR:  syntax error, unexpected IDENT_P at end of jsonpath input\npg_14/src/test/regress/expected/jsonpath.out:ERROR:  syntax error, unexpected IDENT_P at end of jsonpath input\n\nFor some reason the parser contains a '%error-verbose' directive, unlike\nall our other bison parsers. Removing that fixes it, as in this patch.\nI'm a bit inclined to say we should backpatch the removal of the\ndirective, but I guess a lack of complaints suggests it's not a huge issue.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 27 Jun 2022 11:15:57 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON/SQL: jsonpath: incomprehensible error message" }, { "msg_contents": "On Mon, Jun 27, 2022 at 8:46 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 2022-06-26 Su 11:44, Erik Rijkers wrote:\n> > JSON/SQL jsonpath\n> >\n> > For example, a jsonpath string with deliberate typo 'like_regexp'\n> > (instead of 'like_regex'):\n> >\n> > select js\n> > from (values (jsonb '{}')) as f(js)\n> > where js @? '$ ? (@ like_regexp \"^xxx\")';\n> >\n> > ERROR: syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\n> > LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? 
(@\n> > li...\n> > ^\n> >\n> > Both 'IDENT_P' and 'at or near \" \"' seem pretty useless.\n> >\n> > Perhaps some improvement can be thought of?\n> >\n> > Similar messages in release 14 seem to use 'invalid token', which is\n> > better:\n> >\n> > select js\n> > from (values (jsonb '{\"a\":\"b\"}')) as f(js)\n> > where js @? '$ ? (@.a .= \"b\")';\n> > ERROR: syntax error, unexpected invalid token at or near \"=\" of\n> > jsonpath input\n> >\n> >\n>\n> Yeah :-(\n>\n> This apparently goes back to the original jsonpath commit 72b6460336e.\n> There are similar error messages in the back branch regression tests:\n>\n> andrew@ub20:pgl $ grep -r IDENT_P pg_*/src/test/regress/expected/\n> pg_12/src/test/regress/expected/jsonpath.out:ERROR: syntax error, unexpected IDENT_P at end of jsonpath input\n> pg_13/src/test/regress/expected/jsonpath.out:ERROR: syntax error, unexpected IDENT_P at end of jsonpath input\n> pg_14/src/test/regress/expected/jsonpath.out:ERROR: syntax error, unexpected IDENT_P at end of jsonpath input\n>\n> For some reason the parser contains a '%error-verbose' directive, unlike\n> all our other bison parsers. Removing that fixes it, as in this patch.\n> I'm a bit inclined to say we should backpatch the removal of the\n> directive,\n>\n\nI guess it is okay to backpatch unless we think some user will be\ndependent on such a message or there could be other side effects of\nremoving this. 
One thing that is not clear to me is why OP sees an\nacceptable message (ERROR: syntax error, unexpected invalid token at\nor near \"=\" of jsonpath input) for a similar query in 14?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 29 Jun 2022 18:30:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON/SQL: jsonpath: incomprehensible error message" }, { "msg_contents": "Op 29-06-2022 om 15:00 schreef Amit Kapila:\n> On Mon, Jun 27, 2022 at 8:46 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> On 2022-06-26 Su 11:44, Erik Rijkers wrote:\n>>> JSON/SQL jsonpath\n>>>\n>>> For example, a jsonpath string with deliberate typo 'like_regexp'\n>>> (instead of 'like_regex'):\n>>>\n>>> select js\n>>> from (values (jsonb '{}')) as f(js)\n>>> where js @? '$ ? (@ like_regexp \"^xxx\")';\n>>>\n>>> ERROR: syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\n>>> LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@\n>>> li...\n>>>\n>>> Both 'IDENT_P' and 'at or near \" \"' seem pretty useless.\n>>>\n\n> removing this. One thing that is not clear to me is why OP sees an\n> acceptable message (ERROR: syntax error, unexpected invalid token at\n> or near \"=\" of jsonpath input) for a similar query in 14?\n\nTo mention that was perhaps unwise of me because The IDENT_P (or more \ngenerally, *_P) messages can be provoked on 14 too. 
I just thought \n'invalid token' might be a better message because 'token' gives a more \ndirect association with 'errors during parsing' which I assume is the \ncase here.\n\nIDENT_P or ANY_P convey exactly nothing.\n\n\nErik\n\n\n\n\n\n", "msg_date": "Wed, 29 Jun 2022 15:28:02 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "Re: JSON/SQL: jsonpath: incomprehensible error message" }, { "msg_contents": "On Wed, Jun 29, 2022 at 4:28 PM Erik Rijkers <er@xs4all.nl> wrote:\n> Op 29-06-2022 om 15:00 schreef Amit Kapila:\n> > On Mon, Jun 27, 2022 at 8:46 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>\n> >> On 2022-06-26 Su 11:44, Erik Rijkers wrote:\n> >>> JSON/SQL jsonpath\n> >>>\n> >>> For example, a jsonpath string with deliberate typo 'like_regexp'\n> >>> (instead of 'like_regex'):\n> >>>\n> >>> select js\n> >>> from (values (jsonb '{}')) as f(js)\n> >>> where js @? '$ ? (@ like_regexp \"^xxx\")';\n> >>>\n> >>> ERROR: syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\n> >>> LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@\n> >>> li...\n> >>>\n> >>> Both 'IDENT_P' and 'at or near \" \"' seem pretty useless.\n> >>>\n>\n> > removing this. One thing that is not clear to me is why OP sees an\n> > acceptable message (ERROR: syntax error, unexpected invalid token at\n> > or near \"=\" of jsonpath input) for a similar query in 14?\n>\n> To mention that was perhaps unwise of me because The IDENT_P (or more\n> generally, *_P) messages can be provoked on 14 too. 
I just thought\n> 'invalid token' might be a better message because 'token' gives a more\n> direct association with 'errors during parsing' which I assume is the\n> case here.\n>\n> IDENT_P or ANY_P convey exactly nothing.\n\n+1\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 29 Jun 2022 17:58:37 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON/SQL: jsonpath: incomprehensible error message" }, { "msg_contents": "\nOn 2022-06-29 We 10:58, Alexander Korotkov wrote:\n> On Wed, Jun 29, 2022 at 4:28 PM Erik Rijkers <er@xs4all.nl> wrote:\n>> Op 29-06-2022 om 15:00 schreef Amit Kapila:\n>>> On Mon, Jun 27, 2022 at 8:46 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>> On 2022-06-26 Su 11:44, Erik Rijkers wrote:\n>>>>> JSON/SQL jsonpath\n>>>>>\n>>>>> For example, a jsonpath string with deliberate typo 'like_regexp'\n>>>>> (instead of 'like_regex'):\n>>>>>\n>>>>> select js\n>>>>> from (values (jsonb '{}')) as f(js)\n>>>>> where js @? '$ ? (@ like_regexp \"^xxx\")';\n>>>>>\n>>>>> ERROR: syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\n>>>>> LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@\n>>>>> li...\n>>>>>\n>>>>> Both 'IDENT_P' and 'at or near \" \"' seem pretty useless.\n>>>>>\n>>> removing this. One thing that is not clear to me is why OP sees an\n>>> acceptable message (ERROR: syntax error, unexpected invalid token at\n>>> or near \"=\" of jsonpath input) for a similar query in 14?\n>> To mention that was perhaps unwise of me because The IDENT_P (or more\n>> generally, *_P) messages can be provoked on 14 too. I just thought\n>> 'invalid token' might be a better message because 'token' gives a more\n>> direct association with 'errors during parsing' which I assume is the\n>> case here.\n>>\n>> IDENT_P or ANY_P convey exactly nothing.\n> +1\n>\n\n\nI agree, but I don't think \"invalid token\" is all that much better. 
I\nthink the right fix is just to get rid of the parser setting that causes\nproduction of these additions to the error message, and make it just\nlike all the other bison parsers we have. Then the problem just disappears.\n\nIt's a very slight change of behaviour, but I agree with Amit that we\ncan backpatch it. I will do so shortly unless there's an objection.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 29 Jun 2022 11:29:45 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON/SQL: jsonpath: incomprehensible error message" }, { "msg_contents": "On Wed, Jun 29, 2022 at 6:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n>\n> Op 29-06-2022 om 15:00 schreef Amit Kapila:\n> > On Mon, Jun 27, 2022 at 8:46 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>\n> >> On 2022-06-26 Su 11:44, Erik Rijkers wrote:\n> >>> JSON/SQL jsonpath\n> >>>\n> >>> For example, a jsonpath string with deliberate typo 'like_regexp'\n> >>> (instead of 'like_regex'):\n> >>>\n> >>> select js\n> >>> from (values (jsonb '{}')) as f(js)\n> >>> where js @? '$ ? (@ like_regexp \"^xxx\")';\n> >>>\n> >>> ERROR: syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\n> >>> LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@\n> >>> li...\n> >>>\n> >>> Both 'IDENT_P' and 'at or near \" \"' seem pretty useless.\n> >>>\n>\n> > removing this. 
One thing that is not clear to me is why OP sees an\n> > acceptable message (ERROR: syntax error, unexpected invalid token at\n> > or near \"=\" of jsonpath input) for a similar query in 14?\n>\n> To mention that was perhaps unwise of me because The IDENT_P (or more\n> generally, *_P) messages can be provoked on 14 too.\n>\n\nOkay, then I think it is better to backpatch this fix.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 30 Jun 2022 13:49:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON/SQL: jsonpath: incomprehensible error message" }, { "msg_contents": "\nOn 2022-06-30 Th 04:19, Amit Kapila wrote:\n> On Wed, Jun 29, 2022 at 6:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n>> Op 29-06-2022 om 15:00 schreef Amit Kapila:\n>>> On Mon, Jun 27, 2022 at 8:46 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>> On 2022-06-26 Su 11:44, Erik Rijkers wrote:\n>>>>> JSON/SQL jsonpath\n>>>>>\n>>>>> For example, a jsonpath string with deliberate typo 'like_regexp'\n>>>>> (instead of 'like_regex'):\n>>>>>\n>>>>> select js\n>>>>> from (values (jsonb '{}')) as f(js)\n>>>>> where js @? '$ ? (@ like_regexp \"^xxx\")';\n>>>>>\n>>>>> ERROR: syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\n>>>>> LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@\n>>>>> li...\n>>>>>\n>>>>> Both 'IDENT_P' and 'at or near \" \"' seem pretty useless.\n>>>>>\n>>> removing this. 
One thing that is not clear to me is why OP sees an\n>>> acceptable message (ERROR: syntax error, unexpected invalid token at\n>>> or near \"=\" of jsonpath input) for a similar query in 14?\n>> To mention that was perhaps unwise of me because The IDENT_P (or more\n>> generally, *_P) messages can be provoked on 14 too.\n>>\n> Okay, then I think it is better to backpatch this fix.\n\n\n\nDone.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 3 Jul 2022 17:28:34 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON/SQL: jsonpath: incomprehensible error message" } ]
[ { "msg_contents": "Hello Hackers,\n\nWhile working on one of my blogs on the B-Tree indexes\n<https://www.percona.com/blog/postgresql-14-b-tree-index-reduced-bloat-with-bottom-up-deletion/>,\nI needed to look at a range of B-Tree page statistics. So the goto solution\nwas to use pageinspect. However, reviewing stats for multiple pages meant\nissuing multiple queries. I felt that there's an opportunity for\nimprovement in the extension by extending the API to output the statistics\nfor multiple pages with a single query.\n\nThat attached patch is based on the master branch. It makes the following\nchanges to the pageinspect contrib module:\n- Updates bt_page_stats_internal function to accept 3 arguments instead of\n2.\n- The function now uses SRF macros to return a set rather than a single\nrow. The function call now requires specifying column names.\n\nThe extension version is bumped to 1.11 (PAGEINSPECT_V1_11).\nTo maintain backward compatibility, for versions below 1.11, the multi-call\nmechanism is ended to keep the old behavior consistent.\n\nRegression test cases for the module are updated as well as part of this\nchange. 
Here is a subset of queries that are added to the btree.sql test\ncase file for pageinspect.\n\n----\nCREATE TABLE test2 AS (SELECT generate_series(1, 5000) AS col1);\nCREATE INDEX test2_col1_idx ON test2(col1);\nSELECT * FROM bt_page_stats('test2_col1_idx', 1, 2);\nSELECT * FROM bt_page_stats('test2_col1_idx', 1, 0);\nSELECT * FROM bt_page_stats('test2_col1_idx', 0, 1);\nSELECT * FROM bt_page_stats('test2_col1_idx', 1, -1);\nDROP TABLE test2;\n----\n\nRegards.\n\n--\nHamid Akhtar,\nPercona LLC,\nURL : www.percona.com\nCELL:+923335449950\nSKYPE: engineeredvirus", "msg_date": "Mon, 27 Jun 2022 12:31:55 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": true, "msg_subject": "Allow pageinspect's bt_page_stats function to return a set of rows\n instead of a single row" }, { "msg_contents": "Hi,\n\nOn 6/27/22 9:31 AM, Hamid Akhtar wrote:\n>\n> Hello Hackers,\n>\n> While working on one of my blogs on the B-Tree indexes \n> <https://www.percona.com/blog/postgresql-14-b-tree-index-reduced-bloat-with-bottom-up-deletion/>, \n> I needed to look at a range of B-Tree page statistics. So the goto \n> solution was to use pageinspect. However, reviewing stats for multiple \n> pages meant issuing multiple queries.\n\nFWIW, I think you could also rely on generate_series()\n\n\n> I felt that there's an opportunity for improvement in the extension by \n> extending the API to output the statistics for multiple pages with a \n> single query.\n>\n> That attached patch is based on the master branch. It makes the \n> following changes to the pageinspect contrib module:\n> - Updates bt_page_stats_internal function to accept 3 arguments \n> instead of 2.\n> - The function now uses SRF macros to return a set rather than a \n> single row. 
The function call now requires specifying column names.\n>\n> The extension version is bumped to 1.11 (PAGEINSPECT_V1_11).\n> To maintain backward compatibility, for versions below 1.11, the \n> multi-call mechanism is ended to keep the old behavior consistent.\n>\n> Regression test cases for the module are updated as well as part of \n> this change. Here is a subset of queries that are added to the \n> btree.sql test case file for pageinspect.\n>\n> ----\n> CREATE TABLE test2 AS (SELECT generate_series(1, 5000) AS col1);\n> CREATE INDEX test2_col1_idx ON test2(col1);\n> SELECT * FROM bt_page_stats('test2_col1_idx', 1, 2);\n\nFor example, this could be written as:\n\nselect * from\ngenerate_series(1, 2) blkno ,\nbt_page_stats('test2_col1_idx',blkno::int);\n\nOr, if one wants to inspect to whole relation, something like:\n\nselect * from\ngenerate_series(1, pg_relation_size('test2_col1_idx'::regclass::text) / \n8192 - 1) blkno ,\nbt_page_stats('test2_col1_idx',blkno::int);\n\nRegards,\n\nBertrand", "msg_date": "Mon, 27 Jun 2022 10:09:42 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Mon, Jun 27, 2022 at 1:40 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 6/27/22 9:31 AM, Hamid Akhtar wrote:\n>\n>\n> Hello Hackers,\n>\n> While working on one of my blogs on the B-Tree indexes, I needed to look at a range of B-Tree page statistics. So the goto solution was to use pageinspect. However, reviewing stats for multiple pages meant issuing multiple queries.\n\n+1 to improve the API.\n\n> I felt that there's an opportunity for improvement in the extension by extending the API to output the statistics for multiple pages with a single query.\n>\n> That attached patch is based on the master branch. 
It makes the following changes to the pageinspect contrib module:\n> - Updates bt_page_stats_internal function to accept 3 arguments instead of 2.\n> - The function now uses SRF macros to return a set rather than a single row. The function call now requires specifying column names.\n>\n> The extension version is bumped to 1.11 (PAGEINSPECT_V1_11).\n> To maintain backward compatibility, for versions below 1.11, the multi-call mechanism is ended to keep the old behavior consistent.\n>\n> Regression test cases for the module are updated as well as part of this change. Here is a subset of queries that are added to the btree.sql test case file for pageinspect.\n>\n> ----\n> CREATE TABLE test2 AS (SELECT generate_series(1, 5000) AS col1);\n> CREATE INDEX test2_col1_idx ON test2(col1);\n> SELECT * FROM bt_page_stats('test2_col1_idx', 1, 2);\n>\n> For example, this could be written as:\n>\n> select * from\n> generate_series(1, 2) blkno ,\n> bt_page_stats('test2_col1_idx',blkno::int);\n>\n> Or, if one wants to inspect to whole relation, something like:\n>\n> select * from\n> generate_series(1, pg_relation_size('test2_col1_idx'::regclass::text) / 8192 - 1) blkno ,\n> bt_page_stats('test2_col1_idx',blkno::int);\n\nGood one. But not all may know the alternatives. Do we have any\ndifference in the execution times for the above query vs the new\nfunction introduced in the v1 patch? 
If there's not much difference, I\nwould suggest adding an SQL function around the generate_series\napproach in the pageinspect extension for better and easier usability.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 27 Jun 2022 16:22:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Mon, 27 Jun 2022 at 15:52, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Mon, Jun 27, 2022 at 1:40 PM Drouvot, Bertrand <bdrouvot@amazon.com>\n> wrote:\n> >\n> > Hi,\n> >\n> > On 6/27/22 9:31 AM, Hamid Akhtar wrote:\n> >\n> >\n> > Hello Hackers,\n> >\n> > While working on one of my blogs on the B-Tree indexes, I needed to look\n> at a range of B-Tree page statistics. So the goto solution was to use\n> pageinspect. However, reviewing stats for multiple pages meant issuing\n> multiple queries.\n>\n> +1 to improve the API.\n>\n> > I felt that there's an opportunity for improvement in the extension by\n> extending the API to output the statistics for multiple pages with a single\n> query.\n> >\n> > That attached patch is based on the master branch. It makes the\n> following changes to the pageinspect contrib module:\n> > - Updates bt_page_stats_internal function to accept 3 arguments instead\n> of 2.\n> > - The function now uses SRF macros to return a set rather than a single\n> row. The function call now requires specifying column names.\n> >\n> > The extension version is bumped to 1.11 (PAGEINSPECT_V1_11).\n> > To maintain backward compatibility, for versions below 1.11, the\n> multi-call mechanism is ended to keep the old behavior consistent.\n> >\n> > Regression test cases for the module are updated as well as part of this\n> change. 
Here is a subset of queries that are added to the btree.sql test\n> case file for pageinspect.\n> >\n> > ----\n> > CREATE TABLE test2 AS (SELECT generate_series(1, 5000) AS col1);\n> > CREATE INDEX test2_col1_idx ON test2(col1);\n> > SELECT * FROM bt_page_stats('test2_col1_idx', 1, 2);\n> >\n> > For example, this could be written as:\n> >\n> > select * from\n> > generate_series(1, 2) blkno ,\n> > bt_page_stats('test2_col1_idx',blkno::int);\n> >\n> > Or, if one wants to inspect to whole relation, something like:\n> >\n> > select * from\n> > generate_series(1, pg_relation_size('test2_col1_idx'::regclass::text) /\n> 8192 - 1) blkno ,\n> > bt_page_stats('test2_col1_idx',blkno::int);\n>\n> Good one. But not all may know the alternatives.\n\n\n+1\n\n\n> Do we have any\n> difference in the execution times for the above query vs the new\n> function introduced in the v1 patch? If there's not much difference, I\n> would suggest adding an SQL function around the generate_series\n> approach in the pageinspect extension for better and easier usability.\n>\n\nBased on some basic SQL execution time comparison of the two approaches, I\nsee that the API change, on average, is around 40% faster than the SQL.\n\nCREATE TABLE test2 AS (SELECT generate_series(1, 5000000) AS col1);\nCREATE INDEX test2_col1_idx ON test2(col1);\n\nEXPLAIN ANALYZE\nSELECT * FROM bt_page_stats('test2_col1_idx', 1, 5000);\n\nEXPLAIN ANALYZE\nSELECT * FROM GENERATE_SERIES(1, 5000) blkno,\nbt_page_stats('test2_col1_idx',blkno::int);\n\nFor me, the API change returns back the data in around 74ms whereas the SQL\nreturns it in 102ms. 
So considering this and as you mentioned, the\nalternative may not be that obvious to everyone, it is a fair improvement.\n\n\n>\n> Regards,\n> Bharath Rupireddy.\n>\n", "msg_date": "Thu, 30 Jun 2022 13:24:00 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Thu, Jun 30, 2022 at 1:54 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n>\n>> Do we have any\n>> difference in the execution times for the above query vs the new\n>> function introduced in the v1 patch? If there's not much difference, I\n>> would suggest adding an SQL function around the generate_series\n>> approach in the pageinspect extension for better and easier usability.\n>\n>\n> Based on some basic SQL execution time comparison of the two approaches, I see that the API change, on average, is around 40% faster than the SQL.\n>\n> CREATE TABLE test2 AS (SELECT generate_series(1, 5000000) AS col1);\n> CREATE INDEX test2_col1_idx ON test2(col1);\n>\n> EXPLAIN ANALYZE\n> SELECT * FROM bt_page_stats('test2_col1_idx', 1, 5000);\n>\n> EXPLAIN ANALYZE\n> SELECT * FROM GENERATE_SERIES(1, 5000) blkno, bt_page_stats('test2_col1_idx',blkno::int);\n>\n> For me, the API change returns back the data in around 74ms whereas the SQL returns it in 102ms. 
So considering this and as you mentioned, the alternative may not be that obvious to everyone, it is a fair improvement.\n\nI'm wondering what happens with a bit of huge data and different test\ncases each test case executed, say, 2 or 3 times.\n\nIf the difference in execution times is always present, then the API\napproach or changing the core function would make more sense.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 30 Jun 2022 14:57:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Thu, 30 Jun 2022 at 14:27, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Jun 30, 2022 at 1:54 PM Hamid Akhtar <hamid.akhtar@gmail.com>\n> wrote:\n> >\n> >> Do we have any\n> >> difference in the execution times for the above query vs the new\n> >> function introduced in the v1 patch? If there's not much difference, I\n> >> would suggest adding an SQL function around the generate_series\n> >> approach in the pageinspect extension for better and easier usability.\n> >\n> >\n> > Based on some basic SQL execution time comparison of the two approaches,\n> I see that the API change, on average, is around 40% faster than the SQL.\n> >\n> > CREATE TABLE test2 AS (SELECT generate_series(1, 5000000) AS col1);\n> > CREATE INDEX test2_col1_idx ON test2(col1);\n> >\n> > EXPLAIN ANALYZE\n> > SELECT * FROM bt_page_stats('test2_col1_idx', 1, 5000);\n> >\n> > EXPLAIN ANALYZE\n> > SELECT * FROM GENERATE_SERIES(1, 5000) blkno,\n> bt_page_stats('test2_col1_idx',blkno::int);\n> >\n> > For me, the API change returns back the data in around 74ms whereas the\n> SQL returns it in 102ms. 
So considering this and as you mentioned, the\n> alternative may not be that obvious to everyone, it is a fair improvement.\n>\n> I'm wondering what happens with a bit of huge data and different test\n> cases each test case executed, say, 2 or 3 times.\n>\n> If the difference in execution times is always present, then the API\n> approach or changing the core function would make more sense.\n>\n\nTechnically, AFAIK, the performance difference will always be there.\nFirstly, in the API change, there is no additional overhead of the\ngenerate_series function. Additionally, with API change, looping over the\npages has a smaller overhead when compared with the overhead of the SQL\napproach.\n\n\n>\n> Regards,\n> Bharath Rupireddy.\n>\n", "msg_date": "Thu, 30 Jun 2022 14:40:07 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "Hi,\n\nOn 6/30/22 10:24 AM, Hamid Akhtar wrote:\n> On Mon, 27 Jun 2022 at 15:52, Bharath Rupireddy \n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jun 27, 2022 at 1:40 PM Drouvot, Bertrand\n> <bdrouvot@amazon.com> wrote:\n> >\n> > Hi,\n> >\n> > On 6/27/22 9:31 AM, Hamid Akhtar wrote:\n> >\n> >\n> > Hello Hackers,\n> >\n> > While working on one of my blogs on the B-Tree indexes, I needed\n> to look at a range of B-Tree page statistics. So the goto solution\n> was to use pageinspect. However, reviewing stats for multiple\n> pages meant issuing multiple queries.\n>\n> +1 to improve the API.\n>\nI think it makes sense too.\n\nBut what about the other pageinspect's functions that also use a single \nblkno as parameter? 
Should not the patch also takes care of them?\n\nRegards,\n\nBertrand\n", "msg_date": "Fri, 1 Jul 2022 10:01:33 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows\n instead of a single row" }, { "msg_contents": "On Fri, 1 Jul 2022 at 13:01, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n\n> Hi,\n> On 6/30/22 10:24 AM, Hamid Akhtar wrote:\n>\n> On Mon, 27 Jun 2022 at 15:52, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>> On Mon, Jun 27, 2022 at 1:40 PM Drouvot, Bertrand <bdrouvot@amazon.com>\n>> wrote:\n>> >\n>> > Hi,\n>> >\n>> > On 6/27/22 9:31 AM, Hamid Akhtar wrote:\n>> >\n>> >\n>> > Hello Hackers,\n>> >\n>> > While working on one of my blogs on the B-Tree indexes, I needed to\n>> look at a range of B-Tree page statistics. So the goto solution was to use\n>> pageinspect. 
However, reviewing stats for multiple pages meant issuing\n>> multiple queries.\n>>\n>> +1 to improve the API.\n>>\n> I think it makes sense too.\n>\n> But what about the other pageinspect's functions that also use a single\n> blkno as parameter? Should not the patch also takes care of them?\n>\n> I've started working on that. But it's going to be a much bigger change\nwith a lot of code refactoring.\n\nSo, taking this one step at a time, IMHO, this patch is good to be reviewed\nnow.\n\n\n> Regards,\n>\n> Bertrand\n>\n", "msg_date": "Mon, 25 Jul 2022 22:21:02 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> That attached patch is based on the master branch. 
It makes the following\n> changes to the pageinspect contrib module:\n> - Updates bt_page_stats_internal function to accept 3 arguments instead of\n> 2.\n> - The function now uses SRF macros to return a set rather than a single\n> row. The function call now requires specifying column names.\n\nFWIW, I think you'd be way better off changing the function name, say\nto bt_multi_page_stats(). Overloading the name this way is going to\nlead to great confusion, e.g. somebody who fat-fingers the number of\noutput arguments in a JDBC call could see confusing results due to\ninvoking the wrong one of the two functions. Also, I'm not quite sure\nwhat you mean by \"The function call now requires specifying column\nnames\", but it doesn't sound like an acceptable restriction from a\ncompatibility standpoint. If a different name dodges that issue then\nit's clearly a better way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Jul 2022 15:36:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Thu, 28 Jul 2022 at 00:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> > That attached patch is based on the master branch. It makes the following\n> > changes to the pageinspect contrib module:\n> > - Updates bt_page_stats_internal function to accept 3 arguments instead\n> of\n> > 2.\n> > - The function now uses SRF macros to return a set rather than a single\n> > row. The function call now requires specifying column names.\n>\n> FWIW, I think you'd be way better off changing the function name, say\n> to bt_multi_page_stats(). Overloading the name this way is going to\n> lead to great confusion, e.g. somebody who fat-fingers the number of\n> output arguments in a JDBC call could see confusing results due to\n> invoking the wrong one of the two functions. 
Also, I'm not quite sure\n> what you mean by \"The function call now requires specifying column\n> names\", but it doesn't sound like an acceptable restriction from a\n> compatibility standpoint. If a different name dodges that issue then\n> it's clearly a better way.\n>\n> regards, tom lane\n>\n\nAttached please find the latest version of the patch;\npageinspect_btree_multipagestats_02.patch.\n\nIt no longer modifies the existing bt_page_stats function. Instead, it\nintroduces a new function bt_multi_page_stats as you had suggested. The\nfunction expects three arguments where the first argument is the index name\nfollowed by block number and number of blocks to be returned.\n\nPlease ignore this statement. It was a typo.\n\"The function call now requires specifying column names\"", "msg_date": "Sun, 31 Jul 2022 18:00:53 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Sun, 31 Jul 2022 at 18:00, Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n>\n>\n> On Thu, 28 Jul 2022 at 00:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n>> > That attached patch is based on the master branch. It makes the\n>> following\n>> > changes to the pageinspect contrib module:\n>> > - Updates bt_page_stats_internal function to accept 3 arguments instead\n>> of\n>> > 2.\n>> > - The function now uses SRF macros to return a set rather than a single\n>> > row. The function call now requires specifying column names.\n>>\n>> FWIW, I think you'd be way better off changing the function name, say\n>> to bt_multi_page_stats(). Overloading the name this way is going to\n>> lead to great confusion, e.g. somebody who fat-fingers the number of\n>> output arguments in a JDBC call could see confusing results due to\n>> invoking the wrong one of the two functions. 
Also, I'm not quite sure\n>> what you mean by \"The function call now requires specifying column\n>> names\", but it doesn't sound like an acceptable restriction from a\n>> compatibility standpoint. If a different name dodges that issue then\n>> it's clearly a better way.\n>>\n>> regards, tom lane\n>>\n>\n> Attached please find the latest version of the patch;\n> pageinspect_btree_multipagestats_02.patch.\n>\n> It no longer modifies the existing bt_page_stats function. Instead, it\n> introduces a new function bt_multi_page_stats as you had suggested. The\n> function expects three arguments where the first argument is the index name\n> followed by block number and number of blocks to be returned.\n>\n> Please ignore this statement. It was a typo.\n> \"The function call now requires specifying column names\"\n>\n\nAttached is the rebased version of the patch\n(pageinspect_btree_multipagestats_03.patch) for the master branch.\n\n\n>\n>", "msg_date": "Sun, 31 Jul 2022 18:18:16 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nLooks good to me.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 01 Aug 2022 18:28:54 +0000", "msg_from": "Naeem Akhter <naeem.akhter@percona.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows\n instead of a single row" }, { "msg_contents": "On Mon, Aug 1, 2022 at 11:29 PM Naeem Akhter <naeem.akhter@percona.com>\nwrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> 
Spec compliant: not tested\n> Documentation: not tested\n>\n> Looks good to me.\n>\n> The new status of this patch is: Ready for Committer\n>\n\nThe patch has a compilation error on the latest code base, please rebase\nyour patch.\n\n[03:08:46.087] /tmp/cirrus-ci-build/contrib/cube/cubeparse.h:77:27: error:\n‘result’ was not declared in this scope\n[03:08:46.087] 77 | int cube_yyparse (NDBOX **result, Size scanbuflen);\n[03:08:46.087] | ^~~~~~\n[03:08:46.087] /tmp/cirrus-ci-build/contrib/cube/cubeparse.h:77:40: error:\nexpected primary-expression before ‘scanbuflen’\n[03:08:46.087] 77 | int cube_yyparse (NDBOX **result, Size scanbuflen);\n...\n\n-- \nIbrar Ahmed\n", "msg_date": "Tue, 6 Sep 2022 11:25:05 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Tue, 6 Sept 2022 at 11:25, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Mon, Aug 1, 2022 at 11:29 PM Naeem Akhter <naeem.akhter@percona.com>\n> wrote:\n>\n>> The following review has been posted through the commitfest application:\n>> make installcheck-world: tested, passed\n>> Implements feature: tested, passed\n>> Spec compliant: not tested\n>> Documentation: not tested\n>>\n>> Looks good to me.\n>>\n>> The new status of this patch is: Ready for Committer\n>>\n>\n> The patch has a compilation error on the latest code base, please rebase\n> your patch.\n>\n> [03:08:46.087] /tmp/cirrus-ci-build/contrib/cube/cubeparse.h:77:27: error:\n> ‘result’ was not declared in this scope\n> [03:08:46.087] 77 | int cube_yyparse (NDBOX **result, Size scanbuflen);\n> [03:08:46.087] | ^~~~~~\n> [03:08:46.087] /tmp/cirrus-ci-build/contrib/cube/cubeparse.h:77:40: error:\n> expected primary-expression before ‘scanbuflen’\n> [03:08:46.087] 77 | int cube_yyparse (NDBOX **result, Size scanbuflen);\n> ...\n>\n> --\n> Ibrar Ahmed\n>\n\nThe compilation and regression are working fine. 
I have verified it against\nthe tip of the master branch [commit: 57796a0f].\n", "msg_date": "Mon, 12 Sep 2022 13:15:18 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@percona.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> Attached is the rebased version of the patch\n> (pageinspect_btree_multipagestats_03.patch) for the master branch.\n\nI looked through this and cleaned up the code a little (attached).\nThere are still some issues before it could be considered committable:\n\n1. Where's the documentation update?\n\n2. As this stands, there's no nice way to ask for \"all the pages\".\nIf you specify a page count that's even one too large, you get an\nerror. 
I think there's room for an easier-to-use way to do that.\nWe could say that the thing just silently stops at the last page,\nso that you just need to write a large page count. Or maybe it'd\nbe better to define a zero or negative page count as \"all the rest\",\nwhile still insisting that a positive count refer to real pages.\n\n3. I think it's highly likely that the new test case is not portable.\nIn particular a machine with MAXALIGN 4 would be likely to put a\ndifferent number of tuples per page, or do the page split differently\nso that the page with fewer index tuples isn't page 3. Unfortunately\nI don't seem to have a working setup like that right at the moment\nto verify; but I'd counsel trying this inside a VM or something to\nsee if it's actually likely to survive on the buildfarm. I'm not\nsure, but making the indexed column be int8 instead of int4 might\nreduce the risks here.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 12 Sep 2022 13:58:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "I wrote:\n> 3. I think it's highly likely that the new test case is not portable.\n> In particular a machine with MAXALIGN 4 would be likely to put a\n> different number of tuples per page, or do the page split differently\n> so that the page with fewer index tuples isn't page 3. 
Unfortunately\n> I don't seem to have a working setup like that right at the moment\n> to verify; but I'd counsel trying this inside a VM or something to\n> see if it's actually likely to survive on the buildfarm.\n\nI spun up a 32-bit VM, since that had been on my to-do list anyway,\nand it looks like I was right:\n\ndiff -U3 /usr/home/tgl/pgsql/contrib/pageinspect/expected/btree.out /usr/home/tgl/pgsql/contrib/pageinspect/results/btree.out\n--- /usr/home/tgl/pgsql/contrib/pageinspect/expected/btree.out 2022-09-12 15:15:40.432135000 -0400\n+++ /usr/home/tgl/pgsql/contrib/pageinspect/results/btree.out 2022-09-12 15:15:54.481549000 -0400\n@@ -49,11 +49,11 @@\n -[ RECORD 1 ]-+-----\n blkno | 1\n type | l\n-live_items | 367\n+live_items | 458\n dead_items | 0\n-avg_item_size | 16\n+avg_item_size | 12\n page_size | 8192\n-free_size | 808\n+free_size | 820\n btpo_prev | 0\n btpo_next | 2\n btpo_level | 0\n@@ -61,11 +61,11 @@\n... etc etc ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Sep 2022 15:18:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Mon, Sep 12, 2022 at 03:18:59PM -0400, Tom Lane wrote:\n> I spun up a 32-bit VM, since that had been on my to-do list anyway,\n> and it looks like I was right:\n\nThis feedback has not been addressed and the thread is idle four\nweeks, so I have marked this CF entry as RwF. 
Please feel free to\nresubmit once a new version of the patch is available.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 14:51:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Wed, 12 Oct 2022 at 10:51, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Sep 12, 2022 at 03:18:59PM -0400, Tom Lane wrote:\n> > I spun up a 32-bit VM, since that had been on my to-do list anyway,\n> > and it looks like I was right:\n>\n> This feedback has not been addressed and the thread is idle four\n> weeks, so I have marked this CF entry as RwF. Please feel free to\n> resubmit once a new version of the patch is available.\n>\n\nAttaching the version 5 of the patch that addresses all 3 points raised by\nTom Lane earlier in the thread.\n(1) Documentation is added.\n(2) Passing \"-1\" for the number of blocks required now returns all the\nremaining index pages after the starting block.\n(3) The newly added test cases work for both 32-bit and 64-bit systems.\n\n\n\n> --\n> Michael\n>", "msg_date": "Thu, 10 Nov 2022 17:01:15 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@percona.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\ni've tested and verified the documentation.", "msg_date": "Mon, 21 Nov 2022 12:33:51 +0000", "msg_from": "Naeem Akhter <naeem.akhter@percona.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows\n instead of a single row" }, { "msg_contents": "On Mon, 21 Nov 2022 at 17:34, Naeem Akhter 
<naeem.akhter@percona.com> wrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> i've tested and verified the documentation.\n\n\nRebasing the patch to the tip of the master branch.", "msg_date": "Fri, 25 Nov 2022 02:45:01 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@percona.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "Hi,\n\nOn 2022-11-25 02:45:01 +0500, Hamid Akhtar wrote:\n> Rebasing the patch to the tip of the master branch.\n\nThis doesn't pass tests on cfbot. Looks like possibly some files are missing?\n\nhttps://api.cirrus-ci.com/v1/artifact/task/4916614353649664/testrun/build/testrun/pageinspect/regress/regression.diffs\n\ndiff -U3 /tmp/cirrus-ci-build/contrib/pageinspect/expected/page.out /tmp/cirrus-ci-build/build/testrun/pageinspect/regress/results/page.out\n--- /tmp/cirrus-ci-build/contrib/pageinspect/expected/page.out\t2022-12-06 20:07:47.691479000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/pageinspect/regress/results/page.out\t2022-12-06 20:11:42.955606000 +0000\n@@ -1,4 +1,5 @@\n CREATE EXTENSION pageinspect;\n+ERROR: extension \"pageinspect\" has no installation script nor update path for version \"1.12\"\n -- Use a temp table so that effects of VACUUM are predictable\n CREATE TEMP TABLE test1 (a int, b int);\n INSERT INTO test1 VALUES (16777217, 131584);\n@@ -6,236 +7,203 @@\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 6 Dec 2022 15:16:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On 2022-Dec-06, Andres Freund wrote:\n\n> Hi,\n> \n> On 2022-11-25 02:45:01 +0500, Hamid 
Akhtar wrote:\n> > Rebasing the patch to the tip of the master branch.\n> \n> This doesn't pass tests on cfbot. Looks like possibly some files are missing?\n\nThe .sql file is there all right, but meson.build is not altered to be\naware of it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 7 Dec 2022 09:01:06 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "On Wed, 7 Dec 2022 at 13:01, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2022-Dec-06, Andres Freund wrote:\n>\n> > Hi,\n> >\n> > On 2022-11-25 02:45:01 +0500, Hamid Akhtar wrote:\n> > > Rebasing the patch to the tip of the master branch.\n> >\n> > This doesn't pass tests on cfbot. Looks like possibly some files are\n> missing?\n>\n> The .sql file is there all right, but meson.build is not altered to be\n> aware of it.\n\n\nI wasn't aware of the meson.build file. 
Attached is the latest version of\nthe patch that contains the updated meson.build.\n\n-- \nHamid Akhtar,\nPercona LLC, www.percona.com", "msg_date": "Wed, 7 Dec 2022 15:23:31 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@percona.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nLooks good to me\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 07 Dec 2022 14:48:01 +0000", "msg_from": "Muhammad Usama <muhammad.usama@percona.com>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows\n instead of a single row" }, { "msg_contents": "Hamid Akhtar <hamid.akhtar@percona.com> writes:\n> I wasn't aware of the meson.build file. Attached is the latest version of\n> the patch that contains the updated meson.build.\n\nPushed with minor corrections, plus one major one: you missed the\npoint of aeaaf520f, that pageinspect functions that touch relations\nneed to be parallel-restricted not parallel-safe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 Jan 2023 13:05:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow pageinspect's bt_page_stats function to return a set of\n rows instead of a single row" } ]
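For context on the thread above: the set-returning variant under review was committed to pageinspect (version 1.12, the version number visible in the CREATE EXTENSION error quoted earlier) under the name bt_multi_page_stats(). A usage sketch follows — the function name and argument order are per the committed patch, and the index name is an assumed example, so verify both against your installed pageinspect version:

```sql
-- Sketch: btree page statistics for a range of pages in one call,
-- rather than invoking bt_page_stats() once per block.
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Pages 1..3 of an index named test1_a_idx (hypothetical example name).
SELECT blkno, type, live_items, dead_items, avg_item_size, free_size
FROM bt_multi_page_stats('test1_a_idx', 1, 3);
```

As with bt_page_stats(), this needs appropriate privileges and a running server, so it is shown here untested.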
[ { "msg_contents": "Hi,\n\nToday, postgres doesn't distinguish the log messages that it emits to\nserver logs via ereport/elog mechanism, based on security information or\nPII (Personally Identifiable Information) or other sensitive information\n[1]. In production environments, these log messages would be captured and\nstored (perhaps in a different intermediate database specially designed for\ntext and log analytics) for debug, analytical, reporting or\non-demand-delivery to the customers via portal/tools. In this context, the\ncustomers will expect to treat the sensitive information differently\n(perhaps encode/mask before storing) for security and compliance purposes.\nAlso, it's not safe to show all the log messages as-is for internal\ndebugging purposes as the sensitive information can be misused\nintentionally or unintentionally.\n\nToday, one can implement an emit_log_hook which can look for sensitive log\nmessages based on the errmsg i.e. \"text\" and treat them differently. But\nthe errmsg based approach has its own disadvantages - errmsg can get\ntweaked, there can be too many sensitive type log messages, not everyone\ncan rightly distinguish what a sensitive log message is and what is not,\nthe hook implementation and maintainability is a huge problem in the long\nrun.\n\nHere's an idea - what if postgres can emit log messages that have sensitive\ninformation with special error codes or flags? 
The emit_log_hook\nimplementers will then just need to look for those special error codes or\nflags to treat them differently.\n\nThoughts?\n\n[1]\nerrmsg(\"role \\\"%s\\\" cannot be dropped because some objects depend on it\"\nerrmsg(\"role \\\"%s\\\" already exists\"\nerrmsg(\"must have admin option on role \\\"%s\\\"\"\nerrmsg(\"role \\\"%s\\\" is a member of role \\\"%s\\\"\"\nerrmsg(\"must have admin option on role \\\"%s\\\"\"\nerrmsg(\"pg_hba.conf rejects replication connection for host \\\"%s\\\", user\n\\\"%s\\\", %s\"\nerrmsg(\"duplicate key value violates unique constraint \\\"%s\\\"\"\nlog_connections and log_disconnections messages\n.....\n.....\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 27 Jun 2022 18:41:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Emit postgres log messages that have security or PII with special\n flags/error code/elevel" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 27, 2022 at 06:41:21PM +0530, Bharath Rupireddy wrote:\n>\n> Here's an idea - what if postgres can emit log messages that have sensitive\n> information with special error codes or flags? The emit_log_hook\n> implementers will then just need to look for those special error codes or\n> flags to treat them differently.\n\nThis has been discussed multiple times in the past, and always rejected. 

The\nmain reason for that is that it's impossible to accurately determine whether a\nmessage contains sensitive information or not, and if it were there wouldn't be\na single definition that would fit everyone.\n\nAs a simple example, how would you handle the log emitted by this query?\n\nALTERR OLE myuser WITH PASSWORD 'my super secret password';\n\n\n", "msg_date": "Mon, 27 Jun 2022 23:34:13 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Emit postgres log messages that have security or PII with\n special flags/error code/elevel" } ]
[ { "msg_contents": "This was discussed previously in [1], and there seemed to be general\nconsensus in favour of it, but no new patch emerged.\n\nAttached is a patch that takes the approach of not generating an alias\nat all, which seems to be neater and simpler, and less code than\ntrying to generate a unique alias.\n\nIt still generates an eref for the subquery RTE, which has a made-up\nrelation name, but that is marked as not visible on the\nParseNamespaceItem, so it doesn't conflict with anything else, need\nnot be unique, and cannot be used for qualified references to the\nsubquery's columns.\n\nThe only place that exposes the eref's made-up relation name is the\nexisting query deparsing code in ruleutils.c, which uniquifies it and\ngenerates SQL spec-compliant output. For example:\n\nCREATE OR REPLACE VIEW test_view AS\n SELECT *\n FROM (SELECT a, b FROM foo),\n (SELECT c, d FROM bar)\n WHERE a = c;\n\n\\sv test_view\n\nCREATE OR REPLACE VIEW public.test_view AS\n SELECT subquery.a,\n subquery.b,\n subquery_1.c,\n subquery_1.d\n FROM ( SELECT foo.a,\n foo.b\n FROM foo) subquery,\n ( SELECT bar.c,\n bar.d\n FROM bar) subquery_1\n WHERE subquery.a = subquery_1.c\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/flat/1487773980.3143.15.camel%40oopsware.de", "msg_date": "Mon, 27 Jun 2022 14:49:20 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Mon, Jun 27, 2022 at 9:49 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> This was discussed previously in [1], and there seemed to be general\n> consensus in favour of it, but no new patch emerged.\n>\n> Attached is a patch that takes the approach of not generating an alias\n> at all, which seems to be neater and simpler, and less code than\n> trying to generate a unique alias.\n>\n> It still generates an eref for the subquery RTE, which has a made-up\n> relation name, 
but that is marked as not visible on the\n> ParseNamespaceItem, so it doesn't conflict with anything else, need\n> not be unique, and cannot be used for qualified references to the\n> subquery's columns.\n>\n> The only place that exposes the eref's made-up relation name is the\n> existing query deparsing code in ruleutils.c, which uniquifies it and\n> generates SQL spec-compliant output. For example:\n>\n> CREATE OR REPLACE VIEW test_view AS\n> SELECT *\n> FROM (SELECT a, b FROM foo),\n> (SELECT c, d FROM bar)\n> WHERE a = c;\n>\n> \\sv test_view\n>\n> CREATE OR REPLACE VIEW public.test_view AS\n> SELECT subquery.a,\n> subquery.b,\n> subquery_1.c,\n> subquery_1.d\n> FROM ( SELECT foo.a,\n> foo.b\n> FROM foo) subquery,\n> ( SELECT bar.c,\n> bar.d\n> FROM bar) subquery_1\n> WHERE subquery.a = subquery_1.c\n\nIt doesn't play that well if you have something called subquery though:\n\nCREATE OR REPLACE VIEW test_view AS\n SELECT *\n FROM (SELECT a, b FROM foo),\n (SELECT c, d FROM bar), (select relname from pg_class limit\n1) as subquery\n WHERE a = c;\n\n\\sv test_view\nCREATE OR REPLACE VIEW public.test_view AS\n SELECT subquery.a,\n subquery.b,\n subquery_1.c,\n subquery_1.d,\n subquery_2.relname\n FROM ( SELECT foo.a,\n foo.b\n FROM foo) subquery,\n ( SELECT bar.c,\n bar.d\n FROM bar) subquery_1,\n ( SELECT pg_class.relname\n FROM pg_class\n LIMIT 1) subquery_2\n WHERE subquery.a = subquery_1.c\n\nWhile the output is a valid query, it's not nice that it's replacing a\nuser provided alias with another one (or force an alias if you have a\nrelation called subquery). 
More generally, I'm -0.5 on the feature.\nI prefer to force using SQL-compliant queries, and also not take bad\nhabits.\n\n\n", "msg_date": "Mon, 27 Jun 2022 23:10:07 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Mon, 27 Jun 2022 at 11:12, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> More generally, I'm -0.5 on the feature.\n> I prefer to force using SQL-compliant queries, and also not take bad\n> habits.\n>\n\nAs to forcing SQL-complaint queries, that ship sailed a long time ago:\nPostgres allows but does not enforce the use of SQL-compliant queries, and\nmany of its important features are extensions anyway, so forcing SQL\ncompliant queries is out of the question (although I could see the utility\nof a mode where it warns or errors on non-compliant queries, at least in\nprinciple).\n\nAs to bad habits, I'm having trouble understanding. Why do you think\nleaving the alias off a subquery is a bad habit (assuming it were allowed)?\nIf the name is never used, why are we required to supply it?", "msg_date": "Mon, 27 Jun 2022 12:03:20 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Mon, 27 Jun 2022 at 16:12, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> It doesn't play that well if you have something called subquery though:\n>\n> [example that changes a user-provided alias]\n>\n> While the output is a valid query, it's not nice that it's replacing a\n> user provided alias with another one (or force an alias if you have a\n> relation called subquery).\n\nIt's already the case that user-provided aliases can get replaced by\nnew ones in the query-deparsing code, e.g.:\n\nCREATE OR REPLACE VIEW test_view AS\n SELECT x.a, y.b\n FROM foo AS x,\n (SELECT b FROM foo AS x) AS y;\n\n\\sv test_view\n\nCREATE OR REPLACE VIEW public.test_view AS\n SELECT x.a,\n y.b\n FROM foo x,\n ( SELECT x_1.b\n FROM foo x_1) y\n\nand similarly it may invent technically unnecessary aliases where\nthere were none before. 
The query-deparsing code has never been\nalias-preserving, unless you take care to give everything a globally\nunique alias.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 27 Jun 2022 19:24:57 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Mon, Jun 27, 2022 at 11:25 AM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n\n> On Mon, 27 Jun 2022 at 16:12, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > It doesn't play that well if you have something called subquery though:\n> >\n> > [example that changes a user-provided alias]\n> >\n> > While the output is a valid query, it's not nice that it's replacing a\n> > user provided alias with another one (or force an alias if you have a\n> > relation called subquery).\n>\n> It's already the case that user-provided aliases can get replaced by\n> new ones in the query-deparsing code, e.g.:\n>\n>\nRegardless, is there any reason to not just prefix our made-up aliases with\n\"pg_\" to make it perfectly clear they were generated by the system and are\nbasically implementation details as opposed to something that appeared in\nthe originally written query?\n\nI suppose, \"because we've haven't until now, so why start\" suffices...but\nstill doing a rename/suffixing because of query rewriting and inventing one\nwhere we made it optional seem different enough to justify implementing\nsomething different.\n\nDavid J.", "msg_date": "Mon, 27 Jun 2022 11:43:25 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Mon, 27 Jun 2022 at 19:43, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Mon, Jun 27, 2022 at 11:25 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>\n>> On Mon, 27 Jun 2022 at 16:12, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> >\n>> > It doesn't play that well if you have something called subquery though:\n>> >\n>> > [example that changes a user-provided alias]\n>> >\n>> > While the output is a valid query, it's not nice that it's replacing a\n>> > user provided alias with another one (or force an alias if you have a\n>> > relation called subquery).\n>>\n>> It's already the case that user-provided aliases can get replaced by\n>> new ones in the query-deparsing code, e.g.:\n>>\n>\n> Regardless, is there any reason to not just prefix our made-up aliases with \"pg_\" to make it perfectly clear they were generated by the system and are basically implementation details as opposed to something that appeared in the originally written query?\n>\n> I suppose, \"because we've haven't until now, so why start\" suffices...but still doing a rename/suffixing because of query rewriting and inventing one where we made it optional seem 
different enough to justify implementing something different.\n>\n\nI think \"pg_\" would be a bad idea, since it's too easily confused with\nthings like system catalogs. The obvious precedent we have for a\nmade-up alias is \"unnamed_join\", so perhaps \"unnamed_subquery\" would\nbe better.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 27 Jun 2022 19:53:45 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 27, 2022 at 12:03:20PM -0400, Isaac Morland wrote:\n> On Mon, 27 Jun 2022 at 11:12, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> > More generally, I'm -0.5 on the feature.\n> > I prefer to force using SQL-compliant queries, and also not take bad\n> > habits.\n> >\n>\n> As to forcing SQL-complaint queries, that ship sailed a long time ago:\n> Postgres allows but does not enforce the use of SQL-compliant queries, and\n> many of its important features are extensions anyway, so forcing SQL\n> compliant queries is out of the question (although I could see the utility\n> of a mode where it warns or errors on non-compliant queries, at least in\n> principle).\n\nSure, but it doesn't mean that we should support even more non-compliant syntax\nwithout any restraint. In this case, I don't see much benefit as it's not\nsolving performance problem or something like that.\n\n> As to bad habits, I'm having trouble understanding. Why do you think\n> leaving the alias off a subquery is a bad habit (assuming it were allowed)?\n\nI think It's a bad habit because as far as I can see it's not supported on\nmysql or sqlserver.\n\n> If the name is never used, why are we required to supply it?\n\nI'm not saying that I'm thrilled having to do so, but it's also not a huge\ntrouble. 
And since it's required I have the habit to automatically put some\nrandom alias if I'm writing some one shot query that indeed doesn't need to use\nthe alias.\n\nBut similarly, I many times relied on the fact that writable CTE are executed\neven if not explicitly referenced. So by the same argument shouldn't we allow\nsomething like this?\n\nWITH (INSERT INTO t SELECT * pending WHERE ts < now())\nSELECT now() AS last_processing_time;\n\n\n", "msg_date": "Tue, 28 Jun 2022 12:32:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Tue, 28 Jun 2022 at 00:32, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> As to forcing SQL-complaint queries, that ship sailed a long time ago:\n> > Postgres allows but does not enforce the use of SQL-compliant queries,\n> and\n> > many of its important features are extensions anyway, so forcing SQL\n> > compliant queries is out of the question (although I could see the\n> utility\n> > of a mode where it warns or errors on non-compliant queries, at least in\n> > principle).\n>\n> Sure, but it doesn't mean that we should support even more non-compliant\n> syntax\n> without any restraint. In this case, I don't see much benefit as it's not\n> solving performance problem or something like that.\n>\n\nIt's improving developer performance by eliminating the need to make up\nutterly useless names. I don't care if behind the scenes names are\nassigned, although it would be even better if the names didn't exist at\nall. I just want the computer to do stuff for me that requires absolutely\nno human judgement whatsoever.\n\n> As to bad habits, I'm having trouble understanding. 
Why do you think\n> leaving the alias off a subquery is a bad habit (assuming it were\n> allowed)?\n>\n> I think It's a bad habit because as far as I can see it's not supported on\n> mysql or sqlserver.\n>\n\nSo it’s a bad habit to use features of Postgres that aren’t available on\nMySQL or SQL Server?\n\nFor myself, I don’t care one bit about whether my code will run on those\nsystems, or Oracle: as far as I’m concerned I write Postgres applications,\nnot SQL applications. Of course, many people have a need to support other\nsystems, so I appreciate the care we take to document the differences from\nthe standard, and I hope we will continue to support standard queries. But\nif it’s a bad habit to use Postgres-specific features, why do we create any\nof those features?\n\n> If the name is never used, why are we required to supply it?\n>\n> But similarly, I many times relied on the fact that writable CTE are\n> executed\n> even if not explicitly referenced. So by the same argument shouldn't we\n> allow\n> something like this?\n>\n> WITH (INSERT INTO t SELECT * pending WHERE ts < now())\n> SELECT now() AS last_processing_time;\n>\n\nI’m not necessarily opposed to allowing this too. But the part which causes\nme annoyance is normal subquery naming.", "msg_date": "Tue, 28 Jun 2022 09:07:58 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> This was discussed previously in [1], and there seemed to be general\n> consensus in favour of it, but no new patch emerged.\n\nAs I said in that thread, I'm not super enthused about this, but I was\nclearly in the minority so I think it should go forward.\n\n> Attached is a patch that takes the approach of not generating an alias\n> at all, which seems to be neater and simpler, and less code than\n> trying to generate a unique alias.\n\nHm. Looking at the code surrounding what you touched, I'm reminded\nthat we allow JOIN nodes to not have an alias, and represent that\nsituation by rte->alias == NULL. I wonder if it'd be better in the\nlong run to make alias-less subqueries work similarly, rather than\ngenerating something that after-the-fact will be indistinguishable\nfrom a user-written alias. If that turns out to not work well,\nI'd agree with \"unnamed_subquery\" as the inserted name.\n\nAlso, what about VALUES clauses? It seems inconsistent to remove\nthis restriction for sub-SELECT but not VALUES. 
Actually it looks\nlike your patch already does remove that restriction in gram.y,\nbut you didn't follow through elsewhere.\n\nAs far as the docs go, I think it's sufficient to mention the\ninconsistency with SQL down at the bottom; we don't need a\nredundant in-line explanation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Jul 2022 14:00:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Tue, 5 Jul 2022 at 19:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > This was discussed previously in [1], and there seemed to be general\n> > consensus in favour of it, but no new patch emerged.\n>\n> As I said in that thread, I'm not super enthused about this, but I was\n> clearly in the minority so I think it should go forward.\n>\n\nCool. Thanks for looking.\n\n\n> > Attached is a patch that takes the approach of not generating an alias\n> > at all, which seems to be neater and simpler, and less code than\n> > trying to generate a unique alias.\n>\n> Hm. Looking at the code surrounding what you touched, I'm reminded\n> that we allow JOIN nodes to not have an alias, and represent that\n> situation by rte->alias == NULL. I wonder if it'd be better in the\n> long run to make alias-less subqueries work similarly,\n\nThat is what the patch does: transformRangeSubselect() passes a NULL\nalias to addRangeTableEntryForSubquery(), which has been modified to\ncope with that in a similar way to addRangeTableEntryForJoin() and\nother addRangeTableEntryFor...() functions.\n\nSo for example, with the following query, this is what the output from\nthe parser looks like:\n\nSELECT * FROM (SELECT 1);\n\nquery->rtable:\n rte:\n rtekind = RTE_SUBQUERY\n alias = NULL\n eref = { aliasname = \"subquery\", colnames = ... 
}\n\n\n> rather than\n> generating something that after-the-fact will be indistinguishable\n> from a user-written alias. If that turns out to not work well,\n> I'd agree with \"unnamed_subquery\" as the inserted name.\n>\n\nThe result is distinguishable from a user-written alias, because\nrte->alias is NULL. I think the confusion is that when I suggested\nusing \"unnamed_subquery\", I was referring to rte->eref->aliasname, and\nI still think it's a good idea to change that, for consistency with\nunnamed joins.\n\n\n> Also, what about VALUES clauses? It seems inconsistent to remove\n> this restriction for sub-SELECT but not VALUES. Actually it looks\n> like your patch already does remove that restriction in gram.y,\n> but you didn't follow through elsewhere.\n>\n\nIt does support unnamed VALUES clauses in the FROM list (there's a\nregression test exercising that). It wasn't necessary to make any\nadditional code changes because addRangeTableEntryForValues() already\nsupported having a NULL alias, and it all just flowed through.\n\nIn fact, the grammar forces you to enclose a VALUES clause in the FROM\nlist in parentheses, so this ends up being an unnamed subquery in the\nFROM list as well. For example:\n\nSELECT * FROM (VALUES(1),(2),(3));\n\nproduces\n\nquery->rtable:\n rte:\n rtekind = RTE_SUBQUERY\n alias = NULL\n eref = { aliasname = \"subquery\", colnames = ... }\n subquery->rtable:\n rte:\n rtekind = RTE_VALUES\n alias = NULL\n eref = { aliasname = \"*VALUES*\", colnames = ... }\n\nSo it's not really any different from a normal subquery.\n\n\n> As far as the docs go, I think it's sufficient to mention the\n> inconsistency with SQL down at the bottom; we don't need a\n> redundant in-line explanation.\n\nOK, fair enough.\n\nI'll post an update in a little while, but first, I found a bug, which\nrevealed a pre-existing bug in transformLockingClause(). 
I'll start a\nnew thread for that, since it'd be good to get that resolved\nindependently of this patch.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 6 Jul 2022 15:09:48 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Wed, 6 Jul 2022 at 15:09, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> I'll post an update in a little while, but first, I found a bug, which\n> revealed a pre-existing bug in transformLockingClause(). I'll start a\n> new thread for that, since it'd be good to get that resolved\n> independently of this patch.\n>\n\nAttached is an update with the following changes:\n\n* Docs updated as suggested.\n* transformLockingClause() updated to skip subquery and values rtes\nwithout aliases.\n* eref->aliasname changed to \"unnamed_subquery\" for subqueries without aliases.\n\nRegards,\nDean", "msg_date": "Sat, 9 Jul 2022 11:28:20 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Sat, Jul 9, 2022 at 3:28 AM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n\n> On Wed, 6 Jul 2022 at 15:09, Dean Rasheed <dean.a.rasheed@gmail.com>\n> wrote:\n> >\n> > I'll post an update in a little while, but first, I found a bug, which\n> > revealed a pre-existing bug in transformLockingClause(). 
I'll start a\n> > new thread for that, since it'd be good to get that resolved\n> > independently of this patch.\n> >\n>\n> Attached is an update with the following changes:\n>\n> * Docs updated as suggested.\n> * transformLockingClause() updated to skip subquery and values rtes\n> without aliases.\n> * eref->aliasname changed to \"unnamed_subquery\" for subqueries without\n> aliases.\n>\n> Regards,\n> Dean\n>\nHi,\nrtename is assigned at the beginning of the loop:\n\n+ char *rtename = rte->eref->aliasname;\n\n It seems the code would be more readable if you keep the assignment in\nelse block below:\n\n+ else if (rte->rtekind == RTE_SUBQUERY ||\n+ rte->rtekind == RTE_VALUES)\n continue;\n- rtename = rte->join_using_alias->aliasname;\n }\n- else\n- rtename = rte->eref->aliasname;\n\nbecause rtename would be assigned in the `rte->rtekind == RTE_JOIN` case.\n\nCheers", "msg_date": "Sat, 9 Jul 2022 04:30:32 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Sat, 9 Jul 2022 at 12:24, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> It seems the code would be more readable if you keep the assignment in else block below:\n>\n> + else if (rte->rtekind == RTE_SUBQUERY ||\n> + rte->rtekind == RTE_VALUES)\n> continue;\n> - rtename = rte->join_using_alias->aliasname;\n> }\n> - else\n> - rtename = rte->eref->aliasname;\n>\n> because rtename would be assigned in the `rte->rtekind == RTE_JOIN` case.\n>\n\nBut then it would need 2 else blocks, one inside the rte->alias ==\nNULL block, for when rtekind is not RTE_JOIN, RTE_SUBQUERY or\nRTE_VALUES, and another after the block, for when rte->alias != NULL.\nI find it more readable this way.\n\nRegards,\nDean\n\n\n", "msg_date": "Sat, 9 Jul 2022 13:17:53 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": 
true, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Sat, Jul 9, 2022 at 5:18 AM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n\n> On Sat, 9 Jul 2022 at 12:24, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > It seems the code would be more readable if you keep the assignment in\n> else block below:\n> >\n> > + else if (rte->rtekind == RTE_SUBQUERY ||\n> > + rte->rtekind == RTE_VALUES)\n> > continue;\n> > - rtename = rte->join_using_alias->aliasname;\n> > }\n> > - else\n> > - rtename = rte->eref->aliasname;\n> >\n> > because rtename would be assigned in the `rte->rtekind == RTE_JOIN` case.\n> >\n>\n> But then it would need 2 else blocks, one inside the rte->alias ==\n> NULL block, for when rtekind is not RTE_JOIN, RTE_SUBQUERY or\n> RTE_VALUES, and another after the block, for when rte->alias != NULL.\n> I find it more readable this way.\n>\n> Regards,\n> Dean\n>\n\nHi, Dean:\nThanks for the explanation.\n\nI should have looked closer :-)\n\n", 
"msg_date": "Sat, 9 Jul 2022 06:53:30 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> Attached is an update with the following changes:\n\n> * Docs updated as suggested.\n> * transformLockingClause() updated to skip subquery and values rtes\n> without aliases.\n> * eref->aliasname changed to \"unnamed_subquery\" for subqueries without aliases.\n\nThis looks good to me. Marked RFC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Jul 2022 17:17:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Mon, 2 Oct 2023 at 00:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n\n> The only place that exposes the eref's made-up relation name is the\n> existing query deparsing code in ruleutils.c, which uniquifies it and\n> generates SQL spec-compliant output. For example:\n>\n\nI ran into one other place: error messages.\n\nSELECT unnamed_subquery.a\nFROM (SELECT 1 AS a)\n\n> ERROR: There is an entry for table \"unnamed_subquery\", but it cannot be\nreferenced from this part of the query.invalid reference to FROM-clause\nentry for table \"unnamed_subquery\"\n\nNormally, we would find the cited name somewhere in the query. Confusing.\nNotably, the same does not happen for \"unnamed_subquery_1\":\n\nSELECT unnamed_subquery_1.a\nFROM (SELECT 1 AS a), (SELECT 1 AS a)\n\n> ERROR: missing FROM-clause entry for table \"unnamed_subquery_1\"\n\nThat's the message anybody would expect.\nAlso makes sense, as \"uniquification\" only happens in the above quoted\ncase, and all invisible aliases seem to be \"unnamed_subquery\" at this\npoint? 
But a bit confusing on a different level.\n\nMaybe error messages should not be aware of invisible aliases, and just\ncomplain about \"missing FROM-clause entry\"?\nNot sure whether a fix would be easy, nor whether it would be worth the\neffort. Just wanted to document the corner case issue in this thread.\n\nRegards\nErwin\n\n
", "msg_date": "Mon, 2 Oct 2023 01:02:04 +0200", "msg_from": "Erwin Brandstetter <brsaweda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "Erwin Brandstetter <brsaweda@gmail.com> writes:\n> On Mon, 2 Oct 2023 at 00:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>> The only place that exposes the eref's made-up relation name is the\n>> existing query deparsing code in ruleutils.c, which uniquifies it and\n>> generates SQL spec-compliant output. For example:\n\n> I ran into one other place: error messages.\n> SELECT unnamed_subquery.a\n> FROM (SELECT 1 AS a)\n> ERROR: There is an entry for table \"unnamed_subquery\", but it cannot be\n> referenced from this part of the query.invalid reference to FROM-clause\n> entry for table \"unnamed_subquery\"\n\nYeah, that's exposing more of the implementation than we really want.\n\n> Notably, the same does not happen for \"unnamed_subquery_1\":\n> SELECT unnamed_subquery_1.a\n> FROM (SELECT 1 AS a), (SELECT 1 AS a)\n> ERROR: missing FROM-clause entry for table \"unnamed_subquery_1\"\n\nActually, that happens because \"unnamed_subquery_1\" *isn't* in the\nparse tree. As implemented, both RTEs are labeled \"unnamed_subquery\"\nin the parser output, and it's ruleutils that de-duplicates them.\n\nI'm inclined to think we should avoid letting \"unnamed_subquery\"\nappear in the parse tree, too. It might not be a good idea to\ntry to leave the eref field null, but could we set it to an\nempty string instead, that is\n\n-\teref = alias ? copyObject(alias) : makeAlias(\"unnamed_subquery\", NIL);\n+\teref = alias ? copyObject(alias) : makeAlias(\"\", NIL);\n\nand then let ruleutils replace that with \"unnamed_subquery\"? 
This\nwould prevent accessing the subquery name in the way Erwin shows,\nbecause we don't let you write an empty identifier in SQL:\n\nregression=# select \"\".a from (select 1 as a);\nERROR: zero-length delimited identifier at or near \"\"\"\"\nLINE 1: select \"\".a from (select 1 as a);\n ^\n\nHowever, there might then be some parser error messages that\nrefer to subquery \"\", so I'm not sure if this is totally\nwithout surprises either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 01 Oct 2023 20:01:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "On Mon, 2 Oct 2023 at 01:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Erwin Brandstetter <brsaweda@gmail.com> writes:\n> > On Mon, 2 Oct 2023 at 00:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >> The only place that exposes the eref's made-up relation name is the\n> >> existing query deparsing code in ruleutils.c, which uniquifies it and\n> >> generates SQL spec-compliant output. For example:\n>\n> > I ran into one other place: error messages.\n> > SELECT unnamed_subquery.a\n> > FROM (SELECT 1 AS a)\n> > ERROR: There is an entry for table \"unnamed_subquery\", but it cannot be\n> > referenced from this part of the query.invalid reference to FROM-clause\n> > entry for table \"unnamed_subquery\"\n>\n> Yeah, that's exposing more of the implementation than we really want.\n>\n\nNote that this isn't a new issue, specific to unnamed subqueries. 
The\nsame thing happens for unnamed joins:\n\ncreate table foo(a int);\ncreate table bar(a int);\nselect unnamed_join.a from foo join bar using (a);\n\nERROR: invalid reference to FROM-clause entry for table \"unnamed_join\"\nLINE 1: select unnamed_join.a from foo join bar using (a);\n ^\nDETAIL: There is an entry for table \"unnamed_join\", but it cannot be\nreferenced from this part of the query.\n\n\nAnd there's a similar problem with VALUES RTEs:\n\ninsert into foo values (1),(2) returning \"*VALUES*\".a;\n\nERROR: invalid reference to FROM-clause entry for table \"*VALUES*\"\nLINE 1: insert into foo values (1),(2) returning \"*VALUES*\".a;\n ^\nDETAIL: There is an entry for table \"*VALUES*\", but it cannot be\nreferenced from this part of the query.\n\n> I'm inclined to think we should avoid letting \"unnamed_subquery\"\n> appear in the parse tree, too. It might not be a good idea to\n> try to leave the eref field null, but could we set it to an\n> empty string instead, that is\n>\n> - eref = alias ? copyObject(alias) : makeAlias(\"unnamed_subquery\", NIL);\n> + eref = alias ? copyObject(alias) : makeAlias(\"\", NIL);\n>\n> and then let ruleutils replace that with \"unnamed_subquery\"?\n\nHmm, I think that there would be other side-effects if we did that --\nat least doing it for VALUES RTEs would also require additional\nchanges to retain current EXPLAIN output. 
I think perhaps it would be\nbetter to try for a more targeted fix of the parser error reporting.\n\nIn searchRangeTableForRel() we try to find any RTE that could possibly\nmatch the RangeVar, but certain kinds of RTE don't naturally have\nnames, and if they also haven't been given aliases, then they can't\npossibly match anywhere in the query (and thus it's misleading to\nreport that they can't be referred to from specific places).\n\nSo I think perhaps it's better to just have searchRangeTableForRel()\nexclude these kinds of RTE, if they haven't been given an alias.\n\nRegards,\nDean", "msg_date": "Mon, 2 Oct 2023 11:49:40 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Mon, 2 Oct 2023 at 01:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, that's exposing more of the implementation than we really want.\n\n> Note that this isn't a new issue, specific to unnamed subqueries. The\n> same thing happens for unnamed joins:\n\nTrue, and we've had few complaints about that. Still, if we can\nclean it up without too much effort, let's do so.\n\n> So I think perhaps it's better to just have searchRangeTableForRel()\n> exclude these kinds of RTE, if they haven't been given an alias.\n\nWould we need a new flag in the ParseNamespaceItem data structure,\nor will the existing data serve? I see how to do this if we add\na \"doesn't really have a name\" flag, but it's not clear to me that\nwe can reliably identify them otherwise. Maybe a test involving\nthe rtekind and whether the \"alias\" field is set would do, but\nthat way seems a bit ugly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 Oct 2023 09:39:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making the subquery alias optional in the FROM clause" } ]
[ { "msg_contents": "Hi all,\n\nJust a reminder that the July 2022 commitfest will begin this coming\nFriday, July 1. I'll send out reminders this week to get your patches\nregistered/rebased, and I'll be updating stale statuses in the CF app.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 27 Jun 2022 08:33:19 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "[Commitfest 2022-07] Begins This Friday" } ]
[ { "msg_contents": "Hey,\r\n\r\nIf I understand correctly, when a Sequential Scan takes place, the ExecScan function (located in executor/execScan.c) does not retrieve all attributes per tuple in the TupleTableSlot and only retrieves the necessary attribute. So for example, let’s imagine we have a table t1 with 3 number fields, c1, c2, and c3. So in the command:\r\n\r\nSelect * from t1 where t1.c1 > 500;\r\n\r\nThe returned TupleTableSlot will have its field of tts_values in the form (X, 0, 0), where X is the real value of t1.c1 but the fields of c2 and c3 are not actually retrieved because they aren’t used. Similarly, for the command:\r\n\r\nSelect * from t1;\r\n\r\nThe TupleTableSlot will always return the values of (0, 0, 0) because no comparisons are necessary. I am working on code where I’ll need access to attributes that aren’t listed in any qualification – what code should I change in execScan, or nodeSeqScan to be able to retrieve any attribute of a tuple? Basically, being able to make execScan return (X, Y, Z) instead of (0, 0, 0) even if the command doesn’t use any attribute comparisons.\r\n\r\nMarcus\r\n
", "msg_date": "Mon, 27 Jun 2022 19:00:44 +0000", "msg_from": "\"Ma, Marcus\" <marcjma@amazon.com>", "msg_from_op": true, "msg_subject": "Retrieving unused tuple attributes in ExecScan" }, { "msg_contents": "Hi,\n\nOn 2022-06-27 19:00:44 +0000, Ma, Marcus wrote:\n> If I understand correctly, when a Sequential Scan takes place, the ExecScan function (located in executor/execScan.c) does not retrieve all attributes per tuple in the TupleTableSlot and only retrieves the necessary attribute. So for example, let’s imagine we have a table t1 with 3 number fields, c1, c2, and c3. So in the command:\n> \n> Select * from t1 where t1.c1 > 500;\n> \n> The returned TupleTableSlot will have its field of tts_values in the form (X, 0, 0), where X is the real value of t1.c1 but the fields of c2 and c3 are not actually retrieved because they aren’t used. Similarly, for the command:\n> \n> Select * from t1;\n> \n> The TupleTableSlot will always return the values of (0, 0, 0) because no\n> comparisons are necessary. I am working on code where I’ll need access to\n> attributes that aren’t listed in any qualification – what code should I\n> change in execScan, or nodeSeqScan to be able to retrieve any attribute of a\n> tuple? Basically, being able to make execScan return (X, Y, Z) instead of\n> (0, 0, 0) even if the command doesn’t use any attribute comparisons.\n\nYou'll need to tell the planner that those columns are needed. 
It's not just\nseqscans that otherwise will discard / not compute values.\n\nWhere exactly do you need those columns and why?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 27 Jun 2022 12:09:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Retrieving unused tuple attributes in ExecScan" }, { "msg_contents": "Hey Andres,\r\n\r\nSo I'm actually using the columns during merge join, basically I'm building a bloom filter on the outer relation and filtering out data on the inner relation of the join. I'm building the filter on the join keys, so the columns are being used further up the execution tree. However, even on a command like:\r\n\r\nSelect * from t1 inner join t2 on t1.c1 = t2.c2;\r\n\r\nThe execScan function returns slots that have (0, 0, 0) even though t1.c1 and t2.c2 will be used later on. I know that the Sort node and the MergeJoin node are able to read the actual values of the join keys, but for some reason the values aren't showing up on the SeqScan level. However, as soon as I add a qualification, such as:\r\n\r\nSelect * from t1 inner join on t1.c1 = t2.c2 where t1.c1 % 2 = 0;\r\n\r\nThe qualification makes the t1.c1 value show up during execScan, but not the t2.c2 value.\r\n\r\nMarcus\r\n\r\nOn 6/27/22, 3:10 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n Hi,\r\n\r\n On 2022-06-27 19:00:44 +0000, Ma, Marcus wrote:\r\n > If I understand correctly, when a Sequential Scan takes place, the ExecScan function (located in executor/execScan.c) does not retrieve all attributes per tuple in the TupleTableSlot and only retrieves the necessary attribute. So for example, let’s imagine we have a table t1 with 3 number fields, c1, c2, and c3. 
So in the command:\r\n >\r\n > Select * from t1 where t1.c1 > 500;\r\n >\r\n > The returned TupleTableSlot will have its field of tts_values in the form (X, 0, 0), where X is the real value of t1.c1 but the fields of c2 and c3 are not actually retrieved because they aren’t used. Similarly, for the command:\r\n >\r\n > Select * from t1;\r\n >\r\n > The TupleTableSlot will always return the values of (0, 0, 0) because no\r\n > comparisons are necessary. I am working on code where I’ll need access to\r\n > attributes that aren’t listed in any qualification – what code should I\r\n > change in execScan, or nodeSeqScan to be able to retrieve any attribute of a\r\n > tuple? Basically, being able to make execScan return (X, Y, Z) instead of\r\n > (0, 0, 0) even if the command doesn’t use any attribute comparisons.\r\n\r\n You'll need to tell the planner that those columns are needed. It's not just\r\n seqscans that otherwise will discard / not compute values.\r\n\r\n Where exactly do you need those columns and why?\r\n\r\n Greetings,\r\n\r\n Andres Freund\r\n\r\n", "msg_date": "Mon, 27 Jun 2022 19:29:34 +0000", "msg_from": "\"Ma, Marcus\" <marcjma@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Retrieving unused tuple attributes in ExecScan" }, { "msg_contents": "Hi,\n\n(please don't top-quote on PG lists)\n\nOn 2022-06-27 19:29:34 +0000, Ma, Marcus wrote:\n> So I'm actually using the columns during merge join, basically I'm building a bloom filter on the outer relation and filtering out data on the inner relation of the join. I'm building the filter on the join keys, so the columns are being used further up the execution tree. However, even on a command like:\n> \n> Select * from t1 inner join t2 on t1.c1 = t2.c2;\n> \n> The execScan function returns slots that have (0, 0, 0) even though t1.c1 and t2.c2 will be used later on. 
I know that the Sort node and the MergeJoin node are able to read the actual values of the join keys, but for some reason the values aren't showing up on the SeqScan level. However, as soon as I add a qualification, such as:\n> \n> Select * from t1 inner join on t1.c1 = t2.c2 where t1.c1 % 2 = 0;\n> \n> The qualification makes the t1.c1 value show up during execScan, but not the t2.c2 value.\n\nSlots can incrementally deform tuples. You need to call\n slot_getsomeattrs(slot, number-up-to-which-you-need-tuples)\nto reliably have columns deformed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 27 Jun 2022 12:52:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Retrieving unused tuple attributes in ExecScan" }, { "msg_contents": "Re: So I'm actually using the columns during merge join, basically I'm building a bloom filter on the outer relation and filtering out data on the inner relation of the join. I'm building the filter on the join keys\r\n\r\nWe had a whole implementation for Bloom filtering for hash inner join, complete with costing and pushdown of the Bloom filter from the build side to the execution tree on the probe side (i.e. building a Bloom filter on the inner side of the join at the conclusion of the build phase of the hash join, then pushing it down as a semi-join filter to the probe side of the join, where it could potentially be applied to multiple scans). After a large change to that same area of the code by the community it got commented out and has been in that state ever since. It's a good example of the sort of change that really ought to be made with the community because there's too much merge burden otherwise.\r\n\r\nIt was a pretty effective optimization in some cases, though. Most commercial systems have an optimization like this, sometimes with special optimizations when the number of distinct join keys is very small. 
If there is interest in reviving this functionality, we could probably extract some patches and work with the community to try to get it running again. \r\n\r\n /Jim\r\n\r\n\r\n", "msg_date": "Mon, 27 Jun 2022 20:12:41 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Retrieving unused tuple attributes in ExecScan" } ]
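Jim describes building a Bloom filter from the join keys on the build side of a hash join and pushing it down as a semi-join filter to the probe side. The following is a self-contained sketch of that general technique only, under simplified assumptions (rows as dicts, a fixed-size filter, SHA-256-based hashing); it is not the commented-out PostgreSQL implementation he refers to, and the names `BloomFilter` and `bloom_filtered_join` are invented for the example.

```python
import hashlib

class BloomFilter:
    """A tiny fixed-size Bloom filter: no false negatives, some false positives."""

    def __init__(self, nbits=1024, nhashes=3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = bytearray(nbits // 8)

    def _positions(self, key):
        # Derive nhashes bit positions from a cryptographic hash of the key.
        for i in range(self.nhashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.nbits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))


def bloom_filtered_join(build_rows, probe_rows, key):
    """Equi-join two lists of dict rows on `key`, using a Bloom filter
    built from the build side to discard probe rows early."""
    bf = BloomFilter()
    build_table = {}
    for row in build_rows:
        bf.add(row[key])
        build_table.setdefault(row[key], []).append(row)

    joined = []
    for row in probe_rows:
        if not bf.might_contain(row[key]):
            continue  # definitely no match on the build side: skip cheaply
        # The exact hash-table lookup below handles Bloom false positives.
        for match in build_table.get(row[key], []):
            joined.append((match, row))
    return joined
```

Because a Bloom filter never produces a false negative for a key that was added, the final hash lookup keeps the join result exact; the filter only serves to discard most non-matching probe rows cheaply, which is where the benefit of the optimization comes from.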
[ { "msg_contents": "Hi all,\n(Andrew in CC.)\n\nWhen running installcheck multiple times under src/test/modules/, we\nare getting two failures as of test_pg_dump and test_oat_tests,\nbecause these keep around some roles created by the tests.\n\nKeeping around a role for test_pg_dump has been discussed already,\nwhere the buildfarm can use that for pg_upgrade, and because there are\nmany objects that depend on the role created:\nhttps://www.postgresql.org/message-id/20180904203012.GG20696@paquier.xyz\n\nNote that it would require a DROP OWNED BY and DROP ROLE, but anyway:\n--- a/src/test/modules/test_pg_dump/sql/test_pg_dump.sql\n+++ b/src/test/modules/test_pg_dump/sql/test_pg_dump.sql\n@@ -106,3 +106,5 @@ ALTER EXTENSION test_pg_dump DROP SERVER s0;\n ALTER EXTENSION test_pg_dump DROP TABLE test_pg_dump_t1;\n ALTER EXTENSION test_pg_dump DROP TYPE test_pg_dump_e1;\n ALTER EXTENSION test_pg_dump DROP VIEW test_pg_dump_v1;\n+DROP OWNED BY regress_dump_test_role;\n+DROP ROLE regress_dump_test_role;\n\nAs far as I can see, test_oat_hook has no need to keep around the\nextra role it creates as part of the regression tests, because at the\nend of the test there are no objects that depend on it. Wouldn't it\nbe better to make the test self-isolated? NO_INSTALLCHECK is set in\nthe module because of the issue with caching and the namespace search\nhooks, but it seems to me that we'd better make the test self-isolated\nin the long term, like in the attached.\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 28 Jun 2022 10:12:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Repeatability of installcheck for test_oat_hooks" }, { "msg_contents": "On Tue, Jun 28, 2022 at 10:12:48AM +0900, Michael Paquier wrote:\n> As far as I can see, test_oat_hook has no need to keep around the\n> extra role it creates as part of the regression tests, because at the\n> end of the test there are no objects that depend on it. 
Wouldn't it\n> be better to make the test self-isolated? NO_INSTALLCHECK is set in\n> the module because of the issue with caching and the namespace search\n> hooks, but it seems to me that we'd better make the test self-isolated\n> in the long term, like in the attached.\n\nAnd actually, I have found a second issue here. The tests issue a\nGRANT on work_mem, like that:\nGRANT SET ON PARAMETER work_mem TO PUBLIC;\n\nThis has as effect to leave around an entry in pg_parameter_acl, which\nis designed this way in aclchk.c. However, this interacts with\nguc_privs.sql in unsafe_tests, because those tests include similar\nqueries GRANT queries, also on work_mem. So, if one issues an\ninstallcheck on test_oat_modules followed by an installcheck in\nunsafe_tests, the latter fails. I think that we'd better add an extra\nREVOKE to clear the contents of pg_parameter_acl.\n--\nMichael", "msg_date": "Tue, 28 Jun 2022 12:05:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Repeatability of installcheck for test_oat_hooks" } ]
[ { "msg_contents": "Hi all,\n\nWhile browsing through the recent changes with the base backup APIs, I\nhave noticed that a couple of comments did not get the renaming of the\nSQL functions to pg_backup_start/stop, as of the attached.\n\nThat's not a big deal, but let's be right.\n\nThanks,\n--\nMichael", "msg_date": "Tue, 28 Jun 2022 13:41:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Comments referring to pg_start/stop_backup" }, { "msg_contents": "At Tue, 28 Jun 2022 13:41:58 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> While browsing through the recent changes with the base backup APIs, I\n> have noticed that a couple of comments did not get the renaming of the\n> SQL functions to pg_backup_start/stop, as of the attached.\n> \n> That's not a big deal, but let's be right.\n\n+1 and I don't find other instances of the same mistake.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 28 Jun 2022 14:00:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Comments referring to pg_start/stop_backup" }, { "msg_contents": "On 6/28/22 01:00, Kyotaro Horiguchi wrote:\n> At Tue, 28 Jun 2022 13:41:58 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n>> Hi all,\n>>\n>> While browsing through the recent changes with the base backup APIs, I\n>> have noticed that a couple of comments did not get the renaming of the\n>> SQL functions to pg_backup_start/stop, as of the attached.\n>>\n>> That's not a big deal, but let's be right.\n> \n> +1 and I don't find other instances of the same mistake.\n\nYes, these also look good to me. 
They are a bit tricky to search for so \nI can see how we missed them.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 28 Jun 2022 07:47:04 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Comments referring to pg_start/stop_backup" }, { "msg_contents": "On Tue, Jun 28, 2022 at 07:47:04AM -0400, David Steele wrote:\n> Yes, these also look good to me. They are a bit tricky to search for so I\n> can see how we missed them.\n\nThanks for double-checking. Applied.\n--\nMichael", "msg_date": "Fri, 1 Jul 2022 09:57:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Comments referring to pg_start/stop_backup" } ]
[ { "msg_contents": "(Starting a new thread)\n\nOn Sun, Jun 26, 2022 at 10:48:24AM +0800, Julien Rouhaud wrote:\n> On Thu, Jun 23, 2022 at 10:19:44AM -0400, Robert Haas wrote:\n> > On Thu, Jun 23, 2022 at 6:13 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > > And should record_in / record_out use the logical position, as in:\n> > > SELECT ab::text FROM ab / SELECT (a, b)::ab;\n> > >\n> > > I would think not, as relying on a possibly dynamic order could break things if\n> > > you store the results somewhere, but YMMV.\n> >\n> > I think here the answer is yes again. I mean, consider that you could\n> > also ALTER TABLE DROP COLUMN and then ALTER TABLE ADD COLUMN with the\n> > same name. That is surely going to affect the meaning of such things.\n> > I don't think we want to have one meaning if you reorder things that\n> > way and a different meaning if you reorder things using whatever\n> > commands we create for changing the display column positions.\n>\n> It indeed would, but ALTER TABLE DROP COLUMN is a destructive operation, and\n> I'm assuming that anyone doing that is aware that it will have an impact on\n> stored data and such. I initially thought that changing the display order of\n> columns shouldn't have the same impact with the stability of otherwise\n> unchanged record definition, as it would make such reorder much more impacting.\n> But I agree that having different behaviors seems worse.\n>\n> > > Then, what about joinrels expansion? I learned that the column ordering rules\n> > > are far from being obvious, and I didn't find those in the documentation (note\n> > > that I don't know if that something actually described in the SQL standard).\n> > > So for instance, if a join is using an explicit USING clause rather than an ON\n> > > clause, the merged columns are expanded first, so:\n> > >\n> > > SELECT * FROM ab ab1 JOIN ab ab2 USING (b)\n> > >\n> > > should unexpectedly expand to (b, a, a). 
Is this order a strict requirement?\n> >\n> > I dunno, but I can't see why it creates a problem for this patch to\n> > maintain the current behavior. I mean, just use the logical column\n> > position instead of the physical one here and forget about the details\n> > of how it works beyond that.\n>\n> I'm not that familiar with this part of the code so I may have missed\n> something, but I didn't see any place where I could just simply do that.\n>\n> To be clear, the approach I used is to change the expansion ordering but\n> otherwise keep the current behavior, to try to minimize the changes. This is\n> done by keeping the attribute in the physical ordering pretty much everywhere,\n> including in the nsitems, and just logically reorder them during the expansion.\n> In other words all the code still knows that the 1st column is the first\n> physical column and so on.\n>\n> So in that query, the ordering is supposed to happen when handling the \"SELECT\n> *\", which makes it impossible to retain that order.\n>\n> I'm assuming that what you meant is to change the ordering when processing the\n> JOIN and retain the old \"SELECT *\" behavior, which is to emit items in the\n> order they're found. But IIUC the only way to do that would be to change the\n> order when building the nsitems themselves, and have the code believe that the\n> attributes are physically stored in the logical order. That's probably doable,\n> but that looks like a way more impacting change. Or did you mean to keep the\n> approach I used, and just have some special case for \"SELECT *\" when referring\n> to a joinrel and instead try to handle the logical expansion in the join?\n> AFAICS it would require to add some extra info in the parsing structures, as it\n> doesn't really really store any position, just relies on array offset / list\n> position and maps things that way.\n\nSo, assuming that the current JOIN expansion order shouldn't be changed, I\nimplemented the last approach I mentioned. 
As expected, it requires some extra\ninformation in the parsing structures. In the attached patch I added an array\nin the ParseNamespaceItem struct (p_mappings) to map the logical / physical\npositions, and iterate over that array when processing the JOIN in\ntransformFromClauseItem to emit the same tuples as if no logical order were\ndefined. Also, expandNSItemAttrs() now needs to know that when an RTE_JOIN is\nexpanded, to keep the original order.\n\nWhile at it I also fixed the column list that get automatically generated when\ndeparsing a view if the original query didn't had any alias but some DDL is\nlater executed (like renaming one of the column) making this column list\nnecessary. This isn't problematic except in one case: functions returning\n(setof) tables. For this, I also need to save a array to map the physical /\nlogical positions but as far as I can see I need to save it in the\nRangeTblEntry, only for RTE_FUNCTION, which is serialized in pg_rewrite so that\nthe deparsing can emit the correct order even if the attribute positions\nchanged between the view creation and the deparsing. This also works well but\nfeels really hackish.\n\nWith those changes, the create_view.sql test now entirely works (except some\nerror message referencing a physical position). There are still a lot of other\ntests that fail, and I didn't really dig into all of them to know if that's\nsomething normal or just some other places that needs to be fixed.\n\nAs I mentioned in my first email, I'm a bit doubtful about this approach in\ngeneral, so I'm looking for some feedback on it before investigating too much\ntime implementing something that would never be close to committable.\n>\n> > > Another problem (that probably wouldn't be a problem for system catalogs) is\n> > > that defaults are evaluated in the physical position. 
This example from the\n> > > regression test will clearly have a different behavior if the columns are in a\n> > > different physical order:\n> > >\n> > > CREATE TABLE INSERT_TBL (\n> > > x INT DEFAULT nextval('insert_seq'),\n> > > y TEXT DEFAULT '-NULL-',\n> > > z INT DEFAULT -1 * currval('insert_seq'),\n> > > CONSTRAINT INSERT_TBL_CON CHECK (x >= 3 AND y <> 'check failed' AND x < 8),\n> > > CHECK (x + z = 0));\n> > >\n> > > But changing the behavior to rely on the logical position seems quite\n> > > dangerous.\n> >\n> > Why?\n>\n> It feels to me like a POLA violation, and probably people wouldn't expect it to\n> behave this way (even if this is clearly some corner case problem). Even if\n> you argue that this is not simply a default display order but something more\n> like real column order, the physical position being some implementation detail,\n> it still doesn't really feels right.\n>\n> The main reason for having the possibility to change the logical position is to\n> have \"better looking\", easier to work with, relations even if you have some\n> requirements with the real physical order like trying to optimize things as\n> much as possible (reordering columns to avoid padding space, put non-nullable\n> columns first...). The order in which defaults are evaluated looks like the\n> same kind of requirements. 
How useful would it be if you could chose a logical\n> order, but not being able to chose the one you actually want because it would\n> break your default values?\n>\n> Anyway, per the nearby discussions I don't see much interest, especially not in\n> the context of varlena identifiers (I should have started a different thread,\n> sorry about that), so I don't think it's worth investing more efforts into it.", "msg_date": "Tue, 28 Jun 2022 16:32:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Separate the attribute physical order from logical order" }, { "msg_contents": "On 2022-Jun-28, Julien Rouhaud wrote:\n\n> So, assuming that the current JOIN expansion order shouldn't be\n> changed, I implemented the last approach I mentioned.\n\nYeah, I'm not sure that this is a good assumption. I mean, if logical\norder is the order in which users see the table columns, then why\nshouldn't JOIN expand in the same way? My feeling is that every aspect\nof user interaction should show columns ordered in logical order. When\nI said that \"only star expansion changes\" upthread, what I meant is that\nthere was no need to support any additional functionality such as\nletting the column order be changed or the server changing things\nunderneath to avoid alignment padding, etc.\n\n\nAnyway, I think your 0001 is not a good first step. I think a better\nfirst step is a patch that adds two more columns to pg_attribute:\nattphysnum and attlognum (or something like that. This is the name I\nused years ago, but if you want to choose different, it's okay.) In\n0001, these columns would all be always identical, and there's no\nfunctionality to handle the case where they differ (probably even add\nsome Assert that they are identical). The idea behind these three\ncolumns is: attnum is a column identity and it never changes from the\nfirst value that is assigned to the column. attphysnum represents the\nphysical position of the table. 
attlognum is the position where the\ncolumn appears for user interaction.\n\nIn a 0002 patch, you would introduce backend support for the case where\nattlognum differs from the other two; but the other two are always the\nsame and it's okay if the server misbehaves or crashes if attphysnum is\ndifferent from attnum (best: keep the asserts that they are always the\nsame). Doing it this way limits the number of cases that you have to\ndeal with, because there will be enough difficulty already. You need to\nchange RTE expansion everywhere: *-expansion, COPY, JOIN, expansion of\nSQL function results, etc ... even psql \\d ;-) But, again: the\nphysical position is always column identity and there's no way to\nreorder the columns physically for storage efficiency.\n\nYou could put ALTER TABLE support for moving columns as 0003. (So\ntesting for 0002 would just be some UPDATE sentences or some hack that\nlets you test various cases.)\n\nIn a 0004 patch, you would introduce backend support for attphysnum to\nbe different. 
Probably no DDL support yet, since maybe we don't want\nthat, but instead we would like the server to figure out the best\npossible packing based on alignment padding, nullability varlenability.\nSo testing for this part is again just some UPDATEs.\n\nI think 0001+0002 are already a submittable patchset.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Si quieres ser creativo, aprende el arte de perder el tiempo\"\n\n\n", "msg_date": "Tue, 28 Jun 2022 10:53:14 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Separate the attribute physical order from logical order" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 28, 2022 at 10:53:14AM +0200, Alvaro Herrera wrote:\n> On 2022-Jun-28, Julien Rouhaud wrote:\n>\n> > So, assuming that the current JOIN expansion order shouldn't be\n> > changed, I implemented the last approach I mentioned.\n>\n> Yeah, I'm not sure that this is a good assumption. I mean, if logical\n> order is the order in which users see the table columns, then why\n> shouldn't JOIN expand in the same way? My feeling is that every aspect\n> of user interaction should show columns ordered in logical order. When\n> I said that \"only star expansion changes\" upthread, what I meant is that\n> there was no need to support any additional functionality such as\n> letting the column order be changed or the server changing things\n> underneath to avoid alignment padding, etc.\n\nI'm not entirely sure of what you meant. 
Assuming tables a(a, z) and b(b, z),\nwhat do you think those queries should return?\n\nSELECT * FROM a JOIN b on a.z = b.z\nCurrently it returns (a.a, a.z, b.b, b.z)\n\nSELECT * FROM a JOIN b USING (z)\nCurrently it returns a.z, a.a, b.b.\n\nShould it now return (a.a, z, b.b) as long as the tables have that logical\norder, whether or not any other position (attnum / attphysnum) is different or\nstay the same as now?\n\n> Anyway, I think your 0001 is not a good first step.\n\nFWIW this is just what you were previously suggesting at [1].\n\n> I think a better\n> first step is a patch that adds two more columns to pg_attribute:\n> attphysnum and attlognum (or something like that. This is the name I\n> used years ago, but if you want to choose different, it's okay.) In\n> 0001, these columns would all be always identical, and there's no\n> functionality to handle the case where they differ (probably even add\n> some Assert that they are identical). The idea behind these three\n> columns is: attnum is a column identity and it never changes from the\n> first value that is assigned to the column. attphysnum represents the\n> physical position of the table. attlognum is the position where the\n> column appears for user interaction.\n\nI'm not following. 
If we keep attnum as the official identity position and\nuse attlognum as the position that should be used in any interactive command,\nwouldn't that risk to break every single client?\n\nImagine you have some framework that automatically generates queries based on\nthe catalog, if it sees table abc with:\nc: attnum 1, attphysnum 1, attlognum 3\nb: attnum 2, attphysnum 2, attlognum 2\na: attnum 3, attphysnum 3, attlognum 1\n\nand you ask that layer to generate an insert with something like {'a': 'a',\n'b': 'b', 'c': 'c'}, what would prevent it from generating:\n\nINSERT INTO abc VALUES ('c', 'b', 'a');\n\nwhile attlognum says it should have been\n\nINSERT INTO abc VALUES ('a', 'b', 'c');\n\n> In a 0002 patch, you would introduce backend support for the case where\n> attlognum differs from the other two; but the other two are always the\n> same and it's okay if the server misbehaves or crashes if attphysnum is\n> different from attnum (best: keep the asserts that they are always the\n> same). Doing it this way limits the number of cases that you have to\n> deal with, because there will be enough difficulty already. You need to\n> change RTE expansion everywhere: *-expansion, COPY, JOIN, expansion of\n> SQL function results, etc ... even psql \\d ;-) But, again: the\n> physical position is always column identity and there's no way to\n> reorder the columns physically for storage efficiency.\n\nJust to clarify my understanding, apart from the fact that I'm only using\nattphysnum (for your attnum and attphysnum) and attnum (for your attlognum), is\nthere any difference in the behavior with what I started to implement (if what\nI started to implement was finished of course) and what you're saying here?\n\nAlso, about the default values evaluation (see [2]), should it be tied to your\nattnum, attphysnum or attlognum?\n\n> You could put ALTER TABLE support for moving columns as 0003. 
(So\n> testing for 0002 would just be some UPDATE sentences or some hack that\n> lets you test various cases.)\n>\n> In a 0004 patch, you would introduce backend support for attphysnum to\n> be different. Probably no DDL support yet, since maybe we don't want\n> that, but instead we would like the server to figure out the best\n> possible packing based on alignment padding, nullability varlenability.\n> So testing for this part is again just some UPDATEs.\n>\n> I think 0001+0002 are already a submittable patchset.\n\nI think that supporting at least a way to specify the logical order during the\ntable creation should be easy to implement (there shouldn't be any\nquestion on whether it needs to invalidate any cache or what lock level to\nuse), and could also be added in the initial submission without much extra\nefforts, which could help with the testing.\n\n[1] https://www.postgresql.org/message-id/202108181639.xjuovrpwgkr2@alvherre.pgsql\n[2] https://www.postgresql.org/message-id/20220626024824.qnlpp6vikzjvuxs3%40jrouhaud\n\n\n", "msg_date": "Tue, 28 Jun 2022 17:32:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Separate the attribute physical order from logical order" }, { "msg_contents": "On Tue, 28 Jun 2022 at 05:32, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> I think that supporting at least a way to specify the logical order during\n> the\n> table creation should be easy to implement (there shouldn't be any\n> question on whether it needs to invalidate any cache or what lock level to\n> use), and could also be added in the initial submission without much extra\n> efforts, which could help with the testing.\n>\n\nI think the meaning of “logical order” (well, the meaning it has for me, at\nleast) implies that the logical order of a table after CREATE TABLE is the\norder in which the columns were given in the table creation statement.\n\nIf there needs to be a way of specifying the physical order 
separately,\nthat is a different matter.\n\nALTER TABLE ADD … is another matter. Syntax there to be able to say BEFORE\nor AFTER an existing column would be nice to have. Presumably it would\nphysically add the column at the end but set the logical position as\nspecified.", "msg_date": "Tue, 28 Jun 2022 09:00:05 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Separate the attribute physical order from logical order" },
{ "msg_contents": "Hi,\n\nOn Tue, Jun 28, 2022 at 09:00:05AM -0400, Isaac Morland wrote:\n> On Tue, 28 Jun 2022 at 05:32, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> > I think that supporting at least a way to specify the logical order during\n> > the\n> > table creation should be easy to implement (there shouldn't be any\n> > question on whether it needs to invalidate any cache or what lock level to\n> > use), and could also be added in the initial submission without much extra\n> > efforts, which could help with the testing.\n> >\n>\n> I think the meaning of “logical order” (well, the meaning it has for me, at\n> least) implies that the logical order of a table after CREATE TABLE is the\n> order in which the columns were given in the table creation statement.\n>\n> If there needs to be a way of specifying the physical order separately,\n> that is a different matter.\n\nWell, the way I see it is that the logical order is something that can be\nchanged, and therefore is the one that needs to be spelled out explicitly if\nyou want it to differ from the physical order.\n\nBut whether the physical or logical order is the one that needs explicit\nadditional syntax, it would still be nice to provide in a first iteration. And\nboth versions would be the same to implement, difficulty wise.\n>\n> ALTER TABLE ADD … is another matter. Syntax there to be able to say BEFORE\n> or AFTER an existing column would be nice to have. 
Presumably it would\n> physically add the column at the end but set the logical position as\n> specified.\n\nYes, but it raises some questions about lock level, cache invalidation and such\nso I chose to ignore that for the moment.\n\n\n", "msg_date": "Tue, 28 Jun 2022 21:20:22 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Separate the attribute physical order from logical order" }, { "msg_contents": "On Tue, Jun 28, 2022 at 04:32:30PM +0800, Julien Rouhaud wrote:\n> psql displays a table columns information using the logical order rather the\n> physical order, and if verbose emits an addition \"Physical order\" footer if the\n> logical layout is different from the physical one.\n\nFYI: the footer would work really poorly for us, since we use hundreds of\ncolumns and sometimes over 1000 (historically up to 1600). I think it'd be\nbetter to show the physical position as an additional column, or a \\d option to\nsort by physical attnum. (I'm not sure if it'd be useful for our case to see\nthe extra columns, but at least it won't create a \"footer\" which is multiple\npages long. Actually, I've sometimes wished for a \"\\d-\" quiet mode which would\nshow everything *except* the list of column names, or perhaps only show those\ncolumns which are referenced by the list of indexes/constraints/stats\nobjects/etc).\n\nBTW, since 2 years ago, when rewriting partitions to promote a column type, we\nrecreate the parent table sorted by attlen, to minimize alignment overhead in\nnew children. 
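The saving that this kind of attlen-based reordering buys can be illustrated with a toy Python model; the sizes and alignments below are illustrative only (real values come from pg_type), not actual server code:

```python
# Toy model of row-width computation: each column is (name, size, align).
# Each datum starts at an offset rounded up to its alignment, so placing
# the widest-aligned columns first minimizes padding holes.
def row_width(cols):
    off = 0
    for _name, size, align in cols:
        off = (off + align - 1) // align * align  # round up to alignment
        off += size
    return off

cols = [("flag", 1, 1), ("id", 8, 8), ("count", 4, 4), ("flag2", 1, 1)]
naive = row_width(cols)
packed = row_width(sorted(cols, key=lambda c: -c[2]))  # widest align first
print(naive, packed)  # 21 14
```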
AFAICT your patch is about adding an logical column order, not\nabout updating tables with a new physical order.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 28 Jun 2022 08:38:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Separate the attribute physical order from logical order" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 28, 2022 at 08:38:56AM -0500, Justin Pryzby wrote:\n> On Tue, Jun 28, 2022 at 04:32:30PM +0800, Julien Rouhaud wrote:\n> > psql displays a table columns information using the logical order rather the\n> > physical order, and if verbose emits an addition \"Physical order\" footer if the\n> > logical layout is different from the physical one.\n> \n> FYI: the footer would work really poorly for us, since we use hundreds of\n> columns and sometimes over 1000 (historically up to 1600).\n\nYeah :) As I mentioned originally at [1]: \"I also changed psql to display the\ncolumn in logical position, and emit an extra line with the physical position\nin the verbose mode, but that's a clearly poor design which would need a lot\nmore thoughts.\"\n\n> I think it'd be\n> better to show the physical position as an additional column, or a \\d option to\n> sort by physical attnum. (I'm not sure if it'd be useful for our case to see\n> the extra columns, but at least it won't create a \"footer\" which is multiple\n> pages long.\n\nYes, I was also thinking something like that could work. I just did it with\nthe extra footer for now because I needed a quick way to check in which order\nmy tables were supposed to be displayed / stored during development. 
As soon\nas I get a clearer picture of what approach should be used I will clearly work\non this, and all other things that still need some care.\n\n> Actually, I've sometimes wished for a \"\\d-\" quiet mode which would\n> show everything *except* the list of column names, or perhaps only show those\n> columns which are referenced by the list of indexes/constraints/stats\n> objects/etc).\n\nI never had to work on crazy wide relations like that myself but I can easily\nimagine how annoying it can get. No objection from me, although it would be\ngood to start a new thread to attract more attention and see what other are\nthinking.\n\n> BTW, since 2 years ago, when rewriting partitions to promote a column type, we\n> recreate the parent table sorted by attlen, to minimize alignment overhead in\n> new children. AFAICT your patch is about adding an logical column order, not\n> about updating tables with a new physical order.\n\nIndeed, the only thing it could do in such case is to allow you to create the\ncolumns in an optimal order in the first place, without messing with the output.\n\nBut if the people who originally creates the table don't think about alignment\nand things like that, there's still nothing that can be done with this feature.\n\nThat being said, in theory if such a feature existed, and if we also had a DDL\nto allowed to specify a different logical order at creation time, it would be\neasy to create a module that automatically reorder the columns before the table\nis created to make sure that the columns are physically stored in an optimal\nway.\n\n[1] https://www.postgresql.org/message-id/20220623101155.3dljtwradu7eik6g@jrouhaud\n\n\n", "msg_date": "Tue, 28 Jun 2022 22:13:14 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Separate the attribute physical order from logical order" }, { "msg_contents": "On 2022-Jun-28, Julien Rouhaud wrote:\n\n\n> On Tue, Jun 28, 2022 at 10:53:14AM +0200, Alvaro 
Herrera wrote:\n\n> > My feeling is that every aspect of user interaction should show\n> > columns ordered in logical order.\n> \n> I'm not entirely sure of what you meant. Assuming tables a(a, z) and b(b, z),\n> what do you think those queries should return?\n> \n> SELECT * FROM a JOIN b on a.z = b.z\n> Currently it returns (a.a, a.z, b.b, b.z)\n> \n> SELECT * FROM a JOIN b USING (z)\n> Currently it returns a.z, a.a, b.b.\n> \n> Should it now return (a.a, z, b.b) as long as the tables have that logical\n> order, whether or not any other position (attnum / attphysnum) is different or\n> stay the same as now?\n\nFor all user-visible intents and purposes, the column order is whatever\nthe logical order is (attlognum), regardless of attnum and attphysnum.\nIf the logical order is changed, then the order of the output columns of\na join will change to match. The attnum and attphysnum are completely\nirrelevant to all these purposes. So, to answer your question, if the\njoin expands in this way at present, then it should continue to expand\nthat way if you define a table that has different attnum/attphysnum but\nthe same attlognum for those columns.\n\n\n> I'm not following. If we keep attnum as the official identity position and\n> use attlognum as the position that should be used in any interactive command,\n> wouldn't that risk to break every single client?\n\nYeah, it might break a lot of tools, but other things break tools too\nand the world just moves on.\n\nBut if you don't want to break tools, I can think of two alternatives:\n\n1. make the immutable column identity something like attidnum and\n keep attnum as the logical column order.\n This keeps tools happy, but if they try to match pg_attrdef by attnum\n bad things will happen.\n\n2. in order to avoid possible silent breakage, remove attnum altogether\n and just have attidnum, attlognum, attphysnum; then every tool is\n forced to undergo an update. 
Any cross-catalog relationships are now\n   correct.\n\n> Imagine you have some framework that automatically generates queries based on\n> the catalog, if it sees table abc with:\n> c: attnum 1, attphysnum 1, attlognum 3\n> b: attnum 2, attphysnum 2, attlognum 2\n> a: attnum 3, attphysnum 3, attlognum 1\n\nHopefully the framework will add a column list,\n  INSERT INTO abc (c,b,a) VALUES ('c', 'b', 'a');\nto avoid this problem.  But if it doesn't, then yeah it will misbehave,\nand I don't think you should try to make it not misbehave.\n\n> Also, about the default values evaluation (see [2]), should it be tied to your\n> attnum, attphysnum or attlognum?\n\nDefault is tied to column identity.  If you change column order, the\ndefaults don't need to change at all.  Similarly, if the server decides\nto repack the columns in a different way to save alignment padding, the\ndefaults don't need to change.\n\nIf you do not provide a column identity number or you use something else\n(e.g. attlognum) to cross-reference attributes from other catalogs,\nthen you'll have to edit pg_attrdef when a column moves; and any other\nreference to a column number will have to change.  Or think about\npg_depend.  You don't want that.  This is why you need three columns,\nnot two.\n\n> I think that supporting at least a way to specify the logical order\n> during the table creation should be easy to implement\n\nAs long as it is really simple (just some stuff in CREATE TABLE, nothing\nat all in ALTER TABLE) then that sounds good.  I just suggest not to\ncomplicate things too much to avoid the risk of failing the project\naltogether.\n\nFor testability, here's a crazy idea: have some test mode (maybe #ifdef\nUSE_ASSERT_CHECKING) that randomizes attlognum to start at some N >> 1,\nand only attidnum starts at 1. 
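That offsetting trick could look roughly like this — a toy Python sketch using the attidnum/attlognum names from above, not actual catalog code:

```python
import random

# Toy model: each column gets an identity number starting at 1, but a
# logical number starting at some random offset N >> 1, so any code that
# confuses the two numberings fails loudly instead of silently working.
def assign_numbers(columns, rng):
    offset = rng.randrange(1000, 2000)
    return {
        col: {"attidnum": i + 1, "attlognum": offset + i}
        for i, col in enumerate(columns)
    }

nums = assign_numbers(["a", "b", "c"], random.Random(0))
assert all(v["attidnum"] != v["attlognum"] for v in nums.values())
print(nums)
```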
Then they never match and all tools need\nto ensure they handle weird cases correctly.\n\n> (there shouldn't be any question on whether it needs to invalidate any\n> cache or what lock level to use), and could also be added in the\n> initial submission without much extra efforts, which could help with\n> the testing.\n\nFamous last words :-)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Nadie está tan esclavizado como el que se cree libre no siéndolo\" (Goethe)\n\n\n", "msg_date": "Tue, 28 Jun 2022 20:27:23 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Separate the attribute physical order from logical order" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> If you do not provide a column identity number or you use something else\n> (e.g. attlognum) to cross-references attributes from other catalogs,\n> then you'll have to edit pg_attrdef when a column moves; and any other\n> reference to a column number will have to change. Or think about\n> pg_depend. You don't want that. This is why you need three columns,\n> not two.\n\nIn previous go-rounds on this topic (of which there have been many),\nwe understood the need for attidnum as being equivalent to the familiar\nnotion that tables should have an immutable primary key, with anything\nthat users might wish to change *not* being the primary key. This\nside-steps the need to propagate changes of the pkey into referencing\ntables, which is essentially what Alvaro is pointing out you don't\nwant to have to deal with.\n\nFWIW, I'd lean to the idea that using three new column names would\nbe a good thing, because it'll force you to look at every single\nreference in the code and figure out which meaning is needed at that\nspot. 
There will still be a large number of wrong-meaning bugs, but\nthat disciplined step will hopefully result in \"large\" being \"tolerable\".\n\n>> I think that supporting at least a way to specify the logical order\n>> during the table creation should be easy to implement\n\n> As long as it is really simple (just some stuff in CREATE TABLE, nothing\n> at all in ALTER TABLE) then that sounds good. I just suggest not to\n> complicate things too much to avoid the risk of failing the project\n> altogether.\n\nI think that any user-reachable knobs for controlling this should be\ndesigned and built later. The initial split-up of attnum meanings\nis already going to be a huge lift, and anything at all that you can\ndo to reduce the size of that first patch is advisable. If you don't\nrealize what a large chance there is that you'll utterly fail on that\nfirst step, then you have failed to learn anything from the history\nof this topic.\n\nNow you do need something that will make the three meanings different\nin order to test that step. But I'd suggest some bit of throwaway code\nthat just assigns randomly different logical and physical orders.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Jun 2022 14:47:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Separate the attribute physical order from logical order" }, { "msg_contents": "On Tue, Jun 28, 2022 at 11:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Now you do need something that will make the three meanings different\n> in order to test that step. But I'd suggest some bit of throwaway code\n> that just assigns randomly different logical and physical orders.\n\nThat seems like a good idea. 
Might also make sense to make the\nbehavior configurable via a developer-only GUC, to enable exhaustive\ntests that use every possible permutation of physical/logical mappings\nfor a given table.\n\nPerhaps the random behavior itself should work by selecting a value\nfor the GUC at various key points via a PRNG. During CREATE TABLE, for\nexample. This approach could make it easier to reproduce failures on the\nbuildfarm.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jun 2022 11:55:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Separate the attribute physical order from logical order" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Jun 28, 2022 at 11:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Now you do need something that will make the three meanings different\n>> in order to test that step. But I'd suggest some bit of throwaway code\n>> that just assigns randomly different logical and physical orders.\n\n> That seems like a good idea. Might also make sense to make the\n> behavior configurable via a developer-only GUC, to enable exhaustive\n> tests that use every possible permutation of physical/logical mappings\n> for a given table.\n> Perhaps the random behavior itself should work by selecting a value\n> for the GUC at various key points via a PRNG. During CREATE TABLE, for\n> example. This approach could make it easier to reproduce failures on the\n> buildfarm.\n\nYeah, it can't be *too* random or debugging failures will be a nightmare.\nMy point is just to not spend a lot of engineering on this part, because\nit won't be a long-term user feature.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Jun 2022 16:22:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Separate the attribute physical order from logical order" } ]
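To summarize the scheme the thread above converges on, here is a toy Python model of the three proposed per-column numbers; the names and layout are illustrative only, not the actual pg_attribute design:

```python
# Toy model of one pg_attribute-like entry per column:
#   attnum     - immutable identity, used for cross-catalog references
#   attphysnum - position of the datum inside the stored tuple
#   attlognum  - position shown to users (SELECT *, \d, COPY, ...)
ATTRS = [
    {"name": "c", "attnum": 1, "attphysnum": 1, "attlognum": 3},
    {"name": "b", "attnum": 2, "attphysnum": 2, "attlognum": 2},
    {"name": "a", "attnum": 3, "attphysnum": 3, "attlognum": 1},
]

def star_expansion(attrs):
    # User-facing expansion follows attlognum only.
    return [a["name"] for a in sorted(attrs, key=lambda a: a["attlognum"])]

def stored_order(attrs):
    # Tuple storage/deforming follows attphysnum only.
    return [a["name"] for a in sorted(attrs, key=lambda a: a["attphysnum"])]

print(star_expansion(ATTRS))  # ['a', 'b', 'c']
print(stored_order(ATTRS))    # ['c', 'b', 'a']
```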
[ { "msg_contents": "I have been interested in a query that returns a batch of results filtered\nby a subset of the first column of an index and ordered by the second.\n\nI created a simple (hopefully) reproducible example of the issue: the two\nqueries describe the same data but have very different costs (explain\noutput included in the attached file).\nserver_version 12.8\n\nOn the Slack #pgsql-hackers channel, @sfrost suggested that what I\ndescribed is achieved by an index skip scan. How can I get a development build\nto test this feature?", "msg_date": "Tue, 28 Jun 2022 12:46:07 +0100", "msg_from": "Alexandre Felipe <alexandre.felipe@tpro.io>", "msg_from_op": true, "msg_subject": "Testing Index Skip scan" } ]
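For context, the skip-scan idea being asked about here (at the time an out-of-tree patch set, not a shipped feature) amounts to repeatedly re-descending the index past the current leading-column value instead of scanning every entry. A toy Python model, with bisect standing in for the B-tree descent:

```python
import bisect

# Toy skip scan over index entries sorted on (k1, k2): to get the
# distinct values of k1, seek to the first entry with a greater k1
# instead of reading every entry with the current k1.
def distinct_leading(index_entries):
    out, pos = [], 0
    while pos < len(index_entries):
        k1 = index_entries[pos][0]
        out.append(k1)
        # "re-descend the tree" past all entries sharing this k1
        pos = bisect.bisect_right(index_entries, (k1, float("inf")), pos)
    return out

entries = sorted((k1, k2) for k1 in ("a", "b", "c") for k2 in range(1000))
print(distinct_leading(entries))  # ['a', 'b', 'c']
```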
[ { "msg_contents": "I don´t know how to create a patch, maybe someday, but for now I´m just\nsending this little problem if somebody can solve it.\n\nIn a multi schema environment where several tables has same structure is a\nlittle bit hard to know which one already has that primary key.\n\nOn log I see now on replica server.\nMessage:duplicate key value violates unique constraint \"pkcustomer\"\nDetail: Key (customer_id)=(530540) already exists.\n\nSo, I know what table is but I don´t know what schema it belongs.\n\nThanks\nMarcos", "msg_date": "Tue, 28 Jun 2022 09:19:36 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "better error description on logical replication" },
{ "msg_contents": "On Tue, Jun 28, 2022 at 5:50 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> I don´t know how to create a patch, maybe someday, but for now I´m just sending this little problem if somebody can solve it.\n>\n> In a multi schema environment where several tables has same structure is a little bit hard to know which one already has that primary key.\n>\n> On log I see now on replica server.\n> Message:duplicate key value violates unique constraint \"pkcustomer\"\n> Detail: Key (customer_id)=(530540) already exists.\n>\n> So, I know what table is but I don´t know what schema it belongs.\n>\n\nOn which version, have you tried this? 
In HEAD, I am getting below information:\nERROR: duplicate key value violates unique constraint \"idx_t1\"\nDETAIL: Key (c1)=(1) already exists.\nCONTEXT: processing remote data for replication origin \"pg_16388\"\nduring \"INSERT\" for replication target relation \"public.t1\" in\ntransaction 739 finished at 0/150D640\n\nYou can see that CONTEXT has schema information. Will that serve your purpose?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 29 Jun 2022 08:22:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: better error description on logical replication" }, { "msg_contents": "I´m using 14.4.\nThese are some lines with that error, and context is empty.\nAnd yes, if context had this info you wrote would be fine\n\n2022-06-28 08:18:23.600 -03,,,20915,,62b9c77b.51b3,1328,,2022-06-27\n12:06:35 -03,4/690182,433844252,ERROR,23505,\"duplicate key value violates\nunique constraint \"\"pkcustomer\"\"\",\"Key (customer_id)=(530540) already\nexists.\",,,,,,,,\"\",\"logical replication worker\",,5539589780750922391\n2022-06-28 08:18:23.609 -03,,,20377,,62bae37f.4f99,1,,2022-06-28 08:18:23\n-03,4/690184,0,LOG,00000,\"logical replication apply worker for subscription\n\"\"sub_replica_5\"\" has started\",,,,,,,,,\"\",\"logical replication worker\",,0\n2022-06-28 08:18:23.929 -03,,,2009,,62b35392.7d9,88468,,2022-06-22 14:38:26\n-03,,0,LOG,00000,\"background worker \"\"logical replication worker\"\" (PID\n20915) exited with exit code 1\",,,,,,,,,\"\",\"postmaster\",,0\n2022-06-28 08:18:24.151 -03,,,20377,,62bae37f.4f99,2,,2022-06-28 08:18:23\n-03,4/690187,433844253,ERROR,23505,\"duplicate key value violates unique\nconstraint \"\"pkcustomer\"\"\",\"Key (customer_id)=(530540) already\nexists.\",,,,,,,,\"\",\"logical replication worker\",,6675519194010520265\n2022-06-28 08:18:24.160 -03,,,2009,,62b35392.7d9,88469,,2022-06-22 14:38:26\n-03,,0,LOG,00000,\"background worker \"\"logical replication worker\"\" 
(PID\n20377) exited with exit code 1\",,,,,,,,,\"\",\"postmaster\",,0\n2\n\nthanks\nMarcos\n\n\nEm ter., 28 de jun. de 2022 às 23:52, Amit Kapila <amit.kapila16@gmail.com>\nescreveu:\n\n> On Tue, Jun 28, 2022 at 5:50 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n> >\n> > I don´t know how to create a patch, maybe someday, but for now I´m just\n> sending this little problem if somebody can solve it.\n> >\n> > In a multi schema environment where several tables has same structure is\n> a little bit hard to know which one already has that primary key.\n> >\n> > On log I see now on replica server.\n> > Message:duplicate key value violates unique constraint \"pkcustomer\"\n> > Detail: Key (customer_id)=(530540) already exists.\n> >\n> > So, I know what table is but I don´t know what schema it belongs.\n> >\n>\n> On which version, have you tried this? In HEAD, I am getting below\n> information:\n> ERROR: duplicate key value violates unique constraint \"idx_t1\"\n> DETAIL: Key (c1)=(1) already exists.\n> CONTEXT: processing remote data for replication origin \"pg_16388\"\n> during \"INSERT\" for replication target relation \"public.t1\" in\n> transaction 739 finished at 0/150D640\n>\n> You can see that CONTEXT has schema information. 
Will that serve your\n> purpose?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n", "msg_date": "Wed, 29 Jun 2022 08:00:31 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: better error description on logical replication" }, { "msg_contents": "On Wed, Jun 29, 2022 at 4:30 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> I´m using 14.4.\n>\n\nThis additional information will be available in 15 as it is committed\nas part of commit abc0910e.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 29 Jun 2022 16:41:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: better error description on logical replication" } ]
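The csvlog lines quoted in this thread can be split mechanically. The sketch below is an illustrative Python reader, not part of PostgreSQL, and it assumes the column order visible in the quoted PostgreSQL 14 log excerpt, where error_severity, sql_state_code, message, detail and context sit at fixed positions. On 14.4 the context column of these rows is empty, which is exactly the gap commit abc0910e closes in 15.

```python
import csv

# Column positions observed in the PostgreSQL 14 csvlog lines quoted above
# (an assumption of this sketch; verify against your server's csvlog docs).
SEVERITY, SQLSTATE, MESSAGE, DETAIL, CONTEXT, BACKEND_TYPE = 11, 12, 13, 14, 18, 23

def parse_csvlog_line(line):
    """Pull the error-reporting columns out of one csvlog line."""
    row = next(csv.reader([line]))
    return {
        "severity": row[SEVERITY],
        "sqlstate": row[SQLSTATE],
        "message": row[MESSAGE],
        "detail": row[DETAIL],
        "context": row[CONTEXT],
        "backend_type": row[BACKEND_TYPE],
    }

# First ERROR line from the log excerpt above.
sample = ('2022-06-28 08:18:23.600 -03,,,20915,,62b9c77b.51b3,1328,,'
          '2022-06-27 12:06:35 -03,4/690182,433844252,ERROR,23505,'
          '"duplicate key value violates unique constraint ""pkcustomer""",'
          '"Key (customer_id)=(530540) already exists.",,,,,,,,"",'
          '"logical replication worker",,5539589780750922391')
parsed = parse_csvlog_line(sample)
```

Parsing the sample yields the same MESSAGE and DETAIL Marcos quoted, with an empty CONTEXT field, matching his observation that the schema-qualified relation name is missing on 14.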
[ { "msg_contents": "Patch attached. Some kinds of emit log hooks might find it useful to\nalso compute the log_line_prefix.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 28 Jun 2022 11:52:56 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Export log_line_prefix(); useful for emit_log_hook." }, { "msg_contents": "On Tue, Jun 28, 2022 at 11:52:56AM -0700, Jeff Davis wrote:\n> Patch attached. Some kinds of emit log hooks might find it useful to\n> also compute the log_line_prefix.\n\nHave you played with anything specific that would require that? I\nam fine to expose this routine, being mostly curious about what kind\nof recent format implemented with the elog hook would use it.\n--\nMichael", "msg_date": "Wed, 29 Jun 2022 10:17:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Export log_line_prefix(); useful for emit_log_hook." }, { "msg_contents": "On Wed, 2022-06-29 at 10:17 +0900, Michael Paquier wrote:\n> On Tue, Jun 28, 2022 at 11:52:56AM -0700, Jeff Davis wrote:\n> > Patch attached. Some kinds of emit log hooks might find it useful\n> > to\n> > also compute the log_line_prefix.\n> \n> Have you played with anything specific that would require that? I\n> am fine to expose this routine, being mostly curious about what kind\n> of recent format implemented with the elog hook would use it.\n\nJust a slightly different format that is directly digestible by another\nsystem, while still preserving what the original messages in the file\nwould look like.\n\nThere are other ways to do it, but it's convenient. If we use, e.g.,\ncsv or json format, we lose the log_line_prefix and would need to\nregenerate it from the individual fields.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 28 Jun 2022 22:32:31 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Export log_line_prefix(); useful for emit_log_hook." 
}, { "msg_contents": "On 2022-Jun-28, Jeff Davis wrote:\n\n> Patch attached. Some kinds of emit log hooks might find it useful to\n> also compute the log_line_prefix.\n\nHmm, maybe your hypothetical book would prefer to use a different\nsetting for log line prefix than Log_line_prefix, so it would make sense\nto pass the format string as a parameter to the function instead of\nrelying on the GUC global.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Java is clearly an example of money oriented programming\" (A. Stepanov)\n\n\n", "msg_date": "Wed, 29 Jun 2022 15:09:42 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Export log_line_prefix(); useful for emit_log_hook." }, { "msg_contents": "On Wed, Jun 29, 2022 at 03:09:42PM +0200, Alvaro Herrera wrote:\n> Hmm, maybe your hypothetical book would prefer to use a different\n> setting for log line prefix than Log_line_prefix, so it would make sense\n> to pass the format string as a parameter to the function instead of\n> relying on the GUC global.\n\n+1.\n--\nMichael", "msg_date": "Mon, 4 Jul 2022 15:54:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Export log_line_prefix(); useful for emit_log_hook." }, { "msg_contents": "On Mon, 2022-07-04 at 15:54 +0900, Michael Paquier wrote:\n> On Wed, Jun 29, 2022 at 03:09:42PM +0200, Alvaro Herrera wrote:\n> > Hmm, maybe your hypothetical book would prefer to use a different\n> > setting for log line prefix than Log_line_prefix, so it would make\n> > sense\n> > to pass the format string as a parameter to the function instead of\n> > relying on the GUC global.\n\nThat is nicer, attached.\n\nI also renamed the function log_status_format(), and made\nlog_line_prefix() a thin wrapper over that. 
I think that's less\nconfusing.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 04 Jul 2022 13:24:36 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Export log_line_prefix(); useful for emit_log_hook." } ]
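The refactoring settled on in this thread, a generic log_status_format(format, ...) that takes the format string as a parameter with log_line_prefix() as a thin wrapper over it, is easy to model. The sketch below is a hypothetical Python analogue, not the server code; it expands only a handful of the %-escapes PostgreSQL's log_line_prefix supports (%u user, %d database, %p PID, %a application name, %% literal percent).

```python
def format_log_prefix(fmt, info):
    """Expand a printf-style log_line_prefix format string.

    `info` supplies the per-backend values; only a few escapes are
    handled in this sketch.
    """
    out = []
    i = 0
    while i < len(fmt):
        ch = fmt[i]
        if ch != "%" or i + 1 == len(fmt):
            out.append(ch)
            i += 1
            continue
        esc = fmt[i + 1]
        if esc == "%":
            out.append("%")
        elif esc == "u":
            out.append(info.get("user", ""))
        elif esc == "d":
            out.append(info.get("database", ""))
        elif esc == "p":
            out.append(str(info.get("pid", "")))
        elif esc == "a":
            out.append(info.get("application_name", ""))
        else:
            # Unrecognized escape: kept verbatim in this sketch.
            out.append("%" + esc)
        i += 2
    return "".join(out)

print(format_log_prefix("[%p] %u@%d ", {"pid": 20915, "user": "rep", "database": "erp"}))
```

An emit_log_hook could call such a function with its own format string instead of the log_line_prefix GUC, which is the point Álvaro raised about passing the format as a parameter rather than relying on the global.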
[ { "msg_contents": "Patch attached.\n\nHelpful for debugging complex extension script problems.", "msg_date": "Tue, 28 Jun 2022 12:10:26 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Emit extra debug message when executing extension script." }, { "msg_contents": "On 28.06.22 21:10, Jeff Davis wrote:\n> +\tereport(DEBUG1, errmsg(\"executing extension script: %s\", filename));\n\nThis should either be elog or use errmsg_internal.\n\n\n", "msg_date": "Wed, 29 Jun 2022 14:26:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Emit extra debug message when executing extension script." }, { "msg_contents": "On Wed, Jun 29, 2022 at 9:26 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 28.06.22 21:10, Jeff Davis wrote:\n> > + ereport(DEBUG1, errmsg(\"executing extension script: %s\", filename));\n>\n> This should either be elog or use errmsg_internal.\n\nWhy?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Jun 2022 21:39:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Emit extra debug message when executing extension script." }, { "msg_contents": "On Wed, 2022-06-29 at 21:39 -0400, Robert Haas wrote:\n> > This should either be elog or use errmsg_internal.\n> \n> Why?\n\nI didn't see a response, so I'm still using ereport(). I attached a new\nversion though that doesn't emit the actual script filename; instead\njust the from/to version.\n\nThe output looks nicer and I don't have to worry about whether the user\nshould be able to know the share directory or not.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 01 Jul 2022 15:24:27 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Emit extra debug message when executing extension script." 
}, { "msg_contents": "On Fri, Jul 01, 2022 at 03:24:27PM -0700, Jeff Davis wrote:\n> +\t\tereport(DEBUG1, errmsg(\"executing extension update script from version '%s' to '%s'\", from_version, version));\n\nnitpick: I would suggest \"executing extension script for update from\nversion X to Y.\"\n\nI personally would rather this output the name of the file. If revealing\nthe directory is a concern, perhaps we could just trim everything but the\nfile name.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 15:33:33 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Emit extra debug message when executing extension script." }, { "msg_contents": "On Fri, 2022-07-01 at 15:33 -0700, Nathan Bossart wrote:\n> On Fri, Jul 01, 2022 at 03:24:27PM -0700, Jeff Davis wrote:\n> > +\t\tereport(DEBUG1, errmsg(\"executing extension update\n> > script from version '%s' to '%s'\", from_version, version));\n> \n> nitpick: I would suggest \"executing extension script for update from\n> version X to Y.\"\n\nThank you. Committed with minor modification to include the extension\nname.\n\nI did end up using Peter's suggestion. I reviewed other DEBUG messages\nand it seems nearly all use elog() or errmsg_internal().\n\n> I personally would rather this output the name of the file. If\n> revealing\n> the directory is a concern, perhaps we could just trim everything but\n> the\n> file name.\n\nI could have slightly refactored the code to do this, but it didn't\nquite seem worth it for a single debug message.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 02 Jul 2022 11:39:04 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Emit extra debug message when executing extension script." 
}, { "msg_contents": "On 2022-Jun-29, Robert Haas wrote:\n\n> On Wed, Jun 29, 2022 at 9:26 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> > On 28.06.22 21:10, Jeff Davis wrote:\n> > > + ereport(DEBUG1, errmsg(\"executing extension script: %s\", filename));\n> >\n> > This should either be elog or use errmsg_internal.\n> \n> Why?\n\nThe reason is that errmsg() marks the message for translation, and we\ndon't want to burden translators with messages that are of little\ninterest to most users. Using either elog() or errmsg_internal()\navoids that.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El sudor es la mejor cura para un pensamiento enfermo\" (Bardia)\n\n\n", "msg_date": "Mon, 4 Jul 2022 11:27:53 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Emit extra debug message when executing extension script." }, { "msg_contents": "On Mon, Jul 4, 2022 at 5:27 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Jun-29, Robert Haas wrote:\n> > On Wed, Jun 29, 2022 at 9:26 AM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > > On 28.06.22 21:10, Jeff Davis wrote:\n> > > > + ereport(DEBUG1, errmsg(\"executing extension script: %s\", filename));\n> > >\n> > > This should either be elog or use errmsg_internal.\n> >\n> > Why?\n>\n> The reason is that errmsg() marks the message for translation, and we\n> don't want to burden translators with messages that are of little\n> interest to most users. Using either elog() or errmsg_internal()\n> avoids that.\n\nYeah, I'm aware of that in general, but I'm not quite clear on how we\ndecide that. 
Do we take the view that all debug-level messages need\nnot be translated?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Jul 2022 14:43:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Emit extra debug message when executing extension script." }, { "msg_contents": "On 2022-Jul-05, Robert Haas wrote:\n\n> On Mon, Jul 4, 2022 at 5:27 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2022-Jun-29, Robert Haas wrote:\n\n> > > Why?\n> >\n> > The reason is that errmsg() marks the message for translation, and we\n> > don't want to burden translators with messages that are of little\n> > interest to most users. Using either elog() or errmsg_internal()\n> > avoids that.\n> \n> Yeah, I'm aware of that in general, but I'm not quite clear on how we\n> decide that. Do we take the view that all debug-level messages need\n> not be translated?\n\nYes. I don't know about others, but I do.\n\nI notice that we have a small number of other errmsg() uses in DEBUG\nmessages already. I don't think they're quite worth it. I mean, would\na user ever run amcheck with log level set to DEBUG, and care about\nany of these messages? I think I wouldn't care.\n\ncontrib/amcheck/verify_heapam.c: ereport(DEBUG1,\ncontrib/amcheck/verify_heapam.c- (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),\ncontrib/amcheck/verify_heapam.c- errmsg(\"cannot verify unlogged relation \\\"%s\\\" during recovery, skipping\",\n\ncontrib/amcheck/verify_nbtree.c: ereport(DEBUG1,\ncontrib/amcheck/verify_nbtree.c- (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),\ncontrib/amcheck/verify_nbtree.c- errmsg(\"cannot verify unlogged index \\\"%s\\\" during recovery, skipping\",\n\nWhy are these basic_archive messages translatable? 
Seems pointless.\n\ncontrib/basic_archive/basic_archive.c: ereport(DEBUG3,\ncontrib/basic_archive/basic_archive.c- (errmsg(\"archiving \\\"%s\\\" via basic_archive\", file)));\n\ncontrib/basic_archive/basic_archive.c: ereport(DEBUG3,\ncontrib/basic_archive/basic_archive.c- (errmsg(\"archive file \\\"%s\\\" already exists with identical contents\",\n\ncontrib/basic_archive/basic_archive.c: ereport(DEBUG1,\ncontrib/basic_archive/basic_archive.c- (errmsg(\"archived \\\"%s\\\" via basic_archive\", file)));\n\n\nWe also have a small number in the backend:\n\nsrc/backend/access/heap/vacuumlazy.c: ereport(DEBUG2,\nsrc/backend/access/heap/vacuumlazy.c- (errmsg(\"table \\\"%s\\\": removed %lld dead item identifiers in %u pages\",\nsrc/backend/access/heap/vacuumlazy.c- vacrel->relname, (long long) index, vacuumed_pages)));\n\nWhy is this one unconditional DEBUG2 instead of depending on\nLVRelState->message_level (ie. turn into INFO when VERBOSE is used),\nlike every other message from vacuum? It seems to me that if I say\nVERBOSE, then I may be interested in this message also. While at it,\nwhy are the index vacuuming routines not using LVRelState->message_level\neither but instead hardcode DEBUG2? Aren't they all mistakes?\n\n\nsrc/backend/replication/logical/worker.c: ereport(DEBUG1,\nsrc/backend/replication/logical/worker.c- (errmsg(\"logical replication apply worker for subscription \\\"%s\\\" two_phase is %s\",\nsrc/backend/replication/logical/worker.c- MySubscription->name,\n\nNot sure why anybody cares about this. Why not remove the message?\n\nsrc/backend/utils/activity/pgstat.c: ereport(DEBUG2,\nsrc/backend/utils/activity/pgstat.c- (errcode_for_file_access(),\nsrc/backend/utils/activity/pgstat.c- errmsg(\"unlinked permanent statistics file \\\"%s\\\"\",\n\nHmm, why is there an errcode here, after the operation succeeds? 
ISTM\nthis could be an elog().\n\n\nThen we have this one:\n\n\t\tereport(DEBUG1,\n\t\t\t\t(errcode(ERRCODE_INTERNAL_ERROR),\n\t\t\t\t errmsg(\"picksplit method for column %d of index \\\"%s\\\" failed\",\n\t\t\t\t\t\tattno + 1, RelationGetRelationName(r)),\n\t\t\t\t errhint(\"The index is not optimal. To optimize it, contact a developer, or try to use the column as the second one in the CREATE INDEX command.\")));\n\nI cannot understand how is DEBUG1 a useful log level for this message.\nHow is the user going to find out that there is a problem, when this\nmessage is hidden from them? Do we tell people to run their insert\nqueries for tables with GiST indexes under DEBUG1, in case the picksplit\nmethod fails, so that they can contact a developer? How many\nuser-defined picksplit methods have been improved to cope with this\nproblem, since commit 09368d23dbf4 added this bit in April 2009? How\nmany of them have been presenting the problem since then, and not been\nfixed because nobody has noticed that there is a problem?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 5 Jul 2022 21:26:00 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Emit extra debug message when executing extension script." } ]
[ { "msg_contents": "Hi,\n\nI found a comparaison bug when using the PostgreSQL::Version module. See:\n\n $ perl -I. -MPostgreSQL::Version -le '\n my $v = PostgreSQL::Version->new(\"9.6\");\n \n print \"not 9.6 > 9.0\" unless $v > 9.0;\n print \"not 9.6 < 9.0\" unless $v < 9.0;\n print \"9.6 <= 9.0\" if $v <= 9.0;\n print \"9.6 >= 9.0\" if $v >= 9.0;'\n not 9.6 > 9.0\n not 9.6 < 9.0\n 9.6 <= 9.0\n 9.6 >= 9.0\n\nWhen using < or >, 9.6 is neither greater or lesser than 9.0. \nWhen using <= or >=, 9.6 is equally greater and lesser than 9.0.\nThe bug does not show up if you compare with \"9.0\" instead of 9.0.\nThis bug is triggered with devel versions, eg. 14beta1 <=> 14.\n\nThe bug appears when both objects have a different number of digit in the\ninternal array representation:\n\n $ perl -I. -MPostgreSQL::Version -MData::Dumper -le '\n print Dumper(PostgreSQL::Version->new(\"9.0\")->{num});\n print Dumper(PostgreSQL::Version->new(9.0)->{num});\n print Dumper(PostgreSQL::Version->new(14)->{num});\n print Dumper(PostgreSQL::Version->new(\"14beta1\")->{num});'\n $VAR1 = [ '9', '0' ];\n $VAR1 = [ '9' ];\n $VAR1 = [ '14' ];\n $VAR1 = [ '14', -1 ];\n\nBecause of this, The following loop in \"_version_cmp\" is wrong because we are\ncomparing two versions with different size of 'num' array:\n\n\tfor (my $idx = 0;; $idx++)\n\t{\n\t\treturn 0 unless (defined $an->[$idx] && defined $bn->[$idx]);\n\t\treturn $an->[$idx] <=> $bn->[$idx]\n\t\t if ($an->[$idx] <=> $bn->[$idx]);\n\t}\n\n\nIf we want to keep this internal array representation, the only fix I can think\nof would be to always use a 4 element array defaulted to 0. Previous examples\nwould be:\n\n $VAR1 = [ 9, 0, 0, 0 ];\n $VAR1 = [ 9, 0, 0, 0 ];\n $VAR1 = [ 14, 0, 0, 0 ];\n $VAR1 = [ 14, 0, 0, -1 ];\n\nA better fix would be to store the version internally as version_num that are\ntrivial to compute and compare. 
Please, find in attachment an implementation of\nthis.\n\nThe patch is a bit bigger because it improved the devel version to support\nrc/beta/alpha comparison like 14rc2 > 14rc1.\n\nMoreover, it adds a bunch of TAP tests to check various use cases.\n\nRegards,", "msg_date": "Tue, 28 Jun 2022 22:53:25 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "On 2022-06-28 Tu 16:53, Jehan-Guillaume de Rorthais wrote:\n> Hi,\n>\n> I found a comparaison bug when using the PostgreSQL::Version module. See:\n>\n> $ perl -I. -MPostgreSQL::Version -le '\n> my $v = PostgreSQL::Version->new(\"9.6\");\n> \n> print \"not 9.6 > 9.0\" unless $v > 9.0;\n> print \"not 9.6 < 9.0\" unless $v < 9.0;\n> print \"9.6 <= 9.0\" if $v <= 9.0;\n> print \"9.6 >= 9.0\" if $v >= 9.0;'\n> not 9.6 > 9.0\n> not 9.6 < 9.0\n> 9.6 <= 9.0\n> 9.6 >= 9.0\n>\n> When using < or >, 9.6 is neither greater or lesser than 9.0. \n> When using <= or >=, 9.6 is equally greater and lesser than 9.0.\n> The bug does not show up if you compare with \"9.0\" instead of 9.0.\n> This bug is triggered with devel versions, eg. 14beta1 <=> 14.\n>\n> The bug appears when both objects have a different number of digit in the\n> internal array representation:\n>\n> $ perl -I. 
-MPostgreSQL::Version -MData::Dumper -le '\n> print Dumper(PostgreSQL::Version->new(\"9.0\")->{num});\n> print Dumper(PostgreSQL::Version->new(9.0)->{num});\n> print Dumper(PostgreSQL::Version->new(14)->{num});\n> print Dumper(PostgreSQL::Version->new(\"14beta1\")->{num});'\n> $VAR1 = [ '9', '0' ];\n> $VAR1 = [ '9' ];\n> $VAR1 = [ '14' ];\n> $VAR1 = [ '14', -1 ];\n>\n> Because of this, The following loop in \"_version_cmp\" is wrong because we are\n> comparing two versions with different size of 'num' array:\n>\n> \tfor (my $idx = 0;; $idx++)\n> \t{\n> \t\treturn 0 unless (defined $an->[$idx] && defined $bn->[$idx]);\n> \t\treturn $an->[$idx] <=> $bn->[$idx]\n> \t\t if ($an->[$idx] <=> $bn->[$idx]);\n> \t}\n>\n>\n> If we want to keep this internal array representation, the only fix I can think\n> of would be to always use a 4 element array defaulted to 0. Previous examples\n> would be:\n>\n> $VAR1 = [ 9, 0, 0, 0 ];\n> $VAR1 = [ 9, 0, 0, 0 ];\n> $VAR1 = [ 14, 0, 0, 0 ];\n> $VAR1 = [ 14, 0, 0, -1 ];\n>\n> A better fix would be to store the version internally as version_num that are\n> trivial to compute and compare. Please, find in attachment an implementation of\n> this.\n>\n> The patch is a bit bigger because it improved the devel version to support\n> rc/beta/alpha comparison like 14rc2 > 14rc1.\n>\n> Moreover, it adds a bunch of TAP tests to check various use cases.\n\n\nNice catch, but this looks like massive overkill. I think we can very\nsimply fix the test in just a few lines of code, instead of a 190 line\nfix and a 130 line TAP test.\n\nIt was never intended to be able to compare markers like rc1 vs rc2, and\nI don't see any need for it. 
If you can show me a sane use case I'll\nhave another look, but right now it seems quite unnecessary.\n\nHere's my proposed fix.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 28 Jun 2022 18:17:40 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "On Tue, Jun 28, 2022 at 06:17:40PM -0400, Andrew Dunstan wrote:\n> Nice catch, but this looks like massive overkill. I think we can very\n> simply fix the test in just a few lines of code, instead of a 190 line\n> fix and a 130 line TAP test.\n> \n> It was never intended to be able to compare markers like rc1 vs rc2, and\n> I don't see any need for it. If you can show me a sane use case I'll\n> have another look, but right now it seems quite unnecessary.\n> \n> Here's my proposed fix.\n\nDo you think that we should add some tests for that? One place that\ncomes into mind is test_misc/, and this would be cheap as this does\nnot require setting up a node or such.\n--\nMichael", "msg_date": "Wed, 29 Jun 2022 10:20:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "On Tue, 28 Jun 2022 18:17:40 -0400\nAndrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 2022-06-28 Tu 16:53, Jehan-Guillaume de Rorthais wrote:\n> > ...\n> > A better fix would be to store the version internally as version_num that\n> > are trivial to compute and compare. Please, find in attachment an\n> > implementation of this.\n> >\n> > The patch is a bit bigger because it improved the devel version to support\n> > rc/beta/alpha comparison like 14rc2 > 14rc1.\n> >\n> > Moreover, it adds a bunch of TAP tests to check various use cases. \n> \n> \n> Nice catch, but this looks like massive overkill. 
I think we can very\n> simply fix the test in just a few lines of code, instead of a 190 line\n> fix and a 130 line TAP test.\n\nI explained why the patch was a little bit larger than required: it fixes the\nbugs and do a little bit more. The _version_cmp sub is shorter and easier to\nunderstand, I use multi-line code where I could probably fold them in a\none-liner, added some comments... Anyway, I don't feel the number of line\nchanged is \"massive\". But I can probably remove some code and shrink some other\nif it is really important...\n\nMoreover, to be honest, I don't mind the number of additional lines of TAP\ntests. Especially since it runs really, really fast and doesn't hurt day-to-day\ndevs as it is independent from other TAP tests anyway. It could be 1k, if it\nruns fast, is meaningful and helps avoiding futur regressions, I would welcome\nthe addition.\n\nIf we really want to save some bytes, I have a two lines worth of code fix that\nlooks more readable to me than fixing _version_cmp:\n\n+++ b/src/test/perl/PostgreSQL/Version.pm\n@@ -92,9 +92,13 @@ sub new\n # Split into an array\n my @numbers = split(/\\./, $arg);\n \n+ # make sure all digit of the array-represented version are set so we can\n+ # keep _version_cmp code as a \"simple\" digit-to-digit comparison loop\n+ $numbers[$_] += 0 for 0..3;\n+\n # Treat development versions as having a minor/micro version one less than\n # the first released version of that branch.\n- push @numbers, -1 if ($devel);\n+ $numbers[3] = -1 if $devel;\n \n $devel ||= \"\";\n \nBut again, in my humble opinion, the internal version array representation is\nmore a burden we should replace by the version_num...\n\n> It was never intended to be able to compare markers like rc1 vs rc2, and\n> I don't see any need for it. 
If you can show me a sane use case I'll\n> have another look, but right now it seems quite unnecessary.\n\nI don't have a practical use case right now, but I thought the module\nwould be more complete with these little few more line of codes. Now, keep in\nmind these TAP modules might help external projects, not just core.\n\nIn fact, I wonder what was your original use case to support\ndevel/alpha/beta/rc versions, especially since it was actually not working?\nShould we just get rid of this altogether and wait for an actual use case?\n\nCheers,\n\n\n", "msg_date": "Wed, 29 Jun 2022 11:09:37 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "\nOn 2022-06-29 We 05:09, Jehan-Guillaume de Rorthais wrote:\n> On Tue, 28 Jun 2022 18:17:40 -0400\n> Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>> On 2022-06-28 Tu 16:53, Jehan-Guillaume de Rorthais wrote:\n>>> ...\n>>> A better fix would be to store the version internally as version_num that\n>>> are trivial to compute and compare. Please, find in attachment an\n>>> implementation of this.\n>>>\n>>> The patch is a bit bigger because it improved the devel version to support\n>>> rc/beta/alpha comparison like 14rc2 > 14rc1.\n>>>\n>>> Moreover, it adds a bunch of TAP tests to check various use cases. \n>>\n>> Nice catch, but this looks like massive overkill. I think we can very\n>> simply fix the test in just a few lines of code, instead of a 190 line\n>> fix and a 130 line TAP test.\n> I explained why the patch was a little bit larger than required: it fixes the\n> bugs and do a little bit more. The _version_cmp sub is shorter and easier to\n> understand, I use multi-line code where I could probably fold them in a\n> one-liner, added some comments... Anyway, I don't feel the number of line\n> changed is \"massive\". 
But I can probably remove some code and shrink some other\n> if it is really important...\n>\n> Moreover, to be honest, I don't mind the number of additional lines of TAP\n> tests. Especially since it runs really, really fast and doesn't hurt day-to-day\n> devs as it is independent from other TAP tests anyway. It could be 1k, if it\n> runs fast, is meaningful and helps avoiding futur regressions, I would welcome\n> the addition.\n\n\nI don't see the point of having a TAP test at all. We have TAP tests for\ntesting the substantive products we test, not for the test suite\ninfrastructure. Otherwise, where will we stop? Shall we have tests for\nthe things that test the test suite?\n\n\n>\n> If we really want to save some bytes, I have a two lines worth of code fix that\n> looks more readable to me than fixing _version_cmp:\n>\n> +++ b/src/test/perl/PostgreSQL/Version.pm\n> @@ -92,9 +92,13 @@ sub new\n> # Split into an array\n> my @numbers = split(/\\./, $arg);\n> \n> + # make sure all digit of the array-represented version are set so we can\n> + # keep _version_cmp code as a \"simple\" digit-to-digit comparison loop\n> + $numbers[$_] += 0 for 0..3;\n> +\n> # Treat development versions as having a minor/micro version one less than\n> # the first released version of that branch.\n> - push @numbers, -1 if ($devel);\n> + $numbers[3] = -1 if $devel;\n> \n> $devel ||= \"\";\n\n\nI don't see why this is any more readable.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 3 Jul 2022 10:40:21 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "On Sun, 3 Jul 2022 10:40:21 -0400\nAndrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 2022-06-29 We 05:09, Jehan-Guillaume de Rorthais wrote:\n> > On Tue, 28 Jun 2022 18:17:40 -0400\n> > Andrew Dunstan <andrew@dunslane.net> wrote:\n> > \n> >> On 
2022-06-28 Tu 16:53, Jehan-Guillaume de Rorthais wrote: \n> >>> ...\n> >>> A better fix would be to store the version internally as version_num that\n> >>> are trivial to compute and compare. Please, find in attachment an\n> >>> implementation of this.\n> >>>\n> >>> The patch is a bit bigger because it improved the devel version to support\n> >>> rc/beta/alpha comparison like 14rc2 > 14rc1.\n> >>>\n> >>> Moreover, it adds a bunch of TAP tests to check various use cases. \n> >>\n> >> Nice catch, but this looks like massive overkill. I think we can very\n> >> simply fix the test in just a few lines of code, instead of a 190 line\n> >> fix and a 130 line TAP test. \n> > I explained why the patch was a little bit larger than required: it fixes\n> > the bugs and do a little bit more. The _version_cmp sub is shorter and\n> > easier to understand, I use multi-line code where I could probably fold\n> > them in a one-liner, added some comments... Anyway, I don't feel the number\n> > of line changed is \"massive\". But I can probably remove some code and\n> > shrink some other if it is really important...\n> >\n> > Moreover, to be honest, I don't mind the number of additional lines of TAP\n> > tests. Especially since it runs really, really fast and doesn't hurt\n> > day-to-day devs as it is independent from other TAP tests anyway. It could\n> > be 1k, if it runs fast, is meaningful and helps avoiding futur regressions,\n> > I would welcome the addition. \n> \n> \n> I don't see the point of having a TAP test at all. We have TAP tests for\n> testing the substantive products we test, not for the test suite\n> infrastructure. Otherwise, where will we stop? Shall we have tests for\n> the things that test the test suite?\n\nTons of perl module have regression tests. 
When questioning where testing\nshould stop, it seems the Test::More module itself is not the last frontier:\nhttps://github.com/Test-More/test-more/tree/master/t\n\nMoreover, the PostgreSQL::Version is not a TAP test module, but a module to\ndeal with PostgreSQL versions and compare them.\n\nTesting makes development faster as well when it comes to test the code.\nInstead of testing vaguely manually, you can test a whole bunch of situations\nand add accumulate some more when you think about a new one or when a bug is\nreported. Having TAP test helps to make sure the code work as expected.\n\nIt helped me when creating my patch. With all due respect, I just don't\nunderstand your arguments against them. The number of lines or questioning when\ntesting should stop doesn't hold much.\n\n> > If we really want to save some bytes, I have a two lines worth of code fix\n> > that looks more readable to me than fixing _version_cmp:\n> >\n> > +++ b/src/test/perl/PostgreSQL/Version.pm\n> > @@ -92,9 +92,13 @@ sub new\n> > # Split into an array\n> > my @numbers = split(/\\./, $arg);\n> > \n> > + # make sure all digit of the array-represented version are set so\n> > we can\n> > + # keep _version_cmp code as a \"simple\" digit-to-digit comparison\n> > loop\n> > + $numbers[$_] += 0 for 0..3;\n> > +\n> > # Treat development versions as having a minor/micro version one\n> > less than # the first released version of that branch.\n> > - push @numbers, -1 if ($devel);\n> > + $numbers[3] = -1 if $devel;\n> > \n> > $devel ||= \"\"; \n> \n> I don't see why this is any more readable.\n\nThe _version_cmp is much more readable.\n\nBut anyway, this is not the point. 
Using an array to compare versions where we\ncan use version_num seems like useless and buggy convolutions to me.\n\nRegards,\n\n\n", "msg_date": "Sun, 3 Jul 2022 22:12:13 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "\nOn 2022-07-03 Su 16:12, Jehan-Guillaume de Rorthais wrote:\n> On Sun, 3 Jul 2022 10:40:21 -0400\n> Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>> On 2022-06-29 We 05:09, Jehan-Guillaume de Rorthais wrote:\n>>> On Tue, 28 Jun 2022 18:17:40 -0400\n>>> Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> \n>>>> On 2022-06-28 Tu 16:53, Jehan-Guillaume de Rorthais wrote: \n>>>>> ...\n>>>>> A better fix would be to store the version internally as version_num that\n>>>>> are trivial to compute and compare. Please, find in attachment an\n>>>>> implementation of this.\n>>>>>\n>>>>> The patch is a bit bigger because it improved the devel version to support\n>>>>> rc/beta/alpha comparison like 14rc2 > 14rc1.\n>>>>>\n>>>>> Moreover, it adds a bunch of TAP tests to check various use cases. \n>>>> Nice catch, but this looks like massive overkill. I think we can very\n>>>> simply fix the test in just a few lines of code, instead of a 190 line\n>>>> fix and a 130 line TAP test. \n>>> I explained why the patch was a little bit larger than required: it fixes\n>>> the bugs and do a little bit more. The _version_cmp sub is shorter and\n>>> easier to understand, I use multi-line code where I could probably fold\n>>> them in a one-liner, added some comments... Anyway, I don't feel the number\n>>> of line changed is \"massive\". But I can probably remove some code and\n>>> shrink some other if it is really important...\n>>>\n>>> Moreover, to be honest, I don't mind the number of additional lines of TAP\n>>> tests. 
Especially since it runs really, really fast and doesn't hurt\n>>> day-to-day devs as it is independent from other TAP tests anyway. It could\n>>> be 1k, if it runs fast, is meaningful and helps avoiding futur regressions,\n>>> I would welcome the addition. \n>>\n>> I don't see the point of having a TAP test at all. We have TAP tests for\n>> testing the substantive products we test, not for the test suite\n>> infrastructure. Otherwise, where will we stop? Shall we have tests for\n>> the things that test the test suite?\n> Tons of perl module have regression tests. When questioning where testing\n> should stop, it seems the Test::More module itself is not the last frontier:\n> https://github.com/Test-More/test-more/tree/master/t\n>\n> Moreover, the PostgreSQL::Version is not a TAP test module, but a module to\n> deal with PostgreSQL versions and compare them.\n>\n> Testing makes development faster as well when it comes to test the code.\n> Instead of testing vaguely manually, you can test a whole bunch of situations\n> and add accumulate some more when you think about a new one or when a bug is\n> reported. Having TAP test helps to make sure the code work as expected.\n>\n> It helped me when creating my patch. With all due respect, I just don't\n> understand your arguments against them. The number of lines or questioning when\n> testing should stop doesn't hold much.\n\n\nThere is not a single TAP test in our source code that is aimed at\ntesting our test infrastructure as opposed to testing what we are\nactually in the business of building, and I'm not about to add one. This\nis quite different from, say, CPAN modules.\n\nEvery added test consumes buildfarm cycles and space on the buildfarm\nserver for the report, be it ever so small. Every added test needs\nmaintenance, be it ever so small. 
There's no such thing as a free test\n(apologies to Heinlein and others).\n\n\n>\n>>> If we really want to save some bytes, I have a two lines worth of code fix\n>>> that looks more readable to me than fixing _version_cmp:\n>>>\n>>> +++ b/src/test/perl/PostgreSQL/Version.pm\n>>> @@ -92,9 +92,13 @@ sub new\n>>> # Split into an array\n>>> my @numbers = split(/\\./, $arg);\n>>> \n>>> + # make sure all digit of the array-represented version are set so\n>>> we can\n>>> + # keep _version_cmp code as a \"simple\" digit-to-digit comparison\n>>> loop\n>>> + $numbers[$_] += 0 for 0..3;\n>>> +\n>>> # Treat development versions as having a minor/micro version one\n>>> less than # the first released version of that branch.\n>>> - push @numbers, -1 if ($devel);\n>>> + $numbers[3] = -1 if $devel;\n>>> \n>>> $devel ||= \"\"; \n>> I don't see why this is any more readable.\n> The _version_cmp is much more readable.\n>\n> But anyway, this is not the point. Using an array to compare versions where we\n> can use version_num seems like useless and buggy convolutions to me.\n>\n\nI think we'll just have to agree to disagree about it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 5 Jul 2022 09:59:42 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "On Tue, 5 Jul 2022 09:59:42 -0400\nAndrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 2022-07-03 Su 16:12, Jehan-Guillaume de Rorthais wrote:\n> > On Sun, 3 Jul 2022 10:40:21 -0400\n> > Andrew Dunstan <andrew@dunslane.net> wrote:\n> > \n> >> On 2022-06-29 We 05:09, Jehan-Guillaume de Rorthais wrote: \n> >>> On Tue, 28 Jun 2022 18:17:40 -0400\n> >>> Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>> \n> >>>> On 2022-06-28 Tu 16:53, Jehan-Guillaume de Rorthais wrote: \n> >>>>> ...\n> >>>>> A better fix would be to store the version internally as 
version_num\n> >>>>> that are trivial to compute and compare. Please, find in attachment an\n> >>>>> implementation of this.\n> >>>>>\n> >>>>> The patch is a bit bigger because it improved the devel version to\n> >>>>> support rc/beta/alpha comparison like 14rc2 > 14rc1.\n> >>>>>\n> >>>>> Moreover, it adds a bunch of TAP tests to check various use cases. \n> >>>> Nice catch, but this looks like massive overkill. I think we can very\n> >>>> simply fix the test in just a few lines of code, instead of a 190 line\n> >>>> fix and a 130 line TAP test. \n> >>> I explained why the patch was a little bit larger than required: it fixes\n> >>> the bugs and do a little bit more. The _version_cmp sub is shorter and\n> >>> easier to understand, I use multi-line code where I could probably fold\n> >>> them in a one-liner, added some comments... Anyway, I don't feel the\n> >>> number of line changed is \"massive\". But I can probably remove some code\n> >>> and shrink some other if it is really important...\n> >>>\n> >>> Moreover, to be honest, I don't mind the number of additional lines of TAP\n> >>> tests. Especially since it runs really, really fast and doesn't hurt\n> >>> day-to-day devs as it is independent from other TAP tests anyway. It could\n> >>> be 1k, if it runs fast, is meaningful and helps avoiding futur\n> >>> regressions, I would welcome the addition. \n> >>\n> >> I don't see the point of having a TAP test at all. We have TAP tests for\n> >> testing the substantive products we test, not for the test suite\n> >> infrastructure. Otherwise, where will we stop? Shall we have tests for\n> >> the things that test the test suite? \n> > Tons of perl module have regression tests. 
When questioning where testing\n> > should stop, it seems the Test::More module itself is not the last frontier:\n> > https://github.com/Test-More/test-more/tree/master/t\n> >\n> > Moreover, the PostgreSQL::Version is not a TAP test module, but a module to\n> > deal with PostgreSQL versions and compare them.\n> >\n> > Testing makes development faster as well when it comes to test the code.\n> > Instead of testing vaguely manually, you can test a whole bunch of\n> > situations and add accumulate some more when you think about a new one or\n> > when a bug is reported. Having TAP test helps to make sure the code work as\n> > expected.\n> >\n> > It helped me when creating my patch. With all due respect, I just don't\n> > understand your arguments against them. The number of lines or questioning\n> > when testing should stop doesn't hold much. \n> \n> \n> There is not a single TAP test in our source code that is aimed at\n> testing our test infrastructure as opposed to testing what we are\n> actually in the business of building, and I'm not about to add one.\n\nWhatever, it helped me during the dev process of fixing this bug. Remove\nthem if you are uncomfortable with them.\n\n> Every added test consumes buildfarm cycles and space on the buildfarm\n> server for the report, be it ever so small.\n\nThey were not supposed to enter the buildfarm cycles. I wrote it earlier, they\ndo not interfere with day-to-day dev activity.\n\n> Every added test needs maintenance, be it ever so small. There's no such\n> thing as a free test (apologies to Heinlein and others).\n\nThis is the first argument I can understand.\n\n> > ...\n> > But anyway, this is not the point. 
Using an array to compare versions where\n> > we can use version_num seems like useless and buggy convolutions to me.\n> \n> I think we'll just have to agree to disagree about it.\n\nNoted.\n\nCheers,\n\n\n", "msg_date": "Wed, 6 Jul 2022 11:54:10 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "On Tue, Jun 28, 2022 at 06:17:40PM -0400, Andrew Dunstan wrote:\n> Nice catch, but this looks like massive overkill. I think we can very\n> simply fix the test in just a few lines of code, instead of a 190 line\n> fix and a 130 line TAP test.\n> \n> It was never intended to be able to compare markers like rc1 vs rc2, and\n> I don't see any need for it. If you can show me a sane use case I'll\n> have another look, but right now it seems quite unnecessary.\n> \n> Here's my proposed fix.\n> \n> diff --git a/src/test/perl/PostgreSQL/Version.pm b/src/test/perl/PostgreSQL/Version.pm\n> index 8f70491189..8d4dbbf694 100644\n> --- a/src/test/perl/PostgreSQL/Version.pm\n\nIs this still an outstanding issue ?\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 3 Nov 2022 13:11:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "On Thu, 3 Nov 2022 13:11:18 -0500\nJustin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Jun 28, 2022 at 06:17:40PM -0400, Andrew Dunstan wrote:\n> > Nice catch, but this looks like massive overkill. I think we can very\n> > simply fix the test in just a few lines of code, instead of a 190 line\n> > fix and a 130 line TAP test.\n> > \n> > It was never intended to be able to compare markers like rc1 vs rc2, and\n> > I don't see any need for it. 
If you can show me a sane use case I'll\n> > have another look, but right now it seems quite unnecessary.\n> > \n> > Here's my proposed fix.\n> > \n> > diff --git a/src/test/perl/PostgreSQL/Version.pm\n> > b/src/test/perl/PostgreSQL/Version.pm index 8f70491189..8d4dbbf694 100644\n> > --- a/src/test/perl/PostgreSQL/Version.pm \n> \n> Is this still an outstanding issue ?\n\nThe issue still exists on current HEAD:\n\n $ perl -Isrc/test/perl/ -MPostgreSQL::Version -le \\\n 'print \"bug\" if PostgreSQL::Version->new(\"9.6\") <= 9.0'\n bug\n\nRegards,\n\n\n\n", "msg_date": "Fri, 4 Nov 2022 15:06:29 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "\nOn 2022-11-04 Fr 10:06, Jehan-Guillaume de Rorthais wrote:\n> On Thu, 3 Nov 2022 13:11:18 -0500\n> Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n>> On Tue, Jun 28, 2022 at 06:17:40PM -0400, Andrew Dunstan wrote:\n>>> Nice catch, but this looks like massive overkill. I think we can very\n>>> simply fix the test in just a few lines of code, instead of a 190 line\n>>> fix and a 130 line TAP test.\n>>>\n>>> It was never intended to be able to compare markers like rc1 vs rc2, and\n>>> I don't see any need for it. If you can show me a sane use case I'll\n>>> have another look, but right now it seems quite unnecessary.\n>>>\n>>> Here's my proposed fix.\n>>>\n>>> diff --git a/src/test/perl/PostgreSQL/Version.pm\n>>> b/src/test/perl/PostgreSQL/Version.pm index 8f70491189..8d4dbbf694 100644\n>>> --- a/src/test/perl/PostgreSQL/Version.pm \n>> Is this still an outstanding issue ?\n> The issue still exists on current HEAD:\n>\n> $ perl -Isrc/test/perl/ -MPostgreSQL::Version -le \\\n> 'print \"bug\" if PostgreSQL::Version->new(\"9.6\") <= 9.0'\n> bug\n>\n> Regards,\n>\n\nOops. this slipped off mt radar. 
I'll apply a fix shortly, thanks for\nthe reminder.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 17:11:00 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" }, { "msg_contents": "\nOn 2022-11-17 Th 17:11, Andrew Dunstan wrote:\n> On 2022-11-04 Fr 10:06, Jehan-Guillaume de Rorthais wrote:\n>> On Thu, 3 Nov 2022 13:11:18 -0500\n>> Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>>> On Tue, Jun 28, 2022 at 06:17:40PM -0400, Andrew Dunstan wrote:\n>>>> Nice catch, but this looks like massive overkill. I think we can very\n>>>> simply fix the test in just a few lines of code, instead of a 190 line\n>>>> fix and a 130 line TAP test.\n>>>>\n>>>> It was never intended to be able to compare markers like rc1 vs rc2, and\n>>>> I don't see any need for it. If you can show me a sane use case I'll\n>>>> have another look, but right now it seems quite unnecessary.\n>>>>\n>>>> Here's my proposed fix.\n>>>>\n>>>> diff --git a/src/test/perl/PostgreSQL/Version.pm\n>>>> b/src/test/perl/PostgreSQL/Version.pm index 8f70491189..8d4dbbf694 100644\n>>>> --- a/src/test/perl/PostgreSQL/Version.pm \n>>> Is this still an outstanding issue ?\n>> The issue still exists on current HEAD:\n>>\n>> $ perl -Isrc/test/perl/ -MPostgreSQL::Version -le \\\n>> 'print \"bug\" if PostgreSQL::Version->new(\"9.6\") <= 9.0'\n>> bug\n>>\n>> Regards,\n>>\n> Oops. this slipped off mt radar. I'll apply a fix shortly, thanks for\n> the reminder.\n>\n>\n\nDone.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 18 Nov 2022 08:50:47 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Fix proposal for comparaison bugs in PostgreSQL::Version" } ]
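The comparison bug fixed in the thread above can be sketched outside Perl. As the repro (`PostgreSQL::Version->new("9.6") <= 9.0` printing "bug") and the proposed two-line fix ("make sure all digits of the array-represented version are set") suggest, the part-by-part comparison loop mishandled versions whose part arrays had different lengths, and a numeric literal like `9.0` stringifies to `"9"` in Perl, so it arrives with a single part. Below is a minimal, hypothetical re-implementation of both the buggy and the zero-padded comparison in Python; the function names are invented for illustration and this is not the actual PostgreSQL::Version API:

```python
def parse(v):
    # "9.6" -> [9, 6]. In Perl the numeric literal 9.0 stringifies
    # to "9", so the right-hand side of `<= 9.0` effectively has a
    # single part; we pass "9" below to mirror that.
    return [int(p) for p in str(v).split(".")]

def buggy_cmp(a, b):
    """Part-by-part comparison that gives up when either array ends."""
    a, b = parse(a), parse(b)
    for i in range(max(len(a), len(b))):
        if i >= len(a) or i >= len(b):
            return 0  # ran out of parts: wrongly declared equal
        if a[i] != b[i]:
            return (a[i] > b[i]) - (a[i] < b[i])
    return 0

def fixed_cmp(a, b):
    """Pad both part arrays with zeros so every slot is compared."""
    a, b = parse(a), parse(b)
    width = max(len(a), len(b))
    a += [0] * (width - len(a))
    b += [0] * (width - len(b))
    return (a > b) - (a < b)

print(buggy_cmp("9.6", "9") <= 0)  # True: the reported "9.6 <= 9.0" bug
print(fixed_cmp("9.6", "9") <= 0)  # False: 9.6 correctly sorts after 9.0
```

The real module additionally treats development versions (devel/rc/beta) as one less than the first release of that branch, per the patch quoted above; that detail is omitted from this sketch.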
[ { "msg_contents": "Here's a patch to clarify the BRIN indexes documentation, particularly with\nregards\nto autosummarize, vacuum and autovacuum. It basically breaks down a big\nblob of a\nparagraph into multiple paragraphs for clarity, plus explicitly tells how\nsummarization\nhappens manually or automatically.\n\nI also added cross-references to various relevant sections, including the\ncreate index\npage.\n\nOn this topic... I'm not familiar with with the internals of BRIN indexes\nand in\nbackend/access/common/reloptions.c I see:\n\n {\n \"autosummarize\",\n \"Enables automatic summarization on this BRIN index\",\n RELOPT_KIND_BRIN,\n AccessExclusiveLock\n },\n\nIs the exclusive lock on the index why autosummarize is off by default?\n\nWhat would be the downside (if any) of having autosummarize=on by default?\n\nRoberto\n\n--\nCrunchy Data - passion for open source PostgreSQL", "msg_date": "Tue, 28 Jun 2022 17:22:34 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": true, "msg_subject": "doc: BRIN indexes and autosummarize" }, { "msg_contents": "On Tue, Jun 28, 2022 at 05:22:34PM -0600, Roberto Mello wrote:\n> Here's a patch to clarify the BRIN indexes documentation, particularly with\n> regards to autosummarize, vacuum and autovacuum. It basically breaks down a\n> big blob of a paragraph into multiple paragraphs for clarity, plus explicitly\n> tells how summarization happens manually or automatically.\n\nSee also this older thread\nhttps://www.postgresql.org/message-id/flat/20220224193520.GY9008@telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 29 Jun 2022 07:04:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "On 2022-Jun-28, Roberto Mello wrote:\n\n> Here's a patch to clarify the BRIN indexes documentation, particularly with\n> regards to autosummarize, vacuum and autovacuum. 
It basically breaks\n> down a big blob of a paragraph into multiple paragraphs for clarity,\n> plus explicitly tells how summarization happens manually or\n> automatically.\n\n[Some of] these additions are wrong actually. It says that autovacuum\nwill not summarize new entries; but it does. If you just let the table\nsit idle, any autovacuum run that cleans the table will also summarize\nany ranges that need summarization.\n\nWhat 'autosummarization=off' means is that the behavior to trigger an\nimmediate summarization of a range once it becomes full is not default.\nThis is very different.\n\nAs for the new <para></para>s that you added, I'd say they're\nstylistically wrong. Each paragraph is supposed to be one fully\ncontained idea; what these tags do is split each idea across several\nsmaller paragraphs. This is likely subjective though.\n\n> On this topic... I'm not familiar with with the internals of BRIN\n> indexes and in backend/access/common/reloptions.c I see:\n> \n> {\n> \"autosummarize\",\n> \"Enables automatic summarization on this BRIN index\",\n> RELOPT_KIND_BRIN,\n> AccessExclusiveLock\n> },\n> \n> Is the exclusive lock on the index why autosummarize is off by default?\n\nNo. The lock level mentioned here is what needs to be taken in order to\nchange the value of this option.\n\n> What would be the downside (if any) of having autosummarize=on by default?\n\nI'm not aware of any. Maybe we should turn it on by default.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 4 Jul 2022 17:20:11 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "What about this?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Java is clearly an example of money oriented programming\" (A. 
Stepanov)", "msg_date": "Mon, 4 Jul 2022 21:38:42 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "On Mon, Jul 04, 2022 at 09:38:42PM +0200, Alvaro Herrera wrote:\n> What about this?\n> \n> -- \n> Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n> \"Java is clearly an example of money oriented programming\" (A. Stepanov)\n\n> diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml\n> index caf1ea4cef..0a715d41c7 100644\n> --- a/doc/src/sgml/brin.sgml\n> +++ b/doc/src/sgml/brin.sgml\n> @@ -73,31 +73,55 @@\n> summarized range, that range does not automatically acquire a summary\n> tuple; those tuples remain unsummarized until a summarization run is\n> invoked later, creating initial summaries.\n> - This process can be invoked manually using the\n> - <function>brin_summarize_range(regclass, bigint)</function> or\n> - <function>brin_summarize_new_values(regclass)</function> functions;\n> - automatically when <command>VACUUM</command> processes the table;\n> - or by automatic summarization executed by autovacuum, as insertions\n> - occur. 
(This last trigger is disabled by default and can be enabled\n> - with the <literal>autosummarize</literal> parameter.)\n> - Conversely, a range can be de-summarized using the\n> - <function>brin_desummarize_range(regclass, bigint)</function> function,\n> - which is useful when the index tuple is no longer a very good\n> - representation because the existing values have changed.\n> + </para>\n> +\n\nI feel that somewhere in this paragraph it should be mentioned that is\noff by default.\n\notherwise, +1\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 4 Jul 2022 15:49:52 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "On Mon, Jul 04, 2022 at 09:38:42PM +0200, Alvaro Herrera wrote:\n> + There are several triggers for initial summarization of a page range\n> + to occur. If the table is vacuumed, either because\n> + <xref linkend=\"sql-vacuum\" /> has been manually invoked or because\n> + autovacuum causes it,\n> + all existing unsummarized page ranges are summarized.\n\nI'd say \"If the table is vacuumed manually or by autovacuum, ...\"\n(Or \"either manually or by autovacuum, ...\")\n\n> + Also, if the index has the\n> + <xref linkend=\"index-reloption-autosummarize\"/> parameter set to on,\n\nMaybe say \"If the autovacuum parameter is enabled\" (this may avoid needing to\nrevise it later if we change the default).\n\n> + then any run of autovacuum in the database will summarize all\n\nI'd avoid saying \"run\" and instead say \"then anytime autovacuum runs in that\ndatabase, all ...\"\n\n> + unsummarized page ranges that have been completely filled recently,\n> + regardless of whether the table is processed by autovacuum for other\n> + reasons; see below.\n\nsay \"whether the table itself\" and remove \"for other reasons\" ?\n\n> <para>\n> When autosummarization is 
enabled, each time a page range is filled a\n\nMaybe: filled comma\n\n> - request is sent to autovacuum for it to execute a targeted summarization\n> - for that range, to be fulfilled at the end of the next worker run on the\n> - same database. If the request queue is full, the request is not recorded\n> - and a message is sent to the server log:\n> + request is sent to <literal>autovacuum</literal> for it to execute a targeted\n> + summarization for that range, to be fulfilled at the end of the next\n> + autovacuum worker run on the same database. If the request queue is full, the\n\n\"to be fulfilled the next time an autovacuum worker finishes running in that\ndatabase.\"\n\nor\n\n\"to be fulfilled by an autovacuum worker the next it finishes running in that\ndatabase.\"\n\n> +++ b/doc/src/sgml/ref/create_index.sgml\n> @@ -580,6 +580,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] <replaceable class=\n> <para>\n> Defines whether a summarization run is invoked for the previous page\n> range whenever an insertion is detected on the next one.\n> + See <xref linkend=\"brin-operation\"/> for more details.\n> + The default is <literal>off</literal>.\n\nMaybe \"invoked\" should say \"queued\" ?\n\nAlso, a reminder that this was never addressed (I wish the project had a way to\nkeep track of known issues).\n\nhttps://www.postgresql.org/message-id/20201113160007.GQ30691@telsasoft.com\n|error_severity of brin work item\n|left | could not open relation with OID 292103095\n|left | processing work entry for relation \"ts.child.alarms_202010_alarm_clear_time_idx\"\n|Those happen following a REINDEX job on that index.\n\nThis inline patch includes my changes as well as yours.\nAnd the attached patch is my changes only.\n\ndiff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml\nindex caf1ea4cef1..90897a4af07 100644\n--- a/doc/src/sgml/brin.sgml\n+++ b/doc/src/sgml/brin.sgml\n@@ -73,31 +73,55 @@\n summarized range, that range does not automatically acquire 
a summary\n tuple; those tuples remain unsummarized until a summarization run is\n invoked later, creating initial summaries.\n- This process can be invoked manually using the\n- <function>brin_summarize_range(regclass, bigint)</function> or\n- <function>brin_summarize_new_values(regclass)</function> functions;\n- automatically when <command>VACUUM</command> processes the table;\n- or by automatic summarization executed by autovacuum, as insertions\n- occur. (This last trigger is disabled by default and can be enabled\n- with the <literal>autosummarize</literal> parameter.)\n- Conversely, a range can be de-summarized using the\n- <function>brin_desummarize_range(regclass, bigint)</function> function,\n- which is useful when the index tuple is no longer a very good\n- representation because the existing values have changed.\n </para>\n \n <para>\n- When autosummarization is enabled, each time a page range is filled a\n- request is sent to autovacuum for it to execute a targeted summarization\n- for that range, to be fulfilled at the end of the next worker run on the\n- same database. 
If the request queue is full, the request is not recorded\n- and a message is sent to the server log:\n+ There are several ways to trigger the initial summarization of a page range.\n+ If the table is vacuumed, either manually or by\n+ <link linkend=\"autovacuum\">autovacuum</link>,\n+ all existing unsummarized page ranges are summarized.\n+ Also, if the index's\n+ <xref linkend=\"index-reloption-autosummarize\"/> parameter is enabled,\n+ whenever autovacuum runs in that database, summarization will\n+ occur for all\n+ unsummarized page ranges that have been filled,\n+ regardless of whether the table itself is processed by autovacuum; see below.\n+\n+ Lastly, the following functions can be used:\n+\n+ <simplelist>\n+ <member>\n+ <function>brin_summarize_range(regclass, bigint)</function>\n+ summarizes all unsummarized ranges\n+ </member>\n+ <member>\n+ <function>brin_summarize_new_values(regclass)</function>\n+ summarizes one specific range, if it is unsummarized\n+ </member>\n+ </simplelist>\n+ </para>\n+\n+ <para>\n+ When autosummarization is enabled, each time a page range is filled, a\n+ request is sent to <literal>autovacuum</literal> to execute a targeted\n+ summarization for that range, to be fulfilled the next time an autovacuum\n+ worker finishes running in that database. 
If the request queue is full, the\n+ request is not recorded and a message is sent to the server log:\n <screen>\n LOG: request for BRIN range summarization for index \"brin_wi_idx\" page 128 was not recorded\n </screen>\n When this happens, the range will be summarized normally during the next\n regular vacuum of the table.\n </para>\n+\n+ <para>\n+ Conversely, a range can be de-summarized using the\n+ <function>brin_desummarize_range(regclass, bigint)</function> function,\n+ which is useful when the index tuple is no longer a very good\n+ representation because the existing values have changed.\n+ See <xref linkend=\"functions-admin-index\"/> for details.\n+ </para>\n+\n </sect2>\n </sect1>\n \ndiff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml\nindex 9ffcdc629e6..a5bac9f7373 100644\n--- a/doc/src/sgml/ref/create_index.sgml\n+++ b/doc/src/sgml/ref/create_index.sgml\n@@ -578,8 +578,10 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] <replaceable class=\n </term>\n <listitem>\n <para>\n- Defines whether a summarization run is invoked for the previous page\n+ Defines whether a summarization run is queued for the previous page\n range whenever an insertion is detected on the next one.\n+ See <xref linkend=\"brin-operation\"/> for more details.\n+ The default is <literal>off</literal>.\n </para>\n </listitem>\n </varlistentry>", "msg_date": "Mon, 4 Jul 2022 16:22:28 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "On 2022-Jul-04, Jaime Casanova wrote:\n\n> I feel that somewhere in this paragraph it should be mentioned that is\n> off by default.\n\nOK, I added it.\n\nOn 2022-Jul-04, Justin Pryzby wrote:\n\n> [ lots of comments ]\n\nOK, I have adopted all your proposed changes, thanks for submitting in\nboth forms. I did some more wordsmithing and pushed, to branches 12 and\nup. 
11 fails 'make check', I think for lack of Docbook id tags, and I\ndidn't want to waste more time. Kindly re-read the result and let me\nknow if I left something unaddressed, or made something worse. The\nupdated text is already visible in the website:\nhttps://www.postgresql.org/docs/devel/brin-intro.html\n\n(Having almost-immediate doc refreshes is an enormous improvement.\nThanks Magnus.)\n\n> Also, a reminder that this was never addressed (I wish the project had a way to\n> keep track of known issues).\n> \n> https://www.postgresql.org/message-id/20201113160007.GQ30691@telsasoft.com\n> |error_severity of brin work item\n\nYeah, I've not forgotten that item. I can't promise I'll get it fixed\nsoon, but it's on my list.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n", "msg_date": "Tue, 5 Jul 2022 13:47:27 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "On Mon, Jul 4, 2022 at 9:20 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n>\n> [Some of] these additions are wrong actually. It says that autovacuum\n> will not summarize new entries; but it does. If you just let the table\n> sit idle, any autovacuum run that cleans the table will also summarize\n> any ranges that need summarization.\n>\n> What 'autosummarization=off' means is that the behavior to trigger an\n> immediate summarization of a range once it becomes full is not default.\n> This is very different.\n>\n\nWithout having read through the code, I'll take your word for it. 
I simply\nwent with what was written on this phrase of the docs:\n\n\"or by automatic summarization executed by autovacuum, as insertions occur.\n(This last trigger is disabled by default and can be enabled with the\nautosummarize parameter.)\"\n\nTo me this did not indicate a third behavior, which is what you are\ndescribing, so I'm glad we're having this discussion to clarify it.\n\nAs for the new <para></para>s that you added, I'd say they're\n> stylistically wrong.  Each paragraph is supposed to be one fully\n> contained idea; what these tags do is split each idea across several\n> smaller paragraphs.  This is likely subjective though.\n>\n\nWhile I don't disagree with you, readability is more important. We have\nlots of places (such as that one on the docs) where we have a big blob of\ntext, reducing readability, IMHO. In the source they are broken by new\nlines, but in the rendered HTML, which is what the vast majority of people\nread, they get rendered into a big blob-looking-thing.\n\n> What would be the downside (if any) of having autosummarize=on by default?\n>\n> I'm not aware of any.  Maybe we should turn it on by default.\n>\n\n +1\n\nThanks for looking at this Alvaro.\n\nRoberto\n--\nCrunchy Data -- passion for open source PostgreSQL", "msg_date": "Tue, 5 Jul 2022 12:33:19 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "On Tue, Jul 5, 2022 at 5:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> OK, I have adopted all your proposed changes, thanks for submitting in\n> both forms.  I did some more wordsmithing and pushed, to branches 12 and\n> up.  11 fails 'make check', I think for lack of Docbook id tags, and I\n> didn't want to waste more time.  Kindly re-read the result and let me\n> know if I left something unaddressed, or made something worse. 
The\n> updated text is already visible in the website:\nhttps://www.postgresql.org/docs/devel/brin-intro.html\n>\n\nYou removed the reference to the functions' documentation at\nfunctions-admin-index choosing instead to duplicate a summarized\nversion of the docs, and to boot getting the next block to be blobbed\ntogether with it.\n\nKeeping with the reduced-readability theme, you made the paragraphs\neven bigger. While I do appreciate the time to clarify things a bit, as was\nmy original intent with the patch,\n\nWe should be writing documentation with the user in mind, not for our\ndeveloper eyes. Different target audiences. It is less helpful to have\nawesome features that don't get used because users can't really\ngrasp the docs.\n\nParagraphs such as this feel like we're playing \"summary bingo\":\n\n When a new page is created that does not fall within the last\n summarized range, the range that the new page belongs into\n does not automatically acquire a summary tuple;\n those tuples remain unsummarized until a summarization run is\n invoked later, creating the initial summary for that range\n\nRoberto\n\n--\nCrunchy Data -- passion for open source PostgreSQL", "msg_date": "Tue, 5 Jul 2022 12:54:59 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "On 2022-Jul-05, Roberto Mello wrote:\n\n> You removed the reference to the functions' documentation at\n> functions-admin-index choosing instead to duplicate a summarized\n> version of the docs, and to boot getting the next block to be blobbed\n> together with it.\n\nActually, my first instinct was to move the interesting parts to the\nfunctions docs, then reference those, removing the duplicate bits.  But\nI was discouraged when I read it, because it is just a table in a place\nnot really appropriate for a larger discussion on it. 
Also, a reference\nto it is not direct, but rather it goes to a table that contains a lot\nof other stuff.\n\n> Keeping with the reduced-readability theme, you made the paragraphs\n> even bigger. While I do appreciate the time to clarify things a bit, as was\n> my original intent with the patch, [...]\n\nHmm, which paragraph are you referring to? I'm not aware of having made\nany paragraph bigger, quite the opposite. In the original text, the\nparagraph \"At the time of creation,\" is 13 lines on a browser window\nthat is half the screen; in the patched text, that has been replaced by\nthree paragraphs that are 7, 6, and 4 lines long, plus a separate one\nfor the de-summarization bits at the end of the page, which is 3 lines\nlong.\n\n> We should be writing documentation with the user in mind, not for our\n> developer eyes. Different target audiences. It is less helpful to have\n> awesome features that don't get used because users can't really\n> grasp the docs.\n\nI try to do that. I guess I fail more frequently that I should.\n\n> Paragraphs such as this feel like we're playing \"summary bingo\":\n> \n> When a new page is created that does not fall within the last\n> summarized range, the range that the new page belongs into\n> does not automatically acquire a summary tuple;\n> those tuples remain unsummarized until a summarization run is\n> invoked later, creating the initial summary for that range\n\nYeah, I am aware that the word \"summary\" and variations occur way too\nmany times. Maybe it is possible to replace \"summary tuple\" with \"BRIN\ntuple\" for example; can you propose some synonym for \"summarized\" and\n\"unsummarized\"? 
Perhaps something like this:\n\n> When a new page is created that does not fall within the last\n> summarized range, the range that the new page belongs into\n> does not automatically acquire a BRIN tuple;\n> those [pages] remain uncovered by the BRIN index until a summarization run is\n> invoked later, creating the initial BRIN tuple for that range\n\n(I also replaced the word \"tuples\" with \"pages\" in one spot.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n\n\n", "msg_date": "Tue, 5 Jul 2022 21:46:57 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "On Tue, Jul 05, 2022 at 01:47:27PM +0200, Alvaro Herrera wrote:\n> OK, I have adopted all your proposed changes, thanks for submitting in\n> both forms. I did some more wordsmithing and pushed, to branches 12 and\n> up. 11 fails 'make check', I think for lack of Docbook id tags, and I\n> didn't want to waste more time. Kindly re-read the result and let me\n> know if I left something unaddressed, or made something worse. 
The\n> updated text is already visible in the website:\n> https://www.postgresql.org/docs/devel/brin-intro.html\n\nOne issue:\n\n+ summarized range, the range that the new page belongs into\n+ does not automatically acquire a summary tuple;\n\n\"belongs into\" sounds wrong - \"belongs to\" is better.\n\nI'll put that change into my \"typos\" branch to fix later if it's not addressed\nin this thread.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 Jul 2022 14:58:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: doc: BRIN indexes and autosummarize" }, { "msg_contents": "On 2022-Jul-05, Justin Pryzby wrote:\n\n> One issue:\n> \n> + summarized range, the range that the new page belongs into\n> + does not automatically acquire a summary tuple;\n> \n> \"belongs into\" sounds wrong - \"belongs to\" is better.\n\nHah, and I was wondering if \"belongs in\" was any better.\n\n> I'll put that change into my \"typos\" branch to fix later if it's not addressed\n> in this thread.\n\nRoberto has some more substantive comments on the new text, so let's try\nand fix everything together. This time, I'll let you guys come up with\na new patch.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 6 Jul 2022 09:45:31 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: doc: BRIN indexes and autosummarize" } ]
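[Editorial note: the thread above keeps returning to the same BRIN mechanics: each block range either carries a summary tuple (e.g. a min/max for the indexed column) or is unsummarized, unsummarized ranges must always be scanned, and `brin_desummarize_range()` drops a stale summary so a later summarization run can rebuild a tighter one. The following is a toy Python sketch of those semantics only — the class and method names are invented for illustration and this is not PostgreSQL's implementation, which stores summaries in index pages.]

```python
# Toy model of the BRIN behaviour discussed in the thread above.
# Illustration only: real BRIN summaries live in index pages, not a dict.

class ToyBrin:
    """Per-range min/max summaries over a sequence of heap pages."""

    def __init__(self, pages_per_range=2):
        self.pages_per_range = pages_per_range
        self.pages = []       # each page is a list of indexed values
        self.summaries = {}   # range number -> (min, max); missing = unsummarized

    def num_ranges(self):
        ppr = self.pages_per_range
        return (len(self.pages) + ppr - 1) // ppr

    def range_pages(self, rng):
        start = rng * self.pages_per_range
        return self.pages[start:start + self.pages_per_range]

    def add_page(self, values):
        # A newly created page does not automatically get a summary tuple;
        # its range stays unsummarized until a summarization run happens.
        self.pages.append(list(values))

    def summarize_all(self):
        # What a vacuum-time summarization run does: build the initial
        # summary for every range that lacks one.
        for rng in range(self.num_ranges()):
            if rng not in self.summaries:
                vals = [v for page in self.range_pages(rng) for v in page]
                if vals:
                    self.summaries[rng] = (min(vals), max(vals))

    def desummarize_range(self, rng):
        # Counterpart of brin_desummarize_range(): drop a summary that no
        # longer represents the underlying values well, so the next
        # summarization run can rebuild a tighter one.
        self.summaries.pop(rng, None)

    def candidate_ranges(self, value):
        # Unsummarized ranges must always be scanned; summarized ranges
        # are skipped when the value cannot fall inside [min, max].
        result = []
        for rng in range(self.num_ranges()):
            summary = self.summaries.get(rng)
            if summary is None or summary[0] <= value <= summary[1]:
                result.append(rng)
        return result
```

For example, with `pages_per_range=1` and two pages `[1, 2, 3]` and `[100, 101]`, a lookup for `2` must scan both ranges before any summarization run, only range 0 afterwards, and a range that has been de-summarized is scanned unconditionally again until it is resummarized.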
[ { "msg_contents": "Hi,\r\n\r\nAttached is a draft of the release announcement for PostgreSQL 15 Beta \r\n2. Please provide feedback on technical accuracy and if there are \r\nglaring omissions.\r\n\r\nPlease provide any feedback prior to 2022-06-22 0:00 AoE.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 28 Jun 2022 20:04:43 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 15 beta 2 release announcement draft" }, { "msg_contents": "Op 29-06-2022 om 02:04 schreef Jonathan S. Katz:\n> Hi,\n> \n\n'not advise you to run PostgreSQL 15 Beta 1' should be\n'not advise you to run PostgreSQL 15 Beta 2'\n\n\nErik\n\n\n", "msg_date": "Wed, 29 Jun 2022 08:12:03 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 15 beta 2 release announcement draft" }, { "msg_contents": "> Upgrading to PostgreSQL 15 Beta 2\n> ---------------------------------\n>\n> To upgrade to PostgreSQL 15 Beta 2 from an earlier version of PostgreSQL,\n> you will need to use a strategy similar to upgrading between major versions of\n> PostgreSQL (e.g. `pg_upgrade` or `pg_dump` / `pg_restore`). For more\n> information, please visit the documentation section on\n> [upgrading](https://www.postgresql.org/docs/15/static/upgrading.html).\n\nIs the major version upgrade still needed if they are upgrading from 15 Beta 1?\n\n\n", "msg_date": "Wed, 29 Jun 2022 07:55:55 +0100", "msg_from": "Pantelis Theodosiou <ypercube@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 15 beta 2 release announcement draft" }, { "msg_contents": "On 6/29/22 2:55 AM, Pantelis Theodosiou wrote:\r\n>> Upgrading to PostgreSQL 15 Beta 2\r\n>> ---------------------------------\r\n>>\r\n>> To upgrade to PostgreSQL 15 Beta 2 from an earlier version of PostgreSQL,\r\n>> you will need to use a strategy similar to upgrading between major versions of\r\n>> PostgreSQL (e.g. 
`pg_upgrade` or `pg_dump` / `pg_restore`). For more\r\n>> information, please visit the documentation section on\r\n>> [upgrading](https://www.postgresql.org/docs/15/static/upgrading.html).\r\n> \r\n> Is the major version upgrade still needed if they are upgrading from 15 Beta 1?\r\n\r\nNo, but it would be required if you are upgrading from a different \r\nversion. The language attempts to be a \"catch all\" to account for the \r\ndifferent cases.\r\n\r\nJonathan", "msg_date": "Wed, 29 Jun 2022 08:56:38 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 15 beta 2 release announcement draft" }, { "msg_contents": "On 6/29/22 2:12 AM, Erik Rijkers wrote:\r\n> Op 29-06-2022 om 02:04 schreef Jonathan S. Katz:\r\n>> Hi,\r\n>>\r\n> \r\n> 'not advise you to run PostgreSQL 15 Beta 1'    should be\r\n> 'not advise you to run PostgreSQL 15 Beta 2'\r\n\r\nThanks; I adjusted the copy.\r\n\r\nJonathan", "msg_date": "Wed, 29 Jun 2022 08:57:10 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 15 beta 2 release announcement draft" }, { "msg_contents": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\n> On 6/29/22 2:55 AM, Pantelis Theodosiou wrote:\n>> Is the major version upgrade still needed if they are upgrading from 15 Beta 1?\n\n> No, but it would be required if you are upgrading from a different\n> version. The language attempts to be a \"catch all\" to account for the\n> different cases.\n\nActually, I think you do need the hard-way upgrade, because there was a\ncatversion bump since beta1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Jun 2022 09:30:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 15 beta 2 release announcement draft" }, { "msg_contents": "On 6/29/22 9:30 AM, Tom Lane wrote:\r\n> \"Jonathan S. 
I'll read through again \r\nfor other typos prior to release.\r\n\r\nJonathan", "msg_date": "Wed, 29 Jun 2022 20:41:23 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 15 beta 2 release announcement draft" } ]
[ { "msg_contents": "Hackers,\n\nI noticed while doing some memory context related work that since we\nnow use generation.c memory contexts for tuplesorts (40af10b57) that\ntuplesort_putindextuplevalues() causes memory \"leaks\" in the\ngeneration context due to index_form_tuple() being called while we're\nswitched into the state->tuplecontext.\n\nI use the word \"leak\" here slightly loosely. It's only a leak due to\nhow generation.c uses no free lists to allow reuse of pfree'd memory.\n\nIt looks like the code has been this way ever since 9f03ca915 (Avoid\ncopying index tuples when building an index.) That commit did add a\nbig warning at the top of index_form_tuple() that the function must be\ncareful to not leak any memory.\n\nA quick fix would be just to add a new bool field, e.g., usegencxt to\ntuplesort_begin_common() and pass that as false in all functions apart\nfrom tuplesort_begin_heap(). That way we'll always be using an aset.c\ncontext when we call index_form_tuple().\n\nHowever, part of me thinks that 9f03ca915 is standing in the way of us\ndoing more in the future to optimize how we store tuples during sorts.\nWe might, one day, want to consider using a hand-rolled bump\nallocator. 
If we ever do that we'd need to undo the work done by\n9f03ca915.\n\nDoes anyone have any thoughts on this?\n\nHere's a reproducer from the regression tests:\n\nCREATE TABLE no_index_cleanup (i INT PRIMARY KEY, t TEXT);\n-- Use uncompressed data stored in toast.\nCREATE INDEX no_index_cleanup_idx ON no_index_cleanup(t);\nALTER TABLE no_index_cleanup ALTER COLUMN t SET STORAGE EXTERNAL;\nINSERT INTO no_index_cleanup(i, t) VALUES (generate_series(1,30),\n repeat('1234567890',269));\n-- index cleanup option is ignored if VACUUM FULL\nVACUUM (INDEX_CLEANUP TRUE, FULL TRUE) no_index_cleanup;\n\nDavid\n\n\n", "msg_date": "Wed, 29 Jun 2022 12:59:52 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "tuplesort Generation memory contexts don't play nicely with index\n builds" }, { "msg_contents": "On Wed, 29 Jun 2022 at 12:59, David Rowley <dgrowleyml@gmail.com> wrote:\n> I noticed while doing some memory context related work that since we\n> now use generation.c memory contexts for tuplesorts (40af10b57) that\n> tuplesort_putindextuplevalues() causes memory \"leaks\" in the\n> generation context due to index_form_tuple() being called while we're\n> switched into the state->tuplecontext.\n\nI've attached a draft patch which changes things so that we don't use\ngeneration contexts for sorts being done for index builds.\n\nDavid", "msg_date": "Thu, 30 Jun 2022 15:54:23 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tuplesort Generation memory contexts don't play nicely with index\n builds" }, { "msg_contents": "On Wed, 29 Jun 2022 at 12:59, David Rowley <dgrowleyml@gmail.com> wrote:\n> I noticed while doing some memory context related work that since we\n> now use generation.c memory contexts for tuplesorts (40af10b57) that\n> tuplesort_putindextuplevalues() causes memory \"leaks\" in the\n> generation context due to index_form_tuple() being called while we're\n> switched into the 
state->tuplecontext.\n\nI voiced my dislike for the patch I came up with to fix this issue to\nAndres. He suggested that I just add a version of index_form_tuple\nthat can be given a MemoryContext pointer to allocate the returned\ntuple into.\n\nI like that idea much better, so I've attached a patch to fix it that way.\n\nIf there are no objections, I plan to push this in the next 24 hours.\n\nDavid", "msg_date": "Wed, 6 Jul 2022 13:34:37 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tuplesort Generation memory contexts don't play nicely with index\n builds" }, { "msg_contents": "On Wed, 6 Jul 2022 at 13:34, David Rowley <dgrowleyml@gmail.com> wrote:\n> If there are no objections, I plan to push this in the next 24 hours.\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Thu, 7 Jul 2022 08:16:38 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tuplesort Generation memory contexts don't play nicely with index\n builds" }, { "msg_contents": "On Tue, Jul 5, 2022 at 9:34 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I voiced my dislike for the patch I came up with to fix this issue to\n> Andres. 
He suggested that I just add a version of index_form_tuple\n> that can be given a MemoryContext pointer to allocate the returned\n> tuple into.\n>\n> I like that idea much better, so I've attached a patch to fix it that way.\n>\n> If there are no objections, I plan to push this in the next 24 hours.\n\nApologies for not having looked at this thread sooner, but for what\nit's worth, I think this is a fine solution.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Jul 2022 17:49:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tuplesort Generation memory contexts don't play nicely with index\n builds" }, { "msg_contents": "On Thu, Jul 7, 2022 at 3:16 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Pushed.\n\nHmm, the commit appeared on git.postgresql.org, but apparently not in\nmy email nor the list archives.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Jul 2022 08:41:13 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: tuplesort Generation memory contexts don't play nicely with index\n builds" }, { "msg_contents": "On Thu, 7 Jul 2022 at 13:41, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> On Thu, Jul 7, 2022 at 3:16 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > Pushed.\n>\n> Hmm, the commit appeared on git.postgresql.org, but apparently not in\n> my email nor the list archives.\n\nStrange. I'd suspect a temporary hiccup in whatever code pushes the\ncommits onto the mailing list, but I see that my fe3caa143 from\nyesterday was also missed.\n\nThe only difference in my workflow is that I'm sshing to the machine I\npush from via another room rather than sitting right in front of it\nlike I normally am. 
I struggle to imagine why that would cause this to\nhappen.\n\nDavid\n\n\n", "msg_date": "Thu, 7 Jul 2022 14:13:54 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tuplesort Generation memory contexts don't play nicely with index\n builds" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 7 Jul 2022 at 13:41, John Naylor <john.naylor@enterprisedb.com> wrote:\n>> Hmm, the commit appeared on git.postgresql.org, but apparently not in\n>> my email nor the list archives.\n\n> Strange. I'd suspect a temporary hiccup in whatever code pushes the\n> commits onto the mailing list, but I see that my fe3caa143 from\n> yesterday was also missed.\n\nCaught in list moderation maybe?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Jul 2022 23:15:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuplesort Generation memory contexts don't play nicely with index\n builds" }, { "msg_contents": "On 7/6/22 23:15, Tom Lane wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n>> On Thu, 7 Jul 2022 at 13:41, John Naylor <john.naylor@enterprisedb.com> wrote:\n>>> Hmm, the commit appeared on git.postgresql.org, but apparently not in\n>>> my email nor the list archives.\n> \n>> Strange. 
I'd suspect a temporary hiccup in whatever code pushes the\n>> commits onto the mailing list, but I see that my fe3caa143 from\n>> yesterday was also missed.\n> \n> Caught in list moderation maybe?\n\nActually, yes they are:\n8<-----------------------\nDate: 2022-07-06 07:41:02\nList: pgsql-committers\nReason: sender is not a confirmed email address\nFrom: drowley(at)postgresql(dot)org\nSize: 4890 bytes\nSubject: pgsql: Remove size increase in ExprEvalStep caused by hashed saops\n\nDate: 2022-07-06 20:14:25\nList: pgsql-committers\nReason: sender is not a confirmed email address\nSpam score: -7.1\nFrom: drowley(at)postgresql(dot)org\nSize: 4703 bytes\nSubject: pgsql: Overload index_form_tuple to allow the memory context to \nbe supp\n8<-----------------------\n(I manually did the (at) and (dot) obfuscation)\n\nI don't ordinarily moderate the pgsql-committers list, and don't know \noffhand who does, so am a bit hesitant to approve them myself. But \nperhaps I should?\n\nI guess another good question is why the email address no longer \nconfirmed...\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 7 Jul 2022 06:46:12 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: tuplesort Generation memory contexts don't play nicely with index\n builds" } ]
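[Editorial note: the behaviour at the heart of the thread above — generation.c keeps no free lists, so memory palloc'd and pfree'd inside index_form_tuple() is never reused until the whole block can be retired — can be sketched with a toy allocator. This is an illustration under simplifying assumptions (single fixed-size block, no block chaining, invented class names), not PostgreSQL's generation.c or aset.c.]

```python
# Toy models of the two allocator behaviours discussed in the thread.
# Illustration only; names are invented here.

class ToyGenerationBlock:
    """Generation-style block: a bump pointer and a live-chunk count.
    Freeing a chunk does NOT put it on a free list, so alloc/free churn
    keeps consuming fresh space until every chunk in the block is gone."""

    def __init__(self, block_size=1024):
        self.block_size = block_size
        self.used = 0       # bump pointer: only ever grows
        self.nchunks = 0    # live chunks; block resets when this hits 0

    def alloc(self, size):
        if self.used + size > self.block_size:
            raise MemoryError("block full (a real context would chain a new block)")
        offset = self.used
        self.used += size
        self.nchunks += 1
        return offset

    def free(self, offset):
        # No free list: the space at `offset` stays consumed.
        self.nchunks -= 1
        if self.nchunks == 0:
            self.used = 0   # only a fully empty block is reclaimed, wholesale


class ToyAsetBlock:
    """Crude stand-in for an aset.c-style block: freed chunks go on a
    free list keyed by size and are reused by later allocations."""

    def __init__(self, block_size=1024):
        self.block_size = block_size
        self.used = 0
        self.sizes = {}      # offset -> chunk size, for live chunks
        self.freelist = {}   # size -> [offsets of freed chunks]

    def alloc(self, size):
        if self.freelist.get(size):
            offset = self.freelist[size].pop()
        else:
            if self.used + size > self.block_size:
                raise MemoryError("block full")
            offset = self.used
            self.used += size
        self.sizes[offset] = size
        return offset

    def free(self, offset):
        self.freelist.setdefault(self.sizes.pop(offset), []).append(offset)
```

Keeping one 64-byte chunk alive and then churning five 64-byte alloc/free pairs — roughly the index_form_tuple() pattern inside the sort's tuple context — leaves the generation-style block at 384 bytes consumed, while the free-list block stays at 128: the "leak" David describes is exactly that growing gap.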
[ { "msg_contents": "Over on [1] I noticed that the user had set force_parallel_mode to\n\"on\" in the hope that would trick the planner into making their query\nrun more quickly. Of course, that's not what they want since that GUC\nis only there to inject some parallel nodes into the plan in order to\nverify the tuple communication works.\n\nI get the idea that Robert might have copped some flak about this at\nsome point, given that he wrote the blog post at [2].\n\nThe user would have realised this if they'd read the documentation\nabout the GUC. However, I imagine they only went as far as finding a\nGUC with a name which appears to be exactly what they need. I mean,\nwhat else could force_parallel_mode possibly do?\n\nShould we maybe rename it to something less tempting? Maybe\ndebug_parallel_query?\n\nI wonder if \\dconfig *parallel* is going to make force_parallel_mode\neven easier to find once PG15 is out.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/DB4PR02MB8774E06D595D3088BE04ED92E7B99%40DB4PR02MB8774.eurprd02.prod.outlook.com\n[2] https://www.enterprisedb.com/postgres-tutorials/using-forceparallelmode-correctly-postgresql\n\n\n", "msg_date": "Wed, 29 Jun 2022 15:23:27 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Wed, 2022-06-29 at 15:23 +1200, David Rowley wrote:\n> Over on [1] I noticed that the user had set force_parallel_mode to\n> \"on\" in the hope that would trick the planner into making their query\n> run more quickly.  
Of course, that's not what they want since that GUC\n> is only there to inject some parallel nodes into the plan in order to\n> verify the tuple communication works.\n> \n> I get the idea that Robert might have copped some flak about this at\n> some point, given that he wrote the blog post at [2].\n> \n> The user would have realised this if they'd read the documentation\n> about the GUC. However, I imagine they only went as far as finding a\n> GUC with a name which appears to be exactly what they need.  I mean,\n> what else could force_parallel_mode possibly do?\n> \n> Should we maybe rename it to something less tempting? Maybe\n> debug_parallel_query?\n> \n> I wonder if \\dconfig *parallel* is going to make force_parallel_mode\n> even easier to find once PG15 is out.\n> \n> [1] https://www.postgresql.org/message-id/DB4PR02MB8774E06D595D3088BE04ED92E7B99%40DB4PR02MB8774.eurprd02.prod.outlook.com\n> [2] https://www.enterprisedb.com/postgres-tutorials/using-forceparallelmode-correctly-postgresql\n\nI share the sentiment, but at the same time am worried about an unnecessary\ncompatibility break. The parameter is not in \"postgresql.conf\" and\ndocumented as a \"developer option\", which should already be warning enough.\n\nPerhaps some stronger wording in the documentation would be beneficial.\nI have little sympathy with people who set unusual parameters without\neven glancing at the documentation.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 29 Jun 2022 08:57:24 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Wed, Jun 29, 2022 at 03:23:27PM +1200, David Rowley wrote:\n> Over on [1] I noticed that the user had set force_parallel_mode to\n> \"on\" in the hope that would trick the planner into making their query\n> run more quickly. 
Of course, that's not what they want since that GUC\n> is only there to inject some parallel nodes into the plan in order to\n> verify the tuple communication works.\n> \n> I get the idea that Robert might have copped some flak about this at\n> some point, given that he wrote the blog post at [2].\n> \n> The user would have realised this if they'd read the documentation\n> about the GUC. However, I imagine they only went as far as finding a\n> GUC with a name which appears to be exactly what they need. I mean,\n> what else could force_parallel_mode possibly do?\n\nNote that it was already changed to be a developer GUC\nhttps://www.postgresql.org/message-id/20210404012546.GK6592%40telsasoft.com\n\nAnd I asked if that re-classification should be backpatched:\n> It's to their benefit and ours if they don't do that on v10-13 for the next 5\n> years, not just v14-17.\n\nSince the user in this recent thread is running v13.7, I'm *guessing* that\nif that had been backpatched, they wouldn't have made this mistake.\n\n> I wonder if \\dconfig *parallel* is going to make force_parallel_mode\n> even easier to find once PG15 is out.\n\nMaybe. Another consequence is that if someone *does* set f_p_m, it may be a\nbit easier and more likely for a local admin to discover it (before mailing the\npgsql lists).\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 29 Jun 2022 07:31:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Thu, 30 Jun 2022 at 00:31, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Since the user in this recent thread is running v13.7, I'm *guessing* that\n> if that had been backpatched, they wouldn't have made this mistake.\n\nI wasn't aware of that change. 
Thanks for highlighting it.\n\nMaybe it's worth seeing if fewer mistakes are made now that we've\nchanged the GUC into a developer option.\n\nDavid\n\n\n", "msg_date": "Thu, 30 Jun 2022 08:42:53 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Wed, 29 Jun 2022 at 18:57, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> Perhaps some stronger wording in the documetation would be beneficial.\n> I have little sympathy with people who set unusual parameters without\n> even glancing at the documentation.\n\nMy thoughts are that the documentation is ok as is. I have a feeling\nthe misusages come from stumbling upon a GUC that has a name which\nseems to indicate the GUC does exactly what they want.\n\nDavid\n\n\n", "msg_date": "Thu, 30 Jun 2022 08:49:38 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Thu, 30 Jun 2022 at 00:31, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Jun 29, 2022 at 03:23:27PM +1200, David Rowley wrote:\n> > Over on [1] I noticed that the user had set force_parallel_mode to\n> > \"on\" in the hope that would trick the planner into making their query\n> > run more quickly. 
Of course, that's not what they want since that GUC\n> > is only there to inject some parallel nodes into the plan in order to\n> > verify the tuple communication works.\n\n> Note that it was already changed to be a developer GUC\n> https://www.postgresql.org/message-id/20210404012546.GK6592%40telsasoft.com\n\n> Since the user in this recent thread is running v13.7, I'm *guessing* that\n> if that had been backpatched, they wouldn't have made this mistake.\n\nI was just reading [1] where a PG15 user made this mistake, so it\nseems people are still falling for it even now it's been changed to a\ndeveloper GUC.\n\nI don't really share Laurenz's worry [2] about compatibility break\nfrom renaming this GUC. I think the legitimate usages of this setting\nare probably far more rare than the illegitimate ones. I'm not overly\nconcerned about renaming if it helps stop people from making this\nmistake. I believe the current name is just too conveniently named and\nthat users are likely just to incorrectly assume it does exactly what\nthey want because what else could it possibly do?!\n\nI think something like debug_parallel_query is much less likely to be misused.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAN4ko3B4y75pg5Ro_oAjWf8L1HYSYgXcDgsS6nzOTvQOkKnM1Q@mail.gmail.com\n[2] https://www.postgresql.org/message-id/26139c03e118bec967c77da374d947e9ecf81333.camel@cybertec.at\n\n\n", "msg_date": "Thu, 2 Feb 2023 00:40:45 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Wed, Feb 1, 2023 at 6:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I don't really share Laurenz's worry [2] about compatibility break\n> from renaming this GUC. I think the legitimate usages of this setting\n> are probably far more rare than the illegitimate ones. 
I'm not overly\n> concerned about renaming if it helps stop people from making this\n> mistake. I believe the current name is just too conveniently named and\n> that users are likely just to incorrectly assume it does exactly what\n> they want because what else could it possibly do?!\n>\n> I think something like debug_parallel_query is much less likely to be\nmisused.\n\n+1 on both points.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Feb 2023 19:24:43 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Thu, 2 Feb 2023 at 01:24, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n>\n> On Wed, Feb 1, 2023 at 6:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > I don't really share Laurenz's worry [2] about compatibility break\n> > from renaming this GUC. I think the legitimate usages of this setting\n> > are probably far more rare than the illegitimate ones. I'm not overly\n> > concerned about renaming if it helps stop people from making this\n> > mistake. 
I believe the current name is just too conveniently named and\n> > that users are likely just to incorrectly assume it does exactly what\n> > they want because what else could it possibly do?!\n> >\n> > I think something like debug_parallel_query is much less likely to be misused.\n>\n> +1 on both points.\n\nI've attached a patch which does the renaming to debug_parallel_query.\nI've made it so the old name can still be used. This is only intended\nto temporarily allow backward compatibility until buildfarm member\nowners can change their configs to use debug_parallel_query instead of\nforce_parallel_mode.\n\nDavid", "msg_date": "Thu, 9 Feb 2023 09:36:11 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've attached a patch which does the renaming to debug_parallel_query.\n> I've made it so the old name can still be used.\n\nThere's a better way to do that last, which is to add the translation to\nmap_old_guc_names[]. I am not very sure what happens if you have multiple\nGUC entries pointing at the same underlying variable, but I bet that\nit isn't great.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Feb 2023 17:26:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Thu, 9 Feb 2023 at 11:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I've attached a patch which does the renaming to debug_parallel_query.\n> > I've made it so the old name can still be used.\n>\n> There's a better way to do that last, which is to add the translation to\n> map_old_guc_names[]. 
I am not very sure what happens if you have multiple\n> GUC entries pointing at the same underlying variable, but I bet that\n> it isn't great.\n\nThanks for pointing that out. That might mean we can keep the\ntranslation long-term as it won't appear in pg_settings and \dconfig,\nor we might want to remove it if we want to be more deliberate about\nbreaking things for users who are misusing it. We maybe could just\nconsider that if/when all buildfarm animals are all using the new\nname.\n\nAttached updated patch.\n\nDavid", "msg_date": "Thu, 9 Feb 2023 11:50:37 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Thu, Feb 9, 2023 at 5:50 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> Attached updated patch.\n\nLooks good at a glance, just found a spurious word:\n\n+ \"by forcing the planner into to generate plans which contains nodes \"\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 9 Feb 2023 15:20:17 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Thu, 9 Feb 2023 at 21:20, John Naylor <john.naylor@enterprisedb.com> wrote:\n> Looks good at a glance, just found a spurious word:\n>\n> + \"by forcing the planner into to generate plans which contains nodes \"\n\nThanks for looking. I'll fix that.\n\nLikely the hardest part to get right here is the new name. 
Can anyone\nthink of anything better than debug_parallel_query?\n\nDavid\n\n\n", "msg_date": "Fri, 10 Feb 2023 09:25:44 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On 2023-02-09 Th 15:25, David Rowley wrote:\n> On Thu, 9 Feb 2023 at 21:20, John Naylor<john.naylor@enterprisedb.com> wrote:\n>> Looks good at a glance, just found a spurious word:\n>>\n>> + \"by forcing the planner into to generate plans which contains nodes \"\n> Thanks for looking. I'll fix that.\n>\n> Likely the hardest part to get right here is the new name. Can anyone\n> think of anything better than debug_parallel_query?\n>\n\nWFM\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 10 Feb 2023 10:33:55 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Sat, 11 Feb 2023 at 04:34, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 2023-02-09 Th 15:25, David Rowley wrote:\n> Likely the hardest part to get right here is the new name. 
Can anyone\n> think of anything better than debug_parallel_query?\n>\n>\n> WFM\n\nThanks for chipping in.\n\nI've attached a patch which fixes the problem John mentioned.\n\nI feel like nobody is against doing this rename, so I'd quite like to\nget this done pretty soon. If anyone else wants to voice their\nopinion in regards to this, please feel free to do so. +1s are more\nreassuring than silence. I just want to get this right so we never\nhave to think about it again.\n\nIf nobody is against this then I'd like to push the attached in about\n24 hours from now.\n\nDavid", "msg_date": "Tue, 14 Feb 2023 00:16:27 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On 2023-02-13 Mo 06:16, David Rowley wrote:\n> On Sat, 11 Feb 2023 at 04:34, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> On 2023-02-09 Th 15:25, David Rowley wrote:\n>> Likely the hardest part to get right here is the new name. Can anyone\n>> think of anything better than debug_parallel_query?\n>>\n>>\n>> WFM\n> Thanks for chipping in.\n>\n> I've attached a patch which fixes the problem John mentioned.\n>\n> I feel like nobody is against doing this rename, so I'd quite like to\n> get this done pretty soon. If anyone else wants to voice their\n> opinion in regards to this, please feel free to do so. +1s are more\n> reassuring than silence. I just want to get this right so we never\n> have to think about it again.\n>\n> If nobody is against this then I'd like to push the attached in about\n> 24 hours from now.\n>\n\nIt's just occurred to me that this could break the buildfarm fairly \ncomprehensively. I just took a count and we have 74 members using \nforce_parallel_mode. Maybe we need to keep force_parallel_mode as an \nalternative spelling for debug_parallel_query until we can get them all \nswitched over. 
I know it's more trouble ...\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Tue, 14 Feb 2023 17:27:15 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Wed, 15 Feb 2023 at 11:27, Andrew Dunstan <andrew@dunslane.net> wrote:\n> It's just occurred to me that this could break the buildfarm fairly comprehensively. I just took a count and we have 74 members using force_parallel_mode. Maybe we need to keep force_parallel_mode as an alternative spelling for debug_parallel_query until we can get them all switched over. 
I know it's more trouble ...\n\nYeah, I mentioned in [1] about that and took measures there to keep\nthe old name in place. In the latest patch, there's an entry in\nmap_old_guc_names[] to allow the old name to work. I think the\nbuildfarm will still work ok because of that.\n\nWhat I'm not so sure about is how to go about getting all owners to\nchange the config for versions >= PG16. Is that a question of emailing\neach owner individually to ask them if they can make the change? Or\nshould we just forever keep the map_old_guc_names[] entry for this?\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvrT8eq0UwgetGtQE7XLC8HFN8weqembtvYxMVgtWbcnjQ@mail.gmail.com\n\n\n", "msg_date": "Wed, 15 Feb 2023 11:32:30 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On 2023-02-14 Tu 17:32, David Rowley wrote:\n> On Wed, 15 Feb 2023 at 11:27, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> It's just occurred to me that this could break the buildfarm fairly comprehensively. I just took a count and we have 74 members using force_parallel_mode. Maybe we need to keep force_parallel_mode as an alternative spelling for debug_parallel_query until we can get them all switched over. I know it's more trouble ...\n> Yeah, I mentioned in [1] about that and took measures there to keep\n> the old name in place. In the latest patch, there's an entry in\n> map_old_guc_names[] to allow the old name to work. I think the\n> buildfarm will still work ok because of that.\n\n\nOops, I missed or forgot that.\n\n\n>\n> What I'm not so sure about is how to go about getting all owners to\n> change the config for versions >= PG16. Is that a question of emailing\n> each owner individually to ask them if they can make the change? Or\n> should we just forever keep the map_old_guc_names[] entry for this?\n\n\n\nWe'll email them once this is in. 
Most people are fairly responsive.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Tue, 14 Feb 2023 20:10:01 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On Wed, 15 Feb 2023 at 14:10, Andrew Dunstan <andrew@dunslane.net> wrote:\n> We'll email them once this is in. Most people are fairly reponsive.\n\nI pushed the rename patch earlier.\n\nHow should we go about making contact with the owners? I'm thinking\nit may be better coming from you, especially if you think technical\ndetails of what exactly should be changed should be included in the\nemail. 
But I can certainly have a go if you'd rather I did it or you\ndon't have time for this.\n\nThanks\n\nDavid\n\n\n", "msg_date": "Thu, 16 Feb 2023 00:05:05 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "On 2023-02-15 We 06:05, David Rowley wrote:\n> On Wed, 15 Feb 2023 at 14:10, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> We'll email them once this is in. Most people are fairly reponsive.\n> I pushed the rename patch earlier.\n>\n> How should we go about making contact with the owners? I'm thinking\n> it may be better coming from you, especially if you think technical\n> details of what exactly should be changed should be included in the\n> email. But I can certainly have a go if you'd rather I did it or you\n> don't have time for this.\n>\n\nLeave it with me.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Wed, 15 Feb 2023 07:53:15 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" 
}, { "msg_contents": "On Thu, 16 Feb 2023 at 00:05, David Rowley <dgrowleyml@gmail.com> wrote:\n> I pushed the rename patch earlier.\n>\n> How should we go about making contact with the owners?\n\nAfter a quick round of making direct contact with the few remaining\nbuildfarm machine owners which are still using force_parallel_mode,\nwe're now down to just one remainder (warbler).\n\nIn preparation for when that's ticked off, I'd like to gather people's\nthoughts about if we should remove force_parallel_mode from v16?\n\nMy thoughts are that providing we can get the remaining animal off it\nand remove the GUC alias before beta1, we should remove it. Renaming\nthe GUC to debug_parallel_query was entirely aimed at breaking things\nfor people who are (mistakenly) using it, so I don't see why we\nshouldn't remove it.\n\nDoes anyone feel differently?\n\nDavid\n\n\n", "msg_date": "Wed, 12 Apr 2023 09:45:15 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> In preparation for when that's ticked off, I'd like to gather people's\n> thoughts about if we should remove force_parallel_mode from v16?\n\nTo clarify, you just mean removing that alias, right? +1.\nI don't see a reason to wait longer once the buildfarm is on board.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Apr 2023 17:53:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" 
}, { "msg_contents": "On Wed, 12 Apr 2023 at 09:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > In preparation for when that's ticked off, I'd like to gather people's\n> > thoughts about if we should remove force_parallel_mode from v16?\n>\n> To clarify, you just mean removing that alias, right? +1.\n> I don't see a reason to wait longer once the buildfarm is on board.\n\nYip, alias. i.e:\n\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex ea67cfa5e5..7d3b20168a 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -186,7 +186,6 @@ static const unit_conversion time_unit_conversion_table[] =\n static const char *const map_old_guc_names[] = {\n \"sort_mem\", \"work_mem\",\n \"vacuum_mem\", \"maintenance_work_mem\",\n- \"force_parallel_mode\", \"debug_parallel_query\",\n NULL\n };\n\nDavid\n\n\n", "msg_date": "Wed, 12 Apr 2023 10:00:02 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" 
}, { "msg_contents": "On Wed, 12 Apr 2023 at 09:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't see a reason to wait longer once the buildfarm is on board.\n\nI did a final sweep of the latest runs for each animal this morning.\nEverything has been switched over to debug_parallel_query, so I've\ngone and pushed the patch to remove the mapping.\n\nFor the record, the only things I see mentioning force_parallel_mode\nin there are:\n\nhoverfly\n <td>force_parallel_mode; RANDOMIZE_ALLOCATED_MEMORY</td>\nmantid\n <td>force_parallel_mode on REL_10_STABLE and later</td>\n\n'force_parallel_mode = regress'\n\n'force_parallel_mode = regress'\n\n'force_parallel_mode = regress'\n\n'force_parallel_mode = regress'\n\n'force_parallel_mode = regress'\nmandrill\n <td>force_parallel_mode; RANDOMIZE_ALLOCATED_MEMORY</td>\nseawasp\n<a href=\"https://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=98a88bc2bc\">98a88bc2bc</a>\nThu Mar 2 22:47:20 2023 UTC Harden new test case against\nforce_parallel_mode = regress.\n<a href=\"https://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=5352ca22e0\">5352ca22e0</a>\nWed Feb 15 08:21:59 2023 UTC Rename force_parallel_mode to\ndebug_parallel_query\n\nseawasp's is just references to older commits. The rest seem like\njust outdated comments.\n\nDavid\n\n\n", "msg_date": "Fri, 14 Apr 2023 10:30:57 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we do something to help stop users mistakenly using\n force_parallel_mode?" } ]
[ { "msg_contents": "Hi hackers,\nI wrote a test for the pg_prewarm extension. I wrote it with the aim of improving test coverage, and feedback is always welcome.\n\n---\nRegards\nDongWook Lee", "msg_date": "Wed, 29 Jun 2022 14:38:12 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "Add test of pg_prewarm extension" }, { "msg_contents": "On Wed, Jun 29, 2022 at 02:38:12PM +0900, Dong Wook Lee wrote:\n> Hi hackers,\n> I wrote a test for pg_prewarm extension. and I wrote it with the aim of improving test coverage, and feedback is always welcome.\n\nThe test fails when USE_PREFETCH isn't defined.\nhttp://cfbot.cputube.org/dongwook-lee.html\n\nYou can accommodate that by adding an \"alternate\" output file, named like\npg_prewarm_0.out\n\nBTW, you can test your patches the same as cfbot does (before mailing the list)\non 4 OSes by pushing a branch to a github account. See ./src/tools/ci/README\n\n-- \nJustin", "msg_date": "Wed, 29 Jun 2022 21:24:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add test of pg_prewarm extension" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Jun 29, 2022 at 02:38:12PM +0900, Dong Wook Lee wrote:\n>> I wrote a test for pg_prewarm extension. and I wrote it with the aim of improving test coverage, and feedback is always welcome.\n\n> The test fails when USE_PREFETCH isn't defined.\n> You can accommodate that by adding an \"alternate\" output file, named like\n> pg_prewarm_0.out\n\nFWIW, I'd tend to just not bother exercising the prefetch case.\nIt doesn't seem worth maintaining an alternate expected-file for that,\nsince it's not meaningfully different from the other code paths\nas far as this code is concerned, and testing PrefetchBuffer itself\nisn't the responsibility of this test.\n\nI tried this patch locally and was disturbed to see that the\ncode coverage of autoprewarm.c is still very low. 
It looks like\napw_load_buffers never reaches any of the actual prewarming code,\nbecause it never successfully opens AUTOPREWARM_FILE. This seems a\nbit odd to me, but maybe it's because you start and immediately stop\nthe database without causing it to do anything that would populate\nshared buffers? This bit:\n\n+ok ($logfile =~\n+ qr/autoprewarm successfully prewarmed 0 of 1 previously-loaded blocks/);\n\nis certainly a red flag that little of interest happened.\n\nKeep in mind also that the logfile accumulates over stops and\nrestarts. As you've coded this test, you don't know which DB start\nemitted the matching line, so the test proves a lot less than it\nought to.\n\nI wonder also about race conditions. On fast machines, or those\nwith weird schedulers, the test script might reach slurp_file\nbefore autoprewarm has had a chance to emit the log entry you want.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 30 Jul 2022 14:25:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add test of pg_prewarm extension" }, { "msg_contents": "Hi,\nFirst of all, thank you for your feedback.\n\nOn Sun, Jul 31, 2022 at 3:25 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Wed, Jun 29, 2022 at 02:38:12PM +0900, Dong Wook Lee wrote:\n> >> I wrote a test for pg_prewarm extension. 
and I wrote it with the aim of improving test coverage, and feedback is always welcome.\n>\n> > The test fails when USE_PREFETCH isn't defined.\n> > You can accommodate that by adding an \"alternate\" output file, named like\n> > pg_prewarm_0.out\n>\n> FWIW, I'd tend to just not bother exercising the prefetch case.\n> It doesn't seem worth maintaining an alternate expected-file for that,\n> since it's not meaningfully different from the other code paths\n> as far as this code is concerned, and testing PrefetchBuffer itself\n> isn't the responsibility of this test.\n>\n> I tried this patch locally and was disturbed to see that the\n> code coverage of autoprewarm.c is still very low. It looks like\n> apw_load_buffers never reaches any of the actual prewarming code,\n> because it never successfully opens AUTOPREWARM_FILE. This seems a\n> bit odd to me, but maybe it's because you start and immediately stop\n> the database without causing it to do anything that would populate\n> shared buffers? This bit:\n>\n> +ok ($logfile =~\n> + qr/autoprewarm successfully prewarmed 0 of 1 previously-loaded blocks/);\n>\n> is certainly a red flag that little of interest happened.\n\nI think it was because I didn't have much data either.\nAfter adding data, coverage increased significantly. (11.6% -> 73.6%)\n\n>\n> Keep in mind also that the logfile accumulates over stops and\n> restarts. As you've coded this test, you don't know which DB start\n> emitted the matching line, so the test proves a lot less than it\n> ought to.\n>\n> I wonder also about race conditions. 
On fast machines, or those\n> with weird schedulers, the test script might reach slurp_file\n> before autoprewarm has had a chance to emit the log entry you want.\n\nI have no idea how to deal with race conditions.\nDoes anybody know how to deal with this?", "msg_date": "Mon, 1 Aug 2022 15:27:54 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add test of pg_prewarm extension" }, { "msg_contents": "On Mon, Aug 1, 2022 at 5:16 PM Dong Wook Lee <sh95119@gmail.com> wrote:\n>\n> > Keep in mind also that the logfile accumulates over stops and\n> > restarts. As you've coded this test, you don't know which DB start\n> > emitted the matching line, so the test proves a lot less than it\n> > ought to.\n> >\n> > I wonder also about race conditions. On fast machines, or those\n> > with weird schedulers, the test script might reach slurp_file\n> > before autoprewarm has had a chance to emit the log entry you want.\n>\n> I have no idea how to deal with race conditions.\n> Does anybody know how to deal with this?\n\nCouldn't you use $node->wait_for_log() instead?", "msg_date": "Mon, 1 Aug 2022 17:55:40 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add test of pg_prewarm extension" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Aug 1, 2022 at 5:16 PM Dong Wook Lee <sh95119@gmail.com> wrote:\n>> I have no idea how to deal with race conditions.\n>> Does anybody know how to deal with this?\n\n> Couldn't you use $node->wait_for_log() instead?\n\nYeah. 
The standard usage pattern for that also covers the issue\nof not re-examining prior chunks of the log.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 01 Aug 2022 10:27:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add test of pg_prewarm extension" }, { "msg_contents": "Thank you for letting me know.\nI edited my patch with `wait_for_log()`.\n\nOn Mon, Aug 1, 2022 at 6:55 PM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Aug 1, 2022 at 5:16 PM Dong Wook Lee <sh95119@gmail.com> wrote:\n> >\n> > > Keep in mind also that the logfile accumulates over stops and\n> > > restarts. As you've coded this test, you don't know which DB start\n> > > emitted the matching line, so the test proves a lot less than it\n> > > ought to.\n> > >\n> > > I wonder also about race conditions. On fast machines, or those\n> > > with weird schedulers, the test script might reach slurp_file\n> > > before autoprewarm has had a chance to emit the log entry you want.\n> >\n> > I have no idea how to deal with race conditions.\n> > Does anybody know how to deal with this?\n>\n> Couldn't you use $node->wait_for_log() instead?", "msg_date": "Mon, 1 Aug 2022 23:33:38 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add test of pg_prewarm extension" }, { "msg_contents": "On Mon, Aug 1, 2022 at 11:33 PM, Dong Wook Lee <sh95119@gmail.com> wrote:\n>\n> Thank you for letting me know.\n> I edited my patch with `wait_for_log()`.\n>\n> On Mon, Aug 1, 2022 at 6:55 PM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, Aug 1, 2022 at 5:16 PM Dong Wook Lee <sh95119@gmail.com> wrote:\n> > >\n> > > > Keep in mind also that the logfile accumulates over stops and\n> > > > restarts. As you've coded this test, you don't know which DB start\n> > > > emitted the matching line, so the test proves a lot less than it\n> > > > ought to.\n> > > >\n> > > > I wonder also about race conditions. 
On fast machines, or those\n> > > > with weird schedulers, the test script might reach slurp_file\n> > > > before autoprewarm has had a chance to emit the log entry you want.\n> > >\n> > > I have no idea how to deal with race conditions.\n> > > Does anybody know how to deal with this?\n> >\n> > Couldn't you use $node->wait_for_log() instead?\n\nPlease forgive my carelessness.\nAfter trimming the code a little more, I sent the patch again.", "msg_date": "Mon, 1 Aug 2022 23:53:35 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add test of pg_prewarm extenion" }, { "msg_contents": "Dong Wook Lee <sh95119@gmail.com> writes:\n>>> Couldn't you use $node->wait_for_log() instead?\n\n> After trimming the code a little more, I sent the patch again.\n\nThis is much better, but still has some issues:\n\n* The prefetch test might as well not be there, because\ncheck_pg_config(\"#USE_PREFETCH 1\") will never succeed: there is no\nsuch string in pg_config.h. I don't actually see any good\n(future-proof) way to determine whether USE_PREFETCH is enabled from\nthe available configuration data. After some thought I concluded we\ncould just try the function and accept either success or \"prefetch is\nnot supported by this build\".\n\n* The script had no actual tests, so far as Test::More is concerned.\nI'm not sure that that causes any real problems, but I reformulated\nthe pg_prewarm() tests to verify that a sane-looking result is\nreturned.\n\n* I also added a test of autoprewarm_dump_now(), just to get the\nline coverage count over the magic 75% figure.\n\n* You left out a .gitignore file.\n\nI made some other cosmetic changes (mostly, running it through\npgperltidy) and pushed it. Thanks for the patch!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Aug 2022 18:09:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add test of pg_prewarm extenion" } ]
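The thread above converges on `$node->wait_for_log()` for exactly the two problems Tom Lane raises: the log file accumulates across restarts, and a fast test script can outrun the server. The helper's idea — record the log's size before triggering the action, then poll only the bytes appended after that offset — can be sketched in Python (the file name and log text below are made up for illustration; the real helper is the Perl method in PostgreSQL's TAP framework, not this code):

```python
import os
import re
import tempfile
import time

def wait_for_log(logfile, pattern, offset=0, timeout=5.0, interval=0.05):
    """Poll `logfile` until `pattern` matches in the bytes written at or
    after `offset`; return True on a match, False on timeout."""
    compiled = re.compile(pattern)
    deadline = time.monotonic() + timeout
    while True:
        with open(logfile, "r") as f:
            f.seek(offset)  # skip everything logged before the action
            if compiled.search(f.read()):
                return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Simulate a log that already holds a matching-looking line from an
# earlier server start.
log = os.path.join(tempfile.mkdtemp(), "postmaster.log")
with open(log, "w") as f:
    f.write("old start: autoprewarm successfully prewarmed 3 blocks\n")

# Record the offset *before* the action under test, then let the
# "server" append its line; only that new line can satisfy the wait.
offset = os.path.getsize(log)
with open(log, "a") as f:
    f.write("new start: autoprewarm successfully prewarmed 7 blocks\n")

print(wait_for_log(log, r"prewarmed 7 blocks", offset))
```

Recording the offset first is what removes both complaints at once: a match can only come from bytes written after that point, and the loop simply waits until the server gets around to writing them instead of racing it with a one-shot `slurp_file`.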
[ { "msg_contents": "Hi Hackers,\nI just wrote test about pg_rowlocks extension.\nI added sql and spec test for locking state.\n\n---\nRegards\nDongWook Lee", "msg_date": "Wed, 29 Jun 2022 14:53:26 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "add test: pg_rowlocks extension" }, { "msg_contents": "Dong Wook Lee <sh95119@gmail.com> writes:\n> I just wrote test about pg_rowlocks extension.\n> I added sql and spec test for locking state.\n\nI think this could be cut down quite a bit. Do we really need\nboth a SQL test and an isolation test? Seems like you could\neasily do everything in the isolation test.\n\nAlso, it is not a good idea to go creating superusers in a contrib\ntest: we support \"make installcheck\" for these tests, but people don't\nespecially like new superusers cropping up in their installations.\nI doubt that we need *any* of the permissions-ish tests that you\npropose adding here; those are not part of the module's own\nfunctionality, and we don't generally have similar tests in other\ncontrib modules.\n\nIf you do keep any of it, remember to drop the roles you create ---\nleaving global objects behind is not OK. 
(For one thing, it\nbreaks doing repeat \"make installcheck\"s.)\n\nAnother thing that's bad style is the \"drop table if exists\".\nThis should be running in an empty database, and if somehow it's\nnot, destroying pre-existing objects would be pretty unfriendly.\nBetter to fail at the CREATE.\n\nSee also my comments about your pg_buffercache patch, which\nlargely apply here too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 30 Jul 2022 17:32:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: add test: pg_rowlocks extension" }, { "msg_contents": "On Sun, Jul 31, 2022 at 6:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dong Wook Lee <sh95119@gmail.com> writes:\n> > I just wrote test about pg_rowlocks extension.\n> > I added sql and spec test for locking state.\n>\n> I think this could be cut down quite a bit. Do we really need\n> both a SQL test and an isolation test? Seems like you could\n> easily do everything in the isolation test.\n\nI agree with your opinion.\n\n> Also, it is not a good idea to go creating superusers in a contrib\n> test: we support \"make installcheck\" for these tests, but people don't\n> especially like new superusers cropping up in their installations.\n> I doubt that we need *any* of the permissions-ish tests that you\n> propose adding here; those are not part of the module's own\n> functionality, and we don't generally have similar tests in other\n> contrib modules.\n\nI agree it's right to remove that part.\n\n> If you do keep any of it, remember to drop the roles you create ---\n> leaving global objects behind is not OK. (For one thing, it\n> breaks doing repeat \"make installcheck\"s.)\n>\n> Another thing that's bad style is the \"drop table if exists\".\n> This should be running in an empty database, and if somehow it's\n> not, destroying pre-existing objects would be pretty unfriendly.\n> Better to fail at the CREATE.\n\nThank you for the good explanation. 
It will be very helpful to write a\ntest in the future.\n\n> See also my comments about your pg_buffercache patch, which\n> largely apply here too.\nOK. I will add the `.gitignore` file.\n\nI will revise my patch and submit it again as soon as possible.\n\n\n", "msg_date": "Tue, 2 Aug 2022 11:32:10 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add test: pg_rowlocks extension" }, { "msg_contents": "I modified my previous patch by reflecting the feedback.\nand I wrote most of the queries for the test after looking at the file below.\n\n- ref: (https://github.com/postgres/postgres/blob/master/src/test/isolation/specs/tuplelock-conflict.spec)\n\nThe coverage of the test is approximately 81.5%.\n\nIf there is any problem, I would appreciate it if you let me know anytime.\nThank you always for your kind reply.", "msg_date": "Tue, 2 Aug 2022 21:03:58 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add test: pg_rowlocks extension" }, { "msg_contents": "Dong Wook Lee <sh95119@gmail.com> writes:\n> I modified my previous patch by reflecting the feedback.\n> and I wrote most of the queries for the test after looking at the file below.\n\nPushed with some revisions. Notably, I didn't see any point in\nrepeating each test case four times, so I trimmed it down to once\nper case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Sep 2022 15:07:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: add test: pg_rowlocks extension" }, { "msg_contents": "On Fri, Sep 2, 2022 at 4:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Pushed with some revisions. 
Notably, I didn't see any point in\n> repeating each test case four times, so I trimmed it down to once\n> per case.\n\nI checked it.\nThank you for correcting it in a better way.\n\n\n", "msg_date": "Fri, 2 Sep 2022 18:34:58 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add test: pg_rowlocks extension" } ]
[ { "msg_contents": "Hi,\n\nchipmunk (an armv6l-powered original Raspberry Pi model 1?) has failed\nin a couple of weird ways recently on 14 and master.\n\nOn 14 I see what appears to be a corrupted log file name:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2022-06-16%2006%3A48%3A07\n\ncp: cannot stat\n\\342\\200\\230/home/pgbfarm/buildroot/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_primary_data/archives/000000010000000000000003\\342\\200\\231:\nNo such file or directory\n\nOn master, you can ignore this failure, because it was addressed by 93759c66:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2022-05-11%2015%3A26%3A01\n\nThen there's this one-off, that smells like WAL corruption:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2022-06-13%2015%3A12%3A44\n\n2022-06-13 23:02:06.988 EEST [30121:5] LOG: incorrect resource\nmanager data checksum in record at 0/79B4FE0\n\nHmmm. I suppose it's remotely possible that Linux/armv6l ext4 suffers\nfrom concurrency bugs like Linux/sparc. In that particular kernel\nbug's case it's zeroes, so I guess it'd be easier to speculate about\nif the log message included the checksum when it fails like that...\n\n\n", "msg_date": "Thu, 30 Jun 2022 10:07:18 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Strange failures on chipmunk" }, { "msg_contents": "On Thu, Jun 30, 2022 at 10:07:18AM +1200, Thomas Munro wrote:\n> Then there's this one-off, that smells like WAL corruption:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2022-06-13%2015%3A12%3A44\n> \n> 2022-06-13 23:02:06.988 EEST [30121:5] LOG: incorrect resource\n> manager data checksum in record at 0/79B4FE0\n> \n> Hmmm. 
I suppose it's remotely possible that Linux/armv6l ext4 suffers\n> from concurrency bugs like Linux/sparc.\n\nRunning sparc64-ext4-zeros.c from\nhttps://marc.info/?l=linux-sparc&m=164539269632667&w=2 could confirm that\npossibility.\n\n\n", "msg_date": "Wed, 29 Jun 2022 23:31:55 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Strange failures on chipmunk" }, { "msg_contents": "On 30/06/2022 09:31, Noah Misch wrote:\n> On Thu, Jun 30, 2022 at 10:07:18AM +1200, Thomas Munro wrote:\n>> Then there's this one-off, that smells like WAL corruption:\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2022-06-13%2015%3A12%3A44\n>>\n>> 2022-06-13 23:02:06.988 EEST [30121:5] LOG: incorrect resource\n>> manager data checksum in record at 0/79B4FE0\n>>\n>> Hmmm. I suppose it's remotely possible that Linux/armv6l ext4 suffers\n>> from concurrency bugs like Linux/sparc.\n> \n> Running sparc64-ext4-zeros.c from\n> https://marc.info/?l=linux-sparc&m=164539269632667&w=2 could confirm that\n> possibility.\n\nI ran sparc64-ext4-zeros on chipmunk for 10 minutes, and it didn't print \nanything.\n\nIt's possible that the SD card on chipmunk is simply wearing out and \nflipping bits. I can try to replace it. Anyone have suggestions on a \ntest program I could run on the SD card, after replacing it, to verify \nif it was indeed worn out?\n\n- Heikki\n\n\n", "msg_date": "Thu, 30 Jun 2022 11:21:18 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Strange failures on chipmunk" }, { "msg_contents": "On Thu, Jun 30, 2022 at 8:21 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I ran sparc64-ext4-zeros on chipmunk for 10 minutes, and it didn't print\n> anything.\n\nThanks for checking.\n\n> It's possible that the SD card on chipmunk is simply wearing out and\n> flipping bits. I can try to replace it. 
Anyone have suggestions on a\n> test program I could run on the SD card, after replacing it, to verify\n> if it was indeed worn out?\n\nBTW its disk is full.\n\nFWIW I run RPi4 build bots on higher end USB3.x sticks (SanDisk\nExtreme Pro, I'm sure there are others), and the performance is orders\nof magnitude higher and more consistent than the micro SD and\ncheap/random USB sticks I tried. Admittedly they cost more than the\nRPi4 board themselves (back when you could get them).\n\nI noticed another (presumed) Raspberry Pi apparently behaving\nstrangely at the storage level (guessing it's a Pi by the armv7l\narchitecture): dangomushi appears to get files mixed up. Here it is\ntrying to compile a log file last week:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dangomushi&dt=2022-07-14%2017%3A58%3A38\n\nAnd the week before it tried to compile some Perl:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dangomushi&dt=2022-07-09%2015%3A30%3A07\n\n\n", "msg_date": "Fri, 22 Jul 2022 16:35:30 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange failures on chipmunk" }, { "msg_contents": "On Fri, Jul 22, 2022 at 04:35:30PM +1200, Thomas Munro wrote:\n> I noticed another (presumed) Raspberry Pi apparently behaving\n> strangely at the storage level (guessing it's a Pi by the armv7l\n> architecture): dangomushi appears to get files mixed up. Here it is\n> trying to compile a log file last week:\n\nThis is a PI2.\n\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dangomushi&dt=2022-07-14%2017%3A58%3A38\n> \n> And the week before it tried to compile some Perl:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dangomushi&dt=2022-07-09%2015%3A30%3A07\n\nThe buildfarm runs are part of a SD card that's been running for a\ncouple of years now, so I would not be surprised that the issue comes\nfrom the years using it. 
A couple of fsck's did not turn up anything,\nthough, but I am keeping an eye on it.\n--\nMichael", "msg_date": "Fri, 22 Jul 2022 15:23:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Strange failures on chipmunk" } ]
[ { "msg_contents": "Hello,\n\nWhile checking the regression tests of TRUNCATE on foreign\ntables for another patch [1], I found that there is no test\nfor foreign tables that don't support TRUNCATE. \n\nWhen a foreign table has a handler but doesn't support TRUNCATE,\nan error \"cannot truncate foreign table xxx\" occurs. So, what\nabout adding a test for this message output? We can add this test\nfor file_fdw because it is one such foreign data wrapper.\n\nI attached a patch.\n\n[1] https://postgr.es/m/20220527172543.0a2fdb469cf048b81c0967d3@sraoss.co.jp\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 30 Jun 2022 10:48:12 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Add a test for \"cannot truncate foreign table\"" }, { "msg_contents": "\n\nOn 2022/06/30 10:48, Yugo NAGATA wrote:\n> Hello,\n> \n> While checking the regression tests of TRUNCATE on foreign\n> tables for another patch [1], I found that there is no test\n> for foreign tables that don't support TRUNCATE.\n> \n> When a foreign table has a handler but doesn't support TRUNCATE,\n> an error \"cannot truncate foreign table xxx\" occurs. So, what\n> about adding a test for this message output? We can add this test\n> for file_fdw because it is one such foreign data wrapper.\n> \n> I attached a patch.\n\nThanks for the patch! 
It looks good to me.\nI changed the status of this patch to ready-for-committer,\nand will commit it barring any objection.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 8 Jul 2022 00:25:24 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add a test for \"cannot truncate foreign table\"" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2022/06/30 10:48, Yugo NAGATA wrote:\n>> When a foreign table has handler but doesn't support TRUNCATE,\n>> an error \"cannot truncate foreign table xxx\" occurs. So, what\n>> about adding a test this message output? We can add this test\n>> for file_fdw because it is one of the such foreign data wrappers.\n\n> Thanks for the patch! It looks good to me.\n> I changed the status of this patch to ready-for-committer,\n> and will commit it barring any objection.\n\nThis seems like a fairly pointless expenditure of test cycles\nto me. Perhaps more importantly, what will you do when\nsomebody adds truncate support to that FDW? I don't think\nthere's an inherent reason for it to be read-only.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Jul 2022 11:33:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add a test for \"cannot truncate foreign table\"" }, { "msg_contents": "\n\nOn 2022/07/08 0:33, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> On 2022/06/30 10:48, Yugo NAGATA wrote:\n>>> When a foreign table has handler but doesn't support TRUNCATE,\n>>> an error \"cannot truncate foreign table xxx\" occurs. So, what\n>>> about adding a test this message output? We can add this test\n>>> for file_fdw because it is one of the such foreign data wrappers.\n> \n>> Thanks for the patch! 
It looks good to me.\n>> I changed the status of this patch to ready-for-committer,\n>> and will commit it barring any objection.\n> \n> This seems like a fairly pointless expenditure of test cycles\n> to me. Perhaps more importantly, what will you do when\n> somebody adds truncate support to that FDW?\n\nOne idea is to create dummy FDW (like foreign_data.sql regression test does) not supporting TRUNCATE and use it for the test.\n\nBTW, file_fdw already has the similar test cases for INSERT, UPDATE and DELETE, as follows.\n\n-- updates aren't supported\nINSERT INTO agg_csv VALUES(1,2.0);\nERROR: cannot insert into foreign table \"agg_csv\"\nUPDATE agg_csv SET a = 1;\nERROR: cannot update foreign table \"agg_csv\"\nDELETE FROM agg_csv WHERE a = 100;\nERROR: cannot delete from foreign table \"agg_csv\"\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 8 Jul 2022 01:06:18 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add a test for \"cannot truncate foreign table\"" }, { "msg_contents": "At Fri, 8 Jul 2022 01:06:18 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2022/07/08 0:33, Tom Lane wrote:\n> > Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> >> On 2022/06/30 10:48, Yugo NAGATA wrote:\n> >>> When a foreign table has handler but doesn't support TRUNCATE,\n> >>> an error \"cannot truncate foreign table xxx\" occurs. So, what\n> >>> about adding a test this message output? We can add this test\n> >>> for file_fdw because it is one of the such foreign data wrappers.\n> > \n> >> Thanks for the patch! It looks good to me.\n> >> I changed the status of this patch to ready-for-committer,\n> >> and will commit it barring any objection.\n> > This seems like a fairly pointless expenditure of test cycles\n> > to me. 
Perhaps more importantly, what will you do when\n> > somebody adds truncate support to that FDW?\n\nAs Fujii-san mentioned below, file_fdw has tests for INSERT/UPDATE and\nDELETE. If somebody added DELETE to file_fdw, the test for DELETE\nrejection would be turned into a normal test of the DELETE function.\nI don't see a difference between TRUNCATE and other updating commands\nfrom this point of view.\n\n> One idea is to create dummy FDW (like foreign_data.sql regression test\n> does) not supporting TRUNCATE and use it for the test.\n\nI think the proposed test is not that for FDW framework, but for a\nspecific FDW module, file_fdw.\n\n> BTW, file_fdw already has the similar test cases for INSERT, UPDATE\n> and DELETE, as follows.\n> \n> -- updates aren't supported\n> INSERT INTO agg_csv VALUES(1,2.0);\n> ERROR: cannot insert into foreign table \"agg_csv\"\n> UPDATE agg_csv SET a = 1;\n> ERROR: cannot update foreign table \"agg_csv\"\n> DELETE FROM agg_csv WHERE a = 100;\n> ERROR: cannot delete from foreign table \"agg_csv\"\n\nAgreed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 Jul 2022 09:44:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add a test for \"cannot truncate foreign table\"" }, { "msg_contents": "On Fri, 08 Jul 2022 09:44:10 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Fri, 8 Jul 2022 01:06:18 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > \n> > \n> > On 2022/07/08 0:33, Tom Lane wrote:\n> > > Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> > >> On 2022/06/30 10:48, Yugo NAGATA wrote:\n> > >>> When a foreign table has handler but doesn't support TRUNCATE,\n> > >>> an error \"cannot truncate foreign table xxx\" occurs. So, what\n> > >>> about adding a test this message output? 
We can add this test\n> > >>> for file_fdw because it is one of the such foreign data wrappers.\n> > > \n> > >> Thanks for the patch! It looks good to me.\n> > >> I changed the status of this patch to ready-for-committer,\n> > >> and will commit it barring any objection.\n> > > This seems like a fairly pointless expenditure of test cycles\n> > > to me. Perhaps more importantly, what will you do when\n> > > somebody adds truncate support to that FDW?\n> \n> As Fujii-san mentioned below, file_fdw has tests for INSERT/UPDATE and\n> DELETE. If somebody added DELETE to file_fdw, the test for DELETE\n> rejection would be turned into a normal test of the DELETE function.\n> I don't see a difference between TRUNCATE and other updating commands\n> from this point of view.\n> \n> > One idea is to create dummy FDW (like foreign_data.sql regression test\n> > does) not supporting TRUNCATE and use it for the test.\n> \n> I think the proposed test is not that for FDW framework, but for a\n> specific FDW module, file_fdw.\n\nYes, the patch is an improvement for the test of file_fdw. \n\nIf we want to test foreign table modifications for the FDW framework, \nwe will have to add such tests in foreign_data.sql, because foreign\ntable modifications are tested only for postgres_fdw and file_fdw. 
\n\n> > BTW, file_fdw already has the similar test cases for INSERT, UPDATE\n> > and DELETE, as follows.\n> > \n> > -- updates aren't supported\n> > INSERT INTO agg_csv VALUES(1,2.0);\n> > ERROR: cannot insert into foreign table \"agg_csv\"\n> > UPDATE agg_csv SET a = 1;\n> > ERROR: cannot update foreign table \"agg_csv\"\n> > DELETE FROM agg_csv WHERE a = 100;\n> > ERROR: cannot delete from foreign table \"agg_csv\"\n> \n> Agreed.\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 8 Jul 2022 11:07:44 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Add a test for \"cannot truncate foreign table\"" }, { "msg_contents": "On Fri, Jul 8, 2022 at 11:07 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> On Fri, 08 Jul 2022 09:44:10 +0900 (JST)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > At Fri, 8 Jul 2022 01:06:18 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> > > On 2022/07/08 0:33, Tom Lane wrote:\n\n> > > >> On 2022/06/30 10:48, Yugo NAGATA wrote:\n> > > >>> When a foreign table has handler but doesn't support TRUNCATE,\n> > > >>> an error \"cannot truncate foreign table xxx\" occurs. So, what\n> > > >>> about adding a test this message output? We can add this test\n> > > >>> for file_fdw because it is one of the such foreign data wrappers.\n\n> > > > This seems like a fairly pointless expenditure of test cycles\n> > > > to me. Perhaps more importantly, what will you do when\n> > > > somebody adds truncate support to that FDW?\n\n> > As Fujii-san mentioned below, file_fdw has tests for INSERT/UPDATE and\n> > DELETE. 
If somebody added DELETE to file_fdw, the test for DELETE\n> > rejection would be turned into a normal test of the DELETE function.\n> > I don't see a difference between TRUNCATE and other updating commands\n> > from this point of view.\n\nI agree on this point.\n\n> > > One idea is to create dummy FDW (like foreign_data.sql regression test\n> > > does) not supporting TRUNCATE and use it for the test.\n\n> If we want to test foreign table modifications for the FDW framework,\n> we will have to add such tests in foreign_data.sql, because foreign\n> table modifications are tested only for postgres_fdw and file_fdw.\n\nRather than doing so, I'd vote for adding a test case to file_fdw, as\nproposed in the patch, because that would be much simpler and much\nless expensive.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 8 Jul 2022 17:03:51 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add a test for \"cannot truncate foreign table\"" }, { "msg_contents": "\n\nOn 2022/07/08 17:03, Etsuro Fujita wrote:\n> Rather than doing so, I'd vote for adding a test case to file_fdw, as\n> proposed in the patch, because that would be much simpler and much\n> less expensive.\n\nSo ISTM that most agreed to push Nagata-san's patch adding the test for TRUNCATE on foreign table with file_fdw. 
So barring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 15 Jul 2022 16:52:13 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add a test for \"cannot truncate foreign table\"" }, { "msg_contents": "\n\nOn 2022/07/15 16:52, Fujii Masao wrote:\n> \n> \n> On 2022/07/08 17:03, Etsuro Fujita wrote:\n>> Rather than doing so, I'd vote for adding a test case to file_fdw, as\n>> proposed in the patch, because that would be much simpler and much\n>> less expensive.\n> \n> So ISTM that most agreed to push Nagata-san's patch adding the test for TRUNCATE on foreign table with file_fdw. So barring any objection, I will commit the patch.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 20 Jul 2022 09:38:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add a test for \"cannot truncate foreign table\"" }, { "msg_contents": "On Wed, 20 Jul 2022 09:38:17 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2022/07/15 16:52, Fujii Masao wrote:\n> > \n> > \n> > On 2022/07/08 17:03, Etsuro Fujita wrote:\n> >> Rather than doing so, I'd vote for adding a test case to file_fdw, as\n> >> proposed in the patch, because that would be much simpler and much\n> >> less expensive.\n> > \n> > So ISTM that most agreed to push Nagata-san's patch adding the test for TRUNCATE on foreign table with file_fdw. So barring any objection, I will commit the patch.\n> \n> Pushed. 
Thanks!\n\nThanks!\n\n> \n> Regards,\n> \n> -- \n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 20 Jul 2022 09:50:54 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Add a test for \"cannot truncate foreign table\"" } ]
[ { "msg_contents": "Hi,\n\nI found that the assertion failure and the segmentation fault could\nhappen by running pg_backup_start(), pg_backup_stop() and BASE_BACKUP\nreplication command, in v15 or before.\n\nHere is the procedure to reproduce the assertion failure.\n\n1. Connect to the server as the REPLICATION user who is granted\n EXECUTE to run pg_backup_start() and pg_backup_stop().\n\n $ psql\n =# CREATE ROLE foo REPLICATION LOGIN;\n =# GRANT EXECUTE ON FUNCTION pg_backup_start TO foo;\n =# GRANT EXECUTE ON FUNCTION pg_backup_stop TO foo;\n =# \\q\n\n $ psql \"replication=database user=foo dbname=postgres\"\n\n2. Run pg_backup_start() and pg_backup_stop().\n\n => SELECT pg_backup_start('test', true);\n => SELECT pg_backup_stop();\n\n3. Run BASE_BACKUP replication command with smaller MAX_RATE so that\n it can take a long time to finish.\n\n => BASE_BACKUP (CHECKPOINT 'fast', MAX_RATE 32);\n\n4. Terminate the replication connection while it's running BASE_BACKUP.\n\n $ psql\n =# SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE backend_type = 'walsender';\n\nThis procedure can cause the following assertion failure.\n\nTRAP: FailedAssertion(\"XLogCtl->Insert.runningBackups > 0\", File: \"xlog.c\", Line: 8779, PID: 69434)\n0 postgres 0x000000010ab2ff7f ExceptionalCondition + 223\n1 postgres 0x000000010a455126 do_pg_abort_backup + 102\n2 postgres 0x000000010a8e13aa shmem_exit + 218\n3 postgres 0x000000010a8e11ed proc_exit_prepare + 125\n4 postgres 0x000000010a8e10f3 proc_exit + 19\n5 postgres 0x000000010ab3171c errfinish + 1100\n6 postgres 0x000000010a91fa80 ProcessInterrupts + 1376\n7 postgres 0x000000010a886907 throttle + 359\n8 postgres 0x000000010a88675d bbsink_throttle_archive_contents + 29\n9 postgres 0x000000010a885aca bbsink_archive_contents + 154\n10 postgres 0x000000010a885a2a bbsink_forward_archive_contents + 218\n11 postgres 0x000000010a884a99 bbsink_progress_archive_contents + 89\n12 postgres 0x000000010a881aba bbsink_archive_contents + 
154\n13 postgres 0x000000010a881598 sendFile + 1816\n14 postgres 0x000000010a8806c5 sendDir + 3573\n15 postgres 0x000000010a8805d9 sendDir + 3337\n16 postgres 0x000000010a87e262 perform_base_backup + 1250\n17 postgres 0x000000010a87c734 SendBaseBackup + 500\n18 postgres 0x000000010a89a7f8 exec_replication_command + 1144\n19 postgres 0x000000010a92319a PostgresMain + 2154\n20 postgres 0x000000010a82b702 BackendRun + 50\n21 postgres 0x000000010a82acfc BackendStartup + 524\n22 postgres 0x000000010a829b2c ServerLoop + 716\n23 postgres 0x000000010a827416 PostmasterMain + 6470\n24 postgres 0x000000010a703e19 main + 809\n25 libdyld.dylib 0x00007fff2072ff3d start + 1\n\n\nHere is the procedure to reproduce the segmentation fault.\n\n1. Connect to the server as the REPLICATION user who is granted\n EXECUTE to run pg_backup_stop().\n\n $ psql\n =# CREATE ROLE foo REPLICATION LOGIN;\n =# GRANT EXECUTE ON FUNCTION pg_backup_stop TO foo;\n =# \\q\n\n $ psql \"replication=database user=foo dbname=postgres\"\n\n2. Run BASE_BACKUP replication command with smaller MAX_RATE so that\n it can take a long time to finish.\n\n => BASE_BACKUP (CHECKPOINT 'fast', MAX_RATE 32);\n\n3. Press Ctrl-C to cancel BASE_BACKUP while it's running.\n\n4. Run pg_backup_stop().\n\n => SELECT pg_backup_stop();\n\nThis procedure can cause the following segmentation fault.\n\n LOG: server process (PID 69449) was terminated by signal 11: Segmentation fault: 11\n DETAIL: Failed process was running: SELECT pg_backup_stop();\n\n\nThe root cause of these failures seems that sessionBackupState flag\nis not reset to SESSION_BACKUP_NONE even when BASE_BACKUP is aborted.\nSo attached patch changes do_pg_abort_backup callback so that\nit resets sessionBackupState. 
I confirmed that, with the patch,\nthose assertion failure and segmentation fault didn't happen.\n\nBut this change has one issue that; if BASE_BACKUP is run while\na backup is already in progress in the session by pg_backup_start()\nand that session is terminated, the change causes XLogCtl->Insert.runningBackups\nto be decremented incorrectly. That is, XLogCtl->Insert.runningBackups\nis incremented by two by pg_backup_start() and BASE_BACKUP,\nbut it's decremented only by one by the termination of the session.\n\nTo address this issue, I think that we should disallow BASE_BACKUP\nto run while a backup is already in progress in the *same* session\nas we already do this for pg_backup_start(). Thought? I included\nthe code to disallow that in the attached patch.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 30 Jun 2022 12:28:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "At Thu, 30 Jun 2022 12:28:43 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> The root cause of these failures seems that sessionBackupState flag\n> is not reset to SESSION_BACKUP_NONE even when BASE_BACKUP is aborted.\n> So attached patch changes do_pg_abort_backup callback so that\n> it resets sessionBackupState. I confirmed that, with the patch,\n> those assertion failure and segmentation fault didn't happen.\n> \n> But this change has one issue that; if BASE_BACKUP is run while\n> a backup is already in progress in the session by pg_backup_start()\n> and that session is terminated, the change causes\n> XLogCtl->Insert.runningBackups\n> to be decremented incorrectly. 
That is, XLogCtl->Insert.runningBackups\n> is incremented by two by pg_backup_start() and BASE_BACKUP,\n> but it's decremented only by one by the termination of the session.\n> \n> To address this issue, I think that we should disallow BASE_BACKUP\n> to run while a backup is already in progress in the *same* session\n> as we already do this for pg_backup_start(). Thought? I included\n> the code to disallow that in the attached patch.\n\nIt seems to me that the root cause is that the callback is registered\ntwice. The callback does not expect to be called more than once (at\nleast per one increment of runningBackups).\n\nregister_persistent_abort_backup_handler() prevents duplicate\nregistration of the callback so I think perform_base_backup should use\nthis function instead of protecting it with the PG_*_ERROR_CLEANUP()\nsection.\n\nPlease find the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 01 Jul 2022 11:46:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "At Fri, 01 Jul 2022 11:46:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Please find the attached.\n\nMmm. It forgot the duplicate-call prevention and query-cancel\nhandling... 
The first one is the same as you posted but the second one\nis still a problem..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 01 Jul 2022 11:56:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "At Fri, 01 Jul 2022 11:56:14 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 01 Jul 2022 11:46:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Please find the attached.\n> \n> Mmm. It forgot the duplicate-call prevention and query-cancel\n> handling... The first one is the same as you posted but the second one\n> is still a problem..\n\nSo this is the first cut of that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 01 Jul 2022 12:05:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "\n\nOn 2022/07/01 12:05, Kyotaro Horiguchi wrote:\n> At Fri, 01 Jul 2022 11:56:14 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> At Fri, 01 Jul 2022 11:46:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> Please find the attached.\n>>\n>> Mmm. It forgot the duplicate-call prevention and query-cancel\n>> handling... 
The first one is the same as you posted but the second one\n>> is still a problem..\n> \n> So this is the first cut of that.\n\nThanks for reviewing the patch!\n\n+\tPG_FINALLY();\n+\t{\n \t\tendptr = do_pg_backup_stop(labelfile->data, !opt->nowait, &endtli);\n \t}\n-\tPG_END_ENSURE_ERROR_CLEANUP(do_pg_abort_backup, BoolGetDatum(false));\n-\n+\tPG_END_TRY();\n\nThis change makes perform_base_backup() call do_pg_backup_stop() even when an error is reported while taking a backup, i.e., between PG_TRY() and PG_FINALLY(). Why do_pg_backup_stop() needs to be called in such an error case? It not only cleans up the backup state but also writes the backup-end WAL record, waits for WAL archiving. In an error case, I think that only the cleanup of the backup state is necessary. So it seems ok to use do_pg_abort_backup() in that case, as it is for now.\n\nSo I'm still thinking that the patch I posted is simpler and enough.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 1 Jul 2022 14:02:14 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 30, 2022 at 12:29 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> I found that the assertion failure and the segmentation fault could\n> happen by running pg_backup_start(), pg_backup_stop() and BASE_BACKUP\n> replication command, in v15 or before.\n>\n> Here is the procedure to reproduce the assertion failure.\n>\n> 1. 
Connect to the server as the REPLICATION user who is granted\n> EXECUTE to run pg_backup_start() and pg_backup_stop().\n>\n> $ psql\n> =# CREATE ROLE foo REPLICATION LOGIN;\n> =# GRANT EXECUTE ON FUNCTION pg_backup_start TO foo;\n> =# GRANT EXECUTE ON FUNCTION pg_backup_stop TO foo;\n> =# \\q\n>\n> $ psql \"replication=database user=foo dbname=postgres\"\n>\n> 2. Run pg_backup_start() and pg_backup_stop().\n>\n> => SELECT pg_backup_start('test', true);\n> => SELECT pg_backup_stop();\n>\n> 3. Run BASE_BACKUP replication command with smaller MAX_RATE so that\n> it can take a long time to finish.\n>\n> => BASE_BACKUP (CHECKPOINT 'fast', MAX_RATE 32);\n>\n> 4. Terminate the replication connection while it's running BASE_BACKUP.\n>\n> $ psql\n> =# SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE backend_type = 'walsender';\n>\n> This procedure can cause the following assertion failure.\n>\n> TRAP: FailedAssertion(\"XLogCtl->Insert.runningBackups > 0\", File: \"xlog.c\", Line: 8779, PID: 69434)\n> 0 postgres 0x000000010ab2ff7f ExceptionalCondition + 223\n> 1 postgres 0x000000010a455126 do_pg_abort_backup + 102\n> 2 postgres 0x000000010a8e13aa shmem_exit + 218\n> 3 postgres 0x000000010a8e11ed proc_exit_prepare + 125\n> 4 postgres 0x000000010a8e10f3 proc_exit + 19\n> 5 postgres 0x000000010ab3171c errfinish + 1100\n> 6 postgres 0x000000010a91fa80 ProcessInterrupts + 1376\n> 7 postgres 0x000000010a886907 throttle + 359\n> 8 postgres 0x000000010a88675d bbsink_throttle_archive_contents + 29\n> 9 postgres 0x000000010a885aca bbsink_archive_contents + 154\n> 10 postgres 0x000000010a885a2a bbsink_forward_archive_contents + 218\n> 11 postgres 0x000000010a884a99 bbsink_progress_archive_contents + 89\n> 12 postgres 0x000000010a881aba bbsink_archive_contents + 154\n> 13 postgres 0x000000010a881598 sendFile + 1816\n> 14 postgres 0x000000010a8806c5 sendDir + 3573\n> 15 postgres 0x000000010a8805d9 sendDir + 3337\n> 16 postgres 0x000000010a87e262 perform_base_backup + 1250\n> 
17 postgres 0x000000010a87c734 SendBaseBackup + 500\n> 18 postgres 0x000000010a89a7f8 exec_replication_command + 1144\n> 19 postgres 0x000000010a92319a PostgresMain + 2154\n> 20 postgres 0x000000010a82b702 BackendRun + 50\n> 21 postgres 0x000000010a82acfc BackendStartup + 524\n> 22 postgres 0x000000010a829b2c ServerLoop + 716\n> 23 postgres 0x000000010a827416 PostmasterMain + 6470\n> 24 postgres 0x000000010a703e19 main + 809\n> 25 libdyld.dylib 0x00007fff2072ff3d start + 1\n>\n>\n> Here is the procedure to reproduce the segmentation fault.\n>\n> 1. Connect to the server as the REPLICATION user who is granted\n> EXECUTE to run pg_backup_stop().\n>\n> $ psql\n> =# CREATE ROLE foo REPLICATION LOGIN;\n> =# GRANT EXECUTE ON FUNCTION pg_backup_stop TO foo;\n> =# \\q\n>\n> $ psql \"replication=database user=foo dbname=postgres\"\n>\n> 2. Run BASE_BACKUP replication command with smaller MAX_RATE so that\n> it can take a long time to finish.\n>\n> => BASE_BACKUP (CHECKPOINT 'fast', MAX_RATE 32);\n>\n> 3. Press Ctrl-C to cancel BASE_BACKUP while it's running.\n>\n> 4. Run pg_backup_stop().\n>\n> => SELECT pg_backup_stop();\n>\n> This procedure can cause the following segmentation fault.\n>\n> LOG: server process (PID 69449) was terminated by signal 11: Segmentation fault: 11\n> DETAIL: Failed process was running: SELECT pg_backup_stop();\n>\n>\n> The root cause of these failures seems that sessionBackupState flag\n> is not reset to SESSION_BACKUP_NONE even when BASE_BACKUP is aborted.\n> So attached patch changes do_pg_abort_backup callback so that\n> it resets sessionBackupState. I confirmed that, with the patch,\n> those assertion failure and segmentation fault didn't happen.\n\nThe change looks good to me. 
I've also confirmed the change fixed the issues.\n\n> But this change has one issue that; if BASE_BACKUP is run while\n> a backup is already in progress in the session by pg_backup_start()\n> and that session is terminated, the change causes XLogCtl->Insert.runningBackups\n> to be decremented incorrectly. That is, XLogCtl->Insert.runningBackups\n> is incremented by two by pg_backup_start() and BASE_BACKUP,\n> but it's decremented only by one by the termination of the session.\n>\n> To address this issue, I think that we should disallow BASE_BACKUP\n> to run while a backup is already in progress in the *same* session\n> as we already do this for pg_backup_start(). Thought? I included\n> the code to disallow that in the attached patch.\n\n+1\n\n@@ -233,6 +233,12 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)\n StringInfo labelfile;\n StringInfo tblspc_map_file;\n backup_manifest_info manifest;\n+ SessionBackupState status = get_backup_status();\n+\n+ if (status == SESSION_BACKUP_RUNNING)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"a backup is already in progress in this session\")));\n\nI think we can move it to the beginning of SendBaseBackup() so we can\navoid bbsink initialization and cleanup in the error case.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 1 Jul 2022 15:09:48 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "On 2022/07/01 15:09, Masahiko Sawada wrote:\n> The change looks good to me. 
I've also confirmed the change fixed the issues.\n\nThanks for the review and test!\n\n> @@ -233,6 +233,12 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)\n> StringInfo labelfile;\n> StringInfo tblspc_map_file;\n> backup_manifest_info manifest;\n> + SessionBackupState status = get_backup_status();\n> +\n> + if (status == SESSION_BACKUP_RUNNING)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"a backup is already in progress in this session\")));\n> \n> I think we can move it to the beginning of SendBaseBackup() so we can\n> avoid bbsink initialization and cleanup in the error case.\n\nSounds good idea to me. I updated the patch in that way. Attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 1 Jul 2022 15:32:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "On Fri, Jul 01, 2022 at 03:32:50PM +0900, Fujii Masao wrote:\n> Sounds good idea to me. I updated the patch in that way. Attached.\n\nSkimming quickly through the thread, this failure requires a\ntermination of a backend running BASE_BACKUP. This is basically\nsomething done by the TAP test added in 0475a97f with a WAL sender\nkilled, and MAX_RATE being used to make sure that we have enough time\nto kill the WAL sender even on fast machines. So you could add a\nregression test, no?\n--\nMichael", "msg_date": "Fri, 1 Jul 2022 15:41:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "\n\nOn 2022/07/01 15:41, Michael Paquier wrote:\n> On Fri, Jul 01, 2022 at 03:32:50PM +0900, Fujii Masao wrote:\n>> Sounds good idea to me. 
I updated the patch in that way. Attached.\n> \n> Skimming quickly through the thread, this failure requires a\n> termination of a backend running BASE_BACKUP. This is basically\n> something done by the TAP test added in 0475a97f with a WAL sender\n> killed, and MAX_RATE being used to make sure that we have enough time\n> to kill the WAL sender even on fast machines. So you could add a\n> regression test, no?\n\nFor the test, BASE_BACKUP needs to be canceled after it finishes do_pg_backup_start(), i.e., checkpointing, and before it calls do_pg_backup_stop(). So the timing to cancel that seems more severe than the test added in 0475a97f. I'm afraid that some tests can easily cancel the BASE_BACKUP while it's performing a checkpoint in do_pg_backup_start(). So for now I'm thinking to avoid such an unstable test.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 6 Jul 2022 23:27:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "On Wed, Jul 06, 2022 at 11:27:58PM +0900, Fujii Masao wrote:\n> For the test, BASE_BACKUP needs to be canceled after it finishes\n> do_pg_backup_start(), i.e., checkpointing, and before it calls\n> do_pg_backup_stop(). So the timing to cancel that seems more severe\n> than the test added in 0475a97f. I'm afraid that some tests can\n> easily cancel the BASE_BACKUP while it's performing a checkpoint in\n> do_pg_backup_start(). So for now I'm thinking to avoid such an\n> unstable test.\n\nHmm. In order to make sure that the checkpoint of the base backup is\ncompleted, and assuming that the checkpoint is fast while the base\nbackup has a max rate, you could rely on a query that does a\npoll_query_until() on pg_control_checkpoint(), no? 
As long as you use\nIPC::Run::start, pg_basebackup would be async so the polling query and\nthe cancellation can be done in parallel of it. 0475a97 did almost\nthat, except that it waits for the WAL sender to be started.\n--\nMichael", "msg_date": "Thu, 7 Jul 2022 09:09:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "\n\nOn 2022/07/07 9:09, Michael Paquier wrote:\n> On Wed, Jul 06, 2022 at 11:27:58PM +0900, Fujii Masao wrote:\n>> For the test, BASE_BACKUP needs to be canceled after it finishes\n>> do_pg_backup_start(), i.e., checkpointing, and before it calls\n>> do_pg_backup_stop(). So the timing to cancel that seems more severe\n>> than the test added in 0475a97f. I'm afraid that some tests can\n>> easily cancel the BASE_BACKUP while it's performing a checkpoint in\n>> do_pg_backup_start(). So for now I'm thinking to avoid such an\n>> unstable test.\n> \n> Hmm. In order to make sure that the checkpoint of the base backup is\n> completed, and assuming that the checkpoint is fast while the base\n> backup has a max rate, you could rely on a query that does a\n> poll_query_until() on pg_control_checkpoint(), no? As long as you use\n> IPC::Run::start, pg_basebackup would be async so the polling query and\n> the cancellation can be done in parallel of it. 0475a97 did almost\n> that, except that it waits for the WAL sender to be started.\n\nThere seems to be some corner cases where we cannot rely on that.\n\nIf \"spread\" checkpoint is already running when BASE_BACKUP is executed, poll_query_until() may report the end of that \"spread\" checkpoint before BASE_BACKUP internally starts its checkpoint. 
Which may cause the test to fail.\n\nIf BASE_BACKUP is accidentally canceled after poll_query_until() reports the end of checkpoint but before do_pg_backup_start() finishes (i.e., before entering the error cleanup block using do_pg_abort_backup callback), the test may fail.\n\nProbably we may be able to decrease the risk of those test failures by using some techniques, e.g., adding the fixed wait time before requesting the cancel. But I'm not sure if it's worth adding the test for the corner case issue that I reported at the risk of adding the unstable test. The issue could happen only when both BASE_BACKUP and low level API for backup are executed via logical replication walsender mode, and BASE_BACKUP is canceled or terminated.\n\nBut if many think that it's worth adding the test, I will give a try. But even in that case, I think it's better to commit the proposed patch at first to fix the bug, and then to write the patch adding the test.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 7 Jul 2022 23:58:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "On Thu, Jul 7, 2022 at 10:58 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> But if many think that it's worth adding the test, I will give a try. 
But even in that case, I think it's better to commit the proposed patch at first to fix the bug, and then to write the patch adding the test.\n\nI don't think that we necessarily need to have a test for this patch.\nIt's true that we don't really have good test coverage of write-ahead\nlogging and recovery, but this doesn't seem like the most important\nthing to be testing in that area, either, and developing stable tests\nfor stuff like this can be a lot of work.\n\nI do kind of feel like the patch is fixing two separate bugs. The\nchange to SendBaseBackup() is fixing the problem that, because there's\nSQL access on replication connections, we could try to start a backup\nin the middle of another backup by mixing and matching the two\ndifferent methods of doing backups. The change to do_pg_abort_backup()\nis fixing the fact that, after aborting a base backup, we don't reset\nthe session state properly so that another backup can be tried\nafterwards.\n\nI don't know if it's worth committing them separately - they are very\nsmall fixes. But it would probably at least be good to highlight in\nthe commit message that there are two different issues.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Jul 2022 08:56:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "On Fri, Jul 08, 2022 at 08:56:14AM -0400, Robert Haas wrote:\n> On Thu, Jul 7, 2022 at 10:58 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> But if many think that it's worth adding the test, I will give a\n>> try. 
But even in that case, I think it's better to commit the\n>> proposed patch at first to fix the bug, and then to write the patch\n>> adding the test.\n\nI have looked at that in details, and it is possible to rely on\npg_stat_activity.wait_event to be BaseBackupThrottle, which would make\nsure that the checkpoint triggered at the beginning of the backup\nfinishes and that we are in the middle of the base backup. The\ncommand for the test should be a psql command with two -c switches\nwithout ON_ERROR_STOP, so as the second pg_backup_stop() starts after\nBASE_BACKUP is cancelled using the same connection, for something like\nthat:\npsql -c \"BASE_BACKUP (CHECKPOINT 'fast', MAX_RATE 32);\" \\\n -c \"select pg_backup_stop()\" \"replication=database\"\n\nThe last part of the test should do a pump_until() and capture \"backup\nis not in progress\" from the stderr output of the command run.\n\nThis is leading me to the attached, that crashes quickly without the\nfix and passes with the fix.\n\n> It's true that we don't really have good test coverage of write-ahead\n> logging and recovery, but this doesn't seem like the most important\n> thing to be testing in that area, either, and developing stable tests\n> for stuff like this can be a lot of work.\n\nWell, stability does not seem like a problem to me here.\n\n> I do kind of feel like the patch is fixing two separate bugs. The\n> change to SendBaseBackup() is fixing the problem that, because there's\n> SQL access on replication connections, we could try to start a backup\n> in the middle of another backup by mixing and matching the two\n> different methods of doing backups. The change to do_pg_abort_backup()\n> is fixing the fact that, after aborting a base backup, we don't reset\n> the session state properly so that another backup can be tried\n> afterwards.\n> \n> I don't know if it's worth committing them separately - they are very\n> small fixes. 
But it would probably at least be good to highlight in\n> the commit message that there are two different issues.\n\nGrouping both fixes in the same commit sounds fine by me. No\nobjections from here.\n--\nMichael", "msg_date": "Thu, 14 Jul 2022 17:00:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "On 2022/07/14 17:00, Michael Paquier wrote:\n> On Fri, Jul 08, 2022 at 08:56:14AM -0400, Robert Haas wrote:\n>> On Thu, Jul 7, 2022 at 10:58 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> But if many think that it's worth adding the test, I will give a\n>>> try. But even in that case, I think it's better to commit the\n>>> proposed patch at first to fix the bug, and then to write the patch\n>>> adding the test.\n> \n> I have looked at that in details,\n\nThanks!\n\n\n and it is possible to rely on\n> pg_stat_activity.wait_event to be BaseBackupThrottle, which would make\n\nISTM that you can also use pg_stat_progress_basebackup.phase.\n\n\n> sure that the checkpoint triggered at the beginning of the backup\n> finishes and that we are in the middle of the base backup. The\n> command for the test should be a psql command with two -c switches\n> without ON_ERROR_STOP, so as the second pg_backup_stop() starts after\n> BASE_BACKUP is cancelled using the same connection, for something like\n> that:\n> psql -c \"BASE_BACKUP (CHECKPOINT 'fast', MAX_RATE 32);\" \\\n> -c \"select pg_backup_stop()\" \"replication=database\"\n> \n> The last part of the test should do a pump_until() and capture \"backup\n> is not in progress\" from the stderr output of the command run.\n> \n> This is leading me to the attached, that crashes quickly without the\n> fix and passes with the fix.\n\nThanks for the patch! 
But I'm still not sure if it's worth adding only this test for the corner case while we don't have basic tests for BASE_BACKUP, pg_backup_start and pg_backup_stop.\n\nBTW, if we decide to add that test, are you planning to back-patch it?\n\n\n> \n>> It's true that we don't really have good test coverage of write-ahead\n>> logging and recovery, but this doesn't seem like the most important\n>> thing to be testing in that area, either, and developing stable tests\n>> for stuff like this can be a lot of work.\n> \n> Well, stability does not seem like a problem to me here.\n> \n>> I do kind of feel like the patch is fixing two separate bugs. The\n>> change to SendBaseBackup() is fixing the problem that, because there's\n>> SQL access on replication connections, we could try to start a backup\n>> in the middle of another backup by mixing and matching the two\n>> different methods of doing backups. The change to do_pg_abort_backup()\n>> is fixing the fact that, after aborting a base backup, we don't reset\n>> the session state properly so that another backup can be tried\n>> afterwards.\n>>\n>> I don't know if it's worth committing them separately - they are very\n>> small fixes. But it would probably at least be good to highlight in\n>> the commit message that there are two different issues.\n> \n> Grouping both fixes in the same commit sounds fine by me. No\n> objections from here.\n\nThis sounds fine to me, too. On the other hand, it's also fine for me to push the changes separately so that we can easily identify each change later. So I separated the patch into two ones.\n\nSince one of them failed to be applied to v14 or before cleanly, I also created the patch for those back branches. 
So I attached three patches.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 15 Jul 2022 16:46:32 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "On Fri, Jul 15, 2022 at 04:46:32PM +0900, Fujii Masao wrote:\n> On 2022/07/14 17:00, Michael Paquier wrote:\n>> and it is possible to rely on\n>> pg_stat_activity.wait_event to be BaseBackupThrottle, which would make\n> \n> ISTM that you can also use pg_stat_progress_basebackup.phase.\n\nIndeed, as of \"streaming database files\". That should work.\n\n> Thanks for the patch! But I'm still not sure if it's worth adding\n> only this test for the corner case while we don't have basic tests\n> for BASE_BACKUP, pg_backup_start and pg_backup_stop.\n> \n> BTW, if we decide to add that test, are you planning to back-patch it?\n\nI was thinking about doing that only on HEAD. One thing interesting\nabout this patch is that it can also be used as a point of reference\nfor other future things.\n\n> This sounds fine to me, too. On the other hand, it's also fine for\n> me to push the changes separately so that we can easily identify\n> each change later. So I separated the patch into two ones. \n> \n> Since one of them failed to be applied to v14 or before cleanly, I\n> also created the patch for those back branches. So I attached three\n> patches. \n\nFine by me.\n--\nMichael", "msg_date": "Sat, 16 Jul 2022 11:36:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "\n\nOn 2022/07/16 11:36, Michael Paquier wrote:\n> I was thinking about doing that only on HEAD. 
One thing interesting\n> about this patch is that it can also be used as a point of reference\n> for other future things.\n\nOk, here are review comments:\n\n+my $connstr =\n+ $node->connstr('postgres') . \" replication=database dbname=postgres\";\n\nSince the result of connstr() includes \"dbname=postgres\", you don't need to add \"dbname=postgres\" again.\n\n+# The psql command should fail on pg_stop_backup().\n\nTypo: s/pg_stop_backup/pg_backup_stop\n\nI reported two trouble cases; they are the cases where BASE_BACKUP is canceled and terminated, respectively. But you added the test only for one of them. Is this intentional?\n\n>> Since one of them failed to be applied to v14 or before cleanly, I\n>> also created the patch for those back branches. So I attached three\n>> patches.\n> \n> Fine by me.\n\nI pushed these bugfix patches at first. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 20 Jul 2022 14:00:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "On Wed, Jul 20, 2022 at 02:00:00PM +0900, Fujii Masao wrote:\n> I reported two trouble cases; they are the cases where BASE_BACKUP\n> is canceled and terminated, respectively. But you added the test\n> only for one of them. Is this intentional?\n\nNope. The one I have implemented was the fanciest case among the\ntwo, so I just focused on it.\n\nAdding an extra test to cover the second scenario is easier. So I\nhave added one as of the attached, addressing your other comments\nwhile on it. 
I have also decided to add the tests at the bottom of\n001_stream_rep.pl, as these are quicker than a node initialization.\n--\nMichael", "msg_date": "Wed, 20 Jul 2022 15:49:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" }, { "msg_contents": "On Wed, Jul 20, 2022 at 03:49:17PM +0900, Michael Paquier wrote:\n> Adding an extra test to cover the second scenario is easier. So I\n> have added one as of the attached, addressing your other comments\n> while on it. I have also decided to add the tests at the bottom of\n> 001_stream_rep.pl, as these are quicker than a node initialization.\n\nHearing nothing, I have looked at that again and applied the two tests\non HEAD as of ad34146.\n--\nMichael", "msg_date": "Mon, 1 Aug 2022 09:33:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Backup command and functions can cause assertion failure and\n segmentation fault" } ]
[ { "msg_contents": "Hi,\n\nI found that there is no index item for the MERGE command in the docs.\nAttached is the patch that adds the indexterm for MERGE to merge.sgml.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 30 Jun 2022 12:56:38 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Add index item for MERGE." }, { "msg_contents": "On 2022-Jun-30, Fujii Masao wrote:\n\n> Hi,\n> \n> I found that there is no index item for the MERGE command in the docs.\n> Attached is the patch that adds the indexterm for MERGE to merge.sgml.\n\n+1 LGTM, thanks.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 30 Jun 2022 11:57:30 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add index item for MERGE." }, { "msg_contents": "\n\nOn 2022/06/30 18:57, Alvaro Herrera wrote:\n> On 2022-Jun-30, Fujii Masao wrote:\n> \n>> Hi,\n>>\n>> I found that there is no index item for the MERGE command in the docs.\n>> Attached is the patch that adds the indexterm for MERGE to merge.sgml.\n> \n> +1 LGTM, thanks.\n\nThanks for the review! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 1 Jul 2022 14:26:13 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add index item for MERGE." } ]
[ { "msg_contents": "Hi hackers,\n\nWhile commit 960869da08 added some information about connections that \nhave been successfully authenticated, there are no metrics for \nconnections that have not (or did not reach the authentication stage).\n\nAdding metrics about failed connection attempts could also help, for \nexample with proper sampling, to:\n\n * detect spikes in failed login attempts\n * check if there is a correlation between spikes in successful and\n   failed connection attempts\n\nWhile the number of successful connections could also already be \ntracked with the ClientAuthentication_hook (and also the ones that \nfailed the authentication), we are missing metrics about:\n\n * why the connection failed (could be bad password, bad database, bad\n   user, missing CONNECT privilege...)\n * number of times the authentication stage has not been reached\n * why the authentication stage has not been reached (bad startup\n   packets, timeout while processing startup packet,...)\n\nThose missing metrics (in addition to the ones that can already be \ngathered) could provide value for:\n\n * security investigations\n * anomaly detection\n * tracking application misconfigurations\n\nIn an attempt to be able to provide those metrics, please find attached \na patch proposal to add new hooks in the connection path, that would be \nfired if:\n\n * there is a bad startup packet\n * there is a timeout while processing the startup packet\n * user does not have CONNECT privilege\n * database does not exist\n\nFor safety those hooks request the use of a const Port parameter, so \nthat they could be used only for reporting purposes (for example, we are \nworking on an extension to record detailed login metrics counters).\n\nAnother option could be to add those metrics in the engine itself \n(instead of providing new hooks to get them), but the new hooks option \ngives more flexibility on how to render and exploit them (there is a lot \nof information in the Port Struct that 
one could be interested with).\n\nI’m adding this patch proposal to the commitfest.\nLooking forward to your feedback,\n\nRegards,\nBertrand", "msg_date": "Thu, 30 Jun 2022 10:01:00 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Thu, Jun 30, 2022 at 1:31 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi hackers,\n>\n> While commit 960869da08 added some information about connections that have been successfully authenticated, there is no metrics for connections that have not (or did not reached the authentication stage).\n>\n> Adding metrics about failed connections attempts could also help, for example with proper sampling, to:\n>\n> detect spikes in failed login attempts\n> check if there is a correlation between spikes in successful and failed connection attempts\n>\n> While the number of successful connections could also already been tracked with the ClientAuthentication_hook (and also the ones that failed the authentication) we are missing metrics about:\n>\n> why the connection failed (could be bad password, bad database, bad user, missing CONNECT privilege...)\n> number of times the authentication stage has not been reached\n> why the authentication stage has not been reached (bad startup packets, timeout while processing startup packet,...)\n>\n> Those missing metrics (in addition to the ones that can be already gathered) could provide value for:\n>\n> security investigations\n> anomalies detections\n> tracking application misconfigurations\n>\n> In an attempt to be able to provide those metrics, please find attached a patch proposal to add new hooks in the connection path, that would be fired if:\n>\n> there is a bad startup packet\n> there is a timeout while processing the startup packet\n> user does not have CONNECT privilege\n> database does not exist\n>\n> For safety those hooks request the use of a const Port 
parameter, so that they could be used only for reporting purpose (for example, we are working on an extension to record detailed login metrics counters).\n>\n> Another option could be to add those metrics in the engine itself (instead of providing new hooks to get them), but the new hooks option gives more flexibility on how to render and exploit them (there is a lot of information in the Port Struct that one could be interested with).\n>\n> I’m adding this patch proposal to the commitfest.\n> Looking forward to your feedback,\n\n+1 for the idea. I've seen numerous cases where the login metrics\n(especially failed connections) are handy in analyzing stuff. And I'm\nokay with the hook approach rather than postgres emitting the necessary\nmetrics. However, I'm personally not okay with having multiple hooks\nas proposed in the v1 patch. Can we think of having a single hook or\nenhancing the existing ClientAuthentication_hook where we pass a\nPURPOSE parameter (CONN_SUCCESS, CONN_FAILURE, CONN_FOO, CONN_BAR\n....) to the hook? 
With this approach, we don't need to spread out the\npostgres code with many hooks and the hook implementers will look at\nthe PURPOSE parameter and deal with it accordingly.\n\nOn the security aspect, we must ensure we don't leak any sensitive\ninformation such as password or SSH key to the new hook - if PGPORT\nhas this information, maybe we need to mask that structure a bit\nbefore handing it off to the hook.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 30 Jun 2022 14:53:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 6/30/22 11:23 AM, Bharath Rupireddy wrote:\n> On Thu, Jun 30, 2022 at 1:31 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi hackers,\n>>\n>> While commit 960869da08 added some information about connections that have been successfully authenticated, there is no metrics for connections that have not (or did not reached the authentication stage).\n>>\n>> Adding metrics about failed connections attempts could also help, for example with proper sampling, to:\n>>\n>> detect spikes in failed login attempts\n>> check if there is a correlation between spikes in successful and failed connection attempts\n>>\n>> While the number of successful connections could also already been tracked with the ClientAuthentication_hook (and also the ones that failed the authentication) we are missing metrics about:\n>>\n>> why the connection failed (could be bad password, bad database, bad user, missing CONNECT privilege...)\n>> number of times the authentication stage has not been reached\n>> why the authentication stage has not been reached (bad startup packets, timeout while processing startup packet,...)\n>>\n>> Those missing metrics (in addition to the ones that can be already gathered) could provide value for:\n>>\n>> security investigations\n>> anomalies detections\n>> tracking application 
misconfigurations\n>>\n>> In an attempt to be able to provide those metrics, please find attached a patch proposal to add new hooks in the connection path, that would be fired if:\n>>\n>> there is a bad startup packet\n>> there is a timeout while processing the startup packet\n>> user does not have CONNECT privilege\n>> database does not exist\n>>\n>> For safety those hooks request the use of a const Port parameter, so that they could be used only for reporting purpose (for example, we are working on an extension to record detailed login metrics counters).\n>>\n>> Another option could be to add those metrics in the engine itself (instead of providing new hooks to get them), but the new hooks option gives more flexibility on how to render and exploit them (there is a lot of information in the Port Struct that one could be interested with).\n>>\n>> I’m adding this patch proposal to the commitfest.\n>> Looking forward to your feedback,\n> +1 for the idea. I've seen numerous cases where the login metrics\n> (especially failed connections) are handy in analyzing stuff. 
And I'm\n> okay with the hook approach than the postgres emitting the necessary\n> metrics.\n\nThanks for looking at it!\n\n> However, I'm personally not okay with having multiple hooks\n> as proposed in the v1 patch.\n\nI agree that it would be great to reduce the number of proposed hooks.\n\nBut,\n\n> Can we think of having a single hook\n\nThe proposed hooks are triggered during errors (meaning that the \nconnection attempt breaks) and:\n\n- In the connection paths that will not reach the \nClientAuthentication_hook at all: those are the ones related to the bad \nstartup packet and timeout while processing the startup packet.\n\nor\n\n- After the ClientAuthentication_hook is fired: those are the bad db \noid, bad db name and bad perm ones.\n\nSo, it does look like having only one hook would require refactoring in \nthe connection path and I'm not sure if this is worth it.\n\n> or\n> enhancing the existing ClientAuthentication_hook where we pass a\n> PURPOSE parameter (CONN_SUCCESS, CONN_FAILURE, CONN_FOO, CONN_BAR\n> ....) tp the hook?\n\nI think one could already \"predict\" the bad db and bad perm errors \nwithin the current ClientAuthentication_hook.\n\nBut in case of multiple \"possible\" errors (within the same connection \nattempt) how could we know for sure the one that will be actually \nreported? 
That's why I think the best way is to put new hooks as close \nas possible to the place where the related errors are reported.\n\nWhat do you think?\n\n> With this approach, we don't need to spread out the\n> postgres code with many hooks and the hook implementers will look at\n> the PURPOSE parameter and deal with it accordingly.\n>\n> On the security aspect, we must ensure we don't leak any sensitive\n> information such as password or SSH key to the new hook - if PGPORT\n> has this information, maybe we need to mask that structure a bit\n> before handing it off to the hook.\n\nYeah good point, I'll have a closer look to see if there is any information \nthat could be present in the Port struct before the \nClientAuthentication_hook is fired (meaning during the ones that are \nrelated to the startup packet) and that we would want to mask.\n\nRegards,\n\nBertrand\n\n\n\n", "msg_date": "Fri, 1 Jul 2022 09:48:40 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Fri, Jul 01, 2022 at 09:48:40AM +0200, Drouvot, Bertrand wrote:\n>> However, I'm personally not okay with having multiple hooks\n>> as proposed in the v1 patch.\n> \n> I agree that it would be great to reduce the number of proposed hooks.\n> \n> But,\n> \n>> Can we think of having a single hook\n> \n> The proposed hooks are triggered during errors (means that the connection\n> attempt break) and:\n> \n> - In the connection paths that will not reach the ClientAuthentication_hook\n> at all: those are the ones related to the bad startup packet and timeout\n> while processing the startup packet.\n> \n> or\n> \n> - After the ClientAuthentication_hook is fired: those are the bad db oid,\n> bad db name and bad perm ones.\n> \n> So, It does look like having only one hook would require refactoring in the\n> connection path and I'm not sure if this is worth it.\n> \n>> or\n>> enhancing the existing 
ClientAuthentication_hook where we pass a\n>> PURPOSE parameter (CONN_SUCCESS, CONN_FAILURE, CONN_FOO, CONN_BAR\n>> ....) tp the hook?\n> \n> I think one could already \"predict\" the bad db and bad perm errors within\n> the current ClientAuthentication_hook.\n> \n> But in case of multiple \"possible\" errors (within the same connection\n> attempt) how could we know for sure the one that will be actually reported?\n> That's why i think the best way is to put new hooks as close as possible to\n> the place where the related errors are reported.\n> \n> What do you think?\n\nCould we model this after fmgr_hook? The first argument in that hook\nindicates where it is being called from. This doesn't alleviate the need\nfor several calls to the hook in the authentication logic, but extension\nauthors would only need to define one hook.\n\nThat being said, I don't see why this information couldn't be provided in a\nsystem view. IMO it is generically useful.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 16:00:27 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Fri, Jul 1, 2022 at 5:00 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n>\n>\n> That being said, I don't see why this information couldn't be provided in a\n> system view. IMO it is generically useful.\n\n\n+1 for a system view with appropriate permissions, in addition to the\nhooks.\n\nThat would make the information easily accessible to a number or monitoring\nsystems besides the admin.\n\nRoberto\n\n—\nCrunchy Data — passion for open source PostgreSQL", "msg_date": "Fri, 1 Jul 2022 18:49:09 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 7/2/22 1:00 AM, Nathan Bossart wrote:\n> Could we model this after fmgr_hook? The first argument in that hook\n> indicates where it is being called from. This doesn't alleviate the need\n> for several calls to the hook in the authentication logic, but extension\n> authors would only need to define one hook.\n\nI like the idea and indeed fmgr.h looks a good place to model it.\n\nAttached a new patch version doing so.\n\nThanks\n\n-- \n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Jul 2022 14:53:24 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 7/2/22 2:49 AM, Roberto Mello wrote:\n>\n> On Fri, Jul 1, 2022 at 5:00 PM Nathan Bossart \n> <nathandbossart@gmail.com> wrote:\n>\n>\n>\n> That being said, I don't see why this information couldn't be\n> provided in a\n> system view.  
IMO it is generically useful.\n>\n>\n> +1 for a system view with appropriate permissions, in addition to the \n> hooks.\n>\n> That would make the information easily accessible to a number or \n> monitoring systems besides the admin.\n>\nAgree about that.\n\nI'll start another thread and propose a dedicated patch for the \n\"internal counters\" and how to expose them.\n\nThanks for the feedback,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Jul 2022 14:58:50 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Mon, Jul 4, 2022 at 5:54 AM Drouvot, Bertrand <bdrouvot@amazon.com>\nwrote:\n\n> Hi,\n>\n> On 7/2/22 1:00 AM, Nathan Bossart wrote:\n> > Could we model this after fmgr_hook? The first argument in that hook\n> > indicates where it is being called from. 
This doesn't alleviate the need\n> > for several calls to the hook in the authentication logic, but extension\n> > authors would only need to define one hook.\n>\n> I like the idea and indeed fmgr.h looks a good place to model it.\n>\n> Attached a new patch version doing so.\n>\n> Thanks\n>\n> --\n>\n> Bertrand Drouvot\n> Amazon Web Services: https://aws.amazon.com\n\nHi,\n+ FCET_SPT,   /* startup packet timeout */\n+ FCET_BSP,   /* bad startup packet */\n\nLooking at existing enum type, such as FmgrHookEventType, the part after\nunderscore is a word.\nI think it would be good to follow existing practice and make the enums\nmore readable.\n\nCheers", "msg_date": "Mon, 4 Jul 2022 06:12:42 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Mon, Jul 4, 2022 at 6:29 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> On 7/2/22 2:49 AM, Roberto Mello wrote:\n>\n> On Fri, Jul 1, 2022 at 5:00 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>>\n>> That being said, I don't see why this information 
couldn't be provided in a\n>> system view. IMO it is generically useful.\n>\n> +1 for a system view with appropriate permissions, in addition to the hooks.\n>\n> That would make the information easily accessible to a number or monitoring systems besides the admin.\n>\n> Agree about that.\n\nAre we going to have it as a part of shared memory stats? Or a\nseparate shared memory for connection stats exposing these via a\nfunction and a view can be built on this function like\npg_get_replication_slots and pg_replication_slots?\n\n> I'll start another thread and propose a dedicated patch for the \"internal counters\" and how to expose them.\n\nIMHO, let's have the discussion here in this thread and the patch can be 0002.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 5 Jul 2022 12:37:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 7/4/22 3:12 PM, Zhihong Yu wrote:\n> Hi,\n> +   FCET_SPT,   /* startup packet timeout */\n> +   FCET_BSP,   /* bad startup packet */\n>\n> Looking at existing enum type, such as FmgrHookEventType, the part \n> after underscore is a word.\n> I think it would be good to follow existing practice and make the \n> enums more readable.\n\nFair point.\n\nAttached a new version which makes the enums more readable.\n\nThanks for the feedback\n\nRegards\n\n-- \nBertrand Drouvot\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Tue, 5 Jul 2022 09:17:03 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Mon, Jul 4, 2022 at 6:23 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 7/2/22 1:00 AM, Nathan Bossart wrote:\n> > Could we model this after fmgr_hook? 
The first argument in that hook\n> > indicates where it is being called from. This doesn't alleviate the need\n> > for several calls to the hook in the authentication logic, but extension\n> > authors would only need to define one hook.\n>\n> I like the idea and indeed fmgr.h looks a good place to model it.\n>\n> Attached a new patch version doing so.\n\nThanks for the patch. Can we think of enhancing\nClientAuthentication_hook_type itself i.e. make it a generic hook for\nall sorts of authentication metrics, info etc. with the type parameter\nembedded to it instead of a new hook FailedConnection_hook? We can either\nadd a new parameter for the \"event\" (the existing\nClientAuthentication_hook_type implementers will have problems), or\nembed/multiplex the \"event\" info to existing Port structure or status\nvariable (macro or enum) (existing implementers will not have\ncompatibility problems). IMO, this looks cleaner going forward.\n\nOn the v2 patch:\n\n1. Why do we need to place the hook and structures in fmgr.h? Why not in auth.h?\n\n2. The timeout handler is a signal handler, called as part of the SIGALRM\nsignal handler; most of the time, signal handlers ought to be doing\nsmall things. Now that we are handing off control to the hook, which\ncan do any long-running work (writing to a remote storage, file,\naggregate etc.), I don't think it's the right thing to do here.\n static void\n StartupPacketTimeoutHandler(void)\n {\n+ if (FailedConnection_hook)\n+ (*FailedConnection_hook) (FCET_STARTUP_PACKET_TIMEOUT, MyProcPort);\n+ ereport(COMMERROR,\n+ (errcode(ERRCODE_PROTOCOL_VIOLATION),\n+ errmsg(\"timeout while processing startup packet\")));\n\n3. On \"not letting these hooks (ClientAuthentication_hook_type or\nFailedConnection_hook_type) expose sensitive info via Port structure\"\n- it seems like the Port structure has sensitive info like HbaLine,\nhost address, username, etc. 
but that's what it is so I think we are\nokay with the structure as-is.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 5 Jul 2022 13:07:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On 6/30/22 5:23 AM, Bharath Rupireddy wrote:\n> <snip>\n> On the security aspect, we must ensure we don't leak any sensitive\n> information such as password or SSH key to the new hook - if PGPORT\n> has this information, maybe we need to mask that structure a bit\n> before handing it off to the hook.\n\nCan you elaborate more on why you see this as necessary? Extensions run \nin-process and have no real memory access limits, \"masking\", which \nreally means copying data to another struct, is just extra work and \noverhead with no actual security gain, IMO.\n\n\n\n\n\n", "msg_date": "Tue, 5 Jul 2022 09:27:06 -0400", "msg_from": "\"Brindle, Joshua\" <joshuqbr@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On 7/5/22 03:37, Bharath Rupireddy wrote:\n> On Mon, Jul 4, 2022 at 6:23 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> On 7/2/22 1:00 AM, Nathan Bossart wrote:\n>> > Could we model this after fmgr_hook? The first argument in that hook\n>> > indicates where it is being called from. This doesn't alleviate the need\n>> > for several calls to the hook in the authentication logic, but extension\n>> > authors would only need to define one hook.\n>>\n>> I like the idea and indeed fmgr.h looks a good place to model it.\n>>\n>> Attached a new patch version doing so.\n\nI was thinking along the same lines, so +1 for the general approach\n\n> Thanks for the patch. Can we think of enhancing\n> ClientAuthentication_hook_type itself i.e. make it a generic hook for\n> all sorts of authentication metrics, info etc. 
with the type parameter\n> embedded to it instead of new hook FailedConnection_hook?We can either\n> add a new parameter for the \"event\" (the existing\n> ClientAuthentication_hook_type implementers will have problems), or\n> embed/multiplex the \"event\" info to existing Port structure or status\n> variable (macro or enum) (existing implementers will not have\n> compatibility problems). IMO, this looks cleaner going forward.\n\nNot sure I like this though -- I'll have to think about that\n\n> On the v2 patch:\n> \n> 1. Why do we need to place the hook and structures in fmgr.h? Why not in auth.h?\n\nagreed -- it does not belong in fmgr.h\n\n> 2. Timeout Handler is a signal handler, called as part of SIGALRM\n> signal handler, most of the times, signal handlers ought to be doing\n> small things, now that we are handing off the control to hook, which\n> can do any long running work (writing to a remote storage, file,\n> aggregate etc.), I don't think it's the right thing to do here.\n> static void\n> StartupPacketTimeoutHandler(void)\n> {\n> + if (FailedConnection_hook)\n> + (*FailedConnection_hook) (FCET_STARTUP_PACKET_TIMEOUT, MyProcPort);\n> + ereport(COMMERROR,\n> + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> + errmsg(\"timeout while processing startup packet\")));\n\nWhy add the ereport()?\n\nBut more to Bharath's point, perhaps this is a case that is better \nserved by incrementing a stat counter and not exposed as a hook?\n\nAlso, a teeny nit:\n8<--------------\n+\tif (status != STATUS_OK) {\n+\t\tif (FailedConnection_hook)\n8<--------------\n\ndoes not follow usual practice and probably should be:\n\n8<--------------\n+\tif (status != STATUS_OK)\n+\t{\n+\t\tif (FailedConnection_hook)\n8<--------------\n\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Jul 2022 18:11:12 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in 
the connection path" }, { "msg_contents": "Hi,\n\nOn 7/6/22 12:11 AM, Joe Conway wrote:\n>\n> On 7/5/22 03:37, Bharath Rupireddy wrote:\n>> On Mon, Jul 4, 2022 at 6:23 PM Drouvot, Bertrand \n>> <bdrouvot@amazon.com> wrote:\n>>> On 7/2/22 1:00 AM, Nathan Bossart wrote:\n>>> > Could we model this after fmgr_hook?  The first argument in that hook\n>>> > indicates where it is being called from.  This doesn't alleviate \n>>> the need\n>>> > for several calls to the hook in the authentication logic, but \n>>> extension\n>>> > authors would only need to define one hook.\n>>>\n>>> I like the idea and indeed fmgr.h looks a good place to model it.\n>>>\n>>> Attached a new patch version doing so.\n>\n> I was thinking along the same lines, so +1 for the general approach\n\nThanks for the review!\n\n>\n>> Thanks for the patch. Can we think of enhancing\n>> ClientAuthentication_hook_type itself i.e. make it a generic hook for\n>> all sorts of authentication metrics, info etc. with the type parameter\n>> embedded to it instead of new hook FailedConnection_hook?We can either\n>> add a new parameter for the \"event\" (the existing\n>> ClientAuthentication_hook_type implementers will have problems), or\n>> embed/multiplex the \"event\" info to existing Port structure or status\n>> variable (macro or enum) (existing implementers will not have\n>> compatibility problems).  IMO, this looks cleaner going forward.\n>\n> Not sure I like this though -- I'll have to think about that\n\nNot sure about this one neither.\n\nThe \"enhanced\" ClientAuthentication_hook will have to be fired at the \nsame places as the new FailedConnection_hook is, but i think those \nplaces are not necessary linked to real authentication per say (making \nthe name confusing).\n\n>\n>> On the v2 patch:\n>>\n>> 1. Why do we need to place the hook and structures in fmgr.h? Why not \n>> in auth.h?\n>\n> agreed -- it does not belong in fmgr.h\n\nMoved to auth.h.\n\n>\n>> 2. 
Timeout Handler is a signal handler, called as part of SIGALRM\n>> signal handler, most of the times, signal handlers ought to be doing\n>> small things, now that we are handing off the control to hook, which\n>> can do any long running work (writing to a remote storage, file,\n>> aggregate etc.), I don't think it's the right thing to do here.\n>>   static void\n>>   StartupPacketTimeoutHandler(void)\n>>   {\n>> + if (FailedConnection_hook)\n>> + (*FailedConnection_hook) (FCET_STARTUP_PACKET_TIMEOUT, MyProcPort);\n>> + ereport(COMMERROR,\n>> + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n>> + errmsg(\"timeout while processing startup packet\")));\n>\n> Why add the ereport()?\n\nremoved it.\n\n>\n> But more to Bharath's point, perhaps this is a case that is better\n> served by incrementing a stat counter and not exposed as a hook?\n\nI think that the advantage of the hook is that it gives the extension \nauthor the ability/flexibility to aggregate the counter based on \ninformation available in the Port Struct (say the client addr for \nexample) at this stage.\n\nWhat about aggregating the stat counter based on the client addr? 
(Not \nsure if there is more useful information (than the client addr) at this \nstage though)\n\nThat said, I agree that having a hook in a timeout handler might not be \nthe right thing to do (even if at the end that would be the extension \nauthor's responsibility to do \"small things\" in it), so it has been \nremoved in the new attached version.\n\n>\n> Also, a teeny nit:\n> 8<--------------\n> +       if (status != STATUS_OK) {\n> +               if (FailedConnection_hook)\n> 8<--------------\n>\n> does not follow usual practice and probably should be:\n>\n> 8<--------------\n> +       if (status != STATUS_OK)\n> +       {\n> +               if (FailedConnection_hook)\n> 8<--------------\n>\n>\nThanks! Fixed.\n\n-- \n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 6 Jul 2022 10:13:51 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 7/5/22 9:07 AM, Bharath Rupireddy wrote:\n> On Mon, Jul 4, 2022 at 6:29 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> On 7/2/22 2:49 AM, Roberto Mello wrote:\n>>\n>> On Fri, Jul 1, 2022 at 5:00 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>>> That being said, I don't see why this information couldn't be provided in a\n>>> system view. IMO it is generically useful.\n>> +1 for a system view with appropriate permissions, in addition to the hooks.\n>>\n>> That would make the information easily accessible to a number or monitoring systems besides the admin.\n>>\n>> Agree about that.\n> Are we going to have it as a part of shared memory stats? 
Or a\n> separate shared memory for connection stats exposing these via a\n> function and a view can be built on this function like\n> pg_get_replication_slots and pg_replication_slots?\n\nThanks for looking at it.\n\nI don't have any proposal yet, I'll have to look at it.\n\n-- \n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Jul 2022 10:18:08 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On 7/6/22 04:13, Drouvot, Bertrand wrote:\n> On 7/6/22 12:11 AM, Joe Conway wrote:\n>> On 7/5/22 03:37, Bharath Rupireddy wrote:\n>>> 2. Timeout Handler is a signal handler, called as part of SIGALRM\n>>> signal handler, most of the times, signal handlers ought to be doing\n>>> small things, now that we are handing off the control to hook, which\n>>> can do any long running work (writing to a remote storage, file,\n>>> aggregate etc.), I don't think it's the right thing to do here.\n>>>   static void\n>>>   StartupPacketTimeoutHandler(void)\n>>>   {\n>>> + if (FailedConnection_hook)\n>>> + (*FailedConnection_hook) (FCET_STARTUP_PACKET_TIMEOUT, MyProcPort);\n\n>> But more to Bharath's point, perhaps this is a case that is better\n>> served by incrementing a stat counter and not exposed as a hook?\n> \n> I think that the advantage of the hook is that it gives the extension\n> author the ability/flexibility to aggregate the counter based on\n> information available in the Port Struct (say the client addr for\n> example) at this stage.\n> \n> What about to aggregate the stat counter based on the client addr? 
(Not\n> sure if there is more useful information (than the client addr) at this\n> stage though)\n> \n> That said, i agree that having a hook in a time out handler might not be\n> the right thing to do (even if at the end that would be to the extension\n> author responsibility to do \"small things\" in it), so it has been\n> removed in the new attached version.\n\nIt isn't clear to me if having a hook in the timeout handler is a \nnonstarter -- perhaps a comment with suitable warning for prospective \nextension authors is enough? Anyone else want to weigh in on this issue \nspecifically?\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 7 Jul 2022 15:56:10 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> It isn't clear to me if having a hook in the timeout handler is a \n> nonstarter -- perhaps a comment with suitable warning for prospective \n> extension authors is enough? Anyone else want to weigh in on this issue \n> specifically?\n\nIt doesn't seem like a great place for a hook, because the list of stuff\nyou could safely do there would be mighty short, possibly the empty set.\nWrite to shared memory? Not too safe. Write to a file? Even less.\nWrite to local memory? 
Pointless, because we're about to _exit(1).\nPretty much anything I can think of that you'd want to do is something\nwe've already decided the core code can't safely do, and putting it\nin a hook won't make it safer.\n\nIf someone wants to argue for this hook, I'd like to see a credible\nexample of a *safe* use-case, keeping in mind the points raised in\nthe comments in BackendInitialize and process_startup_packet_die.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Jul 2022 16:10:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Fri, Jul 8, 2022 at 1:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joe Conway <mail@joeconway.com> writes:\n> > It isn't clear to me if having a hook in the timeout handler is a\n> > nonstarter -- perhaps a comment with suitable warning for prospective\n> > extension authors is enough? Anyone else want to weigh in on this issue\n> > specifically?\n>\n> It doesn't seem like a great place for a hook, because the list of stuff\n> you could safely do there would be mighty short, possibly the empty set.\n> Write to shared memory? Not too safe. Write to a file? Even less.\n> Write to local memory? Pointless, because we're about to _exit(1).\n> Pretty much anything I can think of that you'd want to do is something\n> we've already decided the core code can't safely do, and putting it\n> in a hook won't make it safer.\n\nI agree with this. 
But, all of the areas that v2-0003 touched for\nconnectivity failures, they typically are emitting\nereport(FATAL,/ereport(COMMERROR, (in ProcessStartupPacket) and we\nhave emit_log_hook already being exposed and the implementers can,\nliterally, do anything the hook.\n\nLooking at v2-0003 patch and emit_log_hook, how about we filter out\nfor those connectivity errors either based on error codes and if they\naren't unique, perhaps passing special flags to ereport API indicating\nthat it's a connectivity error and in the emit_log_hook we can look\nfor those connectivity error codes or flags to collect the stats about\nthe failure connections (with MyProcPort being present in\nemit_log_hook)? This way, we don't need a new hook. Thoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Jul 2022 17:54:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Fri, Jul 8, 2022 at 1:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It doesn't seem like a great place for a hook, because the list of stuff\n>> you could safely do there would be mighty short, possibly the empty set.\n\n> I agree with this. 
But, all of the areas that v2-0003 touched for\n> connectivity failures, they typically are emitting\n> ereport(FATAL,/ereport(COMMERROR, (in ProcessStartupPacket) and we\n> have emit_log_hook already being exposed and the implementers can,\n> literally, do anything the hook.\n\nThis is utterly off-point, because those calls are not inside\nsignal handlers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Jul 2022 09:43:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 7/7/22 10:10 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> It isn't clear to me if having a hook in the timeout handler is a\n>> nonstarter -- perhaps a comment with suitable warning for prospective\n>> extension authors is enough? Anyone else want to weigh in on this issue\n>> specifically?\n> It doesn't seem like a great place for a hook, because the list of stuff\n> you could safely do there would be mighty short, possibly the empty set.\n> Write to shared memory? Not too safe. Write to a file? Even less.\n> Write to local memory? 
Pointless, because we're about to _exit(1).\n> Pretty much anything I can think of that you'd want to do is something\n> we've already decided the core code can't safely do, and putting it\n> in a hook won't make it safer.\n>\n> If someone wants to argue for this hook, I'd like to see a credible\n> example of a *safe* use-case, keeping in mind the points raised in\n> the comments in BackendInitialize and process_startup_packet_die.\n\nThe use case would be to increment a counter in shared memory (or most \nprobably within an hash table) to reflect the number of times a startup \npacket timeout occurred.\n\nReading the comments in/related to BackendInitialize() I understand \nthat's definitely not safe to write in shared memory for the \nEXEC_BACKEND case, but wouldn't it be safe for the non EXEC_BACKEND case?\n\nBTW, it makes me realize that the hook being fired in the bad startup \npacket case:\n\n         /*\n          * Stop here if it was bad or a cancel packet. ProcessStartupPacket\n          * already did any appropriate error reporting.\n          */\n         if (status != STATUS_OK)\n+       {\n+               if (FailedConnection_hook)\n+                       (*FailedConnection_hook) \n(FCET_BAD_STARTUP_PACKET, port);\n                 proc_exit(0);\n+       }\n\nis not safe for the EXEC_BACKEND case.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 11 Jul 2022 08:18:46 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 7/11/22 8:18 AM, Drouvot, Bertrand wrote:\n> Hi,\n>\n> On 7/7/22 10:10 PM, Tom Lane wrote:\n>> Joe Conway <mail@joeconway.com> writes:\n>>> It isn't clear to me if having a hook in the timeout handler is a\n>>> nonstarter -- perhaps a comment with suitable warning for prospective\n>>> extension authors is enough? 
Anyone else want to weigh in on this issue\n>>> specifically?\n>> It doesn't seem like a great place for a hook, because the list of stuff\n>> you could safely do there would be mighty short, possibly the empty set.\n>> Write to shared memory?  Not too safe.  Write to a file?  Even less.\n>> Write to local memory?  Pointless, because we're about to _exit(1).\n>> Pretty much anything I can think of that you'd want to do is something\n>> we've already decided the core code can't safely do, and putting it\n>> in a hook won't make it safer.\n>>\n>> If someone wants to argue for this hook, I'd like to see a credible\n>> example of a *safe* use-case, keeping in mind the points raised in\n>> the comments in BackendInitialize and process_startup_packet_die.\n>\n> The use case would be to increment a counter in shared memory (or most \n> probably within an hash table) to reflect the number of times a \n> startup packet timeout occurred.\n>\n> Reading the comments in/related to BackendInitialize() I understand \n> that's definitely not safe to write in shared memory for the \n> EXEC_BACKEND case, but wouldn't it be safe for the non EXEC_BACKEND case?\n>\n> BTW, it makes me realize that the hook being fired in the bad startup \n> packet case:\n>\n>         /*\n>          * Stop here if it was bad or a cancel packet. 
\n> ProcessStartupPacket\n>          * already did any appropriate error reporting.\n>          */\n>         if (status != STATUS_OK)\n> +       {\n> +               if (FailedConnection_hook)\n> +                       (*FailedConnection_hook) \n> (FCET_BAD_STARTUP_PACKET, port);\n>                 proc_exit(0);\n> +       }\n>\n> is not safe for the EXEC_BACKEND case.\n>\nWhat about the idea to trigger the hook for the STARTUP PACKET TIMEOUT \nand BAD STARTUP PACKET only for the non EXEC_BACKEND/Windows cases?\n\nI'm tempted to think it's better to have some cases where one could \nbenefit from the hook as opposed to none.\n\nThoughts?\n\nRegards,\n\n-- \n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 12 Jul 2022 14:58:27 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Fri, Jul 8, 2022 at 5:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Looking at v2-0003 patch and emit_log_hook, how about we filter out\n> for those connectivity errors either based on error codes and if they\n> aren't unique, perhaps passing special flags to ereport API indicating\n> that it's a connectivity error and in the emit_log_hook we can look\n> for those connectivity error codes or flags to collect the stats about\n> the failure connections (with MyProcPort being present in\n> emit_log_hook)? This way, we don't need a new hook. 
Thoughts?\n\nBertrand and Other Hackers, above comment may have been lost in the\nwild - any thoughts on it?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 14 Jul 2022 15:13:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi Bharath,\n\nOn 7/14/22 11:43 AM, Bharath Rupireddy wrote:\n> On Fri, Jul 8, 2022 at 5:54 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Looking at v2-0003 patch and emit_log_hook, how about we filter out\n>> for those connectivity errors either based on error codes and if they\n>> aren't unique, perhaps passing special flags to ereport API indicating\n>> that it's a connectivity error and in the emit_log_hook we can look\n>> for those connectivity error codes or flags to collect the stats about\n>> the failure connections (with MyProcPort being present in\n>> emit_log_hook)? This way, we don't need a new hook. 
Thoughts?\n> Bertrand and Other Hackers, above comment may have been lost in the\n> wild - any thoughts on it?\n\nThanks for your feedback!\n\nI can see 2 issues with that approach:\n\n- We’ll not be able to track the “startup timeout case” (well, we may \nnot be able to track it anyway depending of what next to [1] will be) as \nit does not emit any log messages.\n- We’ll depend of the log_min_messages value (means \nedata->output_to_server needs to be true for the emit_log_hook to be \ntriggered).\n\n[1]: \nhttps://www.postgresql.org/message-id/a1558d12-c1c4-0fe5-f8a5-2b6c2294e55f%40amazon.com\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 2 Aug 2022 15:23:35 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 7/12/22 2:58 PM, Drouvot, Bertrand wrote:\n> Hi,\n>\n> On 7/11/22 8:18 AM, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>> The use case would be to increment a counter in shared memory (or \n>> most probably within an hash table) to reflect the number of times a \n>> startup packet timeout occurred.\n>>\n>> Reading the comments in/related to BackendInitialize() I understand \n>> that's definitely not safe to write in shared memory for the \n>> EXEC_BACKEND case, but wouldn't it be safe for the non EXEC_BACKEND \n>> case?\n>>\n>> BTW, it makes me realize that the hook being fired in the bad startup \n>> packet case:\n>>\n>>         /*\n>>          * Stop here if it was bad or a cancel packet. 
\n>> ProcessStartupPacket\n>>          * already did any appropriate error reporting.\n>>          */\n>>         if (status != STATUS_OK)\n>> +       {\n>> +               if (FailedConnection_hook)\n>> +                       (*FailedConnection_hook) \n>> (FCET_BAD_STARTUP_PACKET, port);\n>>                 proc_exit(0);\n>> +       }\n>>\n>> is not safe for the EXEC_BACKEND case.\n>>\n> What about the idea to trigger the hook for the STARTUP PACKET TIMEOUT \n> and BAD STARTUP PACKET only for the non EXEC_BACKEND/Windows cases?\n>\n> I'm tempted to think it's better to have some cases where one could \n> benefit from the hook as opposed to none.\n>\n> Thoughts?\n\nPlease find attached v2-0004-connection_hooks.patch as an attempt of \ndoing so.\n\nThanks\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 8 Aug 2022 12:51:20 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Tue, Aug 2, 2022 at 6:55 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi Bharath,\n>\n> On 7/14/22 11:43 AM, Bharath Rupireddy wrote:\n> > On Fri, Jul 8, 2022 at 5:54 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> Looking at v2-0003 patch and emit_log_hook, how about we filter out\n> >> for those connectivity errors either based on error codes and if they\n> >> aren't unique, perhaps passing special flags to ereport API indicating\n> >> that it's a connectivity error and in the emit_log_hook we can look\n> >> for those connectivity error codes or flags to collect the stats about\n> >> the failure connections (with MyProcPort being present in\n> >> emit_log_hook)? This way, we don't need a new hook. 
Thoughts?\n> > Bertrand and Other Hackers, above comment may have been lost in the\n> > wild - any thoughts on it?\n>\n> Thanks for your feedback!\n>\n> I can see 2 issues with that approach:\n>\n> - We’ll not be able to track the “startup timeout case” (well, we may\n> not be able to track it anyway depending of what next to [1] will be) as\n> it does not emit any log messages.\n>\n> [1]:\n> https://www.postgresql.org/message-id/a1558d12-c1c4-0fe5-f8a5-2b6c2294e55f%40amazon.com\n\nYes, we wanted to be very quick in StartupPacketTimeoutHandler because\nit is a timeout signal handler after all.\n\n> - We’ll depend of the log_min_messages value (means\n> edata->output_to_server needs to be true for the emit_log_hook to be\n> triggered).\n\nHm, we can just say that 'log_min_message setting will enable/disable\nthe feature'.\n\nI agree with your first point of not having an error in\nStartupPacketTimeoutHandler hence I don't think using emit log hook\nfor the connection failure stats helps.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n", "msg_date": "Sat, 13 Aug 2022 14:17:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Mon, Aug 8, 2022 at 3:51 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> Please find attached v2-0004-connection_hooks.patch\n\n /*\n * Stop here if it was bad or a cancel packet. ProcessStartupPacket\n * already did any appropriate error reporting.\n */\n if (status != STATUS_OK)\n+ {\n+#ifndef EXEC_BACKEND\n+ if (FailedConnection_hook)\n+ (*FailedConnection_hook) (FCET_BAD_STARTUP_PACKET, port);\n+#endif\n proc_exit(0);\n+ }\n\nPer the comment above the if condition, the `status != OK` may\nrepresent a cancel packet, as well. Clearly, a cancel packet is not\nthe same as a _bad_ packet. 
So I think here you need to differentiate\nbetween a cancel packet and a genuinely bad packet; I don't see\nanything good coming good out of us, or the hook-developer, lumping\nthose 2 cases together.\n\nI think we can reduce the number of places the hook is called, if we\ncall the hook from proc_exit(), and all the other places we simply set\na global variable to signify the reason for the failure. The case of\n_exit(1) from the signal-handler cannot use such a mechanism, but I\nthink all the other cases of interest can simply register one of the\nFCET_* value, and the hook call from proc_exit() can pass that value\nto the hook.\n\nIf we can convinces ourselves that we can use proc_exit(1) in\nStartupPacketTimeoutHandler(), instead of calling _exit(1), I think we\ncal eliminate replace all call sites for this hook with\nset-global-variable variant.\n\n> ...\n> * This should be the only function to call exit().\n> * -cim 2/6/90\n>...\n> proc_exit(int code)\n\nThe comment on proc_exit() claims that should be the only place\ncalling exit(), except the add-on/extension hooks. So there must be a\nstrong reason why the signal-handler uses _exit() to bypass all\ncallbacks.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Sat, 13 Aug 2022 22:45:53 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "(reposting the same review, with many grammatical fixes)\n\nOn Mon, Aug 8, 2022 at 3:51 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> Please find attached v2-0004-connection_hooks.patch\n\n /*\n * Stop here if it was bad or a cancel packet. 
ProcessStartupPacket\n * already did any appropriate error reporting.\n */\n if (status != STATUS_OK)\n+ {\n+#ifndef EXEC_BACKEND\n+ if (FailedConnection_hook)\n+ (*FailedConnection_hook) (FCET_BAD_STARTUP_PACKET, port);\n+#endif\n proc_exit(0);\n+ }\n\nPer the comment above the if condition, the `status != OK` may\nrepresent a cancel packet, as well. Clearly, a cancel packet is not\nthe same as a _bad_ packet. So I think here you need to differentiate\nbetween a cancel packet and a genuinely bad packet; I don't see\nanything good coming out of us, or the hook-developer, lumping\nthose 2 cases together.\n\nI think we can reduce the number of places the hook is called, if we\ncall the hook from proc_exit(), and at all the other places we simply set\na global variable to signify the reason for the failure. The case of\n_exit(1) from the signal-handler cannot use such a mechanism, but I\nthink all the other cases of interest can simply register one of the\nFCET_* values, and let the call from proc_exit() pass that value\nto the hook.\n\nIf we can convince ourselves that we can use proc_exit(1) in\nStartupPacketTimeoutHandler(), instead of calling _exit(1), I think we\ncal replace all call sites for this hook with the\nset-global-variable variant.\n\n> ...\n> * This should be the only function to call exit().\n> * -cim 2/6/90\n>...\n> proc_exit(int code)\n\nThe comment on proc_exit() claims that it should be the only place\ncalling exit(), except that the add-on/extension hooks may ignore this.\nSo there must be a strong reason why the signal-handler uses _exit()\nto bypass all callbacks.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Sat, 13 Aug 2022 22:52:33 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 8/14/22 7:52 AM, Gurjeet Singh wrote:\n> On Mon, Aug 8, 2022 at 3:51 AM Drouvot, Bertrand <bdrouvot@amazon.com> 
wrote:\n>> Please find attached v2-0004-connection_hooks.patch\n> /*\n> * Stop here if it was bad or a cancel packet. ProcessStartupPacket\n> * already did any appropriate error reporting.\n> */\n> if (status != STATUS_OK)\n> + {\n> +#ifndef EXEC_BACKEND\n> + if (FailedConnection_hook)\n> + (*FailedConnection_hook) (FCET_BAD_STARTUP_PACKET, port);\n> +#endif\n> proc_exit(0);\n> + }\n>\n> Per the comment above the if condition, the `status != OK` may\n> represent a cancel packet, as well. Clearly, a cancel packet is not\n> the same as a _bad_ packet. So I think here you need to differentiate\n> between a cancel packet and a genuinely bad packet; I don't see\n> anything good coming out of us, or the hook-developer, lumping\n> those 2 cases together.\n\nThanks for the feedback!\n\nYeah, good point. I agree that it would be better to make the distinction.\n\n> I think we can reduce the number of places the hook is called, if we\n> call the hook from proc_exit(), and at all the other places we simply set\n> a global variable to signify the reason for the failure. The case of\n> _exit(1) from the signal-handler cannot use such a mechanism, but I\n> think all the other cases of interest can simply register one of the\n> FCET_* values, and let the call from proc_exit() pass that value\n> to the hook.\n\nThat looks like a good idea to me. 
I'm tempted to rewrite the patch that \nway (and addressing the first comment in the same time).\n\nCurious to hear about others hackers thoughts too.\n\n> If we can convince ourselves that we can use proc_exit(1) in\n> StartupPacketTimeoutHandler(), instead of calling _exit(1), I think we\n> cal replace all call sites for this hook with the\n> set-global-variable variant.\n\nI can see why this is not safe in the EXEC_BACKEND case, but it's not \nclear to me why it would not be safe in the non EXEC_BACKEND case.\n\n>> ...\n>> * This should be the only function to call exit().\n>> * -cim 2/6/90\n>> ...\n>> proc_exit(int code)\n> The comment on proc_exit() claims that it should be the only place\n> calling exit(), except that the add-on/extension hooks may ignore this.\n> So there must be a strong reason why the signal-handler uses _exit()\n> to bypass all callbacks.\n\nyeah.\n\nRegards,\n\n-- \n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 16 Aug 2022 10:01:13 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Tue, Aug 16, 2022 at 1:31 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> On 8/14/22 7:52 AM, Gurjeet Singh wrote:\n> > On Mon, Aug 8, 2022 at 3:51 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> > I think we can reduce the number of places the hook is called, if we\n> > call the hook from proc_exit(), and at all the other places we simply set\n> > a global variable to signify the reason for the failure. The case of\n> > _exit(1) from the signal-handler cannot use such a mechanism, but I\n> > think all the other cases of interest can simply register one of the\n> > FCET_* values, and let the call from proc_exit() pass that value\n> > to the hook.\n>\n> That looks like a good idea to me. 
I'm tempted to rewrite the patch that\n> way (and addressing the first comment in the same time).\n>\n> Curious to hear about others hackers thoughts too.\n\nIMO, calling the hook from proc_exit() is not a good design as\nproc_exit() is a generic code called from many places in the source\ncode, even the simple code of kind if(call_failed_conn_hook) {\nfalied_conn_hook(params);} can come in the way of many exit code paths\nwhich is undesirable, and the likelihood of introducing new bugs may\nincrease.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n", "msg_date": "Tue, 16 Aug 2022 13:40:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 8/16/22 10:10 AM, Bharath Rupireddy wrote:\n> On Tue, Aug 16, 2022 at 1:31 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> On 8/14/22 7:52 AM, Gurjeet Singh wrote:\n>>> On Mon, Aug 8, 2022 at 3:51 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>> I think we can reduce the number of places the hook is called, if we\n>>> call the hook from proc_exit(), and at all the other places we simply set\n>>> a global variable to signify the reason for the failure. The case of\n>>> _exit(1) from the signal-handler cannot use such a mechanism, but I\n>>> think all the other cases of interest can simply register one of the\n>>> FCET_* values, and let the call from proc_exit() pass that value\n>>> to the hook.\n>> That looks like a good idea to me. 
I'm tempted to rewrite the patch that\n>> way (and addressing the first comment in the same time).\n>>\n>> Curious to hear about others hackers thoughts too.\n> IMO, calling the hook from proc_exit() is not a good design as\n> proc_exit() is a generic code called from many places in the source\n> code, even the simple code of kind if(call_failed_conn_hook) {\n> falied_conn_hook(params);} can come in the way of many exit code paths\n> which is undesirable, and the likelihood of introducing new bugs may\n> increase.\n\nThanks for the feedback.\n\nWhat do you think about calling the hook only if the new global variable \nis not equal to its default value (which would mean don't trigger the \nhook)?\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 16 Aug 2022 10:25:19 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Tue, Aug 16, 2022 at 1:55 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 8/16/22 10:10 AM, Bharath Rupireddy wrote:\n> > On Tue, Aug 16, 2022 at 1:31 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> On 8/14/22 7:52 AM, Gurjeet Singh wrote:\n> >>> On Mon, Aug 8, 2022 at 3:51 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >>> I think we can reduce the number of places the hook is called, if we\n> >>> call the hook from proc_exit(), and at all the other places we simply set\n> >>> a global variable to signify the reason for the failure. The case of\n> >>> _exit(1) from the signal-handler cannot use such a mechanism, but I\n> >>> think all the other cases of interest can simply register one of the\n> >>> FCET_* values, and let the call from proc_exit() pass that value\n> >>> to the hook.\n> >> That looks like a good idea to me. 
I'm tempted to rewrite the patch that\n> >> way (and addressing the first comment in the same time).\n> >>\n> >> Curious to hear about others hackers thoughts too.\n> > IMO, calling the hook from proc_exit() is not a good design as\n> > proc_exit() is a generic code called from many places in the source\n> > code, even the simple code of kind if(call_failed_conn_hook) {\n> > falied_conn_hook(params);} can come in the way of many exit code paths\n> > which is undesirable, and the likelihood of introducing new bugs may\n> > increase.\n>\n> Thanks for the feedback.\n>\n> What do you think about calling the hook only if the new global variable\n> is not equal to its default value (which would mean don't trigger the\n> hook)?\n\nIMO, that's not a good design as explained above. Why should the\nfailed connection hook related code get hit for each and every\nproc_exit() call? Here, the code duplication i.e. the number of places\nthe failed connection hook gets called mustn't be the reason to move\nthat code to proc_exit().\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n", "msg_date": "Tue, 16 Aug 2022 15:46:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "On Tue, Aug 16, 2022 at 3:16 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Aug 16, 2022 at 1:55 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >\n> > Hi,\n> >\n> > On 8/16/22 10:10 AM, Bharath Rupireddy wrote:\n> > > On Tue, Aug 16, 2022 at 1:31 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> > >> On 8/14/22 7:52 AM, Gurjeet Singh wrote:\n> > >>> On Mon, Aug 8, 2022 at 3:51 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> > >>> I think we can reduce the number of places the hook is called, if we\n> > >>> call the hook from proc_exit(), and at all the other places we 
simply set\n> > >>> a global variable to signify the reason for the failure. The case of\n> > >>> _exit(1) from the signal-handler cannot use such a mechanism, but I\n> > >>> think all the other cases of interest can simply register one of the\n> > >>> FCET_* values, and let the call from proc_exit() pass that value\n> > >>> to the hook.\n> > >> That looks like a good idea to me. I'm tempted to rewrite the patch that\n> > >> way (and addressing the first comment in the same time).\n> > >>\n> > >> Curious to hear about others hackers thoughts too.\n\nI agree that we need feedback from long-timers here, on the decision\nof whether to use proc_exit() for this purpose.\n\n> > > IMO, calling the hook from proc_exit() is not a good design as\n> > > proc_exit() is a generic code called from many places in the source\n> > > code, even the simple code of kind if(call_failed_conn_hook) {\n> > > falied_conn_hook(params);} can come in the way of many exit code paths\n> > > which is undesirable, and the likelihood of introducing new bugs may\n> > > increase.\n> >\n> > Thanks for the feedback.\n> >\n> > What do you think about calling the hook only if the new global variable\n> > is not equal to its default value (which would mean don't trigger the\n> > hook)?\n>\n> IMO, that's not a good design as explained above. Why should the\n> failed connection hook related code get hit for each and every\n> proc_exit() call? Here, the code duplication i.e. the number of places\n> the failed connection hook gets called mustn't be the reason to move\n> that code to proc_exit().\n\nI agree, it doesn't feel _clean_, having to maintain a global\nvariable, pass it to hook at exit, etc. 
But the alternative feels less\ncleaner.\n\nThis hook needs to be called when the process has decided to exit, so\nit makes sense to place this call in stack above proc_exit(), whose\nsole job is to let the process die gracefully, and take care of things\non the way out.\n\nThere are quite a few places in core that leverage proc_exit()'s\nfacilities (by registering on_proc_exit callbacks), so an\nextension/hook doing so wouldn't be out of the ordinary; (apparently\ncontrib/sepgsql has already set the precedent on an extension using\nthe on_proc_exit callback). Admittedly, in this case the core will be\nmanaging and passing it the additional global variable needed to\nrecord the connection failure reason, FCET_*.\n\nIf we agree that proc_exit() is a good place to place this call, then\nthis hook can be converted into a on_proc_exit callback. If the global\nvariable is exported, then the extension(s) can access it in the\ncallback to ascertain why the process is exiting, and proc_exit()\nwon't have to know anything special about the extension, or hook, or\nthe global variable.\n\nThe on_proc_exit callback method wouldn't work for the _exit() called\nin StartupPacketTimeoutHandler(), so that will need to be handled\nseparately.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Tue, 16 Aug 2022 07:26:33 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "This looks like it was a good discussion -- last summer. 
But it\ndoesn't seem to be a patch under active development now.\n\nIt sounds like there were some design constraints that still need some\nnew ideas to solve and a new patch will be needed to address them.\n\nShould this be marked Returned With Feedback?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 3 Apr 2023 18:08:06 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" }, { "msg_contents": "Hi,\n\nOn 4/4/23 12:08 AM, Gregory Stark (as CFM) wrote:\n> This looks like it was a good discussion -- last summer. But it\n> doesn't seem to be a patch under active development now.\n> \n> It sounds like there were some design constraints that still need some\n> new ideas to solve and a new patch will be needed to address them.\n> \n> Should this be marked Returned With Feedback?\n> \n\nI just marked it as Returned With Feedback.\n\nI may re-open it later on to resume the discussion or share\nnew ideas though.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 4 Apr 2023 12:11:55 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch proposal: New hooks in the connection path" } ]
[ { "msg_contents": "Hello,\n\nI propose supporting TRUNCATE triggers\nbecause some FDW now supports TRUNCATE. I think such triggers\nare useful for audit logging or for preventing undesired\ntruncate.\n\nPatch attached.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 30 Jun 2022 19:38:48 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Support TRUNCATE triggers on foreign tables" }, { "msg_contents": "\n\nOn 2022/06/30 19:38, Yugo NAGATA wrote:\n> Hello,\n> \n> I propose supporting TRUNCATE triggers on foreign tables\n> because some FDW now supports TRUNCATE. I think such triggers\n> are useful for audit logging or for preventing undesired\n> truncate.\n> \n> Patch attached.\n\nThanks for the patch! It looks good to me except the following thing.\n\n <entry align=\"center\"><command>TRUNCATE</command></entry>\n <entry align=\"center\">&mdash;</entry>\n- <entry align=\"center\">Tables</entry>\n+ <entry align=\"center\">Tables and foreign tables</entry>\n </row>\n\nYou added \"foreign tables\" for BEFORE statement-level trigger as the above, but ISTM that you also needs to do that for AFTER statement-level trigger. No?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 8 Jul 2022 00:54:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Support TRUNCATE triggers on foreign tables" }, { "msg_contents": "Hello Fujii-san,\n\nThank you for reviewing the patch!\n\nOn Fri, 8 Jul 2022 00:54:37 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2022/06/30 19:38, Yugo NAGATA wrote:\n> > Hello,\n> > \n> > I propose supporting TRUNCATE triggers\n> > because some FDW now supports TRUNCATE. I think such triggers\n> > are useful for audit logging or for preventing undesired\n> > truncate.\n> > \n> > Patch attached.\n> \n> Thanks for the patch! It looks good to me except the following thing.\n> \n> <entry align=\"center\"><command>TRUNCATE</command></entry>\n> <entry align=\"center\">&mdash;</entry>\n> - <entry align=\"center\">Tables</entry>\n> + <entry align=\"center\">Tables and foreign tables</entry>\n> </row>\n> \n> You added \"foreign tables\" for BEFORE statement-level trigger as the above, but ISTM that you also needs to do that for AFTER statement-level trigger. No?\n\nOops, I forgot it. I attached the updated patch.\n\nRegards,\nYugo Nagata\n\n> \n> Regards,\n> \n> -- \n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Fri, 8 Jul 2022 11:19:59 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Support TRUNCATE triggers on foreign tables" }, { "msg_contents": "\n\nOn 2022/07/08 11:19, Yugo NAGATA wrote:\n>> You added \"foreign tables\" for BEFORE statement-level trigger as the above, but ISTM that you also needs to do that for AFTER statement-level trigger. No?\n> \n> Oops, I forgot it. I attached the updated patch.\n\nThanks for updating the patch! LGTM.\nBarring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 8 Jul 2022 14:06:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Support TRUNCATE triggers on foreign tables" }, { "msg_contents": "2022年7月8日(金) 14:06 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> On 2022/07/08 11:19, Yugo NAGATA wrote:\n> >> You added \"foreign tables\" for BEFORE statement-level trigger as the above, but ISTM that you also needs to do that for AFTER statement-level trigger. No?\n> >\n> > Oops, I forgot it. I attached the updated patch.\n>\n> Thanks for updating the patch! LGTM.\n> Barring any objection, I will commit the patch.\n\nAn observation: as-is the patch would make it possible to create a truncate\ntrigger for a foreign table whose FDW doesn't support truncation, which seems\nsomewhat pointless, possible source of confusion etc.:\n\n postgres=# CREATE TRIGGER ft_trigger\n AFTER TRUNCATE ON fb_foo\n EXECUTE FUNCTION fb_foo_trg();\n CREATE TRIGGER\n\n postgres=# TRUNCATE fb_foo;\n ERROR: cannot truncate foreign table \"fb_foo\"\n\nIt would be easy enough to check for this, e.g.:\n\n else if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)\n {\n FdwRoutine *fdwroutine = GetFdwRoutineForRelation(rel, false);\n\n if (!fdwroutine->ExecForeignTruncate)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"foreign data wrapper does not support\ntable truncation\")));\n ...\n\nwhich results in:\n\n postgres=# CREATE TRIGGER ft_trigger\n AFTER TRUNCATE ON fb_foo\n EXECUTE FUNCTION fb_foo_trg();\n ERROR: foreign data wrapper does not support table truncation\n\nwhich IMO is preferable to silently accepting DDL which will never\nactually do anything.\n\n\nRegards\n\nIan Barwick\n\n\n", "msg_date": "Fri, 8 Jul 2022 16:50:10 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support TRUNCATE triggers on foreign tables" }, { "msg_contents": "On Fri, 8 Jul 2022 16:50:10 +0900\nIan Lawrence Barwick <barwick@gmail.com> wrote:\n\n> 2022年7月8日(金) 14:06 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> > On 2022/07/08 11:19, Yugo NAGATA wrote:\n> > >> You added \"foreign tables\" for BEFORE statement-level trigger as the above, but ISTM that you also needs to do that for AFTER statement-level trigger. No?\n> > >\n> > > Oops, I forgot it. I attached the updated patch.\n> >\n> > Thanks for updating the patch! LGTM.\n> > Barring any objection, I will commit the patch.\n> \n> An observation: as-is the patch would make it possible to create a truncate\n> trigger for a foreign table whose FDW doesn't support truncation, which seems\n> somewhat pointless, possible source of confusion etc.:\n> \n> postgres=# CREATE TRIGGER ft_trigger\n> AFTER TRUNCATE ON fb_foo\n> EXECUTE FUNCTION fb_foo_trg();\n> CREATE TRIGGER\n> \n> postgres=# TRUNCATE fb_foo;\n> ERROR: cannot truncate foreign table \"fb_foo\"\n> \n> It would be easy enough to check for this, e.g.:\n> \n> else if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)\n> {\n> FdwRoutine *fdwroutine = GetFdwRoutineForRelation(rel, false);\n> \n> if (!fdwroutine->ExecForeignTruncate)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"foreign data wrapper does not support\n> table truncation\")));\n> ...\n> \n> which results in:\n> \n> postgres=# CREATE TRIGGER ft_trigger\n> AFTER TRUNCATE ON fb_foo\n> EXECUTE FUNCTION fb_foo_trg();\n> ERROR: foreign data wrapper does not support table truncation\n> \n> which IMO is preferable to silently accepting DDL which will never\n> actually do anything.\n\nAt beginning, I also thought such check would be necessary, but I noticed that\nit is already possible to create insert/delete/update triggers for a foreign\ntable whose FDW doesn't support such operations. So, I discarded this idea from\nthe proposed patch for consistency. \n\nIf we want to add such prevention, we will need similar checks for\nINSERT/DELETE/UPDATE not only TRUNCATE. However, I think such fix is independent\nfrom this and it can be proposed as another patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 8 Jul 2022 17:10:11 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Support TRUNCATE triggers on foreign tables" }, { "msg_contents": "2022年7月8日(金) 17:10 Yugo NAGATA <nagata@sraoss.co.jp>:\n>\n> On Fri, 8 Jul 2022 16:50:10 +0900\n> Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> > 2022年7月8日(金) 14:06 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> > > On 2022/07/08 11:19, Yugo NAGATA wrote:\n> > > >> You added \"foreign tables\" for BEFORE statement-level trigger as the above, but ISTM that you also needs to do that for AFTER statement-level trigger. No?\n> > > >\n> > > > Oops, I forgot it. I attached the updated patch.\n> > >\n> > > Thanks for updating the patch! LGTM.\n> > > Barring any objection, I will commit the patch.\n> >\n> > An observation: as-is the patch would make it possible to create a truncate\n> > trigger for a foreign table whose FDW doesn't support truncation, which seems\n> > somewhat pointless, possible source of confusion etc.:\n> >\n> > postgres=# CREATE TRIGGER ft_trigger\n> > AFTER TRUNCATE ON fb_foo\n> > EXECUTE FUNCTION fb_foo_trg();\n> > CREATE TRIGGER\n> >\n> > postgres=# TRUNCATE fb_foo;\n> > ERROR: cannot truncate foreign table \"fb_foo\"\n> >\n> > It would be easy enough to check for this, e.g.:\n> >\n> > else if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)\n> > {\n> > FdwRoutine *fdwroutine = GetFdwRoutineForRelation(rel, false);\n> >\n> > if (!fdwroutine->ExecForeignTruncate)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > errmsg(\"foreign data wrapper does not support\n> > table truncation\")));\n> > ...\n> >\n> > which results in:\n> >\n> > postgres=# CREATE TRIGGER ft_trigger\n> > AFTER TRUNCATE ON fb_foo\n> > EXECUTE FUNCTION fb_foo_trg();\n> > ERROR: foreign data wrapper does not support table truncation\n> >\n> > which IMO is preferable to silently accepting DDL which will never\n> > actually do anything.\n>\n> At beginning, I also thought such check would be necessary, but I noticed that\n> it is already possible to create insert/delete/update triggers for a foreign\n> table whose FDW doesn't support such operations. So, I discarded this idea from\n> the proposed patch for consistency.\n>\n> If we want to add such prevention, we will need similar checks for\n> INSERT/DELETE/UPDATE not only TRUNCATE. However, I think such fix is independent\n> from this and it can be proposed as another patch.\n\nAh OK, makes sense from that point of view. Thanks for the clarification!\n\nRegards\n\nIan Barwick\n\n\n", "msg_date": "Fri, 8 Jul 2022 17:13:32 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support TRUNCATE triggers on foreign tables" }, { "msg_contents": "\n\nOn 2022/07/08 17:13, Ian Lawrence Barwick wrote:\n>> If we want to add such prevention, we will need similar checks for\n>> INSERT/DELETE/UPDATE not only TRUNCATE. However, I think such fix is independent\n>> from this and it can be proposed as another patch.\n> \n> Ah OK, makes sense from that point of view. Thanks for the clarification!\n\nSo I pushed the v2 patch that Yugo-san posted. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 12 Jul 2022 09:24:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Support TRUNCATE triggers on foreign tables" }, { "msg_contents": "On Tue, 12 Jul 2022 09:24:20 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2022/07/08 17:13, Ian Lawrence Barwick wrote:\n> >> If we want to add such prevention, we will need similar checks for\n> >> INSERT/DELETE/UPDATE not only TRUNCATE. However, I think such fix is independent\n> >> from this and it can be proposed as another patch.\n> > \n> > Ah OK, makes sense from that point of view. Thanks for the clarification!\n> \n> So I pushed the v2 patch that Yugo-san posted. Thanks!\n\nThanks!\n\n\n> \n> Regards,\n> \n> -- \n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 12 Jul 2022 16:29:51 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Support TRUNCATE triggers on foreign tables" } ]
[ { "msg_contents": "Hi,\n\nI was just looking at the list of predefined roles that we have, and\npg_checkpointer is conspicuously not like the others:\n\nrhaas=# select rolname from pg_authid where oid!=10;\n rolname\n---------------------------\n pg_database_owner\n pg_read_all_data\n pg_write_all_data\n pg_monitor\n pg_read_all_settings\n pg_read_all_stats\n pg_stat_scan_tables\n pg_read_server_files\n pg_write_server_files\n pg_execute_server_program\n pg_signal_backend\n pg_checkpointer\n(12 rows)\n\nAlmost all of these are verbs or verb phrases: having this role gives\nyou the ability to read all data, or write all data, or read all\nsettings, or whatever. But you can't say that having this role gives\nyou the ability to checkpointer. It gives you the ability to\ncheckpoint, or to perform a checkpoint.\n\nSo I think the name is inconsistent with our usual convention, and we\nought to fix it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Jun 2022 08:48:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Thu, 30 Jun 2022 at 08:48, Robert Haas <robertmhaas@gmail.com> wrote:\n\nAlmost all of these are verbs or verb phrases: having this role gives\n> you the ability to checkpointer. It gives you the ability to\n> checkpoint, or to perform a checkpoint.\n>\n> So I think the name is inconsistent with our usual convention, and we\n> ought to fix it.\n>\n\nI was going to point out that pg_database_owner is the same way, but it is\nfundamentally different in that it has no special allowed access and is\nmeant to be the target of permission grants rather than being granted to\nother roles.\n\n+1 to rename it to pg_checkpoint or to some similar name.", "msg_date": "Thu, 30 Jun 2022 08:57:04 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Thu, Jun 30, 2022 at 08:57:04AM -0400, Isaac Morland wrote:\n> I was going to point out that pg_database_owner is the same way, but it is\n> fundamentally different in that it has no special allowed access and is\n> meant to be the target of permission grants rather than being granted to\n> other roles.\n> \n> +1 to rename it to pg_checkpoint or to some similar name.\n\nWe are still in beta, so, FWIW, I am fine to adjust this name even if\nit means an extra catversion bump.\n\n\"checkpoint\" is not a verb (right?), so would something like \n\"pg_perform_checkpoint\" rather than \"pg_checkpoint\" fit better in the\nlarger picture?\n--\nMichael", "msg_date": "Fri, 1 Jul 2022 10:22:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Thu, 30 Jun 2022 at 21:22, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Jun 30, 2022 at 08:57:04AM -0400, Isaac Morland wrote:\n> > I was going to point out that pg_database_owner is the same way, but it\n> is\n> > fundamentally different in that it has no special allowed access and is\n> > meant to be the target of permission grants rather than being granted to\n> > other roles.\n> >\n> > +1 to rename it to pg_checkpoint or to some similar name.\n>\n> We are still in beta, so, FWIW, I am fine to adjust this name even if\n> it means an extra catversion bump.\n>\n> \"checkpoint\" is not a verb (right?), so would something like\n> \"pg_perform_checkpoint\" rather than \"pg_checkpoint\" fit better in the\n> larger picture?\n>\n\nI would argue it’s OK. In the Postgres context, I can imagine someone\nsaying they’re going to checkpoint the database, and the actual command is\njust CHECKPOINT. Changing from checkpointer to checkpoint means that we’re\ndescribing the action rather than what a role member is.\n\nIf we are going to put a more standard verb in there, I would use execute\nrather than perform, because that is what the documentation says members of\nthis role can do — “Allow executing the CHECKPOINT command”. Zooming out a\nlittle, I think we normally talk about executing commands rather than\nperforming them, so this is consistent with those other uses; otherwise we\nshould reconsider what the documentation itself says to match\nother commands that we talk about running.\n\nOK, I think I’ve bikeshedded enough. I’m just happy to have all these new\nroles to avoid handing out full superuser access routinely.", "msg_date": "Thu, 30 Jun 2022 22:58:51 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Fri, Jul 01, 2022 at 10:22:16AM +0900, Michael Paquier wrote:\n> On Thu, Jun 30, 2022 at 08:57:04AM -0400, Isaac Morland wrote:\n> > I was going to point out that pg_database_owner is the same way, but it is\n> > fundamentally different in that it has no special allowed access and is\n> > meant to be the target of permission grants rather than being granted to\n> > other roles.\n> > \n> > +1 to rename it to pg_checkpoint or to some similar name.\n> \n> We are still in beta, so, FWIW, I am fine to adjust this name even if\n> it means an extra catversion bump.\n> \n> \"checkpoint\" is not a verb (right?), so would something like \n\nWhy not ? There's a *command* called \"checkpoint\".\nIt is also a noun.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 30 Jun 2022 22:03:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Thu, Jun 30, 2022 at 9:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n> \"checkpoint\" is not a verb (right?), so would something like\n> \"pg_perform_checkpoint\" rather than \"pg_checkpoint\" fit better in the\n> larger picture?\n\nIt's true that the dictionary describes checkpoint as a noun, but I\nthink in a database context it is often used as a verb. One example is\nthe CHECKPOINT command itself: command names are all verbs, and\nCHECKPOINT is a command name, so we're using it as a verb. Similarly I\nthink you can talk about needing to checkpoint the database. Therefore\nI think pg_checkpoint is OK, and it has the advantage of being short.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 09:18:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Fri, Jul 1, 2022 at 3:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jun 30, 2022 at 9:22 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> > \"checkpoint\" is not a verb (right?), so would something like\n> > \"pg_perform_checkpoint\" rather than \"pg_checkpoint\" fit better in the\n> > larger picture?\n>\n> It's true that the dictionary describes checkpoint as a noun, but I\n> think in a database context it is often used as a verb. One example is\n> the CHECKPOINT command itself: command names are all verbs, and\n> CHECKPOINT is a command name, so we're using it as a verb. Similarly I\n> think you can talk about needing to checkpoint the database. Therefore\n> I think pg_checkpoint is OK, and it has the advantage of being short.\n>\n\n+1 for pg_checkpoint on that -- let's not make it longer than necessary.\n\nAnd yes, +1 for actually changing it. It's a lot cheaper to change it now\nthan it will be in the future. Yes, it would've been even cheaper to have\nalready changed it, but we can't go back in time...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 1 Jul 2022 15:36:48 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Fri, Jul 01, 2022 at 03:36:48PM +0200, Magnus Hagander wrote:\n> +1 for pg_checkpoint on that -- let's not make it longer than necessary.\n> \n> And yes, +1 for actually changing it. It's a lot cheaper to change it now\n> than it will be in the future. Yes, it would've been even cheaper to have\n> already changed it, but we can't go back in time...\n\nYeah, pg_checkpoint seems like the obvious alternative to pg_checkpointer.\nI didn't see a patch in this thread yet, so I've put one together. I did\nnot include the catversion bump.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 1 Jul 2022 14:50:54 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Fri, Jul 1, 2022 at 5:50 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Fri, Jul 01, 2022 at 03:36:48PM +0200, Magnus Hagander wrote:\n> > +1 for pg_checkpoint on that -- let's not make it longer than necessary.\n> >\n> > And yes, +1 for actually changing it. It's a lot cheaper to change it now\n> > than it will be in the future. Yes, it would've been even cheaper to have\n> > already changed it, but we can't go back in time...\n>\n> Yeah, pg_checkpoint seems like the obvious alternative to pg_checkpointer.\n> I didn't see a patch in this thread yet, so I've put one together. I did\n> not include the catversion bump.\n\nHearing several votes in favor and none opposed, committed and\nback-patched to v15. I added the catversion bump, but left out the .po\nfile changes, figuring it was better to let those files get updated\nvia the normal process.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Jul 2022 13:38:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Tue, Jul 05, 2022 at 01:38:43PM -0400, Robert Haas wrote:\n> Hearing several votes in favor and none opposed, committed and\n> back-patched to v15.\n\nThanks.\n\n> I added the catversion bump, but left out the .po\n> file changes, figuring it was better to let those files get updated\n> via the normal process.\n\nI'll keep this in mind for future patches. The changes looked pretty\nobvious, so I wasn't sure whether to include it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Jul 2022 12:42:13 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "On Tue, Jul 5, 2022 at 3:42 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Tue, Jul 05, 2022 at 01:38:43PM -0400, Robert Haas wrote:\n> > Hearing several votes in favor and none opposed, committed and\n> > back-patched to v15.\n>\n> Thanks.\n>\n> > I added the catversion bump, but left out the .po\n> > file changes, figuring it was better to let those files get updated\n> > via the normal process.\n>\n> I'll keep this in mind for future patches. The changes looked pretty\n> obvious, so I wasn't sure whether to include it.\n\nI believe Peter periodically runs a script that bulk copies everything\nover from the translation repository.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Jul 2022 17:09:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jul 5, 2022 at 3:42 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Tue, Jul 05, 2022 at 01:38:43PM -0400, Robert Haas wrote:\n>>> I added the catversion bump, but left out the .po\n>>> file changes, figuring it was better to let those files get updated\n>>> via the normal process.\n\n>> I'll keep this in mind for future patches. The changes looked pretty\n>> obvious, so I wasn't sure whether to include it.\n\n> I believe Peter periodically runs a script that bulk copies everything\n> over from the translation repository.\n\nIndeed. If we did commit anything, it would just be wiped out in the\nnext bulk update. The authoritative versions of the .po files are in\nthe pgtranslation repo, not gitmaster.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Jul 2022 18:27:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_checkpointer is not a verb or verb phrase" } ]
[ { "msg_contents": "Hi,\n\nI was dismayed to learn that VACUUM VERBOSE on a table no longer tells\nyou anything about whether any pages were skipped due to pins. Now the\nobvious explanation for that is that we no longer skip pages entirely\njust because we find that they are pinned. But I think failing to\nfully process a page due to a pin is still a noteworthy event, and I\nthink that VACUUM VERBOSE should tell you how many times that\nhappened.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Jun 2022 08:57:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "vacuum verbose no longer reveals anything about pins" }, { "msg_contents": "On Thu, Jun 30, 2022 at 5:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I was dismayed to learn that VACUUM VERBOSE on a table no longer tells\n> you anything about whether any pages were skipped due to pins.\n\nVACUUM VERBOSE will show a dedicated line that reports on the number\nof pages that we couldn't get a cleanup lock on, if and only if we\ncouldn't do useful work as a result. In practice this means pages that\nhad one or more fully DEAD tuples that couldn't be removed due to our\ninability to prune. In my view this is strictly better than reporting\non the number of \"skipped due to pins\" pages.\n\nIn the case of any pages that we couldn't get a cleanup lock on that\ndidn't have any DEAD tuples (pages that are not reported on at all),\nVACUUM hasn't missed any work whatsoever. It even does heap vacuuming,\nwhich doesn't require a cleanup lock in either the first or the second\nheap pass. What's the problem with not reporting on that?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 30 Jun 2022 08:33:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: vacuum verbose no longer reveals anything about pins" }, { "msg_contents": "On Thu, Jun 30, 2022 at 11:33 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Jun 30, 2022 at 5:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I was dismayed to learn that VACUUM VERBOSE on a table no longer tells\n> > you anything about whether any pages were skipped due to pins.\n>\n> VACUUM VERBOSE will show a dedicated line that reports on the number\n> of pages that we couldn't get a cleanup lock on, if and only if we\n> couldn't do useful work as a result. In practice this means pages that\n> had one or more fully DEAD tuples that couldn't be removed due to our\n> inability to prune. In my view this is strictly better than reporting\n> on the number of \"skipped due to pins\" pages.\n\nAh, I missed that. I think that in the test case I was using, there\nwas a conflicting pin but there were no dead tuples, so that line\nwasn't present in the output.\n\n> In the case of any pages that we couldn't get a cleanup lock on that\n> didn't have any DEAD tuples (pages that are not reported on at all),\n> VACUUM hasn't missed any work whatsoever. It even does heap vacuuming,\n> which doesn't require a cleanup lock in either the first or the second\n> heap pass. What's the problem with not reporting on that?\n\nMaybe nothing. I just thought you'd completely removed all reporting\non this, and I'm glad that's not so.\n\nThanks for the quick response.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Jun 2022 11:43:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: vacuum verbose no longer reveals anything about pins" }, { "msg_contents": "On Thu, Jun 30, 2022 at 8:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Ah, I missed that. I think that in the test case I was using, there\n> was a conflicting pin but there were no dead tuples, so that line\n> wasn't present in the output.\n\nEven if there was a DEAD tuple, your test would still have to account\nfor opportunistic pruning before the cursor acquires its blocking pin\n(I'm guessing that you're using a cursor for this). The earlier\npruning could turn the DEAD tuple into either an LP_UNUSED item (in\nthe case of a heap-only tuple) or an LP_DEAD stub line pointer.\n\nAs I said, any existing LP_DEAD items will get put in the dead_items\narray by lazy_scan_noprune, very much like it would in the\ncleanup-lock-acquired/lazy_scan_prune case.\n\n> Maybe nothing. I just thought you'd completely removed all reporting\n> on this, and I'm glad that's not so.\n\nThe important idea behind all this is that VACUUM now uses a slightly\nmore abstract definition of scanned_pages. This is far easier to work\nwith at a high level, especially in the instrumentation code used by\nVACUUM VERBOSE.\n\nYou may have also noticed that VACUUM VERBOSE/autovacuum logging\nactually shows the number of scanned pages in Postgres 15, for the\nfirst time -- even though that's very basic information. The revised\ndefinition of scanned_pages enabled that enhancement. We are no longer\nencumbered by the existence of a no-cleanup-lock-page special case.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 30 Jun 2022 10:20:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: vacuum verbose no longer reveals anything about pins" } ]
[ { "msg_contents": "Hi Hackers,\n\n\n\nI have been using pg_rewind in production for 2 years. One of the things that I noticed in pg_rewind is that if it doesn't know what to do with a file, \"it copies\". I understand it's the safer option. After all, the alternative, pg_basebackup, copies all the files from source to target.\n\n\n\nHowever, this is making pg_rewind inefficient when we have a high number of WAL files. The majority of the data (in most of my cases 95%+) that it copies are WAL files, which are anyway the same between the source and target. Skipping those same WAL files from copying will improve the speed of pg_rewind a lot.\n\n\n\n1. Does pg_rewind need to copy WAL files before the WAL that contains the last common checkpoint?\n\n\n\nHeikki's presentation https://pgsessions.com/assets/archives/pg_rewind-presentation-paris.pdf gave me a good overview and also explained the behavior that I mentioned.\n\n\n\nThanks,\n\nVignesh", "msg_date": "Thu, 30 Jun 2022 06:22:28 -0700", "msg_from": "vignesh ravichandran <admin@viggy28.dev>", "msg_from_op": true, "msg_subject": "Making pg_rewind faster" }, { "msg_contents": "Hi everyone!\n\nHere's the attached patch submission to optimize pg_rewind performance when many WAL files are retained on the server. This patch avoids replaying (copying over) older WAL segment files that fall before the point of divergence between the source and target servers.\n\nThanks,\nJustin\n________________________________\nFrom: Justin Kwan <jkwan@cloudflare.com>\nSent: July 15, 2022 6:13 PM\nTo: vignesh ravichandran <admin@viggy28.dev>\nCc: pgsql-hackers <pgsql-hackers@postgresql.org>; vignesh <vignesh@cloudflare.com>; justinpkwan@outlook.com <justinpkwan@outlook.com>\nSubject: Re: Making pg_rewind faster\n\nLooping in my other email.\n\nOn Thu, Jun 30, 2022 at 6:22 AM vignesh ravichandran <admin@viggy28.dev<mailto:admin@viggy28.dev>> wrote:\nHi Hackers,\n\nI have been using pg_rewind in production for 2 years. One of the things that I noticed in pg_rewind is that if it doesn't know what to do with a file, \"it copies\". I understand it's the safer option. After all, the alternative, pg_basebackup, copies all the files from source to target.\n\nHowever, this is making pg_rewind inefficient when we have a high number of WAL files. The majority of the data (in most of my cases 95%+) that it copies are WAL files, which are anyway the same between the source and target. Skipping those same WAL files from copying will improve the speed of pg_rewind a lot.\n\n1. 
Does pg_rewind need to copy WAL files before the WAL that contains the last common checkpoint?\n\nHeikki's presentation https://pgsessions.com/assets/archives/pg_rewind-presentation-paris.pdf gave me a good overview and also explained the behavior that I mentioned.\n\nThanks,\nVignesh", "msg_date": "Fri, 15 Jul 2022 22:24:54 +0000", "msg_from": "Justin Kwan <justinpkwan@outlook.com>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" }, { "msg_contents": "Hi everyone!\n\nI've also attached the pg_rewind optimization patch file for Postgres version 14.4. The previous patch file targets Postgres version 15 Beta 1/2.\n\nThanks,\nJustin\n________________________________\nFrom: Justin Kwan <jkwan@cloudflare.com>\nSent: July 15, 2022 6:13 PM\nTo: vignesh ravichandran <admin@viggy28.dev>\nCc: pgsql-hackers <pgsql-hackers@postgresql.org>; vignesh <vignesh@cloudflare.com>; justinpkwan@outlook.com <justinpkwan@outlook.com>\nSubject: Re: Making pg_rewind faster\n\nLooping in my other email.\n\nOn Thu, Jun 30, 2022 at 6:22 AM vignesh ravichandran <admin@viggy28.dev<mailto:admin@viggy28.dev>> wrote:\nHi Hackers,\n\nI have been using pg_rewind in production for 2 years. One of the things that I noticed in pg_rewind is that if it doesn't know what to do with a file, \"it copies\". I understand it's the safer option. After all, the alternative, pg_basebackup, copies all the files from source to target.\n\nHowever, this is making pg_rewind inefficient when we have a high number of WAL files. The majority of the data (in most of my cases 95%+) that it copies are WAL files, which are anyway the same between the source and target. Skipping those same WAL files from copying will improve the speed of pg_rewind a lot.\n\n1. Does pg_rewind need to copy WAL files before the WAL that contains the last common checkpoint?\n\nHeikki's presentation https://pgsessions.com/assets/archives/pg_rewind-presentation-paris.pdf gave me a good overview and also explained the behavior that I mentioned.\n\nThanks,\nVignesh", "msg_date": "Sat, 16 Jul 2022 03:16:27 +0000", "msg_from": "Justin Kwan <justinpkwan@outlook.com>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" }, { "msg_contents": "Justin Kwan <justinpkwan@outlook.com> writes:\n> I've also attached the pg_rewind optimization patch file for Postgres version 14.4. The previous patch file targets Postgres version 15 Beta 1/2.\n\nIt's very unlikely that we would consider committing such changes into\nreleased branches. In fact, it's too late even for v15. You should\nbe submitting non-bug-fix patches against master (v16-to-be).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Jul 2022 14:40:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" }, { "msg_contents": "Hi Tom,\n\nThank you for taking a look at this and that sounds good. I will send over a patch compatible with Postgres v16.\n\nJustin\n________________________________\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: July 17, 2022 2:40 PM\nTo: Justin Kwan <justinpkwan@outlook.com>\nCc: pgsql-hackers <pgsql-hackers@postgresql.org>; vignesh <vignesh@cloudflare.com>; jkwan@cloudflare.com <jkwan@cloudflare.com>; vignesh ravichandran <admin@viggy28.dev>; hlinnaka@iki.fi <hlinnaka@iki.fi>\nSubject: Re: Making pg_rewind faster\n\nJustin Kwan <justinpkwan@outlook.com> writes:\n> I've also attached the pg_rewind optimization patch file for Postgres version 14.4. The previous patch file targets Postgres version 15 Beta 1/2.\n\nIt's very unlikely that we would consider committing such changes into\nreleased branches. In fact, it's too late even for v15. 
You should\nbe submitting non-bug-fix patches against master (v16-to-be).\n\n regards, tom lane", "msg_date": "Mon, 18 Jul 2022 17:14:00 +0000", "msg_from": "Justin Kwan <justinpkwan@outlook.com>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" }, { "msg_contents": "On Mon, Jul 18, 2022 at 05:14:00PM +0000, Justin Kwan wrote:\n> Thank you for taking a look at this and that sounds good. I will\n> send over a patch compatible with Postgres v16.\n\n+$node_2->psql(\n+ 'postgres',\n+ \"SELECT extract(epoch from modification) FROM pg_stat_file('pg_wal/000000010000000000000003');\",\n+ stdout => \\my $last_common_tli1_wal_last_modified_at);\nPlease note that you should not rely on the FS-level stats for\nanything that touches the WAL segments. 
A rough guess about what you\ncould do here to make sure that only the set of WAL segments you are\nlooking for is being copied over would be to either:\n- Scan the logs produced by pg_rewind and see if the segments are\ncopied or not, depending on the divergence point (aka the last\ncheckpoint before WAL forked).\n- Clean up pg_wal/ in the target node before running pg_rewind,\nchecking that only the segments you want are available once the\noperation completes.\n--\nMichael", "msg_date": "Tue, 19 Jul 2022 14:36:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" }, { "msg_contents": "Hi Michael,\n\nNot sure if this email went through previously, but thank you for your feedback. I've incorporated your suggestions by scanning the logs produced from pg_rewind when asserting that certain WAL segment files were skipped from being copied over to the target server.\n\nI've also updated the pg_rewind patch file to target the Postgres master branch (version 16 to be). Please see attached.\n\nThanks,\nJustin\n\n________________________________\nFrom: Justin Kwan <justinpkwan@outlook.com>\nSent: July 18, 2022 1:14 PM\nTo: Tom Lane <tgl@sss.pgh.pa.us>\nCc: pgsql-hackers <pgsql-hackers@postgresql.org>; vignesh <vignesh@cloudflare.com>; jkwan@cloudflare.com <jkwan@cloudflare.com>; vignesh ravichandran <admin@viggy28.dev>; hlinnaka@iki.fi <hlinnaka@iki.fi>\nSubject: Re: Making pg_rewind faster\n\nHi Tom,\n\nThank you for taking a look at this and that sounds good. 
I will send over a patch compatible with Postgres v16.\n\nJustin\n________________________________\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: July 17, 2022 2:40 PM\nTo: Justin Kwan <justinpkwan@outlook.com>\nCc: pgsql-hackers <pgsql-hackers@postgresql.org>; vignesh <vignesh@cloudflare.com>; jkwan@cloudflare.com <jkwan@cloudflare.com>; vignesh ravichandran <admin@viggy28.dev>; hlinnaka@iki.fi <hlinnaka@iki.fi>\nSubject: Re: Making pg_rewind faster\n\nJustin Kwan <justinpkwan@outlook.com> writes:\n> I've also attached the pg_rewind optimization patch file for Postgres version 14.4. The previous patch file targets Postgres version 15 Beta 1/2.\n\nIt's very unlikely that we would consider committing such changes into\nreleased branches. In fact, it's too late even for v15. You should\nbe submitting non-bug-fix patches against master (v16-to-be).\n\n regards, tom lane", "msg_date": "Thu, 28 Jul 2022 22:46:28 +0000", "msg_from": "Justin Kwan <justinpkwan@outlook.com>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" }, { "msg_contents": "Hi, Justin!\n\nOn Fri, Jul 29, 2022 at 1:05 PM Justin Kwan <justinpkwan@outlook.com> wrote:\n> Not sure if this email went through previously, but thank you for your feedback. I've incorporated your suggestions by scanning the logs produced from pg_rewind when asserting that certain WAL segment files were skipped from being copied over to the target server.\n>\n> I've also updated the pg_rewind patch file to target the Postgres master branch (version 16 to be). Please see attached.\n\nThank you for the revision.\n\nI've taken a look at this patch. Overall it looks good to me. I also\ndon't see any design objections in the thread.\n\nA couple of points from me:\n1) I would prefer to avoid hard-coded names for WAL segments in the\ntap tests. Could we calculate the names in the tap tests based on the\ndiverge point, etc.?\n2) Patch contains some indentation with spaces, which should be done\nin tabs. 
Please consider either manually fixing this or running\npgindent over modified files.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 13 Sep 2022 20:50:20 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" }, { "msg_contents": "Hi,\n\nOn 2022-09-13 20:50:20 +0300, Alexander Korotkov wrote:\n> On Fri, Jul 29, 2022 at 1:05 PM Justin Kwan <justinpkwan@outlook.com> wrote:\n> > Not sure if this email went through previously but thank you for your feedback, I've incorporated your suggestions by scanning the logs produced from pg_rewind when asserting that certain WAL segment files were skipped from being copied over to the target server.\n> >\n> > I've also updated the pg_rewind patch file to target the Postgres master branch (version 16 to be). Please see attached.\n> \n> Thank you for the revision.\n> \n> I've taken a look at this patch. Overall it looks good to me. I also\n> don't see any design objections in the thread.\n> \n> A couple of points from me:\n> 1) I would prefer to evade hard-coded names for WAL segments in the\n> tap tests. Could we calculate the names in the tap tests based on the\n> diverge point, etc.?\n> 2) Patch contains some indentation with spaces, which should be done\n> in tabs. Please consider either manually fixing this or running\n> pgindent over modified files.\n\nThis patch currently fails because it hasn't been adjusted for\ncommit c47885bd8b6\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2022-09-19 18:03:17 -0700\n \n Split TESTDIR into TESTLOGDIR and TESTDATADIR\n\nThe adjustment is trivial. Attached, together with also producing an error\nmessage rather than just dying wordlessly.\n\n\nIt doesn't seem quite right to read pg_rewind's logs by reading\nregress_log_001_basic. Too easy to confuse different runs of pg_rewind\netc. 
I'd suggest trying to redirect the log to a different file.\n\n\nWith regard to Alexander's point about whitespace:\n\n.git/rebase-apply/patch:25: indent with spaces.\n /* Handle WAL segment file. */\n.git/rebase-apply/patch:26: indent with spaces.\n const char *fname;\n.git/rebase-apply/patch:27: indent with spaces.\n char *slash;\n.git/rebase-apply/patch:29: indent with spaces.\n /* Split filepath into directory & filename. */\n.git/rebase-apply/patch:30: indent with spaces.\n slash = strrchr(path, '/');\nwarning: squelched 29 whitespace errors\nwarning: 34 lines add whitespace errors.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 2 Oct 2022 10:44:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" }, { "msg_contents": "On Sun, Oct 02, 2022 at 10:44:25AM -0700, Andres Freund wrote:\n> It doesn't seem quite right to read pg_rewind's logs by reading\n> regress_log_001_basic. Too easy to confuse different runs of pg_rewind\n> etc. I'd suggest trying to redirect the log to a different file.\n\nHardcoding log file names in the test increases the overall\nmaintenance, even if renaming these would be easy to track and fix if\nthe naming convention is changed. Anyway, I think that what this\npatch should do is to use command_checks_all() in RewindTest.pm as it\nis the test API able to check after a status and multiple expected\noutputs, which is what the changes in 001 and 008 are doing.\nRewindTest::run_pg_rewind() needs to be a bit tweaked to accept these\nregex patterns in input.\n\n+ if (file_segno < last_common_segno)\n+ {\n+ pg_log_debug(\"WAL file entry \\\"%s\\\" not copied to target\", fname);\n+ return FILE_ACTION_NONE;\n+ }\nThere may be something I am missing here, but there is no need to care\nabout segments with a TLI older than lastcommontliIndex, no?\n\ndecide_wal_file_action() assumes that the WAL segment exists on the\ntarget and the source. 
This looks bug-prone to me without at least an\nassertion.\n\nfile_entry_t has an entry to track if a file is a relation file. I\nthink that it would be much cleaner to track if we are handling a WAL\nsegment when inserting an entry in insert_filehash_entry(), so\nisrelfile could be replaced by an enum with three values: relation\nfile, WAL segment and the rest.\n--\nMichael", "msg_date": "Thu, 6 Oct 2022 16:08:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" }, { "msg_contents": "On Thu, Oct 06, 2022 at 04:08:45PM +0900, Michael Paquier wrote:\n> file_entry_t has an entry to track if a file is a relation file. I\n> think that it would be much cleaner to track if we are handling a WAL\n> segment when inserting an entry in insert_filehash_entry(), so\n> isrelfile could be replaced by an enum with three values: relation\n> file, WAL segment and the rest.\n\nThis review has been done a few weeks ago, and there has been no\nupdate since, so I am marking this entry as returned with feedback in\nthe CF app.\n--\nMichael", "msg_date": "Wed, 30 Nov 2022 16:03:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Making pg_rewind faster" } ]
[ { "msg_contents": "Hi,\nGiven a query:\nSELECT * FROM t1, t2 WHERE t1.r << t2.r\nwhere t1.r, t2.r are of range type,\ncurrently PostgreSQL will estimate a constant selectivity for the << predicate,\nwhich is equal to 0.005, not utilizing the statistics that the optimizer\ncollects for range attributes.\n\nWe have worked out a theory for inequality join selectivity estimation\n(http://arxiv.org/abs/2206.07396), and implemented it for range\ntypes in this patch.\n\nThe algorithm in this patch re-uses the currently collected statistics for\nrange types, which is the bounds histogram. It works fairly accurately for the\noperations <<, >>, &&, &<, &>, <=, >= with estimation error of about 0.5%.\nThe patch also implements selectivity estimation for the\noperations @>, <@ (contains and is contained in), but their accuracy is not\nstable, since the bounds histograms assume independence between the range\nbounds. A point to discuss is whether or not to keep these last two operations.\nThe patch also includes the selectivity estimation for multirange types,\ntreating a multirange as a single range which is its bounding box.\n\nThe same algorithm in this patch is applicable to inequality joins of scalar\ntypes. 
We, however, don't implement it for scalars, since more work is needed\nto make use of the other statistics available for scalars, such as the MCV.\nThis is left as a future work.\n\n--\nMahmoud SAKR - Université Libre de Bruxelles\nThis work is done by Diogo Repas, Zhicheng Luo, Maxime Schoemans, and myself", "msg_date": "Thu, 30 Jun 2022 16:31:30 +0200", "msg_from": "Mahmoud Sakr <mahmoud.sakr@ulb.be>", "msg_from_op": true, "msg_subject": "Implement missing join selectivity estimation for range types" }, { "msg_contents": "Hello Mahmoud,\n\nThanks for the patch and sorry for not taking a look earlier.\n\nOn 6/30/22 16:31, Mahmoud Sakr wrote:\n> Hi,\n> Given a query:\n> SELECT * FROM t1, t2 WHERE t1.r << t2.r\n> where t1.r, t2.r are of range type,\n> currently PostgreSQL will estimate a constant selectivity for the << predicate,\n> which is equal to 0.005, not utilizing the statistics that the optimizer\n> collects for range attributes.\n> \n> We have worked out a theory for inequality join selectivity estimation\n> (http://arxiv.org/abs/2206.07396), and implemented it for range\n> types in this patch.\n> \n\nInteresting. Are there any particular differences compared to how we\nestimate for example range clauses on regular columns?\n\n> The algorithm in this patch re-uses the currently collected statistics for\n> range types, which is the bounds histogram. It works fairly accurately for the\n> operations <<, >>, &&, &<, &>, <=, >= with estimation error of about 0.5%.\n\nRight. I think 0.5% is roughly expected for the default statistics\ntarget, which creates 100 histogram bins, each representing ~1% of the\nvalues. Which on average means ~0.5% error.\n\n> The patch also implements selectivity estimation for the\n> operations @>, <@ (contains and is contained in), but their accuracy is not\n> stable, since the bounds histograms assume independence between the range\n> bounds. 
A point to discuss is whether or not to keep these last two operations.\n\nThat's a good question. I think the independence assumption is rather\nfoolish in this case, so I wonder if we could \"stabilize\" this by making\nsome different - less optimistic - assumption. Essentially, we have\nestimates for lower/upper boundaries:\n\n P1 = P(lower(var1) <= lower(var2))\n P2 = P(upper(var2) <= upper(var1))\n\nand independence means we take (P1*P2). But maybe we should be very\npessimistic and use e.g. Min(P1,P2)? Or maybe something in between?\n\nAnother option is to use the length histogram, right? I mean, we know\nwhat the average length is, and it should be possible to use that to\ncalculate how \"far\" ranges in a histogram can overlap.\n\n> The patch also includes the selectivity estimation for multirange types,\n> treating a multirange as a single range which is its bounding box.\n> \n\nOK. But ideally we'd cross-check elements of the two multiranges, no?\n\n> The same algorithm in this patch is applicable to inequality joins of scalar\n> types. We, however, don't implement it for scalars, since more work is needed\n> to make use of the other statistics available for scalars, such as the MCV.\n> This is left as a future work.\n> \n\nSo if the column(s) contain a couple very common (multi)ranges that make\nit into an MCV, we'll ignore those? 
That's a bit unfortunate, because\nthose MCV elements are potentially the main contributors to selectivity.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 18 Jan 2023 18:25:15 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Also, calc_hist_selectivity_contains in multirangetypes_selfuncs.c needs\na proper comment, not just \"this is a copy from rangetypes\".\n\nHowever, it seems the two functions are exactly the same. Would the\nfunctions diverge in the future? If not, maybe there should be just a\nsingle shared function?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 18 Jan 2023 19:07:23 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Hi Tomas,\nThanks for picking up the patch and for the interesting discussions that\nyou bring!\n\n> Interesting. Are there any particular differences compared to how we\n> estimate for example range clauses on regular columns?\nThe theory is the same for scalar types. Yet, the statistics that are currently\ncollected for scalar types include other synopses than the histogram, such\nas MCV, which should be incorporated in the estimation. The theory for using\nthe additional statistics is ready in the paper, but we didn't yet implement it.\nWe thought of sharing the ready part until time allows us to implement the\nrest, or other developers continue it.\n\n> Right. I think 0.5% is roughly expected for the default statistics\n> target, which creates 100 histogram bins, each representing ~1% of the\n> values. 
Which on average means ~0.5% error.\nSince this patch deals with join selectivity, we are then crossing 100*100 bins.\nThe ~0.5% error estimation comes from our experiments, rather than a\nmathematical analysis.\n\n> independence means we take (P1*P2). But maybe we should be very\n> pessimistic and use e.g. Min(P1,P2)? Or maybe something in between?\n>\n> Another option is to use the length histogram, right? I mean, we know\n> what the average length is, and it should be possible to use that to\n> calculate how \"far\" ranges in a histogram can overlap.\nThe independence assumption exists if we use the lower and upper\nhistograms. It equally exists if we use the lower and length histograms.\nIn both cases, the link between the two histograms is lost during their\nconstruction.\nYour discussion brings an interesting trade-off of optimistic vs. pessimistic\nestimations. A typical way to deal with such a trade-off is to average the\ntwo, as in model validation in machine learning. Do you think we\nshould implement something like\naverage( (P1*P2), Min(P1,P2) )?\n\n> OK. But ideally we'd cross-check elements of the two multiranges, no?\n\n> So if the column(s) contain a couple very common (multi)ranges that make\n> it into an MCV, we'll ignore those? That's a bit unfortunate, because\n> those MCV elements are potentially the main contributors to selectivity.\nBoth ideas would require collecting more detailed statistics, for\nexample similar\nto arrays. In this patch, we restricted ourselves to the existing statistics.\n\n\n> Also, calc_hist_selectivity_contains in multirangetypes_selfuncs.c needs\n> a proper comment, not just \"this is a copy from rangetypes\".\nRight, the comment should elaborate more that the collected statistics are\ncurrently the same as rangetypes but may potentially deviate.\n\n> However, it seems the two functions are exactly the same. Would the\n> functions diverge in the future? 
If not, maybe there should be just a\n> single shared function?\nIndeed, it is possible that the two functions will deviate if the statistics\nof multirange types are refined.\n\n--\nBest regards\nMahmoud SAKR\n\nOn Wed, Jan 18, 2023 at 7:07 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Also, calc_hist_selectivity_contains in multirangetypes_selfuncs.c needs\n> a proper comment, not just \"this is a copy from rangetypes\".\n>\n> However, it seems the two functions are exactly the same. Would the\n> functions diverge in the future? If not, maybe there should be just a\n> single shared function?\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company", "msg_date": "Wed, 18 Jan 2023 20:23:20 +0100", "msg_from": "Mahmoud Sakr <mahmoud.sakr@ulb.be>", "msg_from_op": true, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "On 1/18/23 20:23, Mahmoud Sakr wrote:\n> Hi Tomas,\n> Thanks for picking up the patch and for the interesting discussions that\n> you bring!\n> \n>> Interesting. Are there any particular differences compared to how we\n>> estimate for example range clauses on regular columns?\n>\n> The theory is the same for scalar types. Yet, the statistics that are currently\n> collected for scalar types include other synopses than the histogram, such\n> as MCV, which should be incorporated in the estimation. The theory for using\n> the additional statistics is ready in the paper, but we didn't yet implement it.\n> We thought of sharing the ready part until time allows us to implement the\n> rest, or other developers continue it.\n> \n\nI see. We don't have MCV stats for range types, so the algorithms don't\ninclude that. But we have that for scalars, so the code would need to be\nmodified to consider that too.\n\nHowever, I wonder how much could that improve the estimates for range\nqueries on scalar types. 
I mean, we already get pretty good estimates\nfor those, so I guess we wouldn't get much.\n\n>> Right. I think 0.5% is roughly expected for the default statistics\n>> target, which creates 100 histogram bins, each representing ~1% of the\n>> values. Which on average means ~0.5% error.\n> Since this patch deals with join selectivity, we are then crossing 100*100 bins.\n> The ~0.5% error estimation comes from our experiments, rather than a\n> mathematical analysis.\n> \n\nAh, understood. Even for joins there's probably a fairly close\nrelationship between the bin size and estimation error, but it's\ncertainly more complex.\n\nBTW the experiments are those described in section 6 of the paper,\ncorrect? I wonder how uniform (or skewed) the data was - in terms of\nrange length, etc. Or how it works for other operators, not just for\n\"<<\" as in the paper.\n\n>> independence means we take (P1*P2). But maybe we should be very\n>> pessimistic and use e.g. Min(P1,P2)? Or maybe something in between?\n>>\n>> Another option is to use the length histogram, right? I mean, we know\n>> what the average length is, and it should be possible to use that to\n>> calculate how \"far\" ranges in a histogram can overlap.\n> The independence assumption exists if we use the lower and upper\n> histograms. It equally exists if we use the lower and length histograms.\n> In both cases, the link between the two histograms is lost during their\n> construction.\n> You discussion brings an interesting trade-off of optimistic v.s. pessimistic\n> estimations. 
A typical way to deal with such a trade-off is to average the\n> two, for example is model validation in machine learning, Do you think we\n> should implement something like\n> average( (P1*P2), Min(P1,P2) )?\n> \n\nI don't know.\n\nAFAICS the independence assumption is used not only because it's very\ncheap/simple to implement, but also because it actually is a reasonable\nassumption for a fair number of data sets (particularly in OLTP).\n\nYou're right it's an optimistic estimate, but for many data sets it's\nactually quite reasonable.\n\nI'm not sure that applies to range boundaries - the upper/lower bounds\nseem pretty strongly correlated. So maybe using a more pessimistic\nformula would be appropriate.\n\nI was thinking the length histogram might allow an alternative,\napproach, because it says what fraction of ranges has what length. So\nfor a \"fixed\" lower boundary, we may check each of those fractions. Of\ncourse, this assumes consistent range length distribution (so if ranges\nat one end are much longer, that won't work).\n\n>> OK. But ideally we'd cross-check elements of the two multiranges, no?\n> \n>> So if the column(s) contain a couple very common (multi)ranges that make\n>> it into an MCV, we'll ignore those? That's a bit unfortunate, because\n>> those MCV elements are potentially the main contributors to selectivity.\n> Both ideas would require collecting more detailed statistics, for\n> example similar\n> to arrays. In this patch, we restricted ourselves to the existing statistics.\n> \n\nAh, I didn't realize we don't actually build MCV for range types. 
In\nthat case the current behavior makes perfect sense.\n\n> \n>> Also, calc_hist_selectivity_contains in multirangetypes_selfuncs.c needs\n>> a proper comment, not just \"this is a copy from rangetypes\".\n> Right, the comment should elaborate more that the collected statistics are\n> currently that same as rangetypes but may potentially deviate.\n> \n>> However, it seems the two functions are exactly the same. Would the\n>> functions diverge in the future? If not, maybe there should be just a\n>> single shared function?\n> Indeed, it is possible that the two functions will deviate if that statistics\n> of multirange types will be refined.\n> \n\nRight, but are there any such plans? Also, what's the likelihood we'll\nadd new statistics to only one of the places (e.g. for multiranges but\nnot plain ranges)?\n\nI'd keep a single function until we actually need two. That's also\neasier for maintenance - with two it's easy to fix a bug in one place\nand forget about the other, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 20 Jan 2023 15:25:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Hi Mahmoud,\n\nI finally had time to properly read the paper today - the general\napproach mostly matches how I imagined the estimation would work for\ninequalities, but it's definitely nice to see the algorithm properly\nformalized and analyzed.\n\nWhat seems a bit strange to me is that the patch only deals with range\ntypes, leaving the scalar cases unchanged. I understand why (not having\na MCV simplifies it a lot), but I'd bet joins on range types are waaaay\nless common than inequality joins on scalar types. I don't even remember\nseeing inequality join on a range column, TBH.\n\nThat doesn't mean the patch is wrong, of course. 
But I'd expect users to\nbe surprised we handle range types better than \"old\" scalar types (which\nrange types build on, in some sense).\n\nDid you have any plans to work on improving estimates for the scalar\ncase too? Or did you do the patch needed for the paper, and have no\nplans to continue working on this?\n\nI'm also wondering about not having MCV for ranges. I was a bit\nsurprised we don't build MCV in compute_range_stats(), and perhaps we\nshould start building those - if there are common ranges, this might\nsignificantly improve some of the estimates (just like for scalar\ncolumns). Which would mean the estimates for range types are just as\ncomplex as for scalars. Of course, we don't do that now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 21 Jan 2023 22:12:27 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Hi Tomas,\n\n> I finally had time to properly read the paper today - the general\n> approach mostly matches how I imagined the estimation would work for\n> inequalities, but it's definitely nice to see the algorithm properly\n> formalized and analyzed.\n\nAwesome, thanks for this interest!\n\n> What seems a bit strange to me is that the patch only deals with range\n> types, leaving the scalar cases unchanged. I understand why (not having\n> a MCV simplifies it a lot), but I'd bet joins on range types are waaaay\n> less common than inequality joins on scalar types. I don't even remember\n> seeing inequality join on a range column, TBH.\n>\n> That doesn't mean the patch is wrong, of course. 
But I'd expect users to\n> be surprised we handle range types better than \"old\" scalar types (which\n> range types build on, in some sense).\n>\n> Did you have any plans to work on improving estimates for the scalar\n> case too? Or did you do the patch needed for the paper, and have no\n> plans to continue working on this?\n\nI fully agree. Scalars are way more important.\nI join you in the call for Diogo and Zhicheng to continue this implementation,\nespecially after the interest you show towards this patch. The current patch\nwas a course project (taught by me and Maxime), which was specific to range\ntypes. But indeed the solution generalizes well to scalars. Hopefully after the\ncurrent exam session (Feb), there will be time to continue the implementation.\nNevertheless, it makes sense to do two separate patches: this one, and\nthe scalars. The code of the two patches is located in different files, and\nthe estimation algorithms have slight differences.\n\n> I'm also wondering about not having MCV for ranges. I was a bit\n> surprised we don't build MCV in compute_range_stats(), and perhaps we\n> should start building those - if there are common ranges, this might\n> significantly improve some of the estimates (just like for scalar\n> columns). Which would mean the estimates for range types are just as\n> complex as for scalars. Of course, we don't do that now.\n\nGood question. Our intuition is that MCV will not be useful for ranges.\nMaxime has done an experiment and confirmed this intuition. 
Here is his\nexperiment and explanation:\nCreate a table with 126000 int4ranges.\nAll ranges have their middle between 0 and 1000 and a length between 90\nand 110.\nThe ranges are created in the following way:\n- 10 different ranges, each duplicated 1000 times\n- 20 different ranges, each duplicated 500 times\n- 40 different ranges, each duplicated 100 times\n- 200 different ranges, each duplicated 10 times\n- 100000 different ranges, not duplicated\nTwo such tables (t1 and t2) were created in the same way but with different\ndata. Then he ran the following query:\n\nEXPLAIN ANALYZE SELECT count(*)\nFROM t, t2 WHERE t && t2\n\nThe results (using our patch) were the following:\nPlan rows: 2991415662\nActual rows: 2981335423\n\nSo the estimation accuracy is still fairly good for such data with a lot\nof repeated values, even without having MCV statistics.\nThe only error that can happen in our algorithms comes from the last bin, in\nwhich we assume a uniform distribution of values. Duplicate values\n(end bounds, not ranges) might make this assumption wrong, which would\ncreate inaccurate estimations. However, this is still only incorrect\nfor the last\nbin and all the others are correct.\nMCV's are mainly useful for equality, which is not an operation we cover, and\nprobably not an important predicate for ranges. 
What do you think?\n\nBest regards,\nMahmoud\n\n\n", "msg_date": "Wed, 25 Jan 2023 20:11:33 +0100", "msg_from": "Mahmoud Sakr <mahmoud.sakr@ulb.be>", "msg_from_op": true, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Hi Tomas,\r\n\r\nAs a quick update, the paper related to this work has finally been published in Mathematics (https://www.mdpi.com/2227-7390/11/6/1383).\r\nDuring revision we also added a figure showing a comparison of our algorithm vs the existing algorithms in Oracle, SQL Server, MySQL and PostgreSQL, which can be found in the experiments section of the paper.\r\nAs can be seen, our algorithm outperforms even Oracle and SQL Server.\r\n\r\nDuring this revision we also found a small bug, so we are working on a revision of the patch, which fixes this.\r\n\r\n\r\nAlso, calc_hist_selectivity_contains in multirangetypes_selfuncs.c needs\r\na proper comment, not just \"this is a copy from rangetypes\".\r\n\r\n\r\nRight, the comment should elaborate more that the collected statistics are\r\ncurrently that same as rangetypes but may potentially deviate.\r\n\r\n\r\n\r\nHowever, it seems the two functions are exactly the same. Would the\r\nfunctions diverge in the future? If not, maybe there should be just a\r\nsingle shared function?\r\n\r\n\r\nIndeed, it is possible that the two functions will deviate if that statistics\r\nof multirange types will be refined.\r\n\r\n\r\n\r\nRight, but are there any such plans? Also, what's the likelihood we'll\r\nadd new statistics to only one of the places (e.g. for multiranges but\r\nnot plain ranges)?\r\n\r\nI'd keep a single function until we actually need two. 
That's also\r\neasier for maintenance - with two it's easy to fix a bug in one place\r\nand forget about the other, etc.\r\n\r\nRegarding our previous discussion about the duplication of calc_hist_join_selectivity in rangetypes_selfuncs.c and multirangetypes_selfuncs.c, we can also remove this duplication in the revision if needed.\r\nNote that currently, there are no external functions shared between rangetypes_selfuncs.c and multirangetypes_selfuncs.c.\r\nAny function that was used in both files was duplicated as a static function.\r\nThe functions calc_hist_selectivity_scalar, calc_length_hist_frac, calc_hist_selectivity_contained and calc_hist_selectivity_contains are examples of this, where the function is identical but has been declared static in both files.\r\nThat said, we can remove the duplication of calc_hist_join_selectivity if it still needed.\r\nWe would, however, require some guidance as to where to put the external definition of this function, as there does not appear to be a rangetypes_selfuncs.h header.\r\nShould it simply go into utils/selfuncs.h or should we create a new header file?\r\n\r\nBest regards,\r\nMaxime Schoemans", "msg_date": "Mon, 20 Mar 2023 15:34:47 +0000", "msg_from": "Schoemans Maxime <maxime.schoemans@ulb.be>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Hi,\r\n\r\nIn the 
selectivity algorithm, the division was applied after adding the remaining histogram buckets of histogram2 that don't overlap with histogram1.\r\nThis could lead to reducing selectivity by half, e.g., in the case that histogram2 is completely right of histogram1.\r\nThe correct calculation is to divide by two before adding the remainder.\r\nThis patch implements the needed fix.\r\n\r\nBest regards,\r\nMaxime Schoemans\r\n\r\nOn 20/03/2023 16:34, maxime wrote:\r\nHi Tomas,\r\n\r\nAs a quick update, the paper related to this work has finally been published in Mathematics (https://www.mdpi.com/2227-7390/11/6/1383).\r\nDuring revision we also added a figure showing a comparison of our algorithm vs the existing algorithms in Oracle, SQL Server, MySQL and PostgreSQL, which can be found in the experiments section of the paper.\r\nAs can be seen, our algorithm outperforms even Oracle and SQL Server.\r\n\r\nDuring this revision we also found a small bug, so we are working on a revision of the patch, which fixes this.\r\n\r\n\r\nAlso, calc_hist_selectivity_contains in multirangetypes_selfuncs.c needs\r\na proper comment, not just \"this is a copy from rangetypes\".\r\n\r\n\r\nRight, the comment should elaborate more that the collected statistics are\r\ncurrently that same as rangetypes but may potentially deviate.\r\n\r\n\r\n\r\nHowever, it seems the two functions are exactly the same. Would the\r\nfunctions diverge in the future? If not, maybe there should be just a\r\nsingle shared function?\r\n\r\n\r\nIndeed, it is possible that the two functions will deviate if that statistics\r\nof multirange types will be refined.\r\n\r\n\r\n\r\nRight, but are there any such plans? Also, what's the likelihood we'll\r\nadd new statistics to only one of the places (e.g. for multiranges but\r\nnot plain ranges)?\r\n\r\nI'd keep a single function until we actually need two. 
That's also\r\neasier for maintenance - with two it's easy to fix a bug in one place\r\nand forget about the other, etc.\r\n\r\nRegarding our previous discussion about the duplication of calc_hist_join_selectivity in rangetypes_selfuncs.c and multirangetypes_selfuncs.c, we can also remove this duplication in the revision if needed.\r\nNote that currently, there are no external functions shared between rangetypes_selfuncs.c and multirangetypes_selfuncs.c.\r\nAny function that was used in both files was duplicated as a static function.\r\nThe functions calc_hist_selectivity_scalar, calc_length_hist_frac, calc_hist_selectivity_contained and calc_hist_selectivity_contains are examples of this, where the function is identical but has been declared static in both files.\r\nThat said, we can remove the duplication of calc_hist_join_selectivity if it still needed.\r\nWe would, however, require some guidance as to where to put the external definition of this function, as there does not appear to be a rangetypes_selfuncs.h header.\r\nShould it simply go into utils/selfuncs.h or should we create a new header file?\r\n\r\nBest regards,\r\nMaxime Schoemans", "msg_date": "Mon, 19 Jun 2023 09:49:22 +0000", "msg_from": "Schoemans Maxime <maxime.schoemans@ulb.be>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "This is a quick correction as the last patch contained a missing semicolon.\r\n\r\nRegards,\r\nMaxime Schoemans", "msg_date": "Mon, 19 Jun 2023 16:26:09 +0000", "msg_from": "Schoemans Maxime <maxime.schoemans@ulb.be>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Hello!\n\nThank you for the patch, very interesting article.\nThe patch doesn't apply to the current postgres version. 
Could you please\nupdate it?\n\nRegards,\nDamir Belyalov,\nPostgres Professional", "msg_date": "Fri, 7 Jul 2023 11:08:48 +0300", "msg_from": "Damir Belyalov <dam.bel07@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Hi,\r\n\r\nThank you for picking up this patch.\r\n\r\n > The patch doesn't apply to the current postgres version. Could you \r\nplease update it?\r\nIndeed, the code was initially written on pg15.\r\nYou can find attached a new version of the patch that can be applied on \r\nthe current master branch of postgres.\r\n\r\nPlease let us know if there is anything else we can do.\r\n\r\nBest regards,\r\nMaxime Schoemans", "msg_date": "Fri, 7 Jul 2023 15:40:43 +0000", "msg_from": "Schoemans Maxime <maxime.schoemans@ulb.be>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Schoemans Maxime <maxime.schoemans@ulb.be> writes:\n> You can find attached a new version of the patch that can be applied on \n> the current master branch of postgres.\n\nI took a brief look through this very interesting work. I concur\nwith Tomas that it feels a little odd that range join selectivity\nwould become smarter than scalar inequality join selectivity, and\nthat we really ought to prioritize applying these methods to that\ncase. Still, that's a poor reason to not take the patch.\n\nI also agree with the upthread criticism that having two identical\nfunctions in different source files will be a maintenance nightmare.\nDon't do it. 
When and if there's a reason for the behavior to\ndiverge between the range and multirange cases, it'd likely be\nbetter to handle that by passing in a flag to say what to do.\n\nBut my real unhappiness with the patch as-submitted is the test cases,\nwhich require rowcount estimates to be reproduced exactly. We know\nvery well that ANALYZE estimates are not perfectly stable and tend to\nvary across platforms. As a quick check I tried the patch within a\n32-bit VM, and it passed, which surprised me a bit ... but it would\nsurprise me a lot if we got these same numbers on every machine in\nthe buildfarm. We need a more forgiving test method. Usually the\napproach is to set up a test case where the improved accuracy of\nthe estimate changes the planner's choice of plan compared to what\nyou got before, since that will normally not be too prone to change\nfrom variations of a percent or two in the estimates. Another idea\ncould be something like\n\n\tSELECT (estimate/actual BETWEEN 0.9 AND 1.1) AS ok FROM ...\n\nwhich just gives a true/false output instead of an exact number.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Nov 2023 14:46:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "On 14/11/2023 20:46, Tom Lane wrote:\r\n> I took a brief look through this very interesting work. I concur\r\n> with Tomas that it feels a little odd that range join selectivity\r\n> would become smarter than scalar inequality join selectivity, and\r\n> that we really ought to prioritize applying these methods to that\r\n> case. 
Still, that's a poor reason to not take the patch.\r\n\r\nIndeed, we started with ranges as this was the simpler case (no MCV) and \r\nwas the topic of a course project.\r\nThe idea is to later write a second patch that applies these ideas to \r\nscalar inequality while also handling MCV's correctly.\r\n\r\n> I also agree with the upthread criticism that having two identical\r\n> functions in different source files will be a maintenance nightmare.\r\n> Don't do it. When and if there's a reason for the behavior to\r\n> diverge between the range and multirange cases, it'd likely be\r\n> better to handle that by passing in a flag to say what to do.\r\n\r\nThe duplication is indeed not ideal. However, there are already 8 other \r\nduplicate functions between the two files.\r\nI would thus suggest to leave the duplication in this patch and create a \r\nsecond one that removes all duplication from the two files, instead of \r\njust removing the duplication for our new function.\r\nWhat are your thoughts on this? If we do this, should the function \r\ndefinitions go in rangetypes.h or should we create a new \r\nrangetypes_selfuncs.h header?\r\n\r\n> But my real unhappiness with the patch as-submitted is the test cases,\r\n> which require rowcount estimates to be reproduced exactly.\r\n\r\n> We need a more forgiving test method. 
Usually the\r\n> approach is to set up a test case where the improved accuracy of\r\n> the estimate changes the planner's choice of plan compared to what\r\n> you got before, since that will normally not be too prone to change\r\n> from variations of a percent or two in the estimates.\r\n\r\nI have changed the test method to produce query plans for a 3-way range \r\njoin.\r\nThe plans for the different operators differ due to the computed \r\nselectivity estimation, which was not the case before this patch.\r\n\r\nRegards,\r\nMaxime Schoemans", "msg_date": "Mon, 20 Nov 2023 20:17:22 +0000", "msg_from": "Schoemans Maxime <maxime.schoemans@ulb.be>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "Hi!\n\nThank you for your work on the subject, I think it's a really useful \nfeature and it allows the optimizer to estimate clauses with \nthis type of operator more correctly.\n\nI reviewed your patch and noticed that some comments are repeated in \nthe multirangejoinsel functions, I suggest combining them.\n\nThe proposed changes are in the attached patch.\n\n\nSince this topic is about calculating selectivity, have you thought about \nadding cardinality calculation tests for queries with this type of operator?\n\nFor example, you could form queries similar to those that you use in \nsrc/test/regress/sql/multirangetypes.sql and \nsrc/test/regress/sql/rangetypes.sql.\n\nI added a few in the attached patch.\n\n-- \nRegards,\nAlena Rybakina", "msg_date": "Thu, 30 Nov 2023 17:50:36 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "On Tue, 21 Nov 2023 at 01:47, Schoemans Maxime <maxime.schoemans@ulb.be> wrote:\n>\n> On 14/11/2023 20:46, Tom Lane wrote:\n> > I took a brief look through this very interesting work. 
I concur\n> > with Tomas that it feels a little odd that range join selectivity\n> > would become smarter than scalar inequality join selectivity, and\n> > that we really ought to prioritize applying these methods to that\n> > case. Still, that's a poor reason to not take the patch.\n>\n> Indeed, we started with ranges as this was the simpler case (no MCV) and\n> was the topic of a course project.\n> The idea is to later write a second patch that applies these ideas to\n> scalar inequality while also handling MCV's correctly.\n>\n> > I also agree with the upthread criticism that having two identical\n> > functions in different source files will be a maintenance nightmare.\n> > Don't do it. When and if there's a reason for the behavior to\n> > diverge between the range and multirange cases, it'd likely be\n> > better to handle that by passing in a flag to say what to do.\n>\n> The duplication is indeed not ideal. However, there are already 8 other\n> duplicate functions between the two files.\n> I would thus suggest to leave the duplication in this patch and create a\n> second one that removes all duplication from the two files, instead of\n> just removing the duplication for our new function.\n> What are your thoughts on this? If we do this, should the function\n> definitions go in rangetypes.h or should we create a new\n> rangetypes_selfuncs.h header?\n>\n> > But my real unhappiness with the patch as-submitted is the test cases,\n> > which require rowcount estimates to be reproduced exactly.\n>\n> > We need a more forgiving test method. 
Usually the\n> > approach is to set up a test case where the improved accuracy of\n> > the estimate changes the planner's choice of plan compared to what\n> > you got before, since that will normally not be too prone to change\n> > from variations of a percent or two in the estimates.\n>\n> I have changed the test method to produce query plans for a 3-way range\n> join.\n> The plans for the different operators differ due to the computed\n> selectivity estimation, which was not the case before this patch.\n\nOne of the tests was aborted at [1], kindly post an updated patch for the same:\n[04:55:42.797] src/tools/ci/cores_backtrace.sh linux /tmp/cores\n[04:56:03.640] dumping /tmp/cores/postgres-6-24094.core for\n/tmp/cirrus-ci-build/tmp_install/usr/local/pgsql/bin/postgres\n\n[04:57:24.199] Core was generated by `postgres: old_node: postgres\nregression [local] EXPLAIN '.\n[04:57:24.199] Program terminated with signal SIGABRT, Aborted.\n[04:57:24.199] #0 __GI_raise (sig=sig@entry=6) at\n../sysdeps/unix/sysv/linux/raise.c:50\n[04:57:24.199] Download failed: Invalid argument. 
Continuing without\nsource file ./signal/../sysdeps/unix/sysv/linux/raise.c.\n[04:57:26.803]\n[04:57:26.803] Thread 1 (Thread 0x7f121d9ec380 (LWP 24094)):\n[04:57:26.803] #0 __GI_raise (sig=sig@entry=6) at\n../sysdeps/unix/sysv/linux/raise.c:50\n[04:57:26.803] set = {__val = {4194304, 0, 4636737291354636288,\n4636737291354636288, 0, 0, 64, 64, 128, 128, 192, 192, 256, 256, 0,\n0}}\n[04:57:26.803] pid = <optimized out>\n[04:57:26.803] tid = <optimized out>\n[04:57:26.803] ret = <optimized out>\n[04:57:26.803] #1 0x00007f122003d537 in __GI_abort () at abort.c:79\n...\n...\n[04:57:38.774] #6 0x00007f357ad95788 in __asan::__asan_report_load1\n(addr=addr@entry=107477261711120) at\n../../../../src/libsanitizer/asan/asan_rtl.cpp:117\n[04:57:38.774] bp = 140731433585840\n[04:57:38.774] pc = <optimized out>\n[04:57:38.774] local_stack = 139867680821632\n[04:57:38.774] sp = 140731433585832\n[04:57:38.774] #7 0x000055d5b155c37c in range_cmp_bound_values\n(typcache=typcache@entry=0x629000030b60, b1=b1@entry=0x61c000017708,\nb2=b2@entry=0x61c0000188b8) at rangetypes.c:2090\n[04:57:38.774] No locals.\n[04:57:38.774] #8 0x000055d5b1567bb2 in calc_hist_join_selectivity\n(typcache=typcache@entry=0x629000030b60,\nhist1=hist1@entry=0x61c0000188b8, nhist1=nhist1@entry=101,\nhist2=hist2@entry=0x61c0000170b8, nhist2=nhist2@entry=101) at\nrangetypes_selfuncs.c:1298\n[04:57:38.774] i = 0\n[04:57:38.774] j = 101\n[04:57:38.774] selectivity = <optimized out>\n[04:57:38.774] cur_sel1 = <optimized out>\n[04:57:38.774] cur_sel2 = <optimized out>\n[04:57:38.774] prev_sel1 = <optimized out>\n[04:57:38.774] prev_sel2 = <optimized out>\n[04:57:38.774] cur_sync = {val = <optimized out>, infinite =\n<optimized out>, inclusive = <optimized out>, lower = <optimized out>}\n[04:57:38.774] #9 0x000055d5b1569190 in rangejoinsel\n(fcinfo=<optimized out>) at rangetypes_selfuncs.c:1495\n\n[1] - https://cirrus-ci.com/task/5507789477380096\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 5 Jan 2024 16:07:17 
+0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "On 05/01/2024 11:37, vignesh C wrote:\r\n > One of the tests was aborted at [1], kindly post an updated patch for \r\nthe same:\r\n\r\nThank you for notifying us.\r\nI believe I fixed the issue but it is hard to be certain as the issue \r\ndid not arise when running the regression tests locally.\r\n\r\nRegards,\r\nMaxime", "msg_date": "Fri, 5 Jan 2024 17:39:50 +0000", "msg_from": "Schoemans Maxime <maxime.schoemans@ulb.be>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "On Fri, 5 Jan 2024 at 23:09, Schoemans Maxime <maxime.schoemans@ulb.be> wrote:\n>\n> On 05/01/2024 11:37, vignesh C wrote:\n> > One of the tests was aborted at [1], kindly post an updated patch for\n> the same:\n>\n> Thank you for notifying us.\n> I believe I fixed the issue but it is hard to be certain as the issue\n> did not arise when running the regression tests locally.\n\nI'm noticing this issue is not yet resolved, the CFBot is still\nfailing at [1] with:\n#7 0x000055cddc25cd93 in range_cmp_bound_values\n(typcache=typcache@entry=0x629000030b60, b1=b1@entry=0x61c000016f08,\nb2=b2@entry=0x61c0000180b8) at rangetypes.c:2090\n[19:55:02.591] No locals.\n[19:55:02.591] #8 0x000055cddc2685c1 in calc_hist_join_selectivity\n(typcache=typcache@entry=0x629000030b60,\nhist1=hist1@entry=0x61c0000180b8, nhist1=nhist1@entry=101,\nhist2=hist2@entry=0x61c0000168b8, nhist2=nhist2@entry=101) at\nrangetypes_selfuncs.c:1295\n[19:55:02.591] i = 0\n[19:55:02.591] j = 101\n[19:55:02.591] selectivity = 0\n[19:55:02.591] prev_sel1 = -1\n[19:55:02.591] prev_sel2 = 0\n[19:55:02.591] #9 0x000055cddc269aaa in rangejoinsel\n(fcinfo=<optimized out>) at rangetypes_selfuncs.c:1479\n[19:55:02.591] root = <optimized out>\n[19:55:02.591] operator = <optimized 
out>\n[19:55:02.591] args = <optimized out>\n[19:55:02.591] sjinfo = <optimized out>\n[19:55:02.591] vardata1 = {var = <optimized out>, rel = <optimized\nout>, statsTuple = <optimized out>, freefunc = <optimized out>,\nvartype = <optimized out>, atttype = <optimized out>, atttypmod =\n<optimized out>, isunique = <optimized out>, acl_ok = <optimized out>}\n[19:55:02.591] vardata2 = {var = <optimized out>, rel = <optimized\nout>, statsTuple = <optimized out>, freefunc = <optimized out>,\nvartype = <optimized out>, atttype = <optimized out>, atttypmod =\n<optimized out>, isunique = <optimized out>, acl_ok = <optimized out>}\n[19:55:02.591] hist1 = {staop = <optimized out>, stacoll = <optimized\nout>, valuetype = <optimized out>, values = <optimized out>, nvalues =\n<optimized out>, numbers = <optimized out>, nnumbers = <optimized\nout>, values_arr = <optimized out>, numbers_arr = <optimized out>}\n[19:55:02.591] hist2 = {staop = <optimized out>, stacoll = <optimized\nout>, valuetype = <optimized out>, values = <optimized out>, nvalues =\n<optimized out>, numbers = <optimized out>, nnumbers = <optimized\nout>, values_arr = <optimized out>, numbers_arr = <optimized out>}\n[19:55:02.591] sslot = {staop = <optimized out>, stacoll = <optimized\nout>, valuetype = <optimized out>, values = <optimized out>, nvalues =\n<optimized out>, numbers = <optimized out>, nnumbers = <optimized\nout>, values_arr = <optimized out>, numbers_arr = <optimized out>}\n[19:55:02.591] reversed = <optimized out>\n[19:55:02.591] selec = 0.001709375000000013\n[19:55:02.591] typcache = 0x629000030b60\n[19:55:02.591] stats1 = <optimized out>\n[19:55:02.591] stats2 = <optimized out>\n[19:55:02.591] empty_frac1 = 0\n[19:55:02.591] empty_frac2 = 0\n[19:55:02.591] null_frac1 = 0\n[19:55:02.591] null_frac2 = 0\n[19:55:02.591] nhist1 = 101\n[19:55:02.591] nhist2 = 101\n[19:55:02.591] hist1_lower = 0x61c0000168b8\n[19:55:02.591] hist1_upper = 0x61c0000170b8\n[19:55:02.591] hist2_lower = 
0x61c0000178b8\n[19:55:02.591] hist2_upper = 0x61c0000180b8\n[19:55:02.591] empty = <optimized out>\n[19:55:02.591] i = <optimized out>\n[19:55:02.591] __func__ = \"rangejoinsel\"\n[19:55:02.591] #10 0x000055cddc3b761f in FunctionCall5Coll\n(flinfo=flinfo@entry=0x7ffc1628d710, collation=collation@entry=0,\narg1=arg1@entry=107545982648856, arg2=arg2@entry=3888,\narg3=arg3@entry=107820862916056, arg4=arg4@entry=0, arg5=<optimized\nout>) at fmgr.c:1242\n[19:55:02.591] fcinfodata = {fcinfo = {flinfo = <optimized out>,\ncontext = <optimized out>, resultinfo = <optimized out>, fncollation =\n<optimized out>, isnull = <optimized out>, nargs = <optimized out>,\nargs = 0x0}, fcinfo_data = {<optimized out> <repeats 112 times>}}\n[19:55:02.591] fcinfo = 0x7ffc1628d5e0\n[19:55:02.591] result = <optimized out>\n[19:55:02.591] __func__ = \"FunctionCall5Coll\"\n[19:55:02.591] #11 0x000055cddc3b92ee in OidFunctionCall5Coll\n(functionId=8355, collation=collation@entry=0,\narg1=arg1@entry=107545982648856, arg2=arg2@entry=3888,\narg3=arg3@entry=107820862916056, arg4=arg4@entry=0, arg5=<optimized\nout>) at fmgr.c:1460\n[19:55:02.591] flinfo = {fn_addr = <optimized out>, fn_oid =\n<optimized out>, fn_nargs = <optimized out>, fn_strict = <optimized\nout>, fn_retset = <optimized out>, fn_stats = <optimized out>,\nfn_extra = <optimized out>, fn_mcxt = <optimized out>, fn_expr =\n<optimized out>}\n[19:55:02.591] #12 0x000055cddbe834ae in join_selectivity\n(root=root@entry=0x61d00017c218, operatorid=operatorid@entry=3888,\nargs=0x6210003bc5d8, inputcollid=0,\njointype=jointype@entry=JOIN_INNER,\nsjinfo=sjinfo@entry=0x7ffc1628db30) at\n../../../../src/include/postgres.h:324\n[19:55:02.591] oprjoin = <optimized out>\n[19:55:02.591] result = <optimized out>\n[19:55:02.591] __func__ = \"join_selectivity\"\n[19:55:02.591] #13 0x000055cddbd8c18c in clause_selectivity_ext\n(root=root@entry=0x61d00017c218, clause=0x6210003bc678,\nvarRelid=varRelid@entry=0, 
jointype=jointype@entry=JOIN_INNER,\nsjinfo=sjinfo@entry=0x7ffc1628db30,\nuse_extended_stats=use_extended_stats@entry=true) at clausesel.c:841\n\nI have changed the status to \"Waiting on Author\", feel free to post an\nupdated version, check CFBot and update the Commitfest entry\naccordingly.\n\n[1] - https://cirrus-ci.com/task/5698117824151552\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 17 Jan 2024 16:18:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" }, { "msg_contents": "I cannot figure out why it aborts.\n\nas Tom mentioned in upthread about the test cases.\nsimilar to src/test/regress/sql/stats_ext.sql check_estimated_rows function.\nwe can test it by something:\n\ncreate or replace function check_estimated_rows(text) returns table (ok bool)\nlanguage plpgsql as\n$$\ndeclare\n ln text;\n tmp text[];\n first_row bool := true;\nbegin\n for ln in\n execute format('explain analyze %s', $1)\n loop\n if first_row then\n first_row := false;\n tmp := regexp_match(ln, 'rows=(\\d*) .* rows=(\\d*)');\n return query select 0.2 < tmp[1]::float8 / tmp[2]::float8\nand tmp[1]::float8 / tmp[2]::float8 < 5;\n end if;\n end loop;\nend;\n$$;\n\nselect * from check_estimated_rows($$select * from test_range_join_1,\ntest_range_join_2 where ir1 && ir2$$);\nselect * from check_estimated_rows($$select * from test_range_join_1,\ntest_range_join_2 where ir1 << ir2$$);\nselect * from check_estimated_rows($$select * from test_range_join_1,\ntest_range_join_2 where ir1 >> ir2$$);\n\nDo you need 3 tables to do the test? 
because we need to actually run\nthe query then compare the estimated row\nand actually returned rows.\nIf you really execute the query with 3 table joins, it will take a lot of time.\nSo two tables join with where quql should be fine?\n\n/* Fast-forwards i and j to start of iteration */\n+ for (i = 0; range_cmp_bound_values(typcache, &hist1[i], &hist2[0]) < 0; i++);\n+ for (j = 0; range_cmp_bound_values(typcache, &hist2[j], &hist1[0]) < 0; j++);\n+\n+ /* Do the estimation on overlapping regions */\n+ while (i < nhist1 && j < nhist2)\n+ {\n+ double cur_sel1,\n+ cur_sel2;\n+ RangeBound cur_sync;\n+\n+ if (range_cmp_bound_values(typcache, &hist1[i], &hist2[j]) < 0)\n+ cur_sync = hist1[i++];\n+ else if (range_cmp_bound_values(typcache, &hist1[i], &hist2[j]) > 0)\n+ cur_sync = hist2[j++];\n+ else\n+ {\n+ /* If equal, skip one */\n+ cur_sync = hist1[i];\n+\n\nthis part range_cmp_bound_values \"if else if\" part computed twice, you\ncan just do\n`\nint cmp;\ncmp = range_cmp_bound_values(typcache, &hist1[i], &hist2[j]);\nif cmp <0 then\nelse if cmp > 0 then\nelse then\n`\n\nalso. I think you can put the following into main while loop.\n+ for (i = 0; range_cmp_bound_values(typcache, &hist1[i], &hist2[0]) < 0; i++);\n+ for (j = 0; range_cmp_bound_values(typcache, &hist2[j], &hist1[0]) < 0; j++);\n\nsplit range and multirange into 2 patches might be a good idea.\nseems: same function (calc_hist_join_selectivity) with same function\nsignature in src/backend/utils/adt/multirangetypes_selfuncs.c\nand src/backend/utils/adt/rangetypes_selfuncs.c,\npreviously mail complaints not resolved.\n\n\n", "msg_date": "Mon, 22 Jan 2024 16:10:37 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement missing join selectivity estimation for range types" } ]
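The synchronized histogram sweep reviewed in the thread above can be sketched as a self-contained analog. Plain ints stand in for RangeBound values and direct integer comparison stands in for range_cmp_bound_values(); this is an illustration of the control flow only, not the patch's actual code. Following the review suggestion, the comparison result is computed once per iteration and reused in all three branches, instead of calling the comparator twice in the if/else-if chain:

```c
#include <assert.h>

/*
 * Simplified sweep over two sorted bound arrays, mimicking the shape of
 * the patch's loop over two range-type histograms.  Returns the number
 * of "sync points" visited in the overlapping region.
 */
int
sweep_merged_bounds(const int *hist1, int nhist1,
					const int *hist2, int nhist2)
{
	int			i = 0;
	int			j = 0;
	int			nsync = 0;

	/* Fast-forward i and j to the start of the overlapping region */
	while (i < nhist1 && hist1[i] < hist2[0])
		i++;
	while (j < nhist2 && hist2[j] < hist1[0])
		j++;

	/* Walk both bound arrays in lockstep over the overlap */
	while (i < nhist1 && j < nhist2)
	{
		/* compute the comparison once, per the review comment */
		int			cmp = (hist1[i] > hist2[j]) - (hist1[i] < hist2[j]);

		if (cmp < 0)
			i++;				/* next sync bound comes from hist1 */
		else if (cmp > 0)
			j++;				/* next sync bound comes from hist2 */
		else
		{
			/* equal bounds: consume one from each side */
			i++;
			j++;
		}
		nsync++;				/* one sync point visited per iteration */
	}
	return nsync;
}
```

In the real patch each iteration would additionally accumulate the per-bin selectivity contributions from both histograms; here only the merged traversal order is shown.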
[ { "msg_contents": "Hello all,\n\nI'll be opening the July Commitfest in approximately 24 hours, so you\nhave a little more time to register any patchsets you'd like the\ncommunity to review. And remember to keep your review karma positive:\nover the next month, try to review other patches of equivalent\ncomplexity to the patches you're posting.\n\nI'll be making a pass through the CF app today to fix up any stale statuses.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Thu, 30 Jun 2022 08:26:44 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "[Commitfest 2022-07] Begins Tomorrow" } ]
[ { "msg_contents": "Hello!\n\nThis patch adds a `--tableam=TABLEAM` option to the pgbench command line\nwhich allows the user to specify which table am is used to create tables\ninitialized with `-i`.\n\nThis change was originally authored by Alexander Korotkov, I have updated\nit and added a test to the pgbench runner. I'm hoping to make the deadline\nfor this currently open Commit Fest?\n\nMy goal is to add a couple more regression tests but the implementation is\ncomplete.\n\nThanks in advance for any comments or questions!\n\n-Michel", "msg_date": "Thu, 30 Jun 2022 09:09:17 -0700", "msg_from": "Michel Pelletier <michel@supabase.io>", "msg_from_op": true, "msg_subject": "PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "On Thu, Jun 30, 2022 at 09:09:17AM -0700, Michel Pelletier wrote:\n> This change was originally authored by Alexander Korotkov, I have updated\n> it and added a test to the pgbench runner. I'm hoping to make the deadline\n> for this currently open Commit Fest?\n\nThis is failing check-world\nhttp://cfbot.cputube.org/michel-pelletier.html\n\nBTW, you can test your patches the same as cfbot does (before mailing the list)\non 4 OSes by pushing a branch to a github account. See ./src/tools/ci/README\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 30 Jun 2022 11:51:49 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "On Thu, 30 Jun 2022 at 09:51, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Thu, Jun 30, 2022 at 09:09:17AM -0700, Michel Pelletier wrote:\n> > This change was originally authored by Alexander Korotkov, I have updated\n> > it and added a test to the pgbench runner. 
I'm hoping to make the\n> deadline\n> > for this currently open Commit Fest?\n>\n> This is failing check-world\n> http://cfbot.cputube.org/michel-pelletier.html\n>\n> BTW, you can test your patches the same as cfbot does (before mailing the\n> list)\n> on 4 OSes by pushing a branch to a github account. See\n> ./src/tools/ci/README\n>\n> Ah that's very helpful thank you! This is my first patch submission so\nsorry for any mixups.\n\n-Michel\n\nOn Thu, 30 Jun 2022 at 09:51, Justin Pryzby <pryzby@telsasoft.com> wrote:On Thu, Jun 30, 2022 at 09:09:17AM -0700, Michel Pelletier wrote:\n> This change was originally authored by Alexander Korotkov, I have updated\n> it and added a test to the pgbench runner.  I'm hoping to make the deadline\n> for this currently open Commit Fest?\n\nThis is failing check-world\nhttp://cfbot.cputube.org/michel-pelletier.html\n\nBTW, you can test your patches the same as cfbot does (before mailing the list)\non 4 OSes by pushing a branch to a github account.  See ./src/tools/ci/READMEAh that's very helpful thank you!  This is my first patch submission so sorry for any mixups.-Michel", "msg_date": "Thu, 30 Jun 2022 10:50:35 -0700", "msg_from": "Michel Pelletier <michel@supabase.io>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "I've got CI setup and building and the tests now pass, I was missing a\nCASCADE in my test. New patch attached:\n\n\n\nOn Thu, 30 Jun 2022 at 10:50, Michel Pelletier <michel@supabase.io> wrote:\n\n> On Thu, 30 Jun 2022 at 09:51, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n>> On Thu, Jun 30, 2022 at 09:09:17AM -0700, Michel Pelletier wrote:\n>> > This change was originally authored by Alexander Korotkov, I have\n>> updated\n>> > it and added a test to the pgbench runner. 
I'm hoping to make the\n>> deadline\n>> > for this currently open Commit Fest?\n>>\n>> This is failing check-world\n>> http://cfbot.cputube.org/michel-pelletier.html\n>>\n>> BTW, you can test your patches the same as cfbot does (before mailing the\n>> list)\n>> on 4 OSes by pushing a branch to a github account. See\n>> ./src/tools/ci/README\n>>\n>> Ah that's very helpful thank you! This is my first patch submission so\n> sorry for any mixups.\n>\n> -Michel\n>", "msg_date": "Thu, 30 Jun 2022 13:07:53 -0700", "msg_from": "Michel Pelletier <michel@supabase.io>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "On Thu, Jun 30, 2022 at 01:07:53PM -0700, Michel Pelletier wrote:\n> I've got CI setup and building and the tests now pass, I was missing a\n> CASCADE in my test. New patch attached:\n\nThe exact same patch has been proposed back in November 2020:\nhttps://www.postgresql.org/message-id/0177f78c-4702-69c9-449d-93cc93c7f8c0@highgo.ca\n\nAnd the conclusion back then is that one can already achieve this by\nusing PGOPTIONS:\nPGOPTIONS='-c default_table_access_method=wuzza' pgbench [...]\n\nSo there is no need to complicate more pgbench, particularly when it\ncomes to partitioned tables where USING is not supported. Your patch\ntouches this area of the client code to bypass the backend error.\n--\nMichael", "msg_date": "Fri, 1 Jul 2022 10:06:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "On Fri, Jul 01, 2022 at 10:06:49AM +0900, Michael Paquier wrote:\n> And the conclusion back then is that one can already achieve this by\n> using PGOPTIONS:\n> PGOPTIONS='-c default_table_access_method=wuzza' pgbench [...]\n> \n> So there is no need to complicate more pgbench, particularly when it\n> comes to partitioned tables where USING is not supported. 
Your patch\n> touches this area of the client code to bypass the backend error.\n\nActually, it could be a good thing to mention that directly in the\ndocs of pgbench.\n--\nMichael", "msg_date": "Fri, 1 Jul 2022 10:09:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "On Thu, 30 Jun 2022 at 18:09, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Jul 01, 2022 at 10:06:49AM +0900, Michael Paquier wrote:\n> > And the conclusion back then is that one can already achieve this by\n> > using PGOPTIONS:\n> > PGOPTIONS='-c default_table_access_method=wuzza' pgbench [...]\n> >\n> > So there is no need to complicate more pgbench, particularly when it\n> > comes to partitioned tables where USING is not supported. Your patch\n> > touches this area of the client code to bypass the backend error.\n>\n> Actually, it could be a good thing to mention that directly in the\n> docs of pgbench.\n>\n\nI've attached a documentation patch that mentions and links to the\nPGOPTIONS documentation per your suggestion. 
I'll keep the other patch on\nthe back burner, perhaps in the future there will be demand for a command\nline option as more TAMs are created.\n\nThanks,\n\n-Michel\n\n\n> --\n> Michael\n>", "msg_date": "Tue, 12 Jul 2022 21:33:41 -0700", "msg_from": "Michel Pelletier <michel@supabase.io>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "On Wed, Jul 13, 2022 at 12:33 AM Michel Pelletier <michel@supabase.io>\nwrote:\n\n> On Thu, 30 Jun 2022 at 18:09, Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> On Fri, Jul 01, 2022 at 10:06:49AM +0900, Michael Paquier wrote:\n>> > And the conclusion back then is that one can already achieve this by\n>> > using PGOPTIONS:\n>> > PGOPTIONS='-c default_table_access_method=wuzza' pgbench [...]\n>> >\n>> > So there is no need to complicate more pgbench, particularly when it\n>> > comes to partitioned tables where USING is not supported. Your patch\n>> > touches this area of the client code to bypass the backend error.\n>>\n>> Actually, it could be a good thing to mention that directly in the\n>> docs of pgbench.\n>>\n>\n> I've attached a documentation patch that mentions and links to the\n> PGOPTIONS documentation per your suggestion. I'll keep the other patch on\n> the back burner, perhaps in the future there will be demand for a command\n> line option as more TAMs are created.\n>\n>>\n>>\nThe documentation change looks good to me\n\nOn Wed, Jul 13, 2022 at 12:33 AM Michel Pelletier <michel@supabase.io> wrote:On Thu, 30 Jun 2022 at 18:09, Michael Paquier <michael@paquier.xyz> wrote:On Fri, Jul 01, 2022 at 10:06:49AM +0900, Michael Paquier wrote:\n> And the conclusion back then is that one can already achieve this by\n> using PGOPTIONS:\n> PGOPTIONS='-c default_table_access_method=wuzza' pgbench [...]\n> \n> So there is no need to complicate more pgbench, particularly when it\n> comes to partitioned tables where USING is not supported.  
Your patch\n> touches this area of the client code to bypass the backend error.\n\nActually, it could be a good thing to mention that directly in the\ndocs of pgbench.I've attached a documentation patch that mentions and links to the PGOPTIONS documentation per your suggestion.  I'll keep the other patch on the back burner, perhaps in the future there will be demand for a command line option as more TAMs are created.The documentation change looks good to me", "msg_date": "Sun, 17 Jul 2022 17:07:59 -0400", "msg_from": "Mason Sharp <masonlists@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "On Mon, Jul 18, 2022 at 12:08 AM Mason Sharp <masonlists@gmail.com> wrote:\n> On Wed, Jul 13, 2022 at 12:33 AM Michel Pelletier <michel@supabase.io> wrote:\n>>\n>> On Thu, 30 Jun 2022 at 18:09, Michael Paquier <michael@paquier.xyz> wrote:\n>>>\n>>> On Fri, Jul 01, 2022 at 10:06:49AM +0900, Michael Paquier wrote:\n>>> > And the conclusion back then is that one can already achieve this by\n>>> > using PGOPTIONS:\n>>> > PGOPTIONS='-c default_table_access_method=wuzza' pgbench [...]\n>>> >\n>>> > So there is no need to complicate more pgbench, particularly when it\n>>> > comes to partitioned tables where USING is not supported. Your patch\n>>> > touches this area of the client code to bypass the backend error.\n>>>\n>>> Actually, it could be a good thing to mention that directly in the\n>>> docs of pgbench.\n>>\n>>\n>> I've attached a documentation patch that mentions and links to the PGOPTIONS documentation per your suggestion. I'll keep the other patch on the back burner, perhaps in the future there will be demand for a command line option as more TAMs are created.\n>>>\n>>>\n>\n> The documentation change looks good to me\n\nLooks good to me as well. 
I'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 18 Jul 2022 13:53:21 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "On Mon, Jul 18, 2022 at 01:53:21PM +0300, Alexander Korotkov wrote:\n> Looks good to me as well. I'm going to push this if no objections.\n\nFWIW, I find the extra mention of PGOPTIONS with the specific point of\ntable AMs added within the part of the environment variables a bit\nconfusing, because we already mention PGOPTIONS for serializable\ntransactions a bit down. Hence, my choice would be the addition of an\nextra paragraph in the \"Notes\", named \"Table Access Methods\", just\nbefore or after \"Good Practices\". My 2c.\n--\nMichael", "msg_date": "Tue, 19 Jul 2022 10:47:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" }, { "msg_contents": "Hi!\n\nOn Tue, Jul 19, 2022 at 4:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Jul 18, 2022 at 01:53:21PM +0300, Alexander Korotkov wrote:\n> > Looks good to me as well. I'm going to push this if no objections.\n>\n> FWIW, I find the extra mention of PGOPTIONS with the specific point of\n> table AMs added within the part of the environment variables a bit\n> confusing, because we already mention PGOPTIONS for serializable\n> transactions a bit down. Hence, my choice would be the addition of an\n> extra paragraph in the \"Notes\", named \"Table Access Methods\", just\n> before or after \"Good Practices\". My 2c.\n\nThank you. 
Pushed applying the suggestion above.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 20 Jul 2022 15:51:55 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add Table Access Method option to pgbench" } ]
[ { "msg_contents": "Hello hackers,\n\nCurrently the red-black tree implementation only has an equality search.\nOther extensions might need other comparison searches, like less-or-equal\nor greater-or-equal. For example OrioleDB defines a greater-or-equal search\non its postgres fork:\n\nhttps://github.com/orioledb/postgres/blob/4c18ae94c20e3e95c374b9947f1ace7d1d6497a1/src/backend/lib/rbtree.c#L164-L186\n\nSo I thought this might be valuable to have in core. I've added\nless-or-equal and greater-or-equal searches functions plus tests in\nthe test_rbtree module. I can add the remaining less/great searches if this\nis deemed worth it.\n\nAlso I refactored the sentinel used in the rbtree.c to use C99 designators.\n\nThanks in advance for any feedback!\n\n--\nSteve Chavez\nEngineering at https://supabase.com/", "msg_date": "Thu, 30 Jun 2022 11:51:22 -0500", "msg_from": "Steve Chavez <steve@supabase.io>", "msg_from_op": true, "msg_subject": "Add red-black tree missing comparison searches" }, { "msg_contents": "Please add this to the commitfest at\nhttps://commitfest.postgresql.org/38/ so it doesn't get missed. The\ncommitfest starts imminently so best add it today.\n\n\n", "msg_date": "Thu, 30 Jun 2022 13:09:02 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Add red-black tree missing comparison searches" }, { "msg_contents": "Yes, I've already added it here: https://commitfest.postgresql.org/38/3742/\n\nThanks!\n\nOn Thu, 30 Jun 2022 at 12:09, Greg Stark <stark@mit.edu> wrote:\n\n> Please add this to the commitfest at\n> https://commitfest.postgresql.org/38/ so it doesn't get missed. The\n> commitfest starts imminently so best add it today.\n>\n\nYes, I've already added it here: https://commitfest.postgresql.org/38/3742/Thanks!On Thu, 30 Jun 2022 at 12:09, Greg Stark <stark@mit.edu> wrote:Please add this to the commitfest at\nhttps://commitfest.postgresql.org/38/ so it doesn't get missed. 
The\ncommitfest starts imminently so best add it today.", "msg_date": "Thu, 30 Jun 2022 12:15:41 -0500", "msg_from": "Steve Chavez <steve@supabase.io>", "msg_from_op": true, "msg_subject": "Re: Add red-black tree missing comparison searches" }, { "msg_contents": "Hi, Steve!\n\nThank you for working on this.\n\nOn Thu, Jun 30, 2022 at 7:51 PM Steve Chavez <steve@supabase.io> wrote:\n> Currently the red-black tree implementation only has an equality search. Other extensions might need other comparison searches, like less-or-equal or greater-or-equal. For example OrioleDB defines a greater-or-equal search on its postgres fork:\n>\n> https://github.com/orioledb/postgres/blob/4c18ae94c20e3e95c374b9947f1ace7d1d6497a1/src/backend/lib/rbtree.c#L164-L186\n>\n> So I thought this might be valuable to have in core. I've added less-or-equal and greater-or-equal searches functions plus tests in the test_rbtree module. I can add the remaining less/great searches if this is deemed worth it.\n\nLooks good. But I think we can support strict inequalities too (e.g.\nless and greater without equals). Could you please make it a bool\nargument equal_matches?\n\n> Also I refactored the sentinel used in the rbtree.c to use C99 designators.\n\nCould you please extract this change as a separate patch.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 30 Jun 2022 22:34:38 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add red-black tree missing comparison searches" }, { "msg_contents": "Hey Alexander,\n\n> But I think we can support strict inequalities too (e.g.\nless and greater without equals). 
Could you please make it a bool\nargument equal_matches?\n\nSure, an argument is a good idea to keep the code shorter.\n\n> Could you please extract this change as a separate patch.\n\nDone!\n\nOn Thu, 30 Jun 2022 at 14:34, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> Hi, Steve!\n>\n> Thank you for working on this.\n>\n> On Thu, Jun 30, 2022 at 7:51 PM Steve Chavez <steve@supabase.io> wrote:\n> > Currently the red-black tree implementation only has an equality search.\n> Other extensions might need other comparison searches, like less-or-equal\n> or greater-or-equal. For example OrioleDB defines a greater-or-equal search\n> on its postgres fork:\n> >\n> >\n> https://github.com/orioledb/postgres/blob/4c18ae94c20e3e95c374b9947f1ace7d1d6497a1/src/backend/lib/rbtree.c#L164-L186\n> >\n> > So I thought this might be valuable to have in core. I've added\n> less-or-equal and greater-or-equal searches functions plus tests in the\n> test_rbtree module. I can add the remaining less/great searches if this is\n> deemed worth it.\n>\n> Looks good. But I think we can support strict inequalities too (e.g.\n> less and greater without equals). Could you please make it a bool\n> argument equal_matches?\n>\n> > Also I refactored the sentinel used in the rbtree.c to use C99\n> designators.\n>\n> Could you please extract this change as a separate patch.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>", "msg_date": "Sat, 2 Jul 2022 14:38:41 -0500", "msg_from": "Steve Chavez <steve@supabase.io>", "msg_from_op": true, "msg_subject": "Re: Add red-black tree missing comparison searches" }, { "msg_contents": "Hi, Steve!\n\nOn Sat, Jul 2, 2022 at 10:38 PM Steve Chavez <steve@supabase.io> wrote:\n> > But I think we can support strict inequalities too (e.g.\n> less and greater without equals). 
Could you please make it a bool\n> argument equal_matches?\n>\n> Sure, an argument is a good idea to keep the code shorter.\n>\n> > Could you please extract this change as a separate patch.\n>\n> Done!\n\nThank you!\n\nI did some improvements to the test suite, run pgindent and wrote\ncommit messages.\n\nI think this is quite straightforward and low-risk patch. I'm going\nto push it if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Wed, 6 Jul 2022 21:53:41 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add red-black tree missing comparison searches" }, { "msg_contents": "---------- Forwarded message ---------\nFrom: Steve Chavez <steve@supabase.io>\nDate: Wed, 6 Jul 2022 at 18:14\nSubject: Re: Add red-black tree missing comparison searches\nTo: Alexander Korotkov <aekorotkov@gmail.com>\n\n\nThanks Alexander!\n\nwrt to the new patch. I think the following comment is misleading since\nkeyDeleted can be true or false:\n\n+ /* switch equal_match to false so we only find greater matches now */\n+ node = (IntRBTreeNode *) rbt_find_great(tree, (RBTNode *) &searchNode,\n+ keyDeleted);\n\nMaybe it should be the same used for searching lesser keys:\n\n+ /*\n+ * Find the next key. If the current key is deleted, we can pass\n+ * equal_match == true and still find the next one.\n+ */\n\nOn Wed, 6 Jul 2022 at 13:53, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> Hi, Steve!\n>\n> On Sat, Jul 2, 2022 at 10:38 PM Steve Chavez <steve@supabase.io> wrote:\n> > > But I think we can support strict inequalities too (e.g.\n> > less and greater without equals). 
Could you please make it a bool\n> > argument equal_matches?\n> >\n> > Sure, an argument is a good idea to keep the code shorter.\n> >\n> > > Could you please extract this change as a separate patch.\n> >\n> > Done!\n>\n> Thank you!\n>\n> I did some improvements to the test suite, run pgindent and wrote\n> commit messages.\n>\n> I think this is quite straightforward and low-risk patch. I'm going\n> to push it if no objections.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\n---------- Forwarded message ---------From: Steve Chavez <steve@supabase.io>Date: Wed, 6 Jul 2022 at 18:14Subject: Re: Add red-black tree missing comparison searchesTo: Alexander Korotkov <aekorotkov@gmail.com>Thanks Alexander!wrt to the new patch. I think the following comment is misleading since keyDeleted can be true or false:+\t\t/* switch equal_match to false so we only find greater matches now */+\t\tnode = (IntRBTreeNode *) rbt_find_great(tree, (RBTNode *) &searchNode,+\t\t\t\t\t\t\t\t\t\t\t\tkeyDeleted);Maybe it should be the same used for searching lesser keys:+\t\t/*+\t\t * Find the next key.  If the current key is deleted, we can pass+\t\t * equal_match == true and still find the next one.+\t\t */On Wed, 6 Jul 2022 at 13:53, Alexander Korotkov <aekorotkov@gmail.com> wrote:Hi, Steve!\n\nOn Sat, Jul 2, 2022 at 10:38 PM Steve Chavez <steve@supabase.io> wrote:\n> > But I think we can support strict inequalities too (e.g.\n> less and greater without equals).  Could you please make it a bool\n> argument equal_matches?\n>\n> Sure, an argument is a good idea to keep the code shorter.\n>\n> > Could you please extract this change as a separate patch.\n>\n> Done!\n\nThank you!\n\nI did some improvements to the test suite, run pgindent and wrote\ncommit messages.\n\nI think this is quite straightforward and low-risk patch.  
I'm going\nto push it if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Wed, 6 Jul 2022 18:15:45 -0500", "msg_from": "Steve Chavez <steve@supabase.io>", "msg_from_op": true, "msg_subject": "Fwd: Add red-black tree missing comparison searches" }, { "msg_contents": "On Thu, Jul 7, 2022 at 2:16 AM Steve Chavez <steve@supabase.io> wrote:\n> Thanks Alexander!\n>\n> wrt to the new patch. I think the following comment is misleading since keyDeleted can be true or false:\n>\n> + /* switch equal_match to false so we only find greater matches now */\n> + node = (IntRBTreeNode *) rbt_find_great(tree, (RBTNode *) &searchNode,\n> + keyDeleted);\n>\n> Maybe it should be the same used for searching lesser keys:\n>\n> + /*\n> + * Find the next key. If the current key is deleted, we can pass\n> + * equal_match == true and still find the next one.\n> + */\n\nThank you for catching this.\nThe revised version of patch is attached!\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Thu, 7 Jul 2022 13:43:55 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add red-black tree missing comparison searches" }, { "msg_contents": "On Thu, Jul 7, 2022 at 1:43 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Thu, Jul 7, 2022 at 2:16 AM Steve Chavez <steve@supabase.io> wrote:\n> > Thanks Alexander!\n> >\n> > wrt to the new patch. I think the following comment is misleading since keyDeleted can be true or false:\n> >\n> > + /* switch equal_match to false so we only find greater matches now */\n> > + node = (IntRBTreeNode *) rbt_find_great(tree, (RBTNode *) &searchNode,\n> > + keyDeleted);\n> >\n> > Maybe it should be the same used for searching lesser keys:\n> >\n> > + /*\n> > + * Find the next key. 
If the current key is deleted, we can pass\n> > + * equal_match == true and still find the next one.\n> > + */\n>\n> Thank you for catching this.\n> The revised version of patch is attached!\n\nPushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 8 Jul 2022 22:01:24 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add red-black tree missing comparison searches" } ]
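The equal_match semantics discussed above can be illustrated with a self-contained analog. This uses a plain (unbalanced) binary search tree of ints, not an actual red-black tree, and the names Node, find_great and find_less are local inventions rather than PostgreSQL's API; only the descent logic is the point. The search remembers the best acceptable node seen so far, then keeps descending toward the key to look for a tighter match, and the equal_match flag switches between the strict (>, <) and non-strict (>=, <=) variants:

```c
#include <assert.h>
#include <stddef.h>

typedef struct Node
{
	int			key;
	struct Node *left;
	struct Node *right;
} Node;

/* smallest key greater than (or equal to, when equal_match) "key" */
const Node *
find_great(const Node *node, int key, int equal_match)
{
	const Node *best = NULL;

	while (node != NULL)
	{
		if (node->key > key || (equal_match && node->key == key))
		{
			best = node;		/* acceptable; look left for a smaller one */
			node = node->left;
		}
		else
			node = node->right;
	}
	return best;
}

/* largest key less than (or equal to, when equal_match) "key" */
const Node *
find_less(const Node *node, int key, int equal_match)
{
	const Node *best = NULL;

	while (node != NULL)
	{
		if (node->key < key || (equal_match && node->key == key))
		{
			best = node;		/* acceptable; look right for a larger one */
			node = node->right;
		}
		else
			node = node->left;
	}
	return best;
}

/* fixed demo tree holding {10, 20, 30}, rooted at 20 */
Node		demo_n10 = {10, NULL, NULL};
Node		demo_n30 = {30, NULL, NULL};
Node		demo_tree = {20, &demo_n10, &demo_n30};
```

This also shows why, as noted in the test-suite discussion, a search for the key of a just-deleted node can pass equal_match == true and still land on the next live entry: a missing key simply never triggers the equality arm.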
[ { "msg_contents": "Hackers,\n\nwe have supported custom nodes for while. Custom slots are a bit more\n\"recent\" feature. Since we now support custom slots, which could\nhandle custom tuple format, why not allow custom nodes to use them?\n\nFor instance, a custom table access method can have its own tuple\nformat and use a custom node to provide some custom type of scan. The\nability to set a custom slot would save us from tuple format\nconversion (thank happened to me while working on OrioleDB). I think\nother users of custom nodes may also benefit.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Thu, 30 Jun 2022 22:41:10 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Allow specification of custom slot for custom nodes" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI've looked at this patch and don't see any problems with it. It is minimally invasive, it doesn't affect functionality unless anyone (e.g. extension) sets its own slotOps in CustomScanState.\r\nFurthermore, the current patch very slightly modifies patch 0b03e5951bf0 with the intention of introducing extensibility. 
So I think adding more extensibility regarding different tuple formats is an excellent thing to do.\r\n\r\nI'm going to mark it as RfC if there are no objections.\r\n\r\nKind regards,\r\nPavel Borisov, \r\nSupabase\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 21 Nov 2022 13:33:30 +0000", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow specification of custom slot for custom nodes" }, { "msg_contents": "On Mon, Nov 21, 2022 at 4:34 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> I've looked at this patch and don't see any problems with it. It is minimally invasive, it doesn't affect functionality unless anyone (e.g. extension) sets its own slotOps in CustomScanState.\n> Furthermore, the current patch very slightly modifies patch 0b03e5951bf0 with the intention of introducing extensibility. So I think adding more extensibility regarding different tuple formats is an excellent thing to do.\n>\n> I'm going to mark it as RfC if there are no objections.\n\nThank you for your feedback. 
I also don't see how this patch could\naffect anybody.\nI'm going to push this if there are no objections.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 21 Nov 2022 23:50:10 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow specification of custom slot for custom nodes" }, { "msg_contents": "2022年11月22日(火) 5:50 Alexander Korotkov <aekorotkov@gmail.com>:\n>\n> On Mon, Nov 21, 2022 at 4:34 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: not tested\n> > Documentation: not tested\n> >\n> > I've looked at this patch and don't see any problems with it. It is minimally invasive, it doesn't affect functionality unless anyone (e.g. extension) sets its own slotOps in CustomScanState.\n> > Furthermore, the current patch very slightly modifies patch 0b03e5951bf0 with the intention of introducing extensibility. So I think adding more extensibility regarding different tuple formats is an excellent thing to do.\n> >\n> > I'm going to mark it as RfC if there are no objections.\n>\n> Thank you for your feedback. 
I also don't see how this patch could\n> affect anybody.\n> I'm going to push this if there are no objections.\n\nI see this was pushed (cee1209514) so have closed it in the CF app.\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Sat, 26 Nov 2022 20:04:55 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow specification of custom slot for custom nodes" }, { "msg_contents": "On Sat, Nov 26, 2022 at 2:05 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> 2022年11月22日(火) 5:50 Alexander Korotkov <aekorotkov@gmail.com>:\n> >\n> > On Mon, Nov 21, 2022 at 4:34 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > > The following review has been posted through the commitfest application:\n> > > make installcheck-world: tested, passed\n> > > Implements feature: tested, passed\n> > > Spec compliant: not tested\n> > > Documentation: not tested\n> > >\n> > > I've looked at this patch and don't see any problems with it. It is minimally invasive, it doesn't affect functionality unless anyone (e.g. extension) sets its own slotOps in CustomScanState.\n> > > Furthermore, the current patch very slightly modifies patch 0b03e5951bf0 with the intention of introducing extensibility. So I think adding more extensibility regarding different tuple formats is an excellent thing to do.\n> > >\n> > > I'm going to mark it as RfC if there are no objections.\n> >\n> > Thank you for your feedback. I also don't see how this patch could\n> > affect anybody.\n> > I'm going to push this if there are no objections.\n>\n> I see this was pushed (cee1209514) so have closed it in the CF app.\n\nYes, I forgot to do this. Thank you.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 2 Dec 2022 18:12:37 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow specification of custom slot for custom nodes" } ]
[ { "msg_contents": "Hi,\n\nThere are instances where pgstat_setup_memcxt() and\npgstat_prep_pending_entry() are invoked before the CacheMemoryContext\nhas been created.  This results in PgStat* contexts being created\nwithout a parent context.  Most easily reproduced/seen in autovacuum\nworker via pgstat_setup_memcxt(). \n\nAttached is a patch to address this.\n\nTo see the issue one can add a line similar to this to the top of  \nMemoryContextCreate() in mcxt.c\nfprintf(stderr, \"pid: %d ctxname: %s parent is %s CacheMemoryContext is %s\\n\", MyProcPid, name, parent == NULL ? \"NULL\" : \"not NULL\", CacheMemoryContext == NULL ? \"NULL\" : \"Not NULL\")\nand startup postgres and grep for the above after autovacuum workers\nhave been invoked\n\n...snip...\npid: 1427756 ctxname: PgStat Pending parent is NULL CacheMemoryContext is NULL                             \npid: 1427756 ctxname: PgStat Shared Ref parent is NULL CacheMemoryContext is NULL \n...snip...\n\nor\n\nstartup postgres, attach gdb to postgres following child, break at\npgstat_setup_memcxt and wait for autovacuum worker to start...\n\n...snip...\n─── Source ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 384  AllocSetContextCreateInternal(MemoryContext parent,\n 385                                const char *name,\n 386                                Size minContextSize,\n 387                                Size initBlockSize,\n 388                                Size maxBlockSize)\n 389  {\n 390      int            freeListIndex;\n 391      Size        firstBlockSize;\n 392      AllocSet    set;\n 393      AllocBlock    block;\n─── Stack 
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n[0] from 0x000055b7e4088b40 in AllocSetContextCreateInternal+0 at /home/rthompso/src/git/postgres/src/backend/utils/mmgr/aset.c:389\n[1] from 0x000055b7e3f41c88 in pgstat_setup_memcxt+2544 at /home/rthompso/src/git/postgres/src/backend/utils/activity/pgstat_shmem.c:979\n[2] from 0x000055b7e3f41c88 in pgstat_get_entry_ref+2648 at /home/rthompso/src/git/postgres/src/backend/utils/activity/pgstat_shmem.c:410\n[3] from 0x000055b7e3f420ea in pgstat_get_entry_ref_locked+26 at /home/rthompso/src/git/postgres/src/backend/utils/activity/pgstat_shmem.c:598\n[4] from 0x000055b7e3f3e2c4 in pgstat_report_autovac+36 at /home/rthompso/src/git/postgres/src/backend/utils/activity/pgstat_database.c:68\n[5] from 0x000055b7e3e7f267 in AutoVacWorkerMain+807 at /home/rthompso/src/git/postgres/src/backend/postmaster/autovacuum.c:1694\n[6] from 0x000055b7e3e7f435 in StartAutoVacWorker+133 at /home/rthompso/src/git/postgres/src/backend/postmaster/autovacuum.c:1493\n[7] from 0x000055b7e3e87367 in StartAutovacuumWorker+557 at /home/rthompso/src/git/postgres/src/backend/postmaster/postmaster.c:5539\n[8] from 0x000055b7e3e87367 in sigusr1_handler+935 at /home/rthompso/src/git/postgres/src/backend/postmaster/postmaster.c:5244\n[9] from 0x00007fb02bca7420 in __restore_rt\n[+]\n─── Threads ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n[1] id 1174088 name postgres from 0x000055b7e4088b40 in AllocSetContextCreateInternal+0 at /home/rthompso/src/git/postgres/src/backend/utils/mmgr/aset.c:389\n─── Variables 
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\narg parent = 0x0, name = 0x55b7e416f179 \"PgStat Shared Ref\": 80 'P', minContextSize = 0, initBlockSize = 1024, maxBlockSize = 8192\nloc firstBlockSize = <optimized out>, set = <optimized out>, block = <optimized out>, __FUNCTION__ = \"AllocSetContextCreateInternal\", __func__ = \"AllocSetContextCreateInternal\"\n────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n> > > print CacheMemoryContext == NULL\n$4 = 1\n> > > print parent\n$5 = (MemoryContext) 0x0\n\nThanks,\nReid", "msg_date": "Thu, 30 Jun 2022 15:54:50 -0400", "msg_from": "Reid Thompson <reid.thompson@crunchydata.com>", "msg_from_op": true, "msg_subject": "Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn Jul 28, 2022, 21:30 +0800, Reid Thompson <reid.thompson@crunchydata.com>, wrote:\n> Hi,\n>\n> There are instances where pgstat_setup_memcxt() and\n> pgstat_prep_pending_entry() are invoked before the CacheMemoryContext\n> has been created.  This results in PgStat* contexts being created\n> without a parent context.  Most easily reproduced/seen in autovacuum\n> worker via pgstat_setup_memcxt().\n>\n> Attached is a patch to address this.\n>\n> To see the issue one can add a line similar to this to the top of\n> MemoryContextCreate() in mcxt.c\n> fprintf(stderr, \"pid: %d ctxname: %s parent is %s CacheMemoryContext is %s\\n\", MyProcPid, name, parent == NULL ? \"NULL\" : \"not NULL\", CacheMemoryContext == NULL ? 
\"NULL\" : \"Not NULL\")\n> and startup postgres and grep for the above after autovacuum workers\n> have been invoked\n>\n> ...snip...\n> pid: 1427756 ctxname: PgStat Pending parent is NULL CacheMemoryContext is NULL\n> pid: 1427756 ctxname: PgStat Shared Ref parent is NULL CacheMemoryContext is NULL\n> ...snip...\n>\n> or\n>\n> startup postgres, attach gdb to postgres following child, break at\n> pgstat_setup_memcxt and wait for autovacuum worker to start...\n>\n> ...snip...\n> ─── Source ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n>  384  AllocSetContextCreateInternal(MemoryContext parent,\n>  385                                const char *name,\n>  386                                Size minContextSize,\n>  387                                Size initBlockSize,\n>  388                                Size maxBlockSize)\n>  389  {\n>  390      int            freeListIndex;\n>  391      Size        firstBlockSize;\n>  392      AllocSet    set;\n>  393      AllocBlock    block;\n> ─── Stack ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n> [0] from 0x000055b7e4088b40 in AllocSetContextCreateInternal+0 at /home/rthompso/src/git/postgres/src/backend/utils/mmgr/aset.c:389\n> [1] from 0x000055b7e3f41c88 in pgstat_setup_memcxt+2544 at /home/rthompso/src/git/postgres/src/backend/utils/activity/pgstat_shmem.c:979\n> [2] from 0x000055b7e3f41c88 in pgstat_get_entry_ref+2648 at /home/rthompso/src/git/postgres/src/backend/utils/activity/pgstat_shmem.c:410\n> [3] from 0x000055b7e3f420ea in pgstat_get_entry_ref_locked+26 at /home/rthompso/src/git/postgres/src/backend/utils/activity/pgstat_shmem.c:598\n> [4] from 
0x000055b7e3f3e2c4 in pgstat_report_autovac+36 at /home/rthompso/src/git/postgres/src/backend/utils/activity/pgstat_database.c:68\n> [5] from 0x000055b7e3e7f267 in AutoVacWorkerMain+807 at /home/rthompso/src/git/postgres/src/backend/postmaster/autovacuum.c:1694\n> [6] from 0x000055b7e3e7f435 in StartAutoVacWorker+133 at /home/rthompso/src/git/postgres/src/backend/postmaster/autovacuum.c:1493\n> [7] from 0x000055b7e3e87367 in StartAutovacuumWorker+557 at /home/rthompso/src/git/postgres/src/backend/postmaster/postmaster.c:5539\n> [8] from 0x000055b7e3e87367 in sigusr1_handler+935 at /home/rthompso/src/git/postgres/src/backend/postmaster/postmaster.c:5244\n> [9] from 0x00007fb02bca7420 in __restore_rt\n> [+]\n> ─── Threads ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n> [1] id 1174088 name postgres from 0x000055b7e4088b40 in AllocSetContextCreateInternal+0 at /home/rthompso/src/git/postgres/src/backend/utils/mmgr/aset.c:389\n> ─── Variables ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n> arg parent = 0x0, name = 0x55b7e416f179 \"PgStat Shared Ref\": 80 'P', minContextSize = 0, initBlockSize = 1024, maxBlockSize = 8192\n> loc firstBlockSize = <optimized out>, set = <optimized out>, block = <optimized out>, __FUNCTION__ = \"AllocSetContextCreateInternal\", __func__ = \"AllocSetContextCreateInternal\"\n> ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n> > > > print CacheMemoryContext == NULL\n> $4 = 1\n> > > > print parent\n> $5 = 
(MemoryContext) 0x0\n>\n> Thanks,\n> Reid\n>\n>\n\n\nCodes seem good, my question is:\n\nDo auto vacuum processes need CacheMemoryContext?\n\nIs it designed not to  create CacheMemoryContext in such processes?\n\nIf so, we’d better use TopMemoryContext in such processes.\n\nRegards,\nZhang Mingli", "msg_date": "Thu, 28 Jul 2022 22:03:13 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null\n parent context" }, { "msg_contents": "At Thu, 28 Jul 2022 22:03:13 +0800, Zhang Mingli <zmlpostgres@gmail.com> wrote in \r\n> Hi,\r\n> \r\n> On Jul 28, 2022, 21:30 +0800, Reid Thompson <reid.thompson@crunchydata.com>, wrote:\r\n> > Attached is a patch to address this.\r\n\r\nGood Catch!\r\n\r\n> Codes seem good, my question is:\r\n> \r\n> Do auto vacuum processes need CacheMemoryContext?\r\n\r\npgstat_report_vacuum requires it. Startup process doesn't seem to use\r\npgstats while recovery proceeding but requires the context only at\r\ntermination...\r\n\r\n> Is it designed not to  create CacheMemoryContext in such processes?\r\n> \r\n> If so, we’d better use TopMemoryContext in such processes.\r\n\r\nThat makes the memorycontext-tree structure unstable because\r\nCacheMemoryContext can be created on-the-fly.\r\n\r\nHonestly I don't like to call CreateCacheMemoryContext in the two\r\nfunctions on-the-fly. Since every process that calls\r\npgstat_initialize() necessarily calls pgstat_setup_memcxt() at latest\r\nat process termination, I think we can create at least\r\nCacheMemoryContext in pgstat_initialize(). 
Or couldn't we create the\r\nall three contexts in the function, instead of calling\r\npgstat_setup_memcxt() on-the fly?\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 29 Jul 2022 11:53:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "On Fri, 2022-07-29 at 11:53 +0900, Kyotaro Horiguchi wrote:\n> \n> That makes the memorycontext-tree structure unstable because\n> CacheMemoryContext can be created on-the-fly.\n> \n> Honestly I don't like to call CreateCacheMemoryContext in the two\n> functions on-the-fly.  Since every process that calls\n> pgstat_initialize() necessarily calls pgstat_setup_memcxt() at latest\n> at process termination, I think we can create at least\n> CacheMemoryContext in pgstat_initialize(). \n\nAttached is a patch creating CacheMemoryContext() in pgstat_initialize()\nrather than the two previous patch locations.\n\n> Or couldn't we create the\n> all three contexts in the function, instead of calling\n> pgstat_setup_memcxt() on-the fly?\n\nYou note that that pgstat_setup_memcxt() is called at latest at process\ntermination -- was the intent to hold off on requesting memory for these\ntwo contexts until it was needed?\n\n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com", "msg_date": "Thu, 04 Aug 2022 13:12:32 -0400", "msg_from": "Reid Thompson <reid.thompson@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "At Thu, 04 Aug 2022 13:12:32 -0400, Reid Thompson <reid.thompson@crunchydata.com> wrote in \n> On Fri, 2022-07-29 at 11:53 +0900, Kyotaro Horiguchi wrote:\n> > \n> > 
That makes the memorycontext-tree structure unstable because\n> > CacheMemoryContext can be created on-the-fly.\n> > \n> > Honestly I don't like to call CreateCacheMemoryContext in the two\n> > functions on-the-fly.  Since every process that calls\n> > pgstat_initialize() necessarily calls pgstat_setup_memcxt() at latest\n> > at process termination, I think we can create at least\n> > CacheMemoryContext in pgstat_initialize(). \n> \n> Attached is a patch creating CacheMemoryContext() in pgstat_initialize()\n> rather than the two previous patch locations.\n> \n> > Or couldn't we create the\n> > all three contexts in the function, instead of calling\n> > pgstat_setup_memcxt() on-the fly?\n> \n> You note that that pgstat_setup_memcxt() is called at latest at process\n> termination -- was the intent to hold off on requesting memory for these\n> two contexts until it was needed?\n\nI think it a bit different. Previously that memory (but for a bit\ndifferent use, precisely) was required only when stats data is read so\nalmost all server processes didn't need it. Now, every server process\nthat uses pgstats requires the two memory if it is going to write\nstats. Even if that didn't happen until process termination, that\nmemory eventually required to flush possibly remaining data. That\nfinal write might be avoidable but I'm not sure it's worth the\ntrouble. 
As the result, calling pgstat_initialize() is effectively\nthe declaration that the process requires the memory.\n\nThus I thought that we may let pgstat_initialize() promptly allocate\nthe memory.\n\nDoes it make sense?\n\n\nAbout the patch, I had something like the attached in my mind.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 05 Aug 2022 17:22:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "At Fri, 05 Aug 2022 17:22:38 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Thus I thought that we may let pgstat_initialize() promptly allocate\n> the memory.\n> \n> Does it make sense?\n> \n> \n> About the patch, I had something like the attached in my mind.\n\nI haven't fully checked, but this change might cause all other calls\nto CreateCacheMemoryContext useless.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 05 Aug 2022 17:40:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn 2022-08-05 17:22:38 +0900, Kyotaro Horiguchi wrote:\n> I think it a bit different. Previously that memory (but for a bit\n> different use, precisely) was required only when stats data is read so\n> almost all server processes didn't need it. Now, every server process\n> that uses pgstats requires the two memory if it is going to write\n> stats. Even if that didn't happen until process termination, that\n> memory eventually required to flush possibly remaining data. That\n> final write might be avoidable but I'm not sure it's worth the\n> trouble. 
As the result, calling pgstat_initialize() is effectively\n> the declaration that the process requires the memory.\n\nI don't think every process will end up calling pgstat_setup_memcxt() -\ne.g. walsender, bgwriter, checkpointer probably don't? What do we gain by\ncreating the contexts eagerly?\n\n\n> Thus I thought that we may let pgstat_initialize() promptly allocate\n> the memory.\n\nThat makes some sense - but pgstat_attach_shmem() seems like a very strange\nplace for the call to CreateCacheMemoryContext().\n\n\nI wonder if we shouldn't just use TopMemoryContext as the parent for most of\nthese contexts instead. CacheMemoryContext isn't actually a particularly good\nfit anymore.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 6 Aug 2022 19:19:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "At Sat, 6 Aug 2022 19:19:39 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-08-05 17:22:38 +0900, Kyotaro Horiguchi wrote:\n> > I think it a bit different. Previously that memory (but for a bit\n> > different use, precisely) was required only when stats data is read so\n> > almost all server processes didn't need it. Now, every server process\n> > that uses pgstats requires the two memory if it is going to write\n> > stats. Even if that didn't happen until process termination, that\n> > memory eventually required to flush possibly remaining data. That\n> > final write might be avoidable but I'm not sure it's worth the\n> > trouble. As the result, calling pgstat_initialize() is effectively\n> > the declaration that the process requires the memory.\n> \n> I don't think every process will end up calling pgstat_setup_memcxt() -\n> e.g. walsender, bgwriter, checkpointer probably don't? What do we gain by\n> creating the contexts eagerly?\n\nYes. 
they acutally does, in shmem_shutdown hook function, during\nat-termination stats write. I didn't consider to make that not\nhappen, to save 2kB of memory on such small number of processes.\n\n> > Thus I thought that we may let pgstat_initialize() promptly allocate\n> > the memory.\n> \n> That makes some sense - but pgstat_attach_shmem() seems like a very strange\n> place for the call to CreateCacheMemoryContext().\n\nSure. (I hesitantly added #include for catcache.h..)\n\n> I wonder if we shouldn't just use TopMemoryContext as the parent for most of\n> these contexts instead. CacheMemoryContext isn't actually a particularly good\n> fit anymore.\n\nIt looks better than creating CacheMemoryContext. Now\npgstat_initialize() creates the memory contexts for pgstats use under\nTopMemoryContext.\n\nAnd we don't hastle to avoid maybe-empty at-process-termination\nwrites..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 08 Aug 2022 15:12:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn 2022-08-08 15:12:08 +0900, Kyotaro Horiguchi wrote:\n> At Sat, 6 Aug 2022 19:19:39 -0700, Andres Freund <andres@anarazel.de> wrote in\n> > Hi,\n> >\n> > On 2022-08-05 17:22:38 +0900, Kyotaro Horiguchi wrote:\n> > > I think it a bit different. Previously that memory (but for a bit\n> > > different use, precisely) was required only when stats data is read so\n> > > almost all server processes didn't need it. Now, every server process\n> > > that uses pgstats requires the two memory if it is going to write\n> > > stats. Even if that didn't happen until process termination, that\n> > > memory eventually required to flush possibly remaining data. That\n> > > final write might be avoidable but I'm not sure it's worth the\n> > > trouble. 
As the result, calling pgstat_initialize() is effectively\n> > > the declaration that the process requires the memory.\n> >\n> > I don't think every process will end up calling pgstat_setup_memcxt() -\n> > e.g. walsender, bgwriter, checkpointer probably don't? What do we gain by\n> > creating the contexts eagerly?\n>\n> Yes. they acutally does, in shmem_shutdown hook function, during\n> at-termination stats write. I didn't consider to make that not\n> happen, to save 2kB of memory on such small number of processes.\n\nThat's true for checkpointer, but not e.g. for walwriter, bgwriter. I don't\nsee why we should force allocate memory that we're never going to use in\nbackground processes.\n\n\n> And we don't hastle to avoid maybe-empty at-process-termination\n> writes..\n\nHm?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Aug 2022 12:20:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn 8/7/22 4:19 AM, Andres Freund wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> Hi,\n>\n> On 2022-08-05 17:22:38 +0900, Kyotaro Horiguchi wrote:\n>> I think it a bit different. Previously that memory (but for a bit\n>> different use, precisely) was required only when stats data is read so\n>> almost all server processes didn't need it. Now, every server process\n>> that uses pgstats requires the two memory if it is going to write\n>> stats. Even if that didn't happen until process termination, that\n>> memory eventually required to flush possibly remaining data. That\n>> final write might be avoidable but I'm not sure it's worth the\n>> trouble. 
As the result, calling pgstat_initialize() is effectively\n>> the declaration that the process requires the memory.\n> I don't think every process will end up calling pgstat_setup_memcxt() -\n> e.g. walsender, bgwriter, checkpointer probably don't? What do we gain by\n> creating the contexts eagerly?\n>\n>\n>> Thus I thought that we may let pgstat_initialize() promptly allocate\n>> the memory.\n> That makes some sense - but pgstat_attach_shmem() seems like a very strange\n> place for the call to CreateCacheMemoryContext().\n>\n>\n> I wonder if we shouldn't just use TopMemoryContext as the parent for most of\n> these contexts instead. CacheMemoryContext isn't actually a particularly good\n> fit anymore.\n\nCould using TopMemoryContext like in the attach be an option? (aka \nchanging CacheMemoryContext by TopMemoryContext in the 3 places of \ninterest): that would ensure the 3 pgStat* contexts to have a non NULL \nparent context.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Mon, 5 Sep 2022 08:52:44 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "At Mon, 5 Sep 2022 08:52:44 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n> Could using TopMemoryContext like in the attach be an option? (aka\n> changing CacheMemoryContext by TopMemoryContext in the 3 places of\n> interest): that would ensure the 3 pgStat* contexts to have a non NULL\n> parent context.\n\nOf course it works. The difference from what I last proposed is\nwhether we postpone creating the memory contexts until the first call\nto pgstat_get_entry_ref(). 
The rationale of creating them at\npgstat_attach_shmem is that anyway once pgstat_attach_shmem is called,\nthe process fainally creates the contexts at the end of the process,\nand (I think) it's simpler that we don't do if() check at every\npgstat_get_entry_ref() call.\n\nI forgot about pgStatPendingContext, but it is sensible that we treat\nit the same way to the other two.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 05 Sep 2022 17:32:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn 9/5/22 10:32 AM, Kyotaro Horiguchi wrote:\n> At Mon, 5 Sep 2022 08:52:44 +0200, \"Drouvot, Bertrand\"<bdrouvot@amazon.com> wrote in\n>> Could using TopMemoryContext like in the attach be an option? (aka\n>> changing CacheMemoryContext by TopMemoryContext in the 3 places of\n>> interest): that would ensure the 3 pgStat* contexts to have a non NULL\n>> parent context.\n> Of course it works. The difference from what I last proposed is\n> whether we postpone creating the memory contexts until the first call\n> to pgstat_get_entry_ref().\n\nRight.\n\n> The rationale of creating them at\n> pgstat_attach_shmem is that anyway once pgstat_attach_shmem is called,\n> the process fainally creates the contexts at the end of the process,\n\nRight.\n\nIIUC the downside is to allocate the new contexts even for processes \nthat don't need them (as mentioned by Andres upthread).\n\n> and (I think) it's simpler that we don't do if() check at every\n> pgstat_get_entry_ref() call.\n\nI wonder how much of a concern the if() checks are, given they are all 3 \nlegitimately using unlikely().\n\nLooks like that both approaches have their pros and cons. 
I'm tempted to \nvote +1 on \"just changing\" the parent context to TopMemoryContext and \nnot changing the allocations locations.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Mon, 5 Sep 2022 14:46:55 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn 2022-09-05 17:32:20 +0900, Kyotaro Horiguchi wrote:\n> The rationale of creating them at pgstat_attach_shmem is that anyway once\n> pgstat_attach_shmem is called, the process fainally creates the contexts at\n> the end of the process, and (I think) it's simpler that we don't do if()\n> check at every pgstat_get_entry_ref() call.\n\nBut that's not true, as pointed out here:\nhttps://postgr.es/m/20220808192020.nc556tlgcp66fdgw%40awork3.anarazel.de\n\nNor does it make sense to reserve memory for the entire lifetime of a process\njust because we might need it for a split second at the end.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Sep 2022 15:47:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "(It seems to me I overlooked some mails.. 
sorry.)\n\nAt Mon, 5 Sep 2022 15:47:37 -0700, Andres Freund <andres@anarazel.de> wrote in \n> On 2022-09-05 17:32:20 +0900, Kyotaro Horiguchi wrote:\n> > The rationale of creating them at pgstat_attach_shmem is that anyway once\n> > pgstat_attach_shmem is called, the process finally creates the contexts at\n> > the end of the process, and (I think) it's simpler that we don't do if()\n> > check at every pgstat_get_entry_ref() call.\n> \n> But that's not true, as pointed out here:\n> https://postgr.es/m/20220808192020.nc556tlgcp66fdgw%40awork3.anarazel.de\n> \n> Nor does it make sense to reserve memory for the entire lifetime of a process\n> just because we might need it for a split second at the end.\n\nYeah, that's the most convincing argument against it.\n\nAt Mon, 5 Sep 2022 14:46:55 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n> Looks like that both approaches have their pros and cons. I'm tempted\n> to vote +1 on \"just changing\" the parent context to TopMemoryContext\n> and not changing the allocations locations.\n\nYeah. It is safe more than anything and we don't have a problem there.\n\nSo, I'm fine with just replacing the parent context at the three places.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 06 Sep 2022 14:53:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn 9/6/22 7:53 AM, Kyotaro Horiguchi wrote:\n> At Mon, 5 Sep 2022 14:46:55 +0200, \"Drouvot, Bertrand\"<bdrouvot@amazon.com> wrote in\n>> Looks like that both approaches have their pros and cons. I'm tempted\n>> to vote +1 on \"just changing\" the parent context to TopMemoryContext\n>> and not changing the allocations locations.\n> Yeah.
It is safe more than anything and we don't have a problem there.\n>\n> So, I'm fine with just replacing the parent context at the three places.\n\nAttached a patch proposal to do so.\n\nRegards,\n\n-- \n\nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Wed, 7 Sep 2022 11:11:11 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "At Wed, 7 Sep 2022 11:11:11 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n\n> On 9/6/22 7:53 AM, Kyotaro Horiguchi wrote:\n> > So, I'm fine with just replacing the parent context at the three\n> > places.\n\nLooks good to me. To make sure, I ran make check-world with adding an\nassertion check that all non-toplevel memcontexts are created under\nnon-null parent and I saw no failure.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 08 Sep 2022 09:26:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn 9/8/22 2:26 AM, Kyotaro Horiguchi wrote:\n> At Wed, 7 Sep 2022 11:11:11 +0200, \"Drouvot, Bertrand\"<bdrouvot@amazon.com> wrote in\n>\n>> On 9/6/22 7:53 AM, Kyotaro Horiguchi wrote:\n>>> So, I'm fine with just replacing the parent context at the three\n>>> places.\n> Looks good to me. 
To make sure, I ran make check-world with adding an\n> assertion check that all non-toplevel memcontexts are created under\n> non-null parent and I saw no failure.\n\nThanks!\n\nI'm updating the CF entry to Ready for Committer.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Fri, 9 Sep 2022 12:18:37 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn 2022-09-07 11:11:11 +0200, Drouvot, Bertrand wrote:\n> On 9/6/22 7:53 AM, Kyotaro Horiguchi wrote:\n> > At Mon, 5 Sep 2022 14:46:55 +0200, \"Drouvot, Bertrand\"<bdrouvot@amazon.com> wrote in\n> > > Looks like that both approaches have their pros and cons. I'm tempted\n> > > to vote +1 on \"just changing\" the parent context to TopMemoryContext\n> > > and not changing the allocations locations.\n> > Yeah. It is safe more than anything and we don't have a problem there.\n> > \n> > So, I'm fine with just replacing the parent context at the three places.\n> \n> Attached a patch proposal to do so.\n\nPushed.
Thanks for the report and the fix!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 17 Sep 2022 09:10:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" }, { "msg_contents": "Hi,\n\nOn 9/17/22 6:10 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2022-09-07 11:11:11 +0200, Drouvot, Bertrand wrote:\n>> On 9/6/22 7:53 AM, Kyotaro Horiguchi wrote:\n>>> At Mon, 5 Sep 2022 14:46:55 +0200, \"Drouvot, Bertrand\"<bdrouvot@amazon.com> wrote in\n>>>> Looks like that both approaches have their pros and cons. I'm tempted\n>>>> to vote +1 on \"just changing\" the parent context to TopMemoryContext\n>>>> and not changing the allocations locations.\n>>> Yeah. It is safe more than anything and we don't have a problem there.\n>>>\n>>> So, I'm fine with just replacing the parent context at the three places.\n>>\n>> Attached a patch proposal to do so.\n> \n> Pushed. Thanks for the report and the fix!\n\nThanks! I just marked the corresponding CF entry as Committed.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Sep 2022 10:57:18 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to address creation of PgStat* contexts with null parent\n context" } ]
[ { "msg_contents": "Hi,\n\nI would like to submit a patch for the column wait_event of the view \npg_stat_activity. Please find it attached.\n\nThe view pg_stat_activity provides a snapshot of the actual backends' state \nwith the following columns:\n - wait_event contains the event name the backend is waiting for;\n - state of the backend, for instance active or idle.\n\nWe observe that in high-load applications the value of the column wait_event \nof pg_stat_activity is incorrect. For instance, it can be \"ClientRead\" \nfor active backends. According to the source code, the wait event \n\"ClientRead\" is possible only for idle or 'idle in transaction' \nbackends. So if a backend is active, the wait_event can't be 'ClientRead'.\n\nThe key parameter to reproduce the defect is a high value of max_connections \n(more than 1000). Let's do the following:\n - create a new database (initdb)\n - set max_connections to 10000\n - run pgbench in select-only mode (200 connections, 1 job)\n - fetch data from pg_stat_activity\n\nThe following query can be used to detect the problem:\n\nselect state, wait_event, backend_type, count(1)\n from pg_stat_activity\n group by 1,2,3;\n\nThe pgbench command is the following:\n\npgbench -n -c 200 -j 1 -T 60 -P 1 -M prepared -S postgres\n\nBefore the patch, the output looks like:\n\npostgres=# select state, wait_event, backend_type, count(1) from \npg_stat_activity group by 1,2,3;\n state | wait_event | backend_type | count\n--------+---------------------+------------------------------+-------\n idle | | client backend | 3\n active | | client backend | 1\n | WalWriterMain | walwriter | 1\n | CheckpointerMain | checkpointer | 1\n | LogicalLauncherMain | logical replication launcher | 1\n | AutoVacuumMain | autovacuum launcher | 1\n | BgWriterHibernate | background writer | 1\n active | ClientRead | client backend | 4\n idle | ClientRead | client backend | 193\n(9 rows)\n\nTime: 4.406 ms\n\nPlease pay attention to the lines with state 'active' and wait_event \n'ClientRead'.
According to the output above, we see 4 backends with such a \ncombination of state and wait_event.\n\nAfter the patch, the output is better:\n\npostgres=# select state, wait_event, backend_type, count(1) from \npg_stat_activity group by 1,2,3;\n state | wait_event | backend_type | count\n--------+---------------------+------------------------------+-------\n | | walwriter | 1\n active | | client backend | 5\n | LogicalLauncherMain | logical replication launcher | 1\n | AutoVacuumMain | autovacuum launcher | 1\n | | background writer | 1\n idle | ClientRead | client backend | 196\n | | checkpointer | 1\n(7 rows)\n\nTime: 1.520 ms\n\nThe lines with active-ClientRead and idle-nowait have disappeared and the \noutput looks as expected: 5 active backends with no wait, 196 idle \nconnections with wait event ClientRead.\n\nThe output is incorrect because the state & wait information are gathered at \ndifferent times. First, the view gathers backends' information into \nlocal structures and then it iterates over backends to enrich the data with \nthe wait event. To read the wait event it tries to take an LWLock per backend, so \niterating over backends takes some time (a few milliseconds). As a result, \nbackend wait events may change for quick queries.\n\nThe idea of the patch is to avoid iterating over backends and to gather all \ninformation at once.\n\nAs well, the patch changes the way memory is allocated for the local structure. \nBefore, it estimated the maximum size of the required memory and allocated it at \nonce. This could result in the allocation of dozens/hundreds of megabytes \nfor nothing. Now it allocates memory in chunks to reduce the overall amount \nof allocated memory and the time needed for allocation.\n\nIn the example above, the timing is reduced from 4.4 ms to 1.5 ms (3 times).\n\nThe patch is for PostgreSQL version 15.
If the fix is OK and is required for \nprevious versions, please let me know.\nIt's worth mentioning Yury Sokolov as a co-author of the patch.\n\nPlease feel free to ask any questions.\n\nBest regards,\n-- \nMichael Zhilin\nPostgres Professional\n+7(925)3366270\nhttps://www.postgrespro.ru", "msg_date": "Thu, 30 Jun 2022 23:05:28 +0300", "msg_from": "Michael Zhilin <m.zhilin@postgrespro.ru>", "msg_from_op": true, "msg_subject": "[PATCH] fix wait_event of pg_stat_activity in case of high amount of\n connections" }, { "msg_contents": "On Thu, Jun 30, 2022 at 11:05:28PM +0300, Michael Zhilin wrote:\n> I would like to submit patch for column wait_event of view pg_stat_activity.\n> Please find it attached.\n\nThanks. I had this thread on my list to look at later but then, by chance, I\ngot an erroneous nagios notification about a longrunning transaction, which\nseems to have been caused by this bug.\n\nYou can reproduce the problem by running a couple handfuls of these:\n\npsql -h /tmp postgres -Atc \"SELECT * FROM pg_stat_activity WHERE state='active' AND wait_event='ClientRead' LIMIT 1\"&\n\n> As result backend wait events may be changed for quick queries.\n\nYep. I realize now that I've been seeing this at one of our customers for a\npkey lookup query. We don't use high max_connections, but if you have a lot of\nconnections, you might be more likely to hit this.\n\nI agree that this is a bug, since it can (and did) cause false positives in a\nmonitoring system.\n\n> As well, patch changes way to allocate memory for local structure. Before it\n> estimates maximum size of required memory and allocate it at once. It could\n> result into allocation of dozens/hundreds of megabytes for nothing. Now it\n> allocates memory by chunks to reduce overall amount of allocated memory and\n> reduce time for allocation.\n\nI suggest presenting this as two patches: a 0001 patch to fix the bug, and\nproposed for backpatch, and an 0002 patch for master to improve memory usage.\nAs attached.
Actually, once 0001 is resolved, it may be good to start a\nseparate thread for 0002. I plan to add to the next CF.\n\nDid you really experience latency before the patch ? It seems like\nmax_connections=1000 would only allocate a bit more than 1MB.\n\nFor me, max_connections=10000 makes pg_stat_activity 10% slower than 100\n(11.8s vs 13.2s). With the patch, 10k is only about ~3% slower than 100. So\nthere is an improvement, but people here may not be excited about\nimproving performance for huge values of max_connections.\n\ntime for a in `seq 1 99`; do psql -h /tmp postgres -Atc \"SELECT FROM pg_stat_activity -- WHERE state='active' AND wait_event='ClientRead' LIMIT 1\"; done\n\nIs there a reason why you made separate allocations for appname, hostname,\nactivity, ssl, gss (I know that's how it's done originally, too) ? Couldn't it\nbe a single allocation ? You could do a single (re)allocation when an active\nbackend is found if the amount of free space is less than\n2*NAMEDATALEN+track_activity_query_size+sizeof(PgBackendSSLStatus)+sizeof(PgBackendGSSStatus)\nI have a patch for that, but I'll share it after 0001 is resolved.\n\nWhy did you change backendStatus to a pointer ?
Is it to minimize the size of \nthe structure to allow for huge values of max_connections ?\n\nNote that there were warnings from your 0002:\nbackend_status.c:723:21: warning: ‘localactivity_thr’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n\nThanks,\n-- \nJustin", "msg_date": "Thu, 7 Jul 2022 13:58:06 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix wait_event of pg_stat_activity in case of high\n amount of connections" }, { "msg_contents": "At Thu, 7 Jul 2022 13:58:06 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> I agree that this is a bug, since it can (and did) cause false positives in a\n> monitoring system.\n\nI'm not sure this is undoubtedly a bug but agree about the rest.\n\n> > As well, patch changes way to allocate memory for local structure. Before it\n> > estimates maximum size of required memory and allocate it at once. It could\n> > result into allocation of dozens/hundreds of megabytes for nothing. Now it\n> > allocates memory by chunks to reduce overall amount of allocated memory and\n> > reduce time for allocation.\n> \n> I suggest to present this as two patches: a 0001 patch to fix the bug, and\n> proposed for backpatch, and an 0002 patch for master to improve memory usage.\n> As attached. Actually, once 0001 is resolved, it may be good to start a\n> separate thread for 0002. I plan to add to the next CF.\n\nLooking at the patch 0001, I wonder if we can move wait_event_info from\nPGPROC to backend status.
If I'm not missing anything, I don't see a\nplausible requirement for it being in PROC, or rather see a reason to\nmove it to backend_status.\n\n> void\n> pgstat_report_activity(BackendState state, const char *cmd_str)\n> {\n> ...\n> \t\t\tbeentry->st_xact_start_timestamp = 0;\n> \t\t\tbeentry->st_query_id = UINT64CONST(0);\n> \t\t\tproc->wait_event_info = 0;\n> \t\t\tPGSTAT_END_WRITE_ACTIVITY(beentry);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 Jul 2022 11:39:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix wait_event of pg_stat_activity in case of high\n amount of connections" }, { "msg_contents": "On Thu, Jul 7, 2022 at 10:39 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 7 Jul 2022 13:58:06 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > I agree that this is a bug, since it can (and did) cause false positives in a\n> > monitoring system.\n>\n> I'm not this is undoubtfully a bug but agree about the rest.\n\nI don't agree that this is a bug, and even if it were, I don't think\nthis patch can fix it.\n\nLet's start with the second point first: pgstat_report_wait_start()\nand pgstat_report_wait_end() change the advertised wait event for a\nprocess, while the backend state is changed by\npgstat_report_activity(). Since those function calls are in different\nplaces, those changes are bound to happen at different times, and\ntherefore you can observe drift between the two values. Now perhaps\nthere are some one-directional guarantees: I think we probably always\nset the state to idle before we start reading from the client, and\nalways finish reading from the client before the state ceases to be\nidle. But I don't really see how that helps anything, because when you\nread those values, you must read one and then the other. 
If you read\nthe activity before the wait event, you might see the state before it\ngoes idle and then the wait event after it's reached ClientRead. If\nyou read the wait event before the activity, you might see the wait\nevent as ClientRead, and then by the time you check the activity the\nbackend might have gotten some data from the client and no longer be\nidle. The very best a patch like this can hope to do is narrow the\nrace condition enough that the discrepancies are observed less\nfrequently in practice.\n\nAnd that's why I think this is not a bug fix, or even a good idea.\nIt's just encouraging people to rely on something which can never be\nfully reliable in the way that the original poster is hoping. There\nwas never any intention of having wait events synchronized with the\npgstat_report_activity() stuff, and I think that's perfectly fine.\nBoth systems are trying to provide visibility into states that can\nchange very quickly, and therefore they need to be low-overhead, and\ntherefore they use very lightweight synchronization, which means that\nephemeral discrepancies are possible by nature. There are plenty of\nother examples of that as well. You can't for example query pg_locks\nand pg_stat_activity in the same query and expect that all and only\nthose backends that are apparently waiting for a lock in\npg_stat_activity will have an ungranted lock in pg_locks. 
It just\ndoesn't work like that, and there's a very good reason for that:\ntrying to make all of these introspection facilities behave in\nMVCC-like ways would be painful to code and probably end up slowing\nthe system down substantially.\n\nI think the right fix here is to change nothing in the code, and stop\nexpecting these things to be perfectly consistent with each other.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Jul 2022 09:44:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix wait_event of pg_stat_activity in case of high amount\n of connections" }, { "msg_contents": "On Fri, 08/07/2022 at 09:44 -0400, Robert Haas wrote:\n> On Thu, Jul 7, 2022 at 10:39 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Thu, 7 Jul 2022 13:58:06 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > > I agree that this is a bug, since it can (and did) cause false positives in a\n> > > monitoring system.\n> > \n> > I'm not sure this is undoubtedly a bug but agree about the rest.\n> \n> I don't agree that this is a bug, and even if it were, I don't think\n> this patch can fix it.\n> \n> Let's start with the second point first: pgstat_report_wait_start()\n> and pgstat_report_wait_end() change the advertised wait event for a\n> process, while the backend state is changed by\n> pgstat_report_activity(). Since those function calls are in different\n> places, those changes are bound to happen at different times, and\n> therefore you can observe drift between the two values. Now perhaps\n> there are some one-directional guarantees: I think we probably always\n> set the state to idle before we start reading from the client, and\n> always finish reading from the client before the state ceases to be\n> idle. But I don't really see how that helps anything, because when you\n> read those values, you must read one and then the other.
If you read\n> the activity before the wait event, you might see the state before it\n> goes idle and then the wait event after it's reached ClientRead. If\n> you read the wait event before the activity, you might see the wait\n> event as ClientRead, and then by the time you check the activity the\n> backend might have gotten some data from the client and no longer be\n> idle. The very best a patch like this can hope to do is narrow the\n> race condition enough that the discrepancies are observed less\n> frequently in practice.\n> \n> And that's why I think this is not a bug fix, or even a good idea.\n> It's just encouraging people to rely on something which can never be\n> fully reliable in the way that the original poster is hoping. There\n> was never any intention of having wait events synchronized with the\n> pgstat_report_activity() stuff, and I think that's perfectly fine.\n> Both systems are trying to provide visibility into states that can\n> change very quickly, and therefore they need to be low-overhead, and\n> therefore they use very lightweight synchronization, which means that\n> ephemeral discrepancies are possible by nature. There are plenty of\n> other examples of that as well. You can't for example query pg_locks\n> and pg_stat_activity in the same query and expect that all and only\n> those backends that are apparently waiting for a lock in\n> pg_stat_activity will have an ungranted lock in pg_locks. 
It just\n> doesn't work like that, and there's a very good reason for that:\n> trying to make all of these introspection facilities behave in\n> MVCC-like ways would be painful to code and probably end up slowing\n> the system down substantially.\n> \n> I think the right fix here is to change nothing in the code, and stop\n> expecting these things to be perfectly consistent with each other.\n\nI see an analogy with a Bus Stop:\n- there is a bus stop\n- there is a schedule of the bus arriving at this stop\n- there are passengers, who every day travel with this bus\n\nThe bus occasionally comes later... Well, it comes later quite often...\n\nWhich way should the Mayor (or other responsible person) act?\n\nFirst possibility: do all we can so the bus comes on schedule. And\nalthough there will be no 100% guarantee, it will rise from 90%\nto 99%.\n\nSecond possibility: tell the passengers \"you should not rely on the bus\nschedule, and we will not do anything to make it more reliable\".\n\nIf I were a passenger, I'd prefer the first choice.\n\n\nregards\n\nYura\n\n\n\n", "msg_date": "Fri, 08 Jul 2022 17:11:03 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix wait_event of pg_stat_activity in case of high\n amount of connections" }, { "msg_contents": "On Fri, Jul 8, 2022 at 10:11 AM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> I see an analogy with a Bus Stop:\n> - there is a bus stop\n> - there is a schedule of the bus arriving at this stop\n> - there are passengers, who every day travel with this bus\n>\n> The bus occasionally comes later... Well, it comes later quite often...\n>\n> Which way should the Mayor (or other responsible person) act?\n\nI do not think that is a good analogy, because a bus schedule is an\nimplicit promise - or at least a strong suggestion - that the bus will\narrive at the scheduled time.\n\nIn this case, who made such a promise?
The original post presents it\nas fact that these systems should give compatible answers at all\ntimes, but there's nothing in the code or documentation to suggest\nthat this is true.\n\nIMHO, a better analogy would be if you noticed that the 7:03am bus was\nnormally blue and you took that one because you have a small child who\nlikes the color blue and it makes them happy to take a blue bus. And\nthen one day the bus at that time is a red bus and your child is upset\nand you call the major (or other responsible person) to complain.\nThey're probably not going to handle that situation by trying to send\na blue bus at 7:03am as often as possible. They're going to tell you\nthat they only promised you a bus at 7:03am, not what color it would\nbe.\n\nPerhaps that's not an ideal analogy either, because the reported wait\nevent and the reported activity are more closely related than the time\nof a bus is to the color of the bus. But I think it's still true that\nnobody ever promised that those values would be compatible with each\nother, and that's not really fixable, and that there are lots of other\ncases just like this one which can't be fixed either.\n\nI think that the more we try to pretend like it is possible to make\nthese values seem like they are synchronized, the more unhappy people\nwill be in the unavoidable cases where they aren't, and the more\npressure there will be to try to tighten it up even further. 
That's\nlikely to result in code that is more complex and slower, which I do\nnot want, and especially not for the sake of avoiding a harmless\nreporting discrepancy.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Jul 2022 11:04:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix wait_event of pg_stat_activity in case of high amount\n of connections" }, { "msg_contents": "On Fri, 08/07/2022 at 11:04 -0400, Robert Haas wrote:\n> On Fri, Jul 8, 2022 at 10:11 AM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > I see analogy with Bus Stop:\n> > - there is bus stop\n> > - there is a schedule of bus arriving this top\n> > - there are passengers, who every day travel with this bus\n> > \n> > Bus occasionally comes later... Well, it comes later quite often...\n> > \n> > Which way Major (or other responsible person) should act?\n> \n> I do not think that is a good analogy, because a bus schedule is an\n> implicit promise - or at least a strong suggestion - that the bus will\n> arrive at the scheduled time.\n\nThere is an implicit promise: those data are written in a single row.\nIf you want to point out that they are NOT related to each other, return them\nin different rows or even in different view tables.\n\n> In this case, who made such a promise? The original post presents it\n> as fact that these systems should give compatible answers at all\n> times, but there's nothing in the code or documentation to suggest\n> that this is true.\n> \n> IMHO, a better analogy would be if you noticed that the 7:03am bus was\n> normally blue and you took that one because you have a small child who\n> likes the color blue and it makes them happy to take a blue bus.
And\n> then one day the bus at that time is a red bus and your child is upset\n> and you call the major (or other responsible person) to complain.\n> They're probably not going to handle that situation by trying to send\n> a blue bus at 7:03am as often as possible. They're going to tell you\n> that they only promised you a bus at 7:03am, not what color it would\n> be.\n> \n> Perhaps that's not an ideal analogy either, because the reported wait\n> event and the reported activity are more closely related than the time\n> of a bus is to the color of the bus. But I think it's still true that\n> nobody ever promised that those values would be compatible with each\n> other, and that's not really fixable, and that there are lots of other\n> cases just like this one which can't be fixed either.\n> \n> I think that the more we try to pretend like it is possible to make\n> these values seem like they are synchronized, the more unhappy people\n> will be in the unavoidable cases where they aren't, and the more\n> pressure there will be to try to tighten it up even further. That's\n> likely to result in code that is more complex and slower, which I do\n> not want, and especially not for the sake of avoiding a harmless\n> reporting discrepancy.\n\nThen just don't return them together, right?\n\n\n\n", "msg_date": "Sat, 09 Jul 2022 02:32:13 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix wait_event of pg_stat_activity in case of high\n amount of connections" }, { "msg_contents": "On Sat, 09/07/2022 at 02:32 +0300, Yura Sokolov wrote:\n> On Fri, 08/07/2022 at 11:04 -0400, Robert Haas wrote:\n> > On Fri, Jul 8, 2022 at 10:11 AM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > > I see analogy with Bus Stop:\n> > > - there is bus stop\n> > > - there is a schedule of bus arriving this top\n> > > - there are passengers, who every day travel with this bus\n> > > \n> > > Bus occasionally comes later...
Well, it comes later quite often...\n> > > \n> > > Which way Major (or other responsible person) should act?\n> > \n> > I do not think that is a good analogy, because a bus schedule is an\n> > implicit promise - or at least a strong suggestion - that the bus will\n> > arrive at the scheduled time.\n> \n> There is implicit promise: those data are written in single row.\n> If you want to notice they are NOT related to each other, return them\n> in different rows or even in different view tables.\n> \n> > In this case, who made such a promise? The original post presents it\n> > as fact that these systems should give compatible answers at all\n> > times, but there's nothing in the code or documentation to suggest\n> > that this is true.\n> > \n> > IMHO, a better analogy would be if you noticed that the 7:03am bus was\n> > normally blue and you took that one because you have a small child who\n> > likes the color blue and it makes them happy to take a blue bus. And\n> > then one day the bus at that time is a red bus and your child is upset\n> > and you call the major (or other responsible person) to complain.\n> > They're probably not going to handle that situation by trying to send\n> > a blue bus at 7:03am as often as possible. They're going to tell you\n> > that they only promised you a bus at 7:03am, not what color it would\n> > be.\n> > \n> > Perhaps that's not an ideal analogy either, because the reported wait\n> > event and the reported activity are more closely related than the time\n> > of a bus is to the color of the bus. 
But I think it's still true that\n> > nobody ever promised that those values would be compatible with each\n> > other, and that's not really fixable, and that there are lots of other\n> > cases just like this one which can't be fixed either.\n> > \n> > I think that the more we try to pretend like it is possible to make\n> > these values seem like they are synchronized, the more unhappy people\n> > will be in the unavoidable cases where they aren't, and the more\n> > pressure there will be to try to tighten it up even further. That's\n> > likely to result in code that is more complex and slower, which I do\n> > not want, and especially not for the sake of avoiding a harmless\n> > reporting discrepancy.\n> \n> Then just don't return them together, right?\n\nWell, I'm a bit more hot-headed than I need to be. I apologize for that.\n\nLet's look at the situation from a compromise point of view:\n- We are saying: we could make this view more synchronous (and faster).\n- You are saying: it will never be totally synchronous, and it was a\n mistake that we didn't mention the issue in the documentation.\n\nWhy not do both?\nWhy can't we make it more synchronous (and faster) AND mention in the\ndocumentation that it is not totally synchronous and never will be?\n\n--------\n\nregards\n\nYura\n\n\n\n", "msg_date": "Sat, 09 Jul 2022 03:02:53 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix wait_event of pg_stat_activity in case of high\n amount of connections" } ]
[ { "msg_contents": "Hi,\n\nI had proposed $subject for some RI trigger functions in the last dev\ncycle [1]. Briefly, the proposal was to stop using an SQL query\n(using the SPI interface) for RI checks that could be done by directly\nscanning the primary/unique key index of the referenced table, which\nmust always be there. While acknowledging that the patch showed a\nclear performance benefit, Tom gave the feedback that doing so only\nfor some RI checks but not others is not very desirable [2].\n\nThe other cases include querying the referencing table when deleting\nfrom the referenced table to handle the referential action clause.\nTwo main hurdles to not using an SQL query for those cases that I\nhadn't addressed were:\n\n1) What should the hard-coded plan be? Referencing table may not\nalways have an index on the queried foreign key columns. Even if\nthere is one, it's not clear if scanning it is *always* better than\nscanning the whole table to find the matching rows.\n\n2) While the RI check functions for RESTRICT and NO ACTION actions\nissue a `SELECT ... LIMIT 1` query, those for CASCADE and SET actions\nissue a `UPDATE SET / DELETE`. I had no good idea as to how much of\nthe executor functionality would need to be replicated in order to\nperform the update/delete actions without leaving the ri_triggers.c\nmodule.\n\nWe had an unconference session to discuss these concerns at this\nyear's PGCon, whose minutes can be found at [3]. Among other\nsuggestions, one was to only stop using the SPI interface to issue the\nRI check/action queries, while continuing to use the same SQL queries\nas now. That means creating a copy in ri_triggers.c of the\nfunctionality of SPI_prepare(), which creates the CachedPlanSource for\nthe query, and of SPI_execute_plan(), which executes a CachedPlan\nobtained from that CachedPlanSource to produce the result tuples if\nany. 
That may not have the same performance boost as skipping the\nplanner/plancache and the executor altogether, but at least it becomes\neasier to check the difference between semantic behaviors of an RI\nquery implemented as SQL and another implemented using some hard-coded\nplan if we choose to do so, because the logic would no longer be divided\nbetween ri_triggers.c and spi.c. I think that will, at least to some\ndegree, alleviate the concerns that Tom expressed about the previous\neffort.\n\nSo, I hacked together a patch (attached 0001) that invents an \"RI\nplan\" construct (struct RIPlan) to replace the use of an \"SPI plan\"\n(struct _SPI_plan). While the latter encapsulates the\nCachedPlanSource of an RI query directly, I decided to make it an\noption for a given RI trigger to specify whether it would like to have\nits RIPlan store a CachedPlanSource if its check is still implemented as\nan SQL query, or something else if the implementation will be a\nhard-coded plan. RIPlan contains callbacks to create, execute,\nvalidate, and free a plan that implements a given RI query. For\nexample, an RI plan for checks implemented as SQL will call the\ncallback ri_SqlStringPlanCreate() to parse the query and allocate a\nCachedPlanSource, and ri_SqlStringPlanExecute() to obtain a CachedPlan\nand execute its PlannedStmt using the executor interface directly.\nThe remaining callbacks ri_SqlStringPlanIsValid() and\nri_SqlStringPlanFree() use CachedPlanIsValid() and DropCachedPlan(),\nrespectively, to validate and free a CachedPlan.\n\nWith that in place, I decided to rebase my previous patch [1] to use\nthis new interface and the result is attached 0002. One notable\nimprovement over the previous standalone patch is that the snapshot\nsetting logic need no longer be in the function implementing the\nproposed hard-coded plan for RI check triggers. 
That logic and other\nconfiguration needed before executing the plan is now a part of the\ntop-level ri_PerformCheck() function that is shared between various RI\nplan implementations. So whether an RI check or action is implemented\nusing an SQL plan or a hard-coded plan, the execution should proceed\nwith effectively the same config/environment.\n\nI will continue investigating what to do about points (1) and (2)\nmentioned above and see if we can do away with using SQL in the\nremaining cases.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://postgr.es/m/CA+HiwqGkfJfYdeq5vHPh6eqPKjSbfpDDY+j-kXYFePQedtSLeg@mail.gmail.com\n\n[2] https://postgr.es/m/3400437.1649363527%40sss.pgh.pa.us\n\n[3] https://wiki.postgresql.org/wiki/PgCon_2022_Developer_Unconference#Removing_SPI_from_RI_trigger_implementation", "msg_date": "Fri, 1 Jul 2022 15:22:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Thu, Jun 30, 2022 at 11:23 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> I will continue investigating what to do about points (1) and (2)\n> mentioned above and see if we can do away with using SQL in the\n> remaining cases.\n\nHi Amit, looks like isolation tests are failing in cfbot:\n\n https://cirrus-ci.com/task/6642884727275520\n\nNote also the uninitialized variable warning that cfbot picked up;\nthat may or may not be related.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 5 Jul 2022 11:24:19 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Wed, Jul 6, 2022 at 3:24 AM Jacob Champion <jchampion@timescale.com> wrote:\n> On Thu, Jun 30, 2022 at 11:23 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > I will continue investigating what to do about points (1) and (2)\n> > mentioned above and see if we 
can do away with using SQL in the\n> > remaining cases.\n>\n> Hi Amit, looks like isolation tests are failing in cfbot:\n>\n> https://cirrus-ci.com/task/6642884727275520\n>\n> Note also the uninitialized variable warning that cfbot picked up;\n> that may or may not be related.\n\nThanks for the heads up.\n\nYeah, I noticed the warning when I compiled with a different set of\ngcc parameters, though not the isolation test failures, so not sure\nwhat the bot is running into.\n\nAttaching updated patches which fix the warning and a few other issues\nI noticed.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 Jul 2022 11:55:26 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Wed, Jul 6, 2022 at 11:55 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Jul 6, 2022 at 3:24 AM Jacob Champion <jchampion@timescale.com> wrote:\n> > On Thu, Jun 30, 2022 at 11:23 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > >\n> > > I will continue investigating what to do about points (1) and (2)\n> > > mentioned above and see if we can do away with using SQL in the\n> > > remaining cases.\n> >\n> > Hi Amit, looks like isolation tests are failing in cfbot:\n> >\n> > https://cirrus-ci.com/task/6642884727275520\n> >\n> > Note also the uninitialized variable warning that cfbot picked up;\n> > that may or may not be related.\n>\n> Thanks for the heads up.\n>\n> Yeah, I noticed the warning when I compiled with a different set of\n> gcc parameters, though not the isolation test failures, so not sure\n> what the bot is running into.\n>\n> Attaching updated patches which fix the warning and a few other issues\n> I noticed.\n\nHmm, cfbot is telling me that detach-partition-concurrently-2 is\nfailing on Cirrus-CI [1]. 
Will look into it.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://cirrus-ci.com/task/5253369525698560?logs=test_world#L317\n\n\n", "msg_date": "Thu, 7 Jul 2022 14:45:23 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Fri, Jul 1, 2022 at 2:23 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> So, I hacked together a patch (attached 0001) that invents an \"RI\n> plan\" construct (struct RIPlan) to replace the use of an \"SPI plan\"\n> (struct _SPI_plan).\n>\n> With that in place, I decided to rebase my previous patch [1] to use\n> this new interface and the result is attached 0002.\n\nI think inventing something like RIPlan is probably reasonable, but\nI'm not sure how much it really does to address the objections that\nwere raised previously. How do we know that ri_LookupKeyInPkRel does\nall the same things that executing a plan would have done? I see that\nfunction contains permission-checking logic, for example, as well as\nsnapshot-related logic, and maybe there are other subsystems to worry\nabout, like rules or triggers or row-level security. Maybe there's no\nanswer to that problem other than careful manual verification, because\nafter all the only way to be 100% certain we're doing all the things\nthat would happen if you executed a plan is to execute a plan, which\nkind of defeats the point of the whole thing. All I'm saying is that\nI'm not sure that this refactoring in and of itself addresses that\nconcern.\n\nAs far as 0002 goes, the part I'm most skeptical about is this:\n\n+static bool\n+ri_LookupKeyInPkRelPlanIsValid(RI_Plan *plan)\n+{\n+ /* Never store anything that can be invalidated. */\n+ return true;\n+}\n\nIsn't that leaving rather a lot on the table? 
ri_LookupKeyInPkRel is\ngoing to be called a lot of times and do a lot of things over and over\nagain that maybe only need to be done once, like checking permissions\nand looking up the operators to use and reopening the index. And all\nthe stuff ExecGetLeafPartitionForKey does too, yikes that's a lot of\nstuff. Now maybe that's what Tom wants, I don't know. Certainly, the\nexisting SQL-based implementation is going to do that stuff on every\ncall, too; I'm just not sure that's a good thing. I think there's some\ndebate to be had here over what behavior we need to preserve exactly\nvs. what we can and should change. For instance, it seems clear to me\nthat leaving out permissions checks altogether would be not OK, but if\nthis implementation arranged to cache the results of a permission\ncheck and the SQL-based implementations don't, is that OK? Maybe Tom\nwould argue that it isn't, because he considers that a part of the\nuser-visible behavior, but I'm not sure that's the right view of it. I\nthink what we're promising the user is that we will check permissions,\nnot that we're going to do it separately for every trigger firing, or\neven that every kind of trigger is going to do it exactly the same\nnumber of times as every other trigger. I think we need some input\nfrom Tom (and perhaps others) on how rigidly we need to maintain the\nhigh-level behavior here before we can really say much about whether\nthe implementation is as good as it can be.\n\nI suspect, though, that there's more that can be done here in terms of\nsharing code. For instance, picking on the permissions checking logic,\npresumably that's something that every non-SQL implementation would\nneed to do. But the rest of what's in ri_LookupKeyInPkRel() is\nspecific to one particular kind of trigger. 
If we had multiple non-SQL\ntrigger types, we'd want to somehow have common logic for permissions\nchecking for all of them.\n\nI also suspect that we ought to have a separation between planning and\nexecution even for non-SQL based things. You don't really have that\nhere. What that ought to look like, though, depends on the answers to\nthe questions above, about how exactly we think we need to reproduce\nthe existing behavior.\n\nI find my ego slightly wounded by the comment that \"the partition\ndescriptor machinery has a hack that assumes that the queries\noriginating in this module push the latest snapshot in the\ntransaction-snapshot mode.\" It's true that the partition descriptor\nmachinery gives different answers depending on the active snapshot,\nbut, err, is that a hack, or just a perfectly reasonable design\ndecision? An alternative might be for PartitionDirectoryLookup to take\na snapshot as an explicit argument rather than relying on the global\nvariable to get that information from context. I generally feel that\nwe rely too much on global variables where we should be passing around\nexplicit parameters, so if you're just arguing that explicit\nparameters would be better here, then I agree and just didn't think of\nit. If you're arguing that making the answer depend on the snapshot is\nitself a bad idea, I don't agree with that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Jul 2022 12:14:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... I think there's some\n> debate to be had here over what behavior we need to preserve exactly\n> vs. what we can and should change.\n\nFor sure. For example, people occasionally complain because\nuser-defined triggers can defeat RI integrity checks. Should we\nchange that? 
I dunno, but if we're not using the standard executor\nthen there's at least some room to consider it. I think people would\nbe upset if we stopped firing user triggers at all; but if triggers\ncouldn't defeat RI actions short of throwing a transaction-aborting\nerror, I believe a lot of people would consider that an improvement.\n\n> For instance, it seems clear to me\n> that leaving out permissions checks altogether would be not OK, but if\n> this implementation arranged to cache the results of a permission\n> check and the SQL-based implementations don't, is that OK? Maybe Tom\n> would argue that it isn't, because he considers that a part of the\n> user-visible behavior, but I'm not sure that's the right view of it.\n\nUh ... if such caching behavior is at all competently implemented,\nit will be transparent because the cache will notice and respond to\nevents that should change its outputs. So I don't foresee a semantic\nproblem there. It may well be that it's practical to cache\npermissions-check info for RI checks when it isn't for more general\nqueries, so looking into ideas like that seems well within scope here.\n(Or then again, maybe we should be building a more general permissions\ncache?)\n\nI'm too tired to have more than that to say right now, but I agree\nthat there is room for discussion about exactly what behavior we\nwant to preserve.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Jul 2022 22:07:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Fri, Jul 8, 2022 at 10:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Uh ... 
if such caching behavior is at all competently implemented,\n> it will be transparent because the cache will notice and respond to\n> events that should change its outputs.\n\nWell, that assumes that we emit appropriate invalidations in every\nplace where permissions are updated, and take appropriate locks every\nplace where they are checked. I think that the first one might be too\noptimistic, and the second one is definitely too optimistic. For\ninstance, consider pg_proc_ownercheck. There's no lock of any kind\ntaken on the function here, and at least in typical cases, I don't\nthink the caller takes one either. Compare the extensive tap-dancing\naround locking and permissions checking in RangeVarGetRelidExtended\nagainst the blithe unconcern in FuncnameGetCandidates.\n\nI believe that of all the types of SQL objects in the system, only\nrelations have anything like proper interlocking against concurrent\nDDL. Other examples of not caring at all include LookupCollation() and\nLookupTypeNameExtended(). There's just no heavyweight locking here at\nall, and so no invalidation based on sinval messages can ever be\nreliable.\n\nGRANT and REVOKE don't take proper locks, either, even on tables:\n\nrhaas=# begin;\nBEGIN\nrhaas=*# lock table pgbench_accounts;\nLOCK TABLE\nrhaas=*#\n\nThen, in another session:\n\nrhaas=# create role foo;\nCREATE ROLE\nrhaas=# grant select on pgbench_accounts to foo;\nGRANT\nrhaas=#\n\nExecuting \"SELECT * FROM pgbench_accounts\" in the other session would\nhave blocked, but the GRANT has no problem at all.\n\nI don't see that any of this is this patch's job to fix. If nobody's\ncared enough to fix it any time in the past 20 years, or just didn't\nwant to pay the locking cost, well then we probably don't need to do\nit now either. 
But I think it means that even the slightest change in\nthe timing or frequency of permissions checks is in theory a\nuser-visible change, because there are no grounds for assuming that\nthe permissions on any of the objects involved aren't changing while\nthe query is executing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Jul 2022 14:15:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Sat, Jul 9, 2022 at 1:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Jul 1, 2022 at 2:23 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > So, I hacked together a patch (attached 0001) that invents an \"RI\n> > plan\" construct (struct RIPlan) to replace the use of an \"SPI plan\"\n> > (struct _SPI_plan).\n> >\n> > With that in place, I decided to rebase my previous patch [1] to use\n> > this new interface and the result is attached 0002.\n>\n\nThanks for taking a look at this. I'll try to respond to other points\nin a separate email, but I wanted to clarify something about below:\n\n> I find my ego slightly wounded by the comment that \"the partition\n> descriptor machinery has a hack that assumes that the queries\n> originating in this module push the latest snapshot in the\n> transaction-snapshot mode.\" It's true that the partition descriptor\n> machinery gives different answers depending on the active snapshot,\n> but, err, is that a hack, or just a perfectly reasonable design\n> decision?\n\nI think my calling it a hack of \"partition descriptor machinery\" is\nnot entirely fair (sorry), because it's talking about the following\ncomment in find_inheritance_children_extended(), which describes it as\nbeing a hack, so I mentioned the word \"hack\" in my comment too:\n\n /*\n * Cope with partitions concurrently being detached. 
When we see a\n * partition marked \"detach pending\", we omit it from the returned set\n * of visible partitions if caller requested that and the tuple's xmin\n * does not appear in progress to the active snapshot. (If there's no\n * active snapshot set, that means we're not running a user query, so\n * it's OK to always include detached partitions in that case; if the\n * xmin is still running to the active snapshot, then the partition\n * has not been detached yet and so we include it.)\n *\n * The reason for this hack is that we want to avoid seeing the\n * partition as alive in RI queries during REPEATABLE READ or\n * SERIALIZABLE transactions: such queries use a different snapshot\n * than the one used by regular (user) queries.\n */\n\nThat bit came in to make DETACH CONCURRENTLY produce sane answers for\nRI queries in some cases.\n\nI guess my comment should really have said something like:\n\nHACK: find_inheritance_children_extended() has a hack that assumes\nthat the queries originating in this module push the latest snapshot\nin transaction-snapshot mode.\n\n> An alternative might be for PartitionDirectoryLookup to take\n> a snapshot as an explicit argument rather than relying on the global\n> variable to get that information from context. I generally feel that\n> we rely too much on global variables where we should be passing around\n> explicit parameters, so if you're just arguing that explicit\n> parameters would be better here, then I agree and just didn't think of\n> it. 
If you're arguing that making the answer depend on the snapshot is\n> itself a bad idea, I don't agree with that.\n\nNo, I'm not arguing that using a snapshot there is wrong and haven't\nreally thought hard about an alternative.\n\nI tend to agree passing a snapshot explicitly might be better than\nusing ActiveSnapshot stuff for this.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Jul 2022 20:59:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Wed, Jul 13, 2022 at 8:59 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sat, Jul 9, 2022 at 1:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Thanks for taking a look at this. I'll try to respond to other points\n> in a separate email, but I wanted to clarify something about below:\n>\n> > I find my ego slightly wounded by the comment that \"the partition\n> > descriptor machinery has a hack that assumes that the queries\n> > originating in this module push the latest snapshot in the\n> > transaction-snapshot mode.\" It's true that the partition descriptor\n> > machinery gives different answers depending on the active snapshot,\n> > but, err, is that a hack, or just a perfectly reasonable design\n> > decision?\n>\n> I think my calling it a hack of \"partition descriptor machinery\" is\n> not entirely fair (sorry), because it's talking about the following\n> comment in find_inheritance_children_extended(), which describes it as\n> being a hack, so I mentioned the word \"hack\" in my comment too:\n>\n> /*\n> * Cope with partitions concurrently being detached. When we see a\n> * partition marked \"detach pending\", we omit it from the returned set\n> * of visible partitions if caller requested that and the tuple's xmin\n> * does not appear in progress to the active snapshot. 
(If there's no\n> * active snapshot set, that means we're not running a user query, so\n> * it's OK to always include detached partitions in that case; if the\n> * xmin is still running to the active snapshot, then the partition\n> * has not been detached yet and so we include it.)\n> *\n> * The reason for this hack is that we want to avoid seeing the\n> * partition as alive in RI queries during REPEATABLE READ or\n> * SERIALIZABLE transactions: such queries use a different snapshot\n> * than the one used by regular (user) queries.\n> */\n>\n> That bit came in to make DETACH CONCURRENTLY produce sane answers for\n> RI queries in some cases.\n>\n> I guess my comment should really have said something like:\n>\n> HACK: find_inheritance_children_extended() has a hack that assumes\n> that the queries originating in this module push the latest snapshot\n> in transaction-snapshot mode.\n\nPosting a new version with this bit fixed; cfbot complained that 0002\nneeded a rebase over 3592e0ff98.\n\nI will try to come up with a patch to enhance the PartitionDirectory\ninterface to allow passing the snapshot to use when scanning\npg_inherits explicitly, so we won't need the above \"hack\".\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Aug 2022 13:05:22 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Thu, Aug 4, 2022 at 1:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Jul 13, 2022 at 8:59 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > That bit came in to make DETACH CONCURRENTLY produce sane answers for\n> > RI queries in some cases.\n> >\n> > I guess my comment should really have said something like:\n> >\n> > HACK: find_inheritance_children_extended() has a hack that assumes\n> > that the queries originating in this module push the latest snapshot\n> > in transaction-snapshot mode.\n>\n> Posting a 
new version with this bit fixed; cfbot complained that 0002\n> needed a rebase over 3592e0ff98.\n>\n> I will try to come up with a patch to enhance the PartitionDirectory\n> interface to allow passing the snapshot to use when scanning\n> pg_inherits explicitly, so we won't need the above \"hack\".\n\nSorry about the delay.\n\nSo I came up with such a patch that is attached as 0003.\n\nThe main problem I want to fix with it is the need for RI_FKey_check()\nto \"force\"-push the latest snapshot that the PartitionDesc code wants\nto use to correctly include or omit a detach-pending partition from\nthe view of that function's RI query. Scribbling on ActiveSnapshot\nthat way means that *all* scans involved in the execution of that\nquery now see a snapshot that they shouldn't likely be seeing; a bug\nresulting from this has been demonstrated in a test case added by the\ncommit 00cb86e75d.\n\nThe fix is to make RI_FKey_check(), or really its RI_Plan's execution\nfunction ri_LookupKeyInPkRel() added by patch 0002, pass the latest\nsnapshot explicitly as a parameter of PartitionDirectoryLookup(),\nwhich passes it down to the PartitionDesc code. No need to manipulate\nActiveSnapshot. The actual fix is in patch 0004, which I extracted\nout of 0002 to keep the latter a mere refactoring patch without any\nsemantic changes (though a bit more on that below). BTW, I don't know\nof a way to back-patch a fix like this for the bug, because there is\nno way other than ActiveSnapshot to pass the desired snapshot to the\nPartitionDesc code if the only way we get to that code is by executing\nan SQL query plan.\n\n0003 moves the relevant logic out of\nfind_inheritance_children_extended() into its callers. 
The logic of\ndeciding which snapshot to use to determine if a detach-pending\npartition should indeed be omitted from the consideration of a caller\nbased on the result of checking the visibility of the corresponding\npg_inherits row with the snapshot; it just uses ActiveSnapshot now.\nGiven the problems with using ActiveSnapshot mentioned above, I think\nit is better to make the callers decide the snapshot and pass it using\na parameter named omit_detached_snapshot. Only PartitionDesc code\nactually cares about sending anything but the parent query's\nActiveSnapshot, so the PartitionDesc and PartitionDirectory interface\nhas been changed to add the same omit_detached_snapshot parameter.\nfind_inheritance_children(), the other caller used in many sites that\nlook at a table's partitions, defaults to using ActiveSnapshot, which\ndoes not seem problematic. Furthermore, only RI_FKey_check() needs to\npass anything other than ActiveSnapshot, so other users of\nPartitionDesc, like user queries, still default to using the\nActiveSnapshot, which doesn't have any known problems either.\n\n0001 and 0002 are mostly unchanged in this version, except I took out\nthe visibility bug-fix from 0002 into 0004 described above, which\nlooks better using the interface added by 0003 anyway. 
I need to\naddress the main concern that it's still hard to be sure that the\npatch in its current form doesn't break any user-level semantics of\nthese RI check triggers and other concerns about the implementation\nthat Robert expressed in [1].\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoaiTNj4DgQy42OT9JmTTP1NWcMV%2Bke0i%3D%2Ba7%3DVgnzqGXw%40mail.gmail.com", "msg_date": "Thu, 29 Sep 2022 13:46:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Thu, Sep 29, 2022 at 1:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Sorry about the delay.\n>\n> So I came up with such a patch that is attached as 0003.\n>\n> The main problem I want to fix with it is the need for RI_FKey_check()\n> to \"force\"-push the latest snapshot that the PartitionDesc code wants\n> to use to correctly include or omit a detach-pending partition from\n> the view of that function's RI query. Scribbling on ActiveSnapshot\n> that way means that *all* scans involved in the execution of that\n> query now see a snapshot that they shouldn't likely be seeing; a bug\n> resulting from this has been demonstrated in a test case added by the\n> commit 00cb86e75d.\n>\n> The fix is to make RI_FKey_check(), or really its RI_Plan's execution\n> function ri_LookupKeyInPkRel() added by patch 0002, pass the latest\n> snapshot explicitly as a parameter of PartitionDirectoryLookup(),\n> which passes it down to the PartitionDesc code. No need to manipulate\n> ActiveSnapshot. The actual fix is in patch 0004, which I extracted\n> out of 0002 to keep the latter a mere refactoring patch without any\n> semantic changes (though a bit more on that below). 
BTW, I don't know\n> of a way to back-patch a fix like this for the bug, because there is\n> no way other than ActiveSnapshot to pass the desired snapshot to the\n> PartitionDesc code if the only way we get to that code is by executing\n> an SQL query plan.\n>\n> 0003 moves the relevant logic out of\n> find_inheritance_children_extended() into its callers. The logic of\n> deciding which snapshot to use to determine if a detach-pending\n> partition should indeed be omitted from the consideration of a caller\n> based on the result of checking the visibility of the corresponding\n> pg_inherits row with the snapshot; it just uses ActiveSnapshot now.\n> Given the problems with using ActiveSnapshot mentioned above, I think\n> it is better to make the callers decide the snapshot and pass it using\n> a parameter named omit_detached_snapshot. Only PartitionDesc code\n> actually cares about sending anything but the parent query's\n> ActiveSnapshot, so the PartitionDesc and PartitionDirectory interface\n> has been changed to add the same omit_detached_snapshot parameter.\n> find_inheritance_children(), the other caller used in many sites that\n> look at a table's partitions, defaults to using ActiveSnapshot, which\n> does not seem problematic. Furthermore, only RI_FKey_check() needs to\n> pass anything other than ActiveSnapshot, so other users of\n> PartitionDesc, like user queries, still default to using the\n> ActiveSnapshot, which doesn't have any known problems either.\n>\n> 0001 and 0002 are mostly unchanged in this version, except I took out\n> the visibility bug-fix from 0002 into 0004 described above, which\n> looks better using the interface added by 0003 anyway. 
I need to\n> address the main concern that it's still hard to be sure that the\n> patch in its current form doesn't break any user-level semantics of\n> these RI check triggers and other concerns about the implementation\n> that Robert expressed in [1].\n\nOops, I apparently posted the wrong 0004, containing a bug that\ncrashes `make check`.\n\nFixed version attached.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 29 Sep 2022 16:43:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Thu, Sep 29, 2022 at 4:43 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Sep 29, 2022 at 1:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Sorry about the delay.\n> >\n> > So I came up with such a patch that is attached as 0003.\n> >\n> > The main problem I want to fix with it is the need for RI_FKey_check()\n> > to \"force\"-push the latest snapshot that the PartitionDesc code wants\n> > to use to correctly include or omit a detach-pending partition from\n> > the view of that function's RI query. Scribbling on ActiveSnapshot\n> > that way means that *all* scans involved in the execution of that\n> > query now see a snapshot that they shouldn't likely be seeing; a bug\n> > resulting from this has been demonstrated in a test case added by the\n> > commit 00cb86e75d.\n> >\n> > The fix is to make RI_FKey_check(), or really its RI_Plan's execution\n> > function ri_LookupKeyInPkRel() added by patch 0002, pass the latest\n> > snapshot explicitly as a parameter of PartitionDirectoryLookup(),\n> > which passes it down to the PartitionDesc code. No need to manipulate\n> > ActiveSnapshot. The actual fix is in patch 0004, which I extracted\n> > out of 0002 to keep the latter a mere refactoring patch without any\n> > semantic changes (though a bit more on that below). 
BTW, I don't know\n> > of a way to back-patch a fix like this for the bug, because there is\n> > no way other than ActiveSnapshot to pass the desired snapshot to the\n> > PartitionDesc code if the only way we get to that code is by executing\n> > an SQL query plan.\n> >\n> > 0003 moves the relevant logic out of\n> > find_inheritance_children_extended() into its callers. The logic of\n> > deciding which snapshot to use to determine if a detach-pending\n> > partition should indeed be omitted from the consideration of a caller\n> > based on the result of checking the visibility of the corresponding\n> > pg_inherits row with the snapshot; it just uses ActiveSnapshot now.\n> > Given the problems with using ActiveSnapshot mentioned above, I think\n> > it is better to make the callers decide the snapshot and pass it using\n> > a parameter named omit_detached_snapshot. Only PartitionDesc code\n> > actually cares about sending anything but the parent query's\n> > ActiveSnapshot, so the PartitionDesc and PartitionDirectory interface\n> > has been changed to add the same omit_detached_snapshot parameter.\n> > find_inheritance_children(), the other caller used in many sites that\n> > look at a table's partitions, defaults to using ActiveSnapshot, which\n> > does not seem problematic. Furthermore, only RI_FKey_check() needs to\n> > pass anything other than ActiveSnapshot, so other users of\n> > PartitionDesc, like user queries, still default to using the\n> > ActiveSnapshot, which doesn't have any known problems either.\n> >\n> > 0001 and 0002 are mostly unchanged in this version, except I took out\n> > the visibility bug-fix from 0002 into 0004 described above, which\n> > looks better using the interface added by 0003 anyway. 
I need to\n> > address the main concern that it's still hard to be sure that the\n> > patch in its current form doesn't break any user-level semantics of\n> > these RI check triggers and other concerns about the implementation\n> > that Robert expressed in [1].\n>\n> Oops, I apparently posted the wrong 0004, containing a bug that\n> crashes `make check`.\n>\n> Fixed version attached.\n\nHere's another version that hopefully fixes the crash reported by\nCirrus CI [1] that is not reliably reproducible.\n\nI suspect it may have to do with error_context_stack not being reset\nwhen ri_LookupKeyInPkRel() does an early return; the `return false` in\nthat case was wrong too:\n\n@@ -2693,7 +2693,7 @@ ri_LookupKeyInPkRel(struct RI_Plan *plan,\n * looking for.\n */\n if (leaf_pk_rel == NULL)\n- return false;\n+ goto done;\n\n...\n+done:\n /*\n * Pop the error context stack\n */\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://cirrus-ci.com/task/4901906421121024 (permalink?)", "msg_date": "Thu, 29 Sep 2022 18:09:16 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Thu, Sep 29, 2022 at 6:09 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Sep 29, 2022 at 4:43 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Sep 29, 2022 at 1:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Sorry about the delay.\n> > >\n> > > So I came up with such a patch that is attached as 0003.\n> > >\n> > > The main problem I want to fix with it is the need for RI_FKey_check()\n> > > to \"force\"-push the latest snapshot that the PartitionDesc code wants\n> > > to use to correctly include or omit a detach-pending partition from\n> > > the view of that function's RI query. 
Scribbling on ActiveSnapshot\n> > > that way means that *all* scans involved in the execution of that\n> > > query now see a snapshot that they shouldn't likely be seeing; a bug\n> > > resulting from this has been demonstrated in a test case added by the\n> > > commit 00cb86e75d.\n> > >\n> > > The fix is to make RI_FKey_check(), or really its RI_Plan's execution\n> > > function ri_LookupKeyInPkRel() added by patch 0002, pass the latest\n> > > snapshot explicitly as a parameter of PartitionDirectoryLookup(),\n> > > which passes it down to the PartitionDesc code. No need to manipulate\n> > > ActiveSnapshot. The actual fix is in patch 0004, which I extracted\n> > > out of 0002 to keep the latter a mere refactoring patch without any\n> > > semantic changes (though a bit more on that below). BTW, I don't know\n> > > of a way to back-patch a fix like this for the bug, because there is\n> > > no way other than ActiveSnapshot to pass the desired snapshot to the\n> > > PartitionDesc code if the only way we get to that code is by executing\n> > > an SQL query plan.\n> > >\n> > > 0003 moves the relevant logic out of\n> > > find_inheritance_children_extended() into its callers. The logic of\n> > > deciding which snapshot to use to determine if a detach-pending\n> > > partition should indeed be omitted from the consideration of a caller\n> > > based on the result of checking the visibility of the corresponding\n> > > pg_inherits row with the snapshot; it just uses ActiveSnapshot now.\n> > > Given the problems with using ActiveSnapshot mentioned above, I think\n> > > it is better to make the callers decide the snapshot and pass it using\n> > > a parameter named omit_detached_snapshot. 
Only PartitionDesc code\n> > > actually cares about sending anything but the parent query's\n> > > ActiveSnapshot, so the PartitionDesc and PartitionDirectory interface\n> > > has been changed to add the same omit_detached_snapshot parameter.\n> > > find_inheritance_children(), the other caller used in many sites that\n> > > look at a table's partitions, defaults to using ActiveSnapshot, which\n> > > does not seem problematic. Furthermore, only RI_FKey_check() needs to\n> > > pass anything other than ActiveSnapshot, so other users of\n> > > PartitionDesc, like user queries, still default to using the\n> > > ActiveSnapshot, which doesn't have any known problems either.\n> > >\n> > > 0001 and 0002 are mostly unchanged in this version, except I took out\n> > > the visibility bug-fix from 0002 into 0004 described above, which\n> > > looks better using the interface added by 0003 anyway. I need to\n> > > address the main concern that it's still hard to be sure that the\n> > > patch in its current form doesn't break any user-level semantics of\n> > > these RI check triggers and other concerns about the implementation\n> > > that Robert expressed in [1].\n> >\n> > Oops, I apparently posted the wrong 0004, containing a bug that\n> > crashes `make check`.\n> >\n> > Fixed version attached.\n>\n> Here's another version that hopefully fixes the crash reported by\n> Cirrus CI [1] that is not reliably reproducible.\n\nAnd cfbot #1, which failed a bit after the above one, is not happy\nwith my failing to include utils/snapshot.h in a partdesc.h to which I\nadded:\n\n@@ -65,9 +66,11 @@ typedef struct PartitionDescData\n\n\n extern PartitionDesc RelationGetPartitionDesc(Relation rel, bool\nomit_detached);\n+extern PartitionDesc RelationGetPartitionDescExt(Relation rel, bool\nomit_detached,\n+ Snapshot\nomit_detached_snapshot);\n\n extern PartitionDirectory CreatePartitionDirectory(MemoryContext\nmcxt, bool omit_detached);\n-extern PartitionDesc 
PartitionDirectoryLookup(PartitionDirectory, Relation);\n+extern PartitionDesc PartitionDirectoryLookup(PartitionDirectory,\nRelation, Snapshot);\n\nSo, here's a final revision for today. Sorry for the noise.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 29 Sep 2022 18:18:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "Hi,\n\nOn 2022-09-29 18:18:10 +0900, Amit Langote wrote:\n> So, here's a final revision for today. Sorry for the noise.\n\nThis appears to fail on 32bit systems. Seems the new test is indeed\nworthwhile...\n\nhttps://cirrus-ci.com/task/6581521615159296?logs=test_world_32#L406\n\n[19:12:24.452] Summary of Failures:\n[19:12:24.452]\n[19:12:24.452] 2/243 postgresql:main / main/regress FAIL 45.08s (exit status 1)\n[19:12:24.452] 4/243 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade ERROR 71.96s\n[19:12:24.452] 32/243 postgresql:recovery / recovery/027_stream_regress ERROR 45.84s\n\nUnfortunately ccf36ea2580f66abbc37f27d8c296861ffaad9bf seems to not have\nsuceeded in capture the test files of the 32bit build (and perhaps broke it\nfor 64bit builds as well?), so I can't see the regression.diffs contents.\n\n\n[19:12:24.387] alter_table ... FAILED 4546 ms\n...\n[19:12:24.387] ========================\n[19:12:24.387] 1 of 211 tests failed.\n[19:12:24.387] ========================\n[19:12:24.387]\n...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 1 Oct 2022 18:21:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "Hi,\n\nOn 2022-10-01 18:21:15 -0700, Andres Freund wrote:\n> On 2022-09-29 18:18:10 +0900, Amit Langote wrote:\n> > So, here's a final revision for today. Sorry for the noise.\n>\n> This appears to fail on 32bit systems. 
Seems the new test is indeed\n> worthwhile...\n>\n> https://cirrus-ci.com/task/6581521615159296?logs=test_world_32#L406\n>\n> [19:12:24.452] Summary of Failures:\n> [19:12:24.452]\n> [19:12:24.452] 2/243 postgresql:main / main/regress FAIL 45.08s (exit status 1)\n> [19:12:24.452] 4/243 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade ERROR 71.96s\n> [19:12:24.452] 32/243 postgresql:recovery / recovery/027_stream_regress ERROR 45.84s\n>\n> Unfortunately ccf36ea2580f66abbc37f27d8c296861ffaad9bf seems to not have\n> suceeded in capture the test files of the 32bit build (and perhaps broke it\n> for 64bit builds as well?), so I can't see the regression.diffs contents.\n\nOh, that appears to have been an issue on the CI side (*), while uploading the\nlogs. The previous run did catch the error:\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/alter_table.out /tmp/cirrus-ci-build/build-32/testrun/main/regress/results/alter_table.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/alter_table.out\t2022-09-30 15:05:49.930613669 +0000\n+++ /tmp/cirrus-ci-build/build-32/testrun/main/regress/results/alter_table.out\t2022-09-30 15:11:21.050383258 +0000\n@@ -672,6 +672,8 @@\n ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable;\n -- Check it actually works\n INSERT INTO FKTABLE VALUES(42);\t\t-- should succeed\n+ERROR: insert or update on table \"fktable\" violates foreign key constraint \"fktable_ftest1_fkey\"\n+DETAIL: Key (ftest1)=(42) is not present in table \"pktable\".\n INSERT INTO FKTABLE VALUES(43);\t\t-- should fail\n ERROR: insert or update on table \"fktable\" violates foreign key constraint \"fktable_ftest1_fkey\"\n DETAIL: Key (ftest1)=(43) is not present in table \"pktable\".\n\nGreetings,\n\nAndres Freund\n\n* Error from upload stream: rpc error: code = Unknown desc =\n\n\n", "msg_date": "Sat, 1 Oct 2022 18:24:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI 
triggers - take 2" }, { "msg_contents": "On Sun, Oct 2, 2022 at 10:24 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-10-01 18:21:15 -0700, Andres Freund wrote:\n> > On 2022-09-29 18:18:10 +0900, Amit Langote wrote:\n> > > So, here's a final revision for today. Sorry for the noise.\n> >\n> > This appears to fail on 32bit systems. Seems the new test is indeed\n> > worthwhile...\n> >\n> > https://cirrus-ci.com/task/6581521615159296?logs=test_world_32#L406\n> >\n> > [19:12:24.452] Summary of Failures:\n> > [19:12:24.452]\n> > [19:12:24.452] 2/243 postgresql:main / main/regress FAIL 45.08s (exit status 1)\n> > [19:12:24.452] 4/243 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade ERROR 71.96s\n> > [19:12:24.452] 32/243 postgresql:recovery / recovery/027_stream_regress ERROR 45.84s\n> >\n> > Unfortunately ccf36ea2580f66abbc37f27d8c296861ffaad9bf seems to not have\n> > suceeded in capture the test files of the 32bit build (and perhaps broke it\n> > for 64bit builds as well?), so I can't see the regression.diffs contents.\n>\n> Oh, that appears to have been an issue on the CI side (*), while uploading the\n> logs. 
The previous run did catch the error:\n>\n> diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/alter_table.out /tmp/cirrus-ci-build/build-32/testrun/main/regress/results/alter_table.out\n> --- /tmp/cirrus-ci-build/src/test/regress/expected/alter_table.out 2022-09-30 15:05:49.930613669 +0000\n> +++ /tmp/cirrus-ci-build/build-32/testrun/main/regress/results/alter_table.out 2022-09-30 15:11:21.050383258 +0000\n> @@ -672,6 +672,8 @@\n> ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable;\n> -- Check it actually works\n> INSERT INTO FKTABLE VALUES(42); -- should succeed\n> +ERROR: insert or update on table \"fktable\" violates foreign key constraint \"fktable_ftest1_fkey\"\n> +DETAIL: Key (ftest1)=(42) is not present in table \"pktable\".\n> INSERT INTO FKTABLE VALUES(43); -- should fail\n> ERROR: insert or update on table \"fktable\" violates foreign key constraint \"fktable_ftest1_fkey\"\n> DETAIL: Key (ftest1)=(43) is not present in table \"pktable\".\n\nThanks for the heads up. Hmm, this I am not sure how to reproduce on\nmy own, so I am currently left with second-guessing what may be going\nwrong on 32 bit machines with whichever of the 4 patches.\n\nFor now, I'll just post 0001, which I am claiming has no semantic\nchanges (proof pending), to rule out that that one's responsible.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 7 Oct 2022 18:26:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Fri, Oct 7, 2022 at 6:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sun, Oct 2, 2022 at 10:24 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-10-01 18:21:15 -0700, Andres Freund wrote:\n> > > On 2022-09-29 18:18:10 +0900, Amit Langote wrote:\n> > > > So, here's a final revision for today. Sorry for the noise.\n> > >\n> > > This appears to fail on 32bit systems. 
Seems the new test is indeed\n> > > worthwhile...\n> > >\n> > > https://cirrus-ci.com/task/6581521615159296?logs=test_world_32#L406\n> > >\n> > > [19:12:24.452] Summary of Failures:\n> > > [19:12:24.452]\n> > > [19:12:24.452] 2/243 postgresql:main / main/regress FAIL 45.08s (exit status 1)\n> > > [19:12:24.452] 4/243 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade ERROR 71.96s\n> > > [19:12:24.452] 32/243 postgresql:recovery / recovery/027_stream_regress ERROR 45.84s\n> > >\n> > > Unfortunately ccf36ea2580f66abbc37f27d8c296861ffaad9bf seems to not have\n> > > suceeded in capture the test files of the 32bit build (and perhaps broke it\n> > > for 64bit builds as well?), so I can't see the regression.diffs contents.\n> >\n> > Oh, that appears to have been an issue on the CI side (*), while uploading the\n> > logs. The previous run did catch the error:\n> >\n> > diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/alter_table.out /tmp/cirrus-ci-build/build-32/testrun/main/regress/results/alter_table.out\n> > --- /tmp/cirrus-ci-build/src/test/regress/expected/alter_table.out 2022-09-30 15:05:49.930613669 +0000\n> > +++ /tmp/cirrus-ci-build/build-32/testrun/main/regress/results/alter_table.out 2022-09-30 15:11:21.050383258 +0000\n> > @@ -672,6 +672,8 @@\n> > ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable;\n> > -- Check it actually works\n> > INSERT INTO FKTABLE VALUES(42); -- should succeed\n> > +ERROR: insert or update on table \"fktable\" violates foreign key constraint \"fktable_ftest1_fkey\"\n> > +DETAIL: Key (ftest1)=(42) is not present in table \"pktable\".\n> > INSERT INTO FKTABLE VALUES(43); -- should fail\n> > ERROR: insert or update on table \"fktable\" violates foreign key constraint \"fktable_ftest1_fkey\"\n> > DETAIL: Key (ftest1)=(43) is not present in table \"pktable\".\n>\n> Thanks for the heads up. 
Hmm, this I am not sure how to reproduce on\n> my own, so I am currently left with second-guessing what may be going\n> wrong on 32 bit machines with whichever of the 4 patches.\n>\n> For now, I'll just post 0001, which I am claiming has no semantic\n> changes (proof pending), to rule out that that one's responsible.\n\nNope, not 0001. Here's 0001+0002.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Oct 2022 18:56:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On 2022-Oct-07, Amit Langote wrote:\n\n> > Thanks for the heads up. Hmm, this I am not sure how to reproduce on\n> > my own, so I am currently left with second-guessing what may be going\n> > wrong on 32 bit machines with whichever of the 4 patches.\n> >\n> > For now, I'll just post 0001, which I am claiming has no semantic\n> > changes (proof pending), to rule out that that one's responsible.\n> \n> Nope, not 0001. Here's 0001+0002.\n\nPlease note that you can set up a github repository so that cirrus-ci\ntests whatever patches you like, without having to post them to\npg-hackers. See src/tools/ci/README, it takes three minutes if you\nalready have the account and repository.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 7 Oct 2022 12:15:44 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Fri, Oct 7, 2022 at 19:15 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2022-Oct-07, Amit Langote wrote:\n>\n> > > Thanks for the heads up. 
Hmm, this I am not sure how to reproduce on\n> > > my own, so I am currently left with second-guessing what may be going\n> > > wrong on 32 bit machines with whichever of the 4 patches.\n> > >\n> > > For now, I'll just post 0001, which I am claiming has no semantic\n> > > changes (proof pending), to rule out that that one's responsible.\n> >\n> > Nope, not 0001. Here's 0001+0002.\n>\n> Please note that you can set up a github repository so that cirrus-ci\n> tests whatever patches you like, without having to post them to\n> pg-hackers. See src/tools/ci/README, it takes three minutes if you\n> already have the account and repository.\n\n\nAh, that’s right. Will do so, thanks for the suggestion.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com
", "msg_date": "Fri, 7 Oct 2022 19:17:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Fri, Oct 7, 2022 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Oct 7, 2022 at 19:15 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> On 2022-Oct-07, Amit Langote wrote:\n>> > > Thanks for the heads up. Hmm, this I am not sure how to reproduce on\n>> > > my own, so I am currently left with second-guessing what may be going\n>> > > wrong on 32 bit machines with whichever of the 4 patches.\n>> > >\n>> > > For now, I'll just post 0001, which I am claiming has no semantic\n>> > > changes (proof pending), to rule out that that one's responsible.\n>> >\n>> > Nope, not 0001. Here's 0001+0002.\n\nI had forgotten to actually attach anything with that email.\n\n>> Please note that you can set up a github repository so that cirrus-ci\n>> tests whatever patches you like, without having to post them to\n>> pg-hackers. See src/tools/ci/README, it takes three minutes if you\n>> already have the account and repository.\n>\n> Ah, that’s right. 
Will do so, thanks for the suggestion.\n\nI'm waiting to hear from GitHub Support to resolve an error I'm facing\ntrying to add Cirrus CI to my account.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 11 Oct 2022 16:37:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Thu, Sep 29, 2022 at 12:47 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> [ patches ]\n\nWhile looking over this thread I came across this code:\n\n /* For data reading, executor always omits detached partitions */\n if (estate->es_partition_directory == NULL)\n estate->es_partition_directory =\n CreatePartitionDirectory(estate->es_query_cxt, false);\n\nBut CreatePartitionDirectory is declared like this:\n\nextern PartitionDirectory CreatePartitionDirectory(MemoryContext mcxt,\nbool omit_detached);\n\nSo the comment seems to say the opposite of what the code does. The\ncode seems to match the explanation in the commit message for\n71f4c8c6f74ba021e55d35b1128d22fb8c6e1629, so I am guessing that\nperhaps s/always/never/ is needed here.\n\nI also noticed that ExecCreatePartitionPruneState no longer exists in\nthe code but is still referenced in\nsrc/test/modules/delay_execution/specs/partition-addition.spec\n\nRegarding 0003, it seems unfortunate that\nfind_inheritance_children_extended() will now have 6 arguments 4 of\nwhich have to do with detached partition handling. That is a lot of\ndetached partition handling, and it's hard to reason about. I don't\nsee an obvious way of simplifying things very much, but I wonder if we\ncould at least have the new omit_detached_snapshot snapshot replace\nthe existing bool omit_detached flag. Something like the attached\nincremental patch.\n\nProbably we need to go further than the attached, though. I don't\nthink that PartitionDirectoryLookup() should be getting any new\narguments. 
The whole point of that function is that it's supposed to\nensure that the returned value is stable, and the comments say so. But\nwith these changes it isn't any more, because it depends on the\nsnapshot you pass. It seems fine to specify when you create the\npartition directory that you want it to show a different, still-stable\nview of the world, but as written, it seems to me to undermine the\nidea that the return value is expected to be stable at all. Is there a\nway we can avoid that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 11 Oct 2022 13:27:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Wed, Oct 12, 2022 at 2:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Sep 29, 2022 at 12:47 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > [ patches ]\n>\n> While looking over this thread I came across this code:\n\nThanks for looking.\n\n> /* For data reading, executor always omits detached partitions */\n> if (estate->es_partition_directory == NULL)\n> estate->es_partition_directory =\n> CreatePartitionDirectory(estate->es_query_cxt, false);\n>\n> But CreatePartitionDirectory is declared like this:\n>\n> extern PartitionDirectory CreatePartitionDirectory(MemoryContext mcxt,\n> bool omit_detached);\n>\n> So the comment seems to say the opposite of what the code does. The\n> code seems to match the explanation in the commit message for\n> 71f4c8c6f74ba021e55d35b1128d22fb8c6e1629, so I am guessing that\n> perhaps s/always/never/ is needed here.\n\nI think you are right. 
In commit 8aba9322511 that fixed a bug in this\narea, we have this hunk:\n\n- /* Executor must always include detached partitions */\n+ /* For data reading, executor always omits detached partitions */\n if (estate->es_partition_directory == NULL)\n estate->es_partition_directory =\n- CreatePartitionDirectory(estate->es_query_cxt, true);\n+ CreatePartitionDirectory(estate->es_query_cxt, false);\n\nThe same commit also renamed the include_detached parameter of\nCreatePartitionDirectory() to omit_detached but the comment change\ndidn't quite match with that.\n\nI will fix this and other related comments to be consistent about\nusing the word \"omit\". Will include them in the updated 0003.\n\n> I also noticed that ExecCreatePartitionPruneState no longer exists in\n> the code but is still referenced in\n> src/test/modules/delay_execution/specs/partition-addition.spec\n\nIt looks like we missed that reference in commit 297daa9d435 wherein\nwe renamed it to just CreatePartitionPruneState().\n\nI have posted a patch to fix this.\n\n> Regarding 0003, it seems unfortunate that\n> find_inheritance_children_extended() will now have 6 arguments 4 of\n> which have to do with detached partition handling. That is a lot of\n> detached partition handling, and it's hard to reason about. I don't\n> see an obvious way of simplifying things very much, but I wonder if we\n> could at least have the new omit_detached_snapshot snapshot replace\n> the existing bool omit_detached flag. Something like the attached\n> incremental patch.\n\nYeah, I was wondering the same too and don't see a reason why we\ncouldn't do it that way.\n\nI have merged your incremental patch into 0003.\n\n> Probably we need to go further than the attached, though. I don't\n> think that PartitionDirectoryLookup() should be getting any new\n> arguments. The whole point of that function is that it's supposed to\n> ensure that the returned value is stable, and the comments say so. 
But\n> with these changes it isn't any more, because it depends on the\n> snapshot you pass. It seems fine to specify when you create the\n> partition directory that you want it to show a different, still-stable\n> view of the world, but as written, it seems to me to undermine the\n> idea that the return value is expected to be stable at all. Is there a\n> way we can avoid that?\n\nOk, I think it makes sense to have CreatePartitionDirectory take in\nthe snapshot and store it in PartitionDirectoryData for use during\neach subsequent PartitionDirectoryLookup(). So we'll be replacing the\ncurrent omit_detached flag in PartitionDirectoryData, just as we are\ndoing for the interface functions. Done that way in 0003.\n\nRegarding 0002, which introduces ri_LookupKeyInPkRel(), I realized\nthat it may have been initializing the ScanKeys wrongly. It was using\nScanKeyInit(), which uses InvalidOid for sk_subtype, causing the index\nAM / btree code to use the wrong comparison functions when PK and FK\ncolumn types don't match. That may have been a reason for 32-bit\nmachine failures pointed out by Andres upthread. I've fixed it by\nusing ScanKeyEntryInitialize() to pass the opfamily-specified right\nargument (FK column) type OID.\n\nAttached updated patches.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 15 Oct 2022 14:47:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Sat, Oct 15, 2022 at 1:47 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> I have merged your incremental patch into 0003.\n\nNote that if someone goes to commit 0003, they would have no idea that\nI contributed to the effort. You should probably try to keep a running\nlist of co-authors, reviewers, or other people that need to be\nacknowledged in your draft commit messages. 
On that note, I think that\nthe commit messages for 0001 and to some extent 0002 need some more\nwork. In particular, it seems like the commit message for 0001 is\nentirely concerned with what the patch does and says nothing about why\nit's a good idea. In my opinion, a good commit message needs to do\nboth, ideally but not always in less space than this patch takes to do\nonly one of those things. 0002 has the same problem to a lesser\ndegree, since it is perhaps not so hard to infer that the reason for\navoiding the SQL query is performance.\n\nI am wondering if the ordering for this patch series needs to be\nrethought. The commit message for 0004 reads as if it is fixing a bug\nintroduced by earlier patches in the series. If that is not correct,\nmaybe it can be made clearer. If it is correct, then that's not good,\nbecause we don't want to commit buggy patches and then make follow-up\ncommits to remove the bugs. If a planned commit needs new\ninfrastructure to avoid being buggy, the commits adding that\ninfrastructure should happen first.\n\nBut I think the bigger problem for this patch set is that the\ndesign-level feedback from\nhttps://www.postgresql.org/message-id/CA%2BTgmoaiTNj4DgQy42OT9JmTTP1NWcMV%2Bke0i%3D%2Ba7%3DVgnzqGXw%40mail.gmail.com\nhasn't really been addressed, AFAICS. ri_LookupKeyInPkRelPlanIsValid\nis still trivial in v7, and that still seems wrong to me. And I still\ndon't know how we're going to avoid changing the semantics in ways\nthat are undesirable, or even knowing precisely what we did change. 
If\nwe don't have answers to those questions, then I suspect that this\npatch set isn't going anywhere.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Oct 2022 14:56:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "Hi,\n\nOn 2022-10-15 14:47:05 +0900, Amit Langote wrote:\n> Attached updated patches.\n\nThese started to fail to build recently:\n\n[04:43:33.046] ccache cc -Isrc/backend/postgres_lib.a.p -Isrc/include -I../src/include -Isrc/include/storage -Isrc/include/utils -Isrc/include/catalog -Isrc/include/nodes -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g -fno-strict-aliasing -fwrapv -fexcess-precision=standard -D_GNU_SOURCE -Wmissing-prototypes -Wpointer-arith -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -Wdeclaration-after-statement -Wno-format-truncation -Wno-stringop-truncation -fPIC -pthread -DBUILDING_DLL -MD -MQ src/backend/postgres_lib.a.p/executor_execPartition.c.o -MF src/backend/postgres_lib.a.p/executor_execPartition.c.o.d -o src/backend/postgres_lib.a.p/executor_execPartition.c.o -c ../src/backend/executor/execPartition.c\n[04:43:33.046] ../src/backend/executor/execPartition.c: In function ‘ExecGetLeafPartitionForKey’:\n[04:43:33.046] ../src/backend/executor/execPartition.c:1679:19: error: too few arguments to function ‘build_attrmap_by_name_if_req’\n[04:43:33.046] 1679 | AttrMap *map = build_attrmap_by_name_if_req(RelationGetDescr(root_rel),\n[04:43:33.046] | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~\n[04:43:33.046] In file included from ../src/include/access/tupconvert.h:17,\n[04:43:33.046] from ../src/include/nodes/execnodes.h:32,\n[04:43:33.046] from ../src/include/executor/execPartition.h:16,\n[04:43:33.046] from ../src/backend/executor/execPartition.c:21:\n[04:43:33.046] 
../src/include/access/attmap.h:47:17: note: declared here\n[04:43:33.046] 47 | extern AttrMap *build_attrmap_by_name_if_req(TupleDesc indesc,\n[04:43:33.046] | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Tue, 6 Dec 2022 10:37:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Mon, 17 Oct 2022 at 14:59, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Oct 15, 2022 at 1:47 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> But I think the bigger problem for this patch set is that the\n> design-level feedback from\n> https://www.postgresql.org/message-id/CA%2BTgmoaiTNj4DgQy42OT9JmTTP1NWcMV%2Bke0i%3D%2Ba7%3DVgnzqGXw%40mail.gmail.com\n> hasn't really been addressed, AFAICS. ri_LookupKeyInPkRelPlanIsValid\n> is still trivial in v7, and that still seems wrong to me. And I still\n> don't know how we're going to avoid changing the semantics in ways\n> that are undesirable, or even knowing precisely what we did change. If\n> we don't have answers to those questions, then I suspect that this\n> patch set isn't going anywhere.\n\nAmit, do you plan to work on this patch for this commitfest (and\ntherefore this release?). And do you think it has a realistic chance\nof being ready for commit this month?\n\nIt looks to me like you have some good feedback and can progress and\nare unlikely to finish this patch for this release. 
In which case\nmaybe we can move it forward to the next release?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 20 Mar 2023 14:53:57 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "Hi Greg,\n\nOn Tue, Mar 21, 2023 at 3:54 AM Gregory Stark (as CFM)\n<stark.cfm@gmail.com> wrote:\n> On Mon, 17 Oct 2022 at 14:59, Robert Haas <robertmhaas@gmail.com> wrote:\n> > But I think the bigger problem for this patch set is that the\n> > design-level feedback from\n> > https://www.postgresql.org/message-id/CA%2BTgmoaiTNj4DgQy42OT9JmTTP1NWcMV%2Bke0i%3D%2Ba7%3DVgnzqGXw%40mail.gmail.com\n> > hasn't really been addressed, AFAICS. ri_LookupKeyInPkRelPlanIsValid\n> > is still trivial in v7, and that still seems wrong to me. And I still\n> > don't know how we're going to avoid changing the semantics in ways\n> > that are undesirable, or even knowing precisely what we did change. If\n> > we don't have answers to those questions, then I suspect that this\n> > patch set isn't going anywhere.\n>\n> Amit, do you plan to work on this patch for this commitfest (and\n> therefore this release?). And do you think it has a realistic chance\n> of being ready for commit this month?\n\nUnfortunately, I don't think so.\n\n> It looks to me like you have some good feedback and can progress and\n> are unlikely to finish this patch for this release. In which case\n> maybe we can move it forward to the next release?\n\nYes, that's what I am thinking too at this point.\n\nI agree with Robert's point that changing the implementation from an\nSQL query plan to a hand-rolled C function is going to change the\nsemantics in some known and perhaps many unknown ways. Until I have\nenumerated all those semantic changes, it's hard to judge whether the\nhand-rolled implementation is correct to begin with. 
I had started\ndoing that a few months back but couldn't keep up due to some other\nwork.\n\nAn example I had found of a thing that would be broken by taking out\nthe executor out of the equation, as the patch does, is the behavior\nof an update under READ COMMITTED isolation, whereby a PK tuple being\nchecked for existence is concurrently updated and thus needs to\nrechecked whether it still satisfies the RI query's conditions. The\nexecutor has the EvalPlanQual() mechanism to do that, but while the\nhand-rolled implementation did refactor ExecLockRows() to allow doing\nthe tuple-locking without a PlanState, it gave no consideration to\nhandling rechecking under READ COMMITTED isolation.\n\nThere may be other such things and I think I'd better look for them\ncarefully in the next cycle than in the next couple of weeks for this\nrelease. My apologies that I didn't withdraw the patch sooner.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Mar 2023 14:03:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "> On 21 Mar 2023, at 06:03, Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Mar 21, 2023 at 3:54 AM Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n>> On Mon, 17 Oct 2022 at 14:59, Robert Haas <robertmhaas@gmail.com> wrote:\n\n>>> But I think the bigger problem for this patch set is that the\n>>> design-level feedback from\n>>> https://www.postgresql.org/message-id/CA%2BTgmoaiTNj4DgQy42OT9JmTTP1NWcMV%2Bke0i%3D%2Ba7%3DVgnzqGXw%40mail.gmail.com\n>>> hasn't really been addressed, AFAICS. ri_LookupKeyInPkRelPlanIsValid\n>>> is still trivial in v7, and that still seems wrong to me. And I still\n>>> don't know how we're going to avoid changing the semantics in ways\n>>> that are undesirable, or even knowing precisely what we did change. 
If\n>>> we don't have answers to those questions, then I suspect that this\n>>> patch set isn't going anywhere.\n>> \n>> Amit, do you plan to work on this patch for this commitfest (and\n>> therefore this release?). And do you think it has a realistic chance\n>> of being ready for commit this month?\n> \n> Unfortunately, I don't think so.\n\nThis thread has stalled with the patch not building and/or applying for a\nwhile, so I am going to mark this Returned with Feedback. Please feel free to\nresubmit to a future CF when there is renewed interest/time to work on this.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 10 Jul 2023 10:27:53 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" }, { "msg_contents": "On Mon, Jul 10, 2023 at 5:27 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 21 Mar 2023, at 06:03, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Mar 21, 2023 at 3:54 AM Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> >> On Mon, 17 Oct 2022 at 14:59, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> >>> But I think the bigger problem for this patch set is that the\n> >>> design-level feedback from\n> >>> https://www.postgresql.org/message-id/CA%2BTgmoaiTNj4DgQy42OT9JmTTP1NWcMV%2Bke0i%3D%2Ba7%3DVgnzqGXw%40mail.gmail.com\n> >>> hasn't really been addressed, AFAICS. ri_LookupKeyInPkRelPlanIsValid\n> >>> is still trivial in v7, and that still seems wrong to me. And I still\n> >>> don't know how we're going to avoid changing the semantics in ways\n> >>> that are undesirable, or even knowing precisely what we did change. If\n> >>> we don't have answers to those questions, then I suspect that this\n> >>> patch set isn't going anywhere.\n> >>\n> >> Amit, do you plan to work on this patch for this commitfest (and\n> >> therefore this release?). 
And do you think it has a realistic chance\n> >> of being ready for commit this month?\n> >\n> > Unfortunately, I don't think so.\n>\n> This thread has stalled with the patch not building and/or applying for a\n> while, so I am going to mark this Returned with Feedback.\n\nAgreed, I was about to do so myself.\n\nI'll give this another try later in the cycle.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 Jul 2023 17:30:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eliminating SPI from RI triggers - take 2" } ]
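Amit's READ COMMITTED example above — a tuple being checked is concurrently updated, so the newest version must be locked and the query's conditions re-evaluated against it — can be sketched with a toy version-chain model. Everything here (names, fields, the chain representation) is invented for illustration and is not PostgreSQL code; it only mirrors what the EvalPlanQual() recheck does conceptually: chase the update chain, lock the latest tuple version, and redo the qual before acting.

```python
def latest_version(tup):
    """Follow the update chain (the role t_ctid plays in heap) to its end."""
    while tup.get("next_version") is not None:
        tup = tup["next_version"]
    return tup

def lock_and_recheck(tup, qual):
    """Lock the newest version, then re-evaluate the condition on it.

    Returns (locked_tuple, still_matches); the caller may only proceed
    with its update/delete (or RI existence check) when still_matches
    is True.
    """
    newest = latest_version(tup)
    newest["locked"] = True            # stand-in for taking the tuple lock
    return newest, qual(newest)

# A committed concurrent update replaced name 'Alice' with 'Alex':
old = {"name": "Alice", "next_version": None, "locked": False}
new = {"name": "Alex", "next_version": None, "locked": False}
old["next_version"] = new

# The waiting session's qual was name = 'Alice'; on recheck it fails,
# so the blocked statement correctly becomes a no-op.
locked, still_matches = lock_and_recheck(old, lambda t: t["name"] == "Alice")
```

The same shape applies to the RI case discussed in this thread: a hand-rolled C replacement for the SQL plan would need to reproduce exactly this lock-then-recheck step that the executor's EvalPlanQual() machinery otherwise provides.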
[ { "msg_contents": "Hackers,\n\nWhen working in the read committed transaction isolation mode\n(default), we have the following sequence of actions when\ntuple_update() or tuple_delete() find concurrently updated tuple.\n\n1. tuple_update()/tuple_delete() returns TM_Updated\n2. tuple_lock()\n3. Re-evaluate plan qual (recheck if we still need to update/delete\nand calculate the new tuple for update)\n4. tuple_update()/tuple_delete() (this time should be successful,\nsince we've previously locked the tuple).\n\nI wonder if we should merge steps 1 and 2. We could save some efforts\nalready done during tuple_update()/tuple_delete() for locking the\ntuple. In heap table access method, we've to start tuple_lock() with\nthe first tuple in the chain, but tuple_update()/tuple_delete()\nalready visited it. For undo-based table access methods,\ntuple_update()/tuple_delete() should start from the last version, why\ndon't place the tuple lock immediately once a concurrent update is\ndetected. I think this patch should have some performance benefits on\nhigh concurrency.\n\nAlso, the patch simplifies code in nodeModifyTable.c getting rid of\nthe nested case. I also get rid of extra\ntable_tuple_fetch_row_version() in ExecUpdate. Why re-fetch the old\ntuple, when it should be exactly the same tuple we've just locked.\n\nI'm going to check the performance impact. Thoughts and feedback are welcome.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Fri, 1 Jul 2022 14:18:37 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi Alexander,\n\n> Thoughts and feedback are welcome.\n\nI took some preliminary look at the patch. I'm going to need more time\nto meditate on the proposed changes and to figure out the performance\nimpact.\n\nSo far I just wanted to let you know that the patch applied OK for me\nand passed all the tests. 
The `else` branch here seems to be redundant\nhere:\n\n+ if (!updated)\n+ {\n+ /* Should not encounter speculative tuple on recheck */\n+ Assert(!HeapTupleHeaderIsSpeculative(tuple->t_data));\n- ReleaseBuffer(buffer);\n+ ReleaseBuffer(buffer);\n+ }\n+ else\n+ {\n+ updated = false;\n+ }\n\nAlso I wish there were a little bit more comments since some of the\nproposed changes are not that straightforward.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 5 Jul 2022 16:38:28 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi again,\n\n> + if (!updated)\n> + {\n> + /* Should not encounter speculative tuple on recheck */\n> + Assert(!HeapTupleHeaderIsSpeculative(tuple->t_data));\n> - ReleaseBuffer(buffer);\n> + ReleaseBuffer(buffer);\n> + }\n> + else\n> + {\n> + updated = false;\n> + }\n\nOK, I got confused here. I suggest changing the if(!...) { .. } else {\n.. } code to if() { .. } else { .. } here.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 5 Jul 2022 16:41:27 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi Alexander,\n\n> I'm going to need more time to meditate on the proposed changes and to figure out the performance impact.\n\nOK, turned out this patch is slightly more complicated than I\ninitially thought, but I think I managed to get some vague\nunderstanding of what's going on.\n\nI tried to reproduce the case with concurrently updated tuples you\ndescribed on the current `master` branch. 
I created a new table:\n\n```\nCREATE TABLE phonebook(\n \"id\" SERIAL PRIMARY KEY NOT NULL,\n \"name\" NAME NOT NULL,\n \"phone\" INT NOT NULL);\n\nINSERT INTO phonebook (\"name\", \"phone\")\nVALUES ('Alice', 123), ('Bob', 456), ('Charlie', 789);\n```\n\nThen I opened two sessions and attached them with LLDB. I did:\n\n```\n(lldb) b heapam_tuple_update\n(lldb) c\n```\n\n... in both cases because I wanted to see two calls (steps 2 and 4) to\nheapam_tuple_update() and check the return values.\n\nThen I did:\n\n```\nsession1 =# BEGIN;\nsession2 =# BEGIN;\nsession1 =# UPDATE phonebook SET name = 'Alex' WHERE name = 'Alice';\n```\n\nThis update succeeds and I see heapam_tuple_update() returning TM_Ok.\n\n```\nsession2 =# UPDATE phonebook SET name = 'Alfred' WHERE name = 'Alice';\n```\n\nThis update hangs on a lock.\n\n```\nsession1 =# COMMIT;\n```\n\nNow session2 unfreezes and returns 'UPDATE 0'. table_tuple_update()\nwas called once and returned TM_Updated. Also session2 sees an updated\ntuple now. So apparently the visibility check (step 3) didn't pass.\n\nAt this point I'm slightly confused. I don't see where a performance\nimprovement is expected, considering that session2 gets blocked until\nsession1 commits.\n\nCould you please walk me through here? Am I using the right test case\nor maybe you had another one in mind? 
Which steps do you consider\nexpensive and expect to be mitigated by the patch?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 7 Jul 2022 12:42:52 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi Aleksander!\n\nThank you for your efforts reviewing this patch.\n\nOn Thu, Jul 7, 2022 at 12:43 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > I'm going to need more time to meditate on the proposed changes and to figure out the performance impact.\n>\n> OK, turned out this patch is slightly more complicated than I\n> initially thought, but I think I managed to get some vague\n> understanding of what's going on.\n>\n> I tried to reproduce the case with concurrently updated tuples you\n> described on the current `master` branch. I created a new table:\n>\n> ```\n> CREATE TABLE phonebook(\n> \"id\" SERIAL PRIMARY KEY NOT NULL,\n> \"name\" NAME NOT NULL,\n> \"phone\" INT NOT NULL);\n>\n> INSERT INTO phonebook (\"name\", \"phone\")\n> VALUES ('Alice', 123), ('Bob', 456), ('Charlie', 789);\n> ```\n>\n> Then I opened two sessions and attached them with LLDB. I did:\n>\n> ```\n> (lldb) b heapam_tuple_update\n> (lldb) c\n> ```\n>\n> ... in both cases because I wanted to see two calls (steps 2 and 4) to\n> heapam_tuple_update() and check the return values.\n>\n> Then I did:\n>\n> ```\n> session1 =# BEGIN;\n> session2 =# BEGIN;\n> session1 =# UPDATE phonebook SET name = 'Alex' WHERE name = 'Alice';\n> ```\n>\n> This update succeeds and I see heapam_tuple_update() returning TM_Ok.\n>\n> ```\n> session2 =# UPDATE phonebook SET name = 'Alfred' WHERE name = 'Alice';\n> ```\n>\n> This update hangs on a lock.\n>\n> ```\n> session1 =# COMMIT;\n> ```\n>\n> Now session2 unfreezes and returns 'UPDATE 0'. table_tuple_update()\n> was called once and returned TM_Updated. 
Also session2 sees an updated\n> tuple now. So apparently the visibility check (step 3) didn't pass.\n\nYes. But it's not exactly a visibility check. Session2 re-evaluates\nWHERE condition on the most recent row version (bypassing snapshot).\nWHERE condition is not true anymore, thus the row is not upated.\n\n> At this point I'm slightly confused. I don't see where a performance\n> improvement is expected, considering that session2 gets blocked until\n> session1 commits.\n>\n> Could you please walk me through here? Am I using the right test case\n> or maybe you had another one in mind? Which steps do you consider\n> expensive and expect to be mitigated by the patch?\n\nThis patch is not intended to change some high-level logic. On the\nhigh level transaction, which updated the row, still holding a lock on\nit until finished. The possible positive performance impact I expect\nfrom doing the work of two calls tuple_update() and tuple_lock() in\nthe one call of tuple_update(). If we do this in one call, we can\nsave some efforts, for instance lock the same buffer once not twice.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 12 Jul 2022 13:29:44 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, hackers!\nI ran the following benchmark on master branch (15) vs patch (15-lock):\n\nOn the 36-vcore AWS server, I've run an UPDATE-only pgbench script with 50\nconnections on pgbench_tellers with 100 rows. 
The idea was to introduce as\nmuch as possible concurrency for updates but avoid much clients being in a\nwait state.\nIndexes were not built to avoid index-update-related delays.\nDone 2 runs each consisting of 6 series of updates (1st run:\nmaster-patch-master-patch-master-patch, 2nd run\npatch-master-patch-master-patch-master)\nEach series started a fresh server and did VACUUM FULL to avoid bloating\nheap relation after the previous series to affect the current. It collected\ndata for 10 minutes with first-minute data being dropped.\nDisk-related operations were suppressed where possible (WAL, fsync etc.)\n\npostgresql.conf:\nfsync = off\nautovacuum = off\nfull_page_writes = off\nmax_worker_processes = 99\nmax_parallel_workers = 99\nmax_connections = 100\nshared_buffers = 4096MB\nwork_mem = 50MB\n\nAttached are pictures of 2 runs, shell script, and SQL script that were\nrunning.\nAccording to htop all 36-cores were loaded to ~94% in each series\n\nI'm not sure how to interpret the results. Seems like a TPS difference\nbetween runs is significant, with average performance with lock-patch *(15lock)\n*seeming a little bit faster than the master* (15)*.\n\nCould someone try to repeat this on another server? What do you think?\n\n-- \nBest regards,\nPavel Borisov,\nSupabase, https://supabase.com/", "msg_date": "Fri, 29 Jul 2022 12:11:31 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Pavel!\n\nOn Fri, Jul 29, 2022 at 11:12 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> I ran the following benchmark on master branch (15) vs patch (15-lock):\n>\n> On the 36-vcore AWS server, I've run an UPDATE-only pgbench script with 50 connections on pgbench_tellers with 100 rows. 
The idea was to introduce as much as possible concurrency for updates but avoid much clients being in a wait state.\n> Indexes were not built to avoid index-update-related delays.\n> Done 2 runs each consisting of 6 series of updates (1st run: master-patch-master-patch-master-patch, 2nd run patch-master-patch-master-patch-master)\n> Each series started a fresh server and did VACUUM FULL to avoid bloating heap relation after the previous series to affect the current. It collected data for 10 minutes with first-minute data being dropped.\n> Disk-related operations were suppressed where possible (WAL, fsync etc.)\n>\n> postgresql.conf:\n> fsync = off\n> autovacuum = off\n> full_page_writes = off\n> max_worker_processes = 99\n> max_parallel_workers = 99\n> max_connections = 100\n> shared_buffers = 4096MB\n> work_mem = 50MB\n>\n> Attached are pictures of 2 runs, shell script, and SQL script that were running.\n> According to htop all 36-cores were loaded to ~94% in each series\n>\n> I'm not sure how to interpret the results. Seems like a TPS difference between runs is significant, with average performance with lock-patch (15lock) seeming a little bit faster than the master (15).\n>\n> Could someone try to repeat this on another server? What do you think?\n\nThank you for your benchmarks. The TPS variation is high, and run\norder heavily affects the result. Nevertheless, I think there is a\nsmall but noticeable positive effect of the patch. 
I'll continue\nworking on the patch bringing it into more acceptable shape.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 29 Jul 2022 11:35:33 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Fri, 1 Jul 2022 at 16:49, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> Hackers,\n>\n> When working in the read committed transaction isolation mode\n> (default), we have the following sequence of actions when\n> tuple_update() or tuple_delete() find concurrently updated tuple.\n>\n> 1. tuple_update()/tuple_delete() returns TM_Updated\n> 2. tuple_lock()\n> 3. Re-evaluate plan qual (recheck if we still need to update/delete\n> and calculate the new tuple for update)\n> 4. tuple_update()/tuple_delete() (this time should be successful,\n> since we've previously locked the tuple).\n>\n> I wonder if we should merge steps 1 and 2. We could save some efforts\n> already done during tuple_update()/tuple_delete() for locking the\n> tuple. In heap table access method, we've to start tuple_lock() with\n> the first tuple in the chain, but tuple_update()/tuple_delete()\n> already visited it. For undo-based table access methods,\n> tuple_update()/tuple_delete() should start from the last version, why\n> don't place the tuple lock immediately once a concurrent update is\n> detected. I think this patch should have some performance benefits on\n> high concurrency.\n>\n> Also, the patch simplifies code in nodeModifyTable.c getting rid of\n> the nested case. I also get rid of extra\n> table_tuple_fetch_row_version() in ExecUpdate. Why re-fetch the old\n> tuple, when it should be exactly the same tuple we've just locked.\n>\n> I'm going to check the performance impact. 
Thoughts and feedback are welcome.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\neb5ad4ff05fd382ac98cab60b82f7fd6ce4cfeb8 ===\n=== applying patch\n./0001-Lock-updated-tuples-in-tuple_update-and-tuple_del-v1.patch\npatching file src/backend/executor/nodeModifyTable.c\n...\nHunk #3 FAILED at 1376.\n...\n1 out of 15 hunks FAILED -- saving rejects to file\nsrc/backend/executor/nodeModifyTable.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4099.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 4 Jan 2023 15:10:48 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Vignesh!\n\nOn Wed, 4 Jan 2023 at 12:41, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 1 Jul 2022 at 16:49, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > Hackers,\n> >\n> > When working in the read committed transaction isolation mode\n> > (default), we have the following sequence of actions when\n> > tuple_update() or tuple_delete() find concurrently updated tuple.\n> >\n> > 1. tuple_update()/tuple_delete() returns TM_Updated\n> > 2. tuple_lock()\n> > 3. Re-evaluate plan qual (recheck if we still need to update/delete\n> > and calculate the new tuple for update)\n> > 4. tuple_update()/tuple_delete() (this time should be successful,\n> > since we've previously locked the tuple).\n> >\n> > I wonder if we should merge steps 1 and 2. We could save some efforts\n> > already done during tuple_update()/tuple_delete() for locking the\n> > tuple. In heap table access method, we've to start tuple_lock() with\n> > the first tuple in the chain, but tuple_update()/tuple_delete()\n> > already visited it. 
For undo-based table access methods,\n> > tuple_update()/tuple_delete() should start from the last version, why\n> > don't place the tuple lock immediately once a concurrent update is\n> > detected. I think this patch should have some performance benefits on\n> > high concurrency.\n> >\n> > Also, the patch simplifies code in nodeModifyTable.c getting rid of\n> > the nested case. I also get rid of extra\n> > table_tuple_fetch_row_version() in ExecUpdate. Why re-fetch the old\n> > tuple, when it should be exactly the same tuple we've just locked.\n> >\n> > I'm going to check the performance impact. Thoughts and feedback are welcome.\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> === Applying patches on top of PostgreSQL commit ID\n> eb5ad4ff05fd382ac98cab60b82f7fd6ce4cfeb8 ===\n> === applying patch\n> ./0001-Lock-updated-tuples-in-tuple_update-and-tuple_del-v1.patch\n> patching file src/backend/executor/nodeModifyTable.c\n> ...\n> Hunk #3 FAILED at 1376.\n> ...\n> 1 out of 15 hunks FAILED -- saving rejects to file\n> src/backend/executor/nodeModifyTable.c.rej\n>\n> [1] - http://cfbot.cputube.org/patch_41_4099.log\n\nThe rebased patch is attached. 
It's just a change in formatting, no\nchanges in code though.\n\nRegards,\nPavel Borisov,\nSupabase.", "msg_date": "Wed, 4 Jan 2023 12:52:45 +0300", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Wed, 4 Jan 2023 at 12:52, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> Hi, Vignesh!\n>\n> On Wed, 4 Jan 2023 at 12:41, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, 1 Jul 2022 at 16:49, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > >\n> > > Hackers,\n> > >\n> > > When working in the read committed transaction isolation mode\n> > > (default), we have the following sequence of actions when\n> > > tuple_update() or tuple_delete() find concurrently updated tuple.\n> > >\n> > > 1. tuple_update()/tuple_delete() returns TM_Updated\n> > > 2. tuple_lock()\n> > > 3. Re-evaluate plan qual (recheck if we still need to update/delete\n> > > and calculate the new tuple for update)\n> > > 4. tuple_update()/tuple_delete() (this time should be successful,\n> > > since we've previously locked the tuple).\n> > >\n> > > I wonder if we should merge steps 1 and 2. We could save some efforts\n> > > already done during tuple_update()/tuple_delete() for locking the\n> > > tuple. In heap table access method, we've to start tuple_lock() with\n> > > the first tuple in the chain, but tuple_update()/tuple_delete()\n> > > already visited it. For undo-based table access methods,\n> > > tuple_update()/tuple_delete() should start from the last version, why\n> > > don't place the tuple lock immediately once a concurrent update is\n> > > detected. I think this patch should have some performance benefits on\n> > > high concurrency.\n> > >\n> > > Also, the patch simplifies code in nodeModifyTable.c getting rid of\n> > > the nested case. I also get rid of extra\n> > > table_tuple_fetch_row_version() in ExecUpdate. 
Why re-fetch the old\n> > > tuple, when it should be exactly the same tuple we've just locked.\n> > >\n> > > I'm going to check the performance impact. Thoughts and feedback are welcome.\n> >\n> > The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> > === Applying patches on top of PostgreSQL commit ID\n> > eb5ad4ff05fd382ac98cab60b82f7fd6ce4cfeb8 ===\n> > === applying patch\n> > ./0001-Lock-updated-tuples-in-tuple_update-and-tuple_del-v1.patch\n> > patching file src/backend/executor/nodeModifyTable.c\n> > ...\n> > Hunk #3 FAILED at 1376.\n> > ...\n> > 1 out of 15 hunks FAILED -- saving rejects to file\n> > src/backend/executor/nodeModifyTable.c.rej\n> >\n> > [1] - http://cfbot.cputube.org/patch_41_4099.log\n>\n> The rebased patch is attached. It's just a change in formatting, no\n> changes in code though.\n\nOne more update of a patchset to avoid compiler warnings.\n\nRegards,\nPavel Borisov", "msg_date": "Wed, 4 Jan 2023 15:42:34 +0300", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Pavel!\n\nOn Wed, Jan 4, 2023 at 3:43 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Wed, 4 Jan 2023 at 12:52, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > On Wed, 4 Jan 2023 at 12:41, vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Fri, 1 Jul 2022 at 16:49, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > >\n> > > > Hackers,\n> > > >\n> > > > When working in the read committed transaction isolation mode\n> > > > (default), we have the following sequence of actions when\n> > > > tuple_update() or tuple_delete() find concurrently updated tuple.\n> > > >\n> > > > 1. tuple_update()/tuple_delete() returns TM_Updated\n> > > > 2. tuple_lock()\n> > > > 3. Re-evaluate plan qual (recheck if we still need to update/delete\n> > > > and calculate the new tuple for update)\n> > > > 4. 
tuple_update()/tuple_delete() (this time should be successful,\n> > > > since we've previously locked the tuple).\n> > > >\n> > > > I wonder if we should merge steps 1 and 2. We could save some efforts\n> > > > already done during tuple_update()/tuple_delete() for locking the\n> > > > tuple. In heap table access method, we've to start tuple_lock() with\n> > > > the first tuple in the chain, but tuple_update()/tuple_delete()\n> > > > already visited it. For undo-based table access methods,\n> > > > tuple_update()/tuple_delete() should start from the last version, why\n> > > > don't place the tuple lock immediately once a concurrent update is\n> > > > detected. I think this patch should have some performance benefits on\n> > > > high concurrency.\n> > > >\n> > > > Also, the patch simplifies code in nodeModifyTable.c getting rid of\n> > > > the nested case. I also get rid of extra\n> > > > table_tuple_fetch_row_version() in ExecUpdate. Why re-fetch the old\n> > > > tuple, when it should be exactly the same tuple we've just locked.\n> > > >\n> > > > I'm going to check the performance impact. Thoughts and feedback are welcome.\n> > >\n> > > The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> > > === Applying patches on top of PostgreSQL commit ID\n> > > eb5ad4ff05fd382ac98cab60b82f7fd6ce4cfeb8 ===\n> > > === applying patch\n> > > ./0001-Lock-updated-tuples-in-tuple_update-and-tuple_del-v1.patch\n> > > patching file src/backend/executor/nodeModifyTable.c\n> > > ...\n> > > Hunk #3 FAILED at 1376.\n> > > ...\n> > > 1 out of 15 hunks FAILED -- saving rejects to file\n> > > src/backend/executor/nodeModifyTable.c.rej\n> > >\n> > > [1] - http://cfbot.cputube.org/patch_41_4099.log\n> >\n> > The rebased patch is attached. It's just a change in formatting, no\n> > changes in code though.\n>\n> One more update of a patchset to avoid compiler warnings.\n\nThank you for your help. 
I'm going to provide the revised version of\npatch with comments and commit message in the next couple of days.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 4 Jan 2023 17:05:03 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Wed, Jan 4, 2023 at 5:05 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Wed, Jan 4, 2023 at 3:43 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > One more update of a patchset to avoid compiler warnings.\n>\n> Thank you for your help. I'm going to provide the revised version of\n> patch with comments and commit message in the next couple of days.\n\nThe revised patch is attached. It contains describing commit message,\ncomments and some minor code improvements.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Thu, 5 Jan 2023 15:11:43 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Alexander!\n\nOn Thu, 5 Jan 2023 at 15:11, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Wed, Jan 4, 2023 at 5:05 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Wed, Jan 4, 2023 at 3:43 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > > One more update of a patchset to avoid compiler warnings.\n> >\n> > Thank you for your help. I'm going to provide the revised version of\n> > patch with comments and commit message in the next couple of days.\n>\n> The revised patch is attached. It contains describing commit message,\n> comments and some minor code improvements.\n\nI've looked through the patch once again. 
It seems in a nice state to\nbe committed.\nI also noticed that at the tableam level and in NodeModifyTable function\ncalls we have a one-to-one correspondence between *lockedSlot and bool\nlockUpdated, but no checks on this in case something changes in the\ncode in the future. I'd propose combining these variables to remain\nfree from these checks. See v5 of a patch. Tests are successfully\npassed.\nBesides, the new version has only some minor changes in the comments\nand the commit message.\n\nKind regards,\nPavel Borisov,\nSupabase.", "msg_date": "Fri, 6 Jan 2023 15:45:29 +0300", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Fri, Jan 6, 2023 at 4:46 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> Hi, Alexander!\n>\n> On Thu, 5 Jan 2023 at 15:11, Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> >\n> > On Wed, Jan 4, 2023 at 5:05 PM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > > On Wed, Jan 4, 2023 at 3:43 PM Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> > > > One more update of a patchset to avoid compiler warnings.\n> > >\n> > > Thank you for your help.  I'm going to provide the revised version of the\n> > > patch with comments and a commit message in the next couple of days.\n> >\n> > The revised patch is attached.  It contains a descriptive commit message,\n> > comments, and some minor code improvements.\n\nI've looked through the patch once again. It seems in a nice state to\nbe committed.\nI also noticed that at the tableam level and in NodeModifyTable function\ncalls we have a one-to-one correspondence between *lockedSlot and bool\nlockUpdated, but no checks on this in case something changes in the\ncode in the future. I'd propose combining these variables to remain\nfree from these checks. 
Tests are successfully\n> passed.\n> Besides, the new version has only some minor changes in the comments\n> and the commit message.\n>\n> Kind regards,\n> Pavel Borisov,\n> Supabase.\n>\n\nIt looks good, and the greater the concurrency the greater the benefit will\nbe. Just a few minor suggestions regarding comments.\n\n\"ExecDeleteAct() have already locked the old tuple for us\", change \"have\"\nto \"has\".\n\nThe comments in heapam_tuple_delete() and heapam_tuple_update() might be a\nlittle clearer with something like:\n\n\"If the tuple has been concurrently updated, get lock already so that on\nretry it will succeed, provided that the caller asked to do this by\nproviding a lockedSlot.\"\n\nAlso, not too important, but perhaps better clarify in the commit message\nthat the repeated work is driven by ExecUpdate and ExecDelete and can\nhappen multiple times depending on the concurrency.\n\nBest Regards,\n\nMason\n", "msg_date": "Sun, 8 Jan 2023 10:33:40 -0800", "msg_from": "Mason Sharp <masonlists@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Mason!\n\nThank you very much for your review.\n\nOn Sun, Jan 8, 2023 at 9:33 PM Mason Sharp <masonlists@gmail.com> wrote:\n> On Fri, Jan 6, 2023 at 4:46 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>> Besides, the new version has only some minor changes in the comments\n>> and the commit message.\n> It looks good, and the greater the concurrency the greater the benefit will be. Just a few minor suggestions regarding comments.\n>\n> \"ExecDeleteAct() have already locked the old tuple for us\", change \"have\" to \"has\".\n>\n> The comments in heapam_tuple_delete() and heapam_tuple_update() might be a little clearer with something like:\n>\n> \"If the tuple has been concurrently updated, get lock already so that on\n> retry it will succeed, provided that the caller asked to do this by\n> providing a lockedSlot.\"\n\nThank you. 
These changes are incorporated into v6 of the patch.\n\n> Also, not too important, but perhaps better clarify in the commit message that the repeated work is driven by ExecUpdate and ExecDelete and can happen multiple times depending on the concurrency.\n\nHmm... It can't happen an arbitrary number of times. If the tuple was\nconcurrently updated, then we lock it. Once we lock it, nobody can change\nit until we finish our work. So, I think no changes are needed.\n\nI'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 9 Jan 2023 01:07:45 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi Alexander,\n\n> I'm going to push this if no objections.\n\nI took a fresh look at the patch and it LGTM. I only did a few\ncosmetic changes, PFA v7.\n\nChanges since v6 are:\n\n```\n@@ -318,12 +318,12 @@ heapam_tuple_delete(Relation relation,\nItemPointer tid, CommandId cid,\n result = heap_delete(relation, tid, cid, crosscheck, wait, tmfd,\nchangingPart);\n\n /*\n- * If the tuple has been concurrently updated, get lock already so that on\n- * retry it will succeed, provided that the caller asked to do this by\n- * providing a lockedSlot.\n+ * If lockUpdated is true and the tuple has been concurrently updated, get\n+ * the lock immediately so that on retry we will succeed.\n */\n if (result == TM_Updated && lockUpdated)\n {\n+ Assert(lockedSlot != NULL);\n```\n\n... 
and the same for heapam_tuple_update().\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 9 Jan 2023 12:56:12 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi Aleksander,\n\nOn Mon, Jan 9, 2023 at 12:56 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > I'm going to push this if no objections.\n>\n> I took a fresh look at the patch and it LGTM. I only did a few\n> cosmetic changes, PFA v7.\n>\n> Changes since v6 are:\n\nThank you for looking into this. It appears that I've applied the changes\nproposed by Mason to v5, not v6. That led to the comment mismatch with\nthe code that you've noticed. v8 should be correct. 
Please, recheck.\n\nv9 also incorporates lost changes to the commit message by Pavel Borisov.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 9 Jan 2023 13:29:18 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Alexander!\n\nOn Mon, 9 Jan 2023 at 13:29, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Mon, Jan 9, 2023 at 1:10 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Mon, Jan 9, 2023 at 12:56 PM Aleksander Alekseev\n> > <aleksander@timescale.com> wrote:\n> > > > I'm going to push this if no objections.\n> > >\n> > > I took a fresh look at the patch and it LGTM. I only did a few\n> > > cosmetic changes, PFA v7.\n> > >\n> > > Changes since v6 are:\n> >\n> > Thank you for looking into this. It appears that I've applied changes\n> > proposed by Mason to v5, not v6. That lead to comment mismatch with\n> > the code that you've noticed. v8 should be correct. Please, recheck.\n>\n> v9 also incorporates lost changes to the commit message by Pavel Borisov.\nI've looked through patch v9. It resembles patch v5 plus comments\nclarification by Mason plus the right discussion link in the commit\nmessage from v8. Aleksander's proposal of Assert in v7 was due to\nchanges lost between v5 and v6, as combining connected variables in v5\nmakes checks for them being in agreement one with the other\nunnecessary. So changes from v7 are not in v9.\n\nSorry for being so detailed in small details. In my opinion the patch\nnow is ready to be committed.\n\nRegards,\nPavel Borisov\n\n\n", "msg_date": "Mon, 9 Jan 2023 13:40:51 +0300", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Alexander, Pavel,\n\n> Sorry for being so detailed in small details. 
In my opinion the patch\n> now is ready to be committed.\n\nAgree.\n\nPersonally I liked the version with (lockUpdated, lockedSlot) pair a\nbit more since it is a bit more readable, however the version without\nlockUpdated is less error prone and slightly more efficient. So all in\nall I have no strong opinion on which is better.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 9 Jan 2023 13:46:42 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Pavel!\n\nOn Mon, Jan 9, 2023 at 1:41 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Mon, 9 Jan 2023 at 13:29, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Mon, Jan 9, 2023 at 1:10 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > On Mon, Jan 9, 2023 at 12:56 PM Aleksander Alekseev\n> > > <aleksander@timescale.com> wrote:\n> > > > > I'm going to push this if no objections.\n> > > >\n> > > > I took a fresh look at the patch and it LGTM. I only did a few\n> > > > cosmetic changes, PFA v7.\n> > > >\n> > > > Changes since v6 are:\n> > >\n> > > Thank you for looking into this. It appears that I've applied changes\n> > > proposed by Mason to v5, not v6. That lead to comment mismatch with\n> > > the code that you've noticed. v8 should be correct. Please, recheck.\n> >\n> > v9 also incorporates lost changes to the commit message by Pavel Borisov.\n> I've looked through patch v9. It resembles patch v5 plus comments\n> clarification by Mason plus the right discussion link in the commit\n> message from v8. Aleksander's proposal of Assert in v7 was due to\n> changes lost between v5 and v6, as combining connected variables in v5\n> makes checks for them being in agreement one with the other\n> unnecessary. So changes from v7 are not in v9.\n>\n> Sorry for being so detailed in small details. 
In my opinion the patch\n> now is ready to be committed.\n\nSorry for creating this mess with lost changes. And thank you for\nconfirming it's good now. I'm going to push v9.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 9 Jan 2023 13:46:50 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi,\n\nOn 2023-01-09 13:46:50 +0300, Alexander Korotkov wrote:\n> I'm going to push v9.\n\nCould you hold off for a bit? I'd like to look at this, I'm not sure I like\nthe direction.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Jan 2023 15:38:58 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi,\n\nI'm a bit worried that this is optimizing the rare case while hurting the\ncommon case. See e.g. my point below about creating additional slots in the\nhappy path.\n\nIt's also not clear that the change is right directionally. 
If we want to avoid\nre-fetching the \"original\" row version, why don't we provide that\nfunctionality via table_tuple_lock()?\n\n\nOn 2023-01-09 13:29:18 +0300, Alexander Korotkov wrote:\n> @@ -53,6 +53,12 @@ static bool SampleHeapTupleVisible(TableScanDesc scan, Buffer buffer,\n> \t\t\t\t\t\t\t\t HeapTuple tuple,\n> \t\t\t\t\t\t\t\t OffsetNumber tupoffset);\n>\n> +static TM_Result heapam_tuple_lock_internal(Relation relation, ItemPointer tid,\n> +\t\t\t\t\t\t\t\t\t\t\tSnapshot snapshot, TupleTableSlot *slot,\n> +\t\t\t\t\t\t\t\t\t\t\tCommandId cid, LockTupleMode mode,\n> +\t\t\t\t\t\t\t\t\t\t\tLockWaitPolicy wait_policy, uint8 flags,\n> +\t\t\t\t\t\t\t\t\t\t\tTM_FailureData *tmfd, bool updated);\n> +\n> static BlockNumber heapam_scan_get_blocks_done(HeapScanDesc hscan);\n>\n> static const TableAmRoutine heapam_methods;\n> @@ -299,14 +305,39 @@ heapam_tuple_complete_speculative(Relation relation, TupleTableSlot *slot,\n> static TM_Result\n> heapam_tuple_delete(Relation relation, ItemPointer tid, CommandId cid,\n> \t\t\t\t\tSnapshot snapshot, Snapshot crosscheck, bool wait,\n> -\t\t\t\t\tTM_FailureData *tmfd, bool changingPart)\n> +\t\t\t\t\tTM_FailureData *tmfd, bool changingPart,\n> +\t\t\t\t\tTupleTableSlot *lockedSlot)\n> {\n> +\tTM_Result\tresult;\n> +\n> \t/*\n> \t * Currently Deleting of index tuples are handled at vacuum, in case if\n> \t * the storage itself is cleaning the dead tuples by itself, it is the\n> \t * time to call the index tuple deletion also.\n> \t */\n> -\treturn heap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);\n> +\tresult = heap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);\n> +\n> +\t/*\n> +\t * If the tuple has been concurrently updated, get lock already so that on\n> +\t * retry it will succeed, provided that the caller asked to do this by\n> +\t * providing a lockedSlot.\n> +\t */\n> +\tif (result == TM_Updated && lockedSlot != NULL)\n> +\t{\n> +\t\tresult = 
heapam_tuple_lock_internal(relation, tid, snapshot,\n> +\t\t\t\t\t\t\t\t\t\t\tlockedSlot, cid, LockTupleExclusive,\n> +\t\t\t\t\t\t\t\t\t\t\tLockWaitBlock,\n> +\t\t\t\t\t\t\t\t\t\t\tTUPLE_LOCK_FLAG_FIND_LAST_VERSION,\n> +\t\t\t\t\t\t\t\t\t\t\ttmfd, true);\n\nYou're ignoring the 'wait' parameter here, no? I think the modification to\nheapam_tuple_update() has the same issue.\n\n\n> +\t\tif (result == TM_Ok)\n> +\t\t{\n> +\t\t\ttmfd->traversed = true;\n> +\t\t\treturn TM_Updated;\n> +\t\t}\n> +\t}\n> +\n> +\treturn result;\n\nDoesn't this mean that the caller can't easily distinguish between\nheapam_tuple_delete() and heapam_tuple_lock_internal() returning a failure\nstate?\n\n\n> @@ -350,213 +402,8 @@ heapam_tuple_lock(Relation relation, ItemPointer tid, Snapshot snapshot,\n> \t\t\t\t LockWaitPolicy wait_policy, uint8 flags,\n> \t\t\t\t TM_FailureData *tmfd)\n> {\n\nMoving the entire body of the function around, makes it harder to review\nthis change, because the code movement is intermingled with \"actual\" changes.\n\n\n> +/*\n> + * This routine does the work for heapam_tuple_lock(), but also support\n> + * `updated` to re-use the work done by heapam_tuple_update() or\n> + * heapam_tuple_delete() on fetching tuple and checking its visibility.\n> + */\n> +static TM_Result\n> +heapam_tuple_lock_internal(Relation relation, ItemPointer tid, Snapshot snapshot,\n> +\t\t\t\t\t\t TupleTableSlot *slot, CommandId cid, LockTupleMode mode,\n> +\t\t\t\t\t\t LockWaitPolicy wait_policy, uint8 flags,\n> +\t\t\t\t\t\t TM_FailureData *tmfd, bool updated)\n> +{\n> +\tBufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;\n> +\tTM_Result\tresult;\n> +\tBuffer\t\tbuffer = InvalidBuffer;\n> +\tHeapTuple\ttuple = &bslot->base.tupdata;\n> +\tbool\t\tfollow_updates;\n> +\n> +\tfollow_updates = (flags & TUPLE_LOCK_FLAG_LOCK_UPDATE_IN_PROGRESS) != 0;\n> +\ttmfd->traversed = false;\n> +\n> +\tAssert(TTS_IS_BUFFERTUPLE(slot));\n> +\n> +tuple_lock_retry:\n> +\ttuple->t_self = 
*tid;\n> +\tif (!updated)\n> +\t\tresult = heap_lock_tuple(relation, tuple, cid, mode, wait_policy,\n> +\t\t\t\t\t\t\t\t follow_updates, &buffer, tmfd);\n> +\telse\n> +\t\tresult = TM_Updated;\n\nTo make sure I understand: You're basically trying to have\nheapam_tuple_lock_internal() work as before, except that you want to omit\nfetching the first row version, assuming that the caller already tried to lock\nit?\n\nI think at the very least this needs an assert verifying that the slot actually\ncontains a tuple in the \"updated\" path.\n\n\n> +\tif (result == TM_Updated &&\n> +\t\t(flags & TUPLE_LOCK_FLAG_FIND_LAST_VERSION))\n> +\t{\n> +\t\tif (!updated)\n> +\t\t{\n> +\t\t\t/* Should not encounter speculative tuple on recheck */\n> +\t\t\tAssert(!HeapTupleHeaderIsSpeculative(tuple->t_data));\n\nWhy shouldn't this be checked in the updated case as well?\n\n\n> @@ -1490,7 +1492,16 @@ ExecDelete(ModifyTableContext *context,\n> \t\t * transaction-snapshot mode transactions.\n> \t\t */\n> ldelete:\n> -\t\tresult = ExecDeleteAct(context, resultRelInfo, tupleid, changingPart);\n> +\n> +\t\t/*\n> +\t\t * Ask ExecDeleteAct() to immediately place the lock on the updated\n> +\t\t * tuple if we will need EvalPlanQual() in that case to handle it.\n> +\t\t */\n> +\t\tif (!IsolationUsesXactSnapshot())\n> +\t\t\tslot = ExecGetReturningSlot(estate, resultRelInfo);\n> +\n> +\t\tresult = ExecDeleteAct(context, resultRelInfo, tupleid, changingPart,\n> +\t\t\t\t\t\t\t slot);\n\nI don't like that 'slot' is now used for multiple things. I think this could\nbest be addressed by simply moving the slot variable inside the blocks using\nit. And here it should be named more accurately.\n\nIs there a potential conflict with other uses of the ExecGetReturningSlot()?\n\n\nGiven that we now always create the slot, doesn't this increase the overhead\nfor the very common case of not needing EPQ? 
We'll create unnecessary slots\nall the time, no?\n\n\n> \t\t\t\t\t */\n> \t\t\t\t\tEvalPlanQualBegin(context->epqstate);\n> \t\t\t\t\tinputslot = EvalPlanQualSlot(context->epqstate, resultRelationDesc,\n> \t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo->ri_RangeTableIndex);\n> +\t\t\t\t\tExecCopySlot(inputslot, slot);\n\n> -\t\t\t\t\tresult = table_tuple_lock(resultRelationDesc, tupleid,\n> -\t\t\t\t\t\t\t\t\t\t\t estate->es_snapshot,\n> -\t\t\t\t\t\t\t\t\t\t\t inputslot, estate->es_output_cid,\n> -\t\t\t\t\t\t\t\t\t\t\t LockTupleExclusive, LockWaitBlock,\n> -\t\t\t\t\t\t\t\t\t\t\t TUPLE_LOCK_FLAG_FIND_LAST_VERSION,\n> -\t\t\t\t\t\t\t\t\t\t\t &context->tmfd);\n> +\t\t\t\t\tAssert(context->tmfd.traversed);\n> +\t\t\t\t\tepqslot = EvalPlanQual(context->epqstate,\n> +\t\t\t\t\t\t\t\t\t\t resultRelationDesc,\n> +\t\t\t\t\t\t\t\t\t\t resultRelInfo->ri_RangeTableIndex,\n> +\t\t\t\t\t\t\t\t\t\t inputslot);\n\nThe only point of using EvalPlanQualSlot() is to avoid copying the tuple from\none slot to another. Given that we're not benefiting from that anymore (due to\nyour manual ExecCopySlot() call), it seems we could just pass 'slot' to\nEvalPlanQual() and not bother with EvalPlanQualSlot().\n\n\n\n> @@ -1449,6 +1451,8 @@ table_multi_insert(Relation rel, TupleTableSlot **slots, int nslots,\n> *\ttmfd - filled in failure cases (see below)\n> *\tchangingPart - true iff the tuple is being moved to another partition\n> *\t\ttable due to an update of the partition key. Otherwise, false.\n> + *\tlockedSlot - slot to save the locked tuple if should lock the last row\n> + *\t\tversion during the concurrent update. NULL if not needed.\n\nThe grammar in the new comments is off (\"if should lock\").\n\nI think this also needs to mention that this *significantly* changes the\nbehaviour of table_tuple_delete(). 
That's not at all clear from the comment.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Jan 2023 17:07:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "It looks like this patch received some feedback from Andres and hasn't\nhad any further work posted. I'm going to move it to \"Waiting on\nAuthor\".\n\nIt doesn't sound like this is likely to get committed this release\ncycle unless responding to Andres's points is simpler than I expect.\n\n\n", "msg_date": "Tue, 28 Feb 2023 16:02:46 -0500", "msg_from": "Gregory Stark <stark@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Wed, Mar 1, 2023 at 12:03 AM Gregory Stark <stark@postgresql.org> wrote:\n> It looks like this patch received some feedback from Andres and hasn't\n> had any further work posted. I'm going to move it to \"Waiting on\n> Author\".\n\nI'll post the updated version in the next couple of days.\n\n> It doesn't sound like this is likely to get committed this release\n> cycle unless responding to Andres's points is simpler than I expect.\n\nI wouldn't think ahead that much.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 1 Mar 2023 00:08:58 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Andres.\n\nThank you for your review. Sorry for the late reply. It took me some\ntime to figure out how to revise the patch.\n\nThe revised patchset is attached. 
I decided to split the patch into two:\n1) Avoid re-fetching the \"original\" row version during update and delete.\n2) Save effort by re-using the existing context of\ntuple_update()/tuple_delete() for locking the tuple.\nThey are two separate optimizations. So let's evaluate their\nperformance separately.\n\nOn Tue, Jan 10, 2023 at 4:07 AM Andres Freund <andres@anarazel.de> wrote:\n> I'm a bit worried that this is optimizing the rare case while hurting the\n> common case. See e.g. my point below about creating additional slots in the\n> happy path.\n\nThis makes sense. It's worth allocating the slot only if we're going\nto store a tuple there. I implemented this by passing a callback for\nslot allocation instead of the slot.\n\n> It's also not clear that the change is right directionally. If we want to avoid\n> re-fetching the \"original\" row version, why don't we provide that\n> functionality via table_tuple_lock()?\n\nThese are two distinct optimizations. Now, they come as two distinct patches.\n\n> On 2023-01-09 13:29:18 +0300, Alexander Korotkov wrote:\n> > @@ -53,6 +53,12 @@ static bool SampleHeapTupleVisible(TableScanDesc scan, Buffer buffer,\n> > HeapTuple tuple,\n> > OffsetNumber tupoffset);\n> >\n> > +static TM_Result heapam_tuple_lock_internal(Relation relation, ItemPointer tid,\n> > + Snapshot snapshot, TupleTableSlot *slot,\n> > + CommandId cid, LockTupleMode mode,\n> > + LockWaitPolicy wait_policy, uint8 flags,\n> > + TM_FailureData *tmfd, bool updated);\n> > +\n> > static BlockNumber heapam_scan_get_blocks_done(HeapScanDesc hscan);\n> >\n> > static const TableAmRoutine heapam_methods;\n> > @@ -299,14 +305,39 @@ heapam_tuple_complete_speculative(Relation relation, TupleTableSlot *slot,\n> > static TM_Result\n> > heapam_tuple_delete(Relation relation, ItemPointer tid, CommandId cid,\n> > Snapshot snapshot, Snapshot crosscheck, bool wait,\n> > - TM_FailureData *tmfd, bool changingPart)\n> > + TM_FailureData *tmfd, bool changingPart,\n> > + TupleTableSlot 
*lockedSlot)\n> > {\n> > + TM_Result result;\n> > +\n> > /*\n> > * Currently Deleting of index tuples are handled at vacuum, in case if\n> > * the storage itself is cleaning the dead tuples by itself, it is the\n> > * time to call the index tuple deletion also.\n> > */\n> > - return heap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);\n> > + result = heap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);\n> > +\n> > + /*\n> > + * If the tuple has been concurrently updated, get lock already so that on\n> > + * retry it will succeed, provided that the caller asked to do this by\n> > + * providing a lockedSlot.\n> > + */\n> > + if (result == TM_Updated && lockedSlot != NULL)\n> > + {\n> > + result = heapam_tuple_lock_internal(relation, tid, snapshot,\n> > + lockedSlot, cid, LockTupleExclusive,\n> > + LockWaitBlock,\n> > + TUPLE_LOCK_FLAG_FIND_LAST_VERSION,\n> > + tmfd, true);\n>\n> You're ignoring the 'wait' parameter here, no? I think the modification to\n> heapam_tuple_update() has the same issue.\n\nYep. I didn't catch this, because currently we also call\ntuple_update()/tuple_delete() with wait == true. Fixed.\n\n> > + if (result == TM_Ok)\n> > + {\n> > + tmfd->traversed = true;\n> > + return TM_Updated;\n> > + }\n> > + }\n> > +\n> > + return result;\n>\n> Doesn't this mean that the caller can't easily distinguish between\n> heapam_tuple_delete() and heapam_tuple_lock_internal() returning a failure\n> state?\n\nExactly. But currently nodeModifyTable.c handles these failure states\nin a similar way. 
And I don't see why it should be different in\nthe future.\n\n> > @@ -350,213 +402,8 @@ heapam_tuple_lock(Relation relation, ItemPointer tid, Snapshot snapshot,\n> > LockWaitPolicy wait_policy, uint8 flags,\n> > TM_FailureData *tmfd)\n> > {\n>\n> Moving the entire body of the function around, makes it harder to review\n> this change, because the code movement is intermingled with \"actual\" changes.\n\nOK, fixed.\n\n> > +/*\n> > + * This routine does the work for heapam_tuple_lock(), but also support\n> > + * `updated` to re-use the work done by heapam_tuple_update() or\n> > + * heapam_tuple_delete() on fetching tuple and checking its visibility.\n> > + */\n> > +static TM_Result\n> > +heapam_tuple_lock_internal(Relation relation, ItemPointer tid, Snapshot snapshot,\n> > + TupleTableSlot *slot, CommandId cid, LockTupleMode mode,\n> > + LockWaitPolicy wait_policy, uint8 flags,\n> > + TM_FailureData *tmfd, bool updated)\n> > +{\n> > + BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;\n> > + TM_Result result;\n> > + Buffer buffer = InvalidBuffer;\n> > + HeapTuple tuple = &bslot->base.tupdata;\n> > + bool follow_updates;\n> > +\n> > + follow_updates = (flags & TUPLE_LOCK_FLAG_LOCK_UPDATE_IN_PROGRESS) != 0;\n> > + tmfd->traversed = false;\n> > +\n> > + Assert(TTS_IS_BUFFERTUPLE(slot));\n> > +\n> > +tuple_lock_retry:\n> > + tuple->t_self = *tid;\n> > + if (!updated)\n> > + result = heap_lock_tuple(relation, tuple, cid, mode, wait_policy,\n> > + follow_updates, &buffer, tmfd);\n> > + else\n> > + result = TM_Updated;\n>\n> To make sure I understand: You're basically trying to have\n> heapam_tuple_lock_internal() work as before, except that you want to omit\n> fetching the first row version, assuming that the caller already tried to lock\n> it?\n>\n> I think at the very least this needs an assert verifying that the slot actually\n> contains a tuple in the \"updated\" path.\n\nThis part was re-written.\n\n> > + if (result == TM_Updated &&\n> > + (flags & 
TUPLE_LOCK_FLAG_FIND_LAST_VERSION))\n> > + {\n> > + if (!updated)\n> > + {\n> > + /* Should not encounter speculative tuple on recheck */\n> > + Assert(!HeapTupleHeaderIsSpeculative(tuple->t_data));\n>\n> Why shouldn't this be checked in the updated case as well?\n>\n>\n> > @@ -1490,7 +1492,16 @@ ExecDelete(ModifyTableContext *context,\n> > * transaction-snapshot mode transactions.\n> > */\n> > ldelete:\n> > -\n> > + result = ExecDeleteAct(context, resultRelInfo, tupleid, changingPart);\n> > +\n> > + /*\n> > + * Ask ExecDeleteAct() to immediately place the lock on the updated\n> > + * tuple if we will need EvalPlanQual() in that case to handle it.\n> > + */\n> > + if (!IsolationUsesXactSnapshot())\n> > + slot = ExecGetReturningSlot(estate, resultRelInfo);\n> > +\n> > + result = ExecDeleteAct(context, resultRelInfo, tupleid, changingPart,\n> > + slot);\n>\n> I don't like that 'slot' is now used for multiple things. I think this could\n> best be addressed by simply moving the slot variable inside the blocks using\n> it. And here it should be named more accurately.\n\nI didn't do that refactoring. But now the edits introduced by the 1st\npatch of the set are more granular and don't affect the usage of the\n'slot' variable.\n\n> Is there a potential conflict with other uses of the ExecGetReturningSlot()?\n\nYep. The current revision avoids this random usage of slots.\n\n> Given that we now always create the slot, doesn't this increase the overhead\n> for the very common case of not needing EPQ? We'll create unnecessary slots\n> all the time, no?\n\nYes, this is addressed by allocating the EPQ slot only once it is needed,\nvia a callback. 
I'm thinking about wrapping this into some abstraction\ncalled 'LazySlot'.\n\n> > */\n> > EvalPlanQualBegin(context->epqstate);\n> > inputslot = EvalPlanQualSlot(context->epqstate, resultRelationDesc,\n> > resultRelInfo->ri_RangeTableIndex);\n> > + ExecCopySlot(inputslot, slot);\n\n> > - result = table_tuple_lock(resultRelationDesc, tupleid,\n> > - estate->es_snapshot,\n> > - inputslot, estate->es_output_cid,\n> > - LockTupleExclusive, LockWaitBlock,\n> > - TUPLE_LOCK_FLAG_FIND_LAST_VERSION,\n> > - &context->tmfd);\n> > + Assert(context->tmfd.traversed);\n> > + epqslot = EvalPlanQual(context->epqstate,\n> > + resultRelationDesc,\n> > + resultRelInfo->ri_RangeTableIndex,\n> > + inputslot);\n\nThe only point of using EvalPlanQualSlot() is to avoid copying the tuple from\none slot to another. Given that we're not benefiting from that anymore (due to\nyour manual ExecCopySlot() call), it seems we could just pass 'slot' to\nEvalPlanQual() and not bother with EvalPlanQualSlot().\n\nThis makes sense. Now, the usage pattern of the slots is clearer.\n\n> > @@ -1449,6 +1451,8 @@ table_multi_insert(Relation rel, TupleTableSlot **slots, int nslots,\n> > * tmfd - filled in failure cases (see below)\n> > * changingPart - true iff the tuple is being moved to another partition\n> > * table due to an update of the partition key. Otherwise, false.\n> > + * lockedSlot - slot to save the locked tuple if should lock the last row\n> > + * version during the concurrent update. NULL if not needed.\n>\n> The grammar in the new comments is off (\"if should lock\").\n>\n> I think this also needs to mention that this *significantly* changes the\n> behaviour of table_tuple_delete(). That's not at all clear from the comment.\n\nLet's see the performance results for the patchset. 
I'll properly\nrevise the comments if results will be good.\n\nPavel, could you please re-run your tests over revised patchset?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Wed, 1 Mar 2023 17:57:45 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Alexander!\n\n> Let's see the performance results for the patchset. I'll properly\n> revise the comments if results will be good.\n>\n> Pavel, could you please re-run your tests over revised patchset?\n\nSince last time I've improved the test to avoid significant series\ndifferences due to AWS storage access variation that is seen in [1].\nI.e. each series of tests is run on a tmpfs with newly inited pgbench\ntables and vacuum. Also, I've added a test for low-concurrency updates\nwhere the locking optimization isn't expected to improve performance,\njust to make sure the patches don't make things worse.\n\nThe tests are as follows:\n1. Heap updates with high tuple concurrency:\nPrepare without pkeys (pgbench -d postgres -i -I dtGv -s 10 --unlogged-tables)\nUpdate tellers 100 rows, 50 conns ( pgbench postgres -f\n./update-only-tellers.sql -s 10 -P10 -M prepared -T 600 -j 5 -c 50 )\n\nResult: Average of 5 series with patches (0001+0002) is around 5%\nfaster than both master and patch 0001. Still, there are some\nfluctuations between different series of the measurements of the same\npatch, but much less than in [1]\n\n2. Heap updates with low tuple concurrency:\nPrepare with pkeys (pgbench -d postgres -i -I dtGvp -s 300 --unlogged-tables)\nUpdate 3*10^7 rows, 50 conns (pgbench postgres -f\n./update-only-account.sql -s 300 -P10 -M prepared -T 600 -j 5 -c 50)\n\nResult: Both patches and master are the same within a tolerance of\nless than 0.7%.\n\nTests are run on the same 36-vcore AWS c5.9xlarge as [1]. 
Pictures of the results are attached.\n\nPkeys are used in the low-concurrency case so that the tuple to be\nupdated is located by an index search. No pkeys are used in the\nhigh-concurrency case so that concurrent index updates don't\ncontribute to update performance.\n\nCommon settings:\nshared_memory 20Gb\nmax_worker_processes = 1024\nmax_parallel_workers = 1024\nmax_connections=10000\nautovacuum_multixact_freeze_max_age=2000000000\nautovacuum_freeze_max_age=2000000000\nmax_wal_senders=0\nwal_level=minimal\nmax_wal_size = 10G\nautovacuum = off\nfsync = off\nfull_page_writes = off\n\nKind regards,\nPavel Borisov,\nSupabase.\n\n[1] https://www.postgresql.org/message-id/CALT9ZEGhxwh2_WOpOjdazW7CNkBzen17h7xMdLbBjfZb5aULgg%40mail.gmail.com", "msg_date": "Thu, 2 Mar 2023 14:28:56 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Pavel!\n\nOn Thu, Mar 2, 2023 at 1:29 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > Let's see the performance results for the patchset. I'll properly\n> > revise the comments if results will be good.\n> >\n> > Pavel, could you please re-run your tests over revised patchset?\n>\n> Since last time I've improved the test to avoid significant series\n> differences due to AWS storage access variation that is seen in [1].\n> I.e. each series of tests is run on a tmpfs with newly inited pgbench\n> tables and vacuum. Also, I've added a test for low-concurrency updates\n> where the locking optimization isn't expected to improve performance,\n> just to make sure the patches don't make things worse.\n>\n> The tests are as follows:\n> 1. 
Heap updates with high tuple concurrency:\n> Prepare without pkeys (pgbench -d postgres -i -I dtGv -s 10 --unlogged-tables)\n> Update tellers 100 rows, 50 conns ( pgbench postgres -f\n> ./update-only-tellers.sql -s 10 -P10 -M prepared -T 600 -j 5 -c 50 )\n>\n> Result: Average of 5 series with patches (0001+0002) is around 5%\n> faster than both master and patch 0001. Still, there are some\n> fluctuations between different series of the measurements of the same\n> patch, but much less than in [1]\n\nThank you for running this that fast!\n\nSo, it appears that 0001 patch has no effect. So, we probably should\nconsider to drop 0001 patch and consider just 0002 patch.\n\nThe attached patch v12 contains v11 0002 patch extracted separately.\nPlease, add it to the performance comparison. Thanks.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Thu, 2 Mar 2023 17:53:08 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Alexander!\n\nOn Thu, 2 Mar 2023 at 18:53, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> Hi, Pavel!\n>\n> On Thu, Mar 2, 2023 at 1:29 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > > Let's see the performance results for the patchset. I'll properly\n> > > revise the comments if results will be good.\n> > >\n> > > Pavel, could you please re-run your tests over revised patchset?\n> >\n> > Since last time I've improved the test to avoid significant series\n> > differences due to AWS storage access variation that is seen in [1].\n> > I.e. each series of tests is run on a tmpfs with newly inited pgbench\n> > tables and vacuum. Also, I've added a test for low-concurrency updates\n> > where the locking optimization isn't expected to improve performance,\n> > just to make sure the patches don't make things worse.\n> >\n> > The tests are as follows:\n> > 1. 
Heap updates with high tuple concurrency:\n> > Prepare without pkeys (pgbench -d postgres -i -I dtGv -s 10 --unlogged-tables)\n> > Update tellers 100 rows, 50 conns ( pgbench postgres -f\n> > ./update-only-tellers.sql -s 10 -P10 -M prepared -T 600 -j 5 -c 50 )\n> >\n> > Result: Average of 5 series with patches (0001+0002) is around 5%\n> > faster than both master and patch 0001. Still, there are some\n> > fluctuations between different series of the measurements of the same\n> > patch, but much less than in [1]\n>\n> Thank you for running this that fast!\n>\n> So, it appears that 0001 patch has no effect. So, we probably should\n> consider to drop 0001 patch and consider just 0002 patch.\n>\n> The attached patch v12 contains v11 0002 patch extracted separately.\n> Please, add it to the performance comparison. Thanks.\n\nI've done a benchmarking on a full series of four variants: master vs\nv11-0001 vs v11-0001+0002 vs v12 in the same configuration as in the\nprevious measurement. The results are as follows:\n\n1. Heap updates with high tuple concurrency:\nAverage of 5 series v11-0001+0002 is around 7% faster than the master.\nI need to note that while v11-0001+0002 shows consistent performance\nimprovement over the master, its value can not be determined more\nprecisely than a couple of percents even with averaging. So I'd\nsuppose we may not conclude from the results if a more subtle\ndifference between v11-0001+0002 vs v12 (and master vs v11-0001)\nreally exists.\n\n2. 
Heap updates with high tuple concurrency:\nAll patches and master are still the same within a tolerance of\nless than 0.7%.\n\nOverall patch v11-0001+0002 doesn't show performance degradation so I\ndon't see why to apply only patch 0002 skipping 0001.\n\nRegards,\nPavel Borisov,\nSupabase.", "msg_date": "Thu, 2 Mar 2023 22:17:19 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Thu, Mar 2, 2023 at 9:17 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Thu, 2 Mar 2023 at 18:53, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Thu, Mar 2, 2023 at 1:29 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > > > Let's see the performance results for the patchset. I'll properly\n> > > > revise the comments if results will be good.\n> > > >\n> > > > Pavel, could you please re-run your tests over revised patchset?\n> > >\n> > > Since last time I've improved the test to avoid significant series\n> > > differences due to AWS storage access variation that is seen in [1].\n> > > I.e. each series of tests is run on a tmpfs with newly inited pgbench\n> > > tables and vacuum. Also, I've added a test for low-concurrency updates\n> > > where the locking optimization isn't expected to improve performance,\n> > > just to make sure the patches don't make things worse.\n> > >\n> > > The tests are as follows:\n> > > 1. Heap updates with high tuple concurrency:\n> > > Prepare without pkeys (pgbench -d postgres -i -I dtGv -s 10 --unlogged-tables)\n> > > Update tellers 100 rows, 50 conns ( pgbench postgres -f\n> > > ./update-only-tellers.sql -s 10 -P10 -M prepared -T 600 -j 5 -c 50 )\n> > >\n> > > Result: Average of 5 series with patches (0001+0002) is around 5%\n> > > faster than both master and patch 0001. 
Still, there are some\n> > > fluctuations between different series of the measurements of the same\n> > > patch, but much less than in [1]\n> >\n> > Thank you for running this that fast!\n> >\n> > So, it appears that 0001 patch has no effect. So, we probably should\n> > consider to drop 0001 patch and consider just 0002 patch.\n> >\n> > The attached patch v12 contains v11 0002 patch extracted separately.\n> > Please, add it to the performance comparison. Thanks.\n>\n> I've done a benchmarking on a full series of four variants: master vs\n> v11-0001 vs v11-0001+0002 vs v12 in the same configuration as in the\n> previous measurement. The results are as follows:\n>\n> 1. Heap updates with high tuple concurrency:\n> Average of 5 series v11-0001+0002 is around 7% faster than the master.\n> I need to note that while v11-0001+0002 shows consistent performance\n> improvement over the master, its value can not be determined more\n> precisely than a couple of percents even with averaging. So I'd\n> suppose we may not conclude from the results if a more subtle\n> difference between v11-0001+0002 vs v12 (and master vs v11-0001)\n> really exists.\n>\n> 2. Heap updates with high tuple concurrency:\n> All patches and master are still the same within a tolerance of\n> less than 0.7%.\n>\n> Overall patch v11-0001+0002 doesn't show performance degradation so I\n> don't see why to apply only patch 0002 skipping 0001.\n\nThank you, Pavel. So, it seems that we have substantial benefit only\nwith two patches. 
So, I'll continue working on both of them.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 2 Mar 2023 23:16:41 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Thu, Mar 2, 2023 at 11:16 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Thu, Mar 2, 2023 at 9:17 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > On Thu, 2 Mar 2023 at 18:53, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > On Thu, Mar 2, 2023 at 1:29 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > > > > Let's see the performance results for the patchset. I'll properly\n> > > > > revise the comments if results will be good.\n> > > > >\n> > > > > Pavel, could you please re-run your tests over revised patchset?\n> > > >\n> > > > Since last time I've improved the test to avoid significant series\n> > > > differences due to AWS storage access variation that is seen in [1].\n> > > > I.e. each series of tests is run on a tmpfs with newly inited pgbench\n> > > > tables and vacuum. Also, I've added a test for low-concurrency updates\n> > > > where the locking optimization isn't expected to improve performance,\n> > > > just to make sure the patches don't make things worse.\n> > > >\n> > > > The tests are as follows:\n> > > > 1. Heap updates with high tuple concurrency:\n> > > > Prepare without pkeys (pgbench -d postgres -i -I dtGv -s 10 --unlogged-tables)\n> > > > Update tellers 100 rows, 50 conns ( pgbench postgres -f\n> > > > ./update-only-tellers.sql -s 10 -P10 -M prepared -T 600 -j 5 -c 50 )\n> > > >\n> > > > Result: Average of 5 series with patches (0001+0002) is around 5%\n> > > > faster than both master and patch 0001. 
Still, there are some\n> > > > fluctuations between different series of the measurements of the same\n> > > > patch, but much less than in [1]\n> > >\n> > > Thank you for running this that fast!\n> > >\n> > > So, it appears that 0001 patch has no effect. So, we probably should\n> > > consider to drop 0001 patch and consider just 0002 patch.\n> > >\n> > > The attached patch v12 contains v11 0002 patch extracted separately.\n> > > Please, add it to the performance comparison. Thanks.\n> >\n> > I've done a benchmarking on a full series of four variants: master vs\n> > v11-0001 vs v11-0001+0002 vs v12 in the same configuration as in the\n> > previous measurement. The results are as follows:\n> >\n> > 1. Heap updates with high tuple concurrency:\n> > Average of 5 series v11-0001+0002 is around 7% faster than the master.\n> > I need to note that while v11-0001+0002 shows consistent performance\n> > improvement over the master, its value can not be determined more\n> > precisely than a couple of percents even with averaging. So I'd\n> > suppose we may not conclude from the results if a more subtle\n> > difference between v11-0001+0002 vs v12 (and master vs v11-0001)\n> > really exists.\n> >\n> > 2. Heap updates with high tuple concurrency:\n> > All patches and master are still the same within a tolerance of\n> > less than 0.7%.\n> >\n> > Overall patch v11-0001+0002 doesn't show performance degradation so I\n> > don't see why to apply only patch 0002 skipping 0001.\n>\n> Thank you, Pavel. So, it seems that we have substantial benefit only\n> with two patches. So, I'll continue working on both of them.\n\nThe revised patchset is attached. The patch removing extra\ntable_tuple_fetch_row_version() is back. The second patch now\nimplements a concept of LazyTupleTableSlot, a slot which gets\nallocated only when needed. 
Also, there is more minor refactoring and\nmore comments.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 7 Mar 2023 04:45:32 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi,\n\nOn 2023-03-02 14:28:56 +0400, Pavel Borisov wrote:\n> 2. Heap updates with low tuple concurrency:\n> Prepare with pkeys (pgbench -d postgres -i -I dtGvp -s 300 --unlogged-tables)\n> Update 3*10^7 rows, 50 conns (pgbench postgres -f\n> ./update-only-account.sql -s 300 -P10 -M prepared -T 600 -j 5 -c 50)\n> \n> Result: Both patches and master are the same within a tolerance of\n> less than 0.7%.\n\nWhat exactly does that mean? I would definitely not want to accept a 0.7%\nregression of the uncontended case to benefit the contended case here...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Mar 2023 17:50:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Tue, Mar 7, 2023 at 4:50 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-03-02 14:28:56 +0400, Pavel Borisov wrote:\n> > 2. Heap updates with low tuple concurrency:\n> > Prepare with pkeys (pgbench -d postgres -i -I dtGvp -s 300 --unlogged-tables)\n> > Update 3*10^7 rows, 50 conns (pgbench postgres -f\n> > ./update-only-account.sql -s 300 -P10 -M prepared -T 600 -j 5 -c 50)\n> >\n> > Result: Both patches and master are the same within a tolerance of\n> > less than 0.7%.\n>\n> What exactly does that mean? 
I would definitely not want to accept a 0.7%\n> regression of the uncontended case to benefit the contended case here...\n\nI don't know what exactly Pavel meant, but average overall numbers for\nlow concurrency are.\nmaster: 420401 (stddev of average 233)\npatchset v11: 420111 (stddev of average 199)\nThe difference is less than 0.1% and that is very safely within the error.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 7 Mar 2023 05:09:56 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Andres and Alexander!\n\nOn Tue, 7 Mar 2023, 10:10 Alexander Korotkov, <aekorotkov@gmail.com> wrote:\n\n> On Tue, Mar 7, 2023 at 4:50 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-03-02 14:28:56 +0400, Pavel Borisov wrote:\n> > > 2. Heap updates with low tuple concurrency:\n> > > Prepare with pkeys (pgbench -d postgres -i -I dtGvp -s 300\n> --unlogged-tables)\n> > > Update 3*10^7 rows, 50 conns (pgbench postgres -f\n> > > ./update-only-account.sql -s 300 -P10 -M prepared -T 600 -j 5 -c 50)\n> > >\n> > > Result: Both patches and master are the same within a tolerance of\n> > > less than 0.7%.\n> >\n> > What exactly does that mean? I would definitely not want to accept a 0.7%\n> > regression of the uncontended case to benefit the contended case here...\n>\n> I don't know what exactly Pavel meant, but average overall numbers for\n> low concurrency are.\n> master: 420401 (stddev of average 233)\n> patchset v11: 420111 (stddev of average 199)\n> The difference is less than 0.1% and that is very safely within the error.\n>\n\nYes, the only thing that I meant is that for low-concurrency case the\nresults between patch and master are within the difference between repeated\nseries of measurements. 
So I concluded that the test can not prove any\ndifference between patch and master.\n\nI haven't meant or written there is some performance degradation.\n\nAlexander, I suppose did an extra step and calculated overall average and\nstddev, from raw data provided. Thanks!\n\nRegards,\nPavel.
", "msg_date": "Wed, 8 Mar 2023 00:26:19 +0800", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Tue, Mar 7, 2023 at 7:26 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Tue, 7 Mar 2023, 10:10 Alexander Korotkov, <aekorotkov@gmail.com> wrote:\n>> I don't know what exactly Pavel meant, but average overall numbers for\n>> low concurrency are.\n>> master: 420401 (stddev of average 233)\n>> patchset v11: 420111 (stddev of average 199)\n>> The difference is less than 0.1% and that is very safely within the error.\n>\n>\n> Yes, the only thing that I meant is that for low-concurrency case the results between patch and master are within the difference between repeated series of measurements. So I concluded that the test can not prove any difference between patch and master.\n>\n> I haven't meant or written there is some performance degradation.\n>\n> Alexander, I suppose did an extra step and calculated overall average and stddev, from raw data provided. Thanks!\n\nPavel, thank you for verifying this.\n\nCould you, please, rerun performance benchmarks for the v13? It\nintroduces LazyTupleTableSlot, which shouldn't have any measurable\nimpact on performance. But still.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 8 Mar 2023 02:17:21 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi,\n\nOn 2023-03-07 04:45:32 +0300, Alexander Korotkov wrote:\n> The second patch now implements a concept of LazyTupleTableSlot, a slot\n> which gets allocated only when needed. Also, there is more minor\n> refactoring and more comments.\n\nThis patch already is pretty big for what it actually improves. 
Introducing\neven infrastructure to get a not that big win, in a not particularly\ninteresting, extreme, workload...\n\nWhat is motivating this?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Mar 2023 17:21:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Wed, Mar 8, 2023 at 4:22 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-03-07 04:45:32 +0300, Alexander Korotkov wrote:\n> > The second patch now implements a concept of LazyTupleTableSlot, a slot\n> > which gets allocated only when needed. Also, there is more minor\n> > refactoring and more comments.\n>\n> This patch already is pretty big for what it actually improves. Introducing\n> even infrastructure to get a not that big win, in a not particularly\n> interesting, extreme, workload...\n\nIt's true that the win isn't dramatic. But I can't agree that the workload\nisn't interesting. In my experience, high contention over a limited set\nof rows is something that frequently happens in production. I\npersonally took part in multiple investigations over such workloads.\n\n> What is motivating this?\n\nRight, the improvement this patch gives to heap is not the full\nmotivation. Another motivation is the improvement it gives to the TableAM\nAPI. Our current API implies that the effort on locating the tuple by\ntid is small. This is more or less true for heap, where we just need\nto pin and lock the buffer. But imagine other TableAM\nimplementations, where locating a tuple is more expensive. The current\nAPI insists that we do that twice: once on the update attempt and once on\nthe lock. Doing that in a single call could give such TableAMs significant\nsavings (but even for heap it's something). I'm working on such a TableAM: it's\nOrioleDB, which implements index-organized tables. 
And I know there\nare other examples (for instance, zedstore), where TID lookup includes\nsome indirection.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 10 Mar 2023 11:47:56 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "\"Right, the improvement this patch gives to the heap is not the full motivation. Another motivation is the improvement it gives to TableAM API. Our current API implies that the effort on locating the tuple by tid is small. This is more or less true for the heap, where we just need to pin and lock the buffer. But imagine other TableAM implementations, where locating a tuple is more expensive.\"\r\n\r\nYeah. Our TableAM API is a very nice start to getting pluggable storage, but we still have a long ways to go to have an ability to really provide a wide variety of pluggable storage engines.\r\n\r\nIn particular, the following approaches are likely to have much more expensive tid lookups:\r\n - columnar storage (may require a lot of random IO to reconstruct a tuple)\r\n - index oriented storage (tid no longer physically locatable in the file via seek)\r\n - compressed cold storage like pg_cryogen (again seek may be problematic).\r\n\r\nTo my mind I think the performance benefits are a nice side benefit, but the main interest I have on this is regarding improvements in the TableAM capabilities. I cannot see how to do this without a lot more infrastructure.", "msg_date": "Fri, 10 Mar 2023 17:16:40 +0000", "msg_from": "Chris Travers <chris.travers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Fri, Mar 10, 2023 at 8:17 PM Chris Travers <chris.travers@gmail.com> wrote:\n> \"Right, the improvement this patch gives to the heap is not the full motivation. 
Another motivation is the improvement it gives to TableAM API. Our current API implies that the effort on locating the tuple by tid is small. This is more or less true for the heap, where we just need to pin and lock the buffer. But imagine other TableAM implementations, where locating a tuple is more expensive.\"\n>\n> Yeah. Our TableAM API is a very nice start to getting pluggable storage, but we still have a long ways to go to have an ability to really provide a wide variety of pluggable storage engines.\n>\n> In particular, the following approaches are likely to have much more expensive tid lookups:\n> - columnar storage (may require a lot of random IO to reconstruct a tuple)\n> - index oriented storage (tid no longer physically locatable in the file via seek)\n> - compressed cold storage like pg_ctyogen (again seek may be problematic).\n>\n> To my mind I think the performance benefits are a nice side benefit, but the main interest I have on this is regarding improvements in the TableAM capabilities. I cannot see how to do this without a lot more infrastructure.\n\nChris, thank you for your feedback.\n\nThe revised patch set is attached. Some comments are improved. Also,\nwe implicitly skip the new facility for the MERGE case. As I get Dean\nRasheed is going to revise the locking for MERGE soon [1].\n\nPavel, could you please re-run your test case on the revised patch?\n\n1. https://www.postgresql.org/message-id/CAEZATCU9e9Ccbi70yNbCcF7xvZ+zrjiD0_6eEq2zEZA1p+707A@mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 12 Mar 2023 19:05:47 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi!\n\nOn Sun, Mar 12, 2023 at 7:05 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> The revised patch set is attached. Some comments are improved. Also,\n> we implicitly skip the new facility for the MERGE case. 
As I get Dean\n> Rasheed is going to revise the locking for MERGE soon [1].\n>\n> Pavel, could you please re-run your test case on the revised patch?\n\nI found the experiments made by Pavel [1] hard to reproduce due to the\nhigh variation of the TPS. Instead, I constructed a different\nbenchmark, which includes multiple updates (40 rows) in one query, and\nran it on c5d.18xlarge. That produces stable performance results as\nwell as measurable performance benefits of the patch.\n\nI found that patchsets v11 and v14 show no performance\nimprovements over v10. v10 is also much less invasive for\nheap-related code. This is why I made v15 using the v10 approach,\nporting LazyTupleTableSlot and the improved comments there. I think this\nshould address some of Andres's complaints regarding introducing too\nmuch infrastructure [2].\n\nThe average results for high concurrency case (errors are given for a\n95% confidence level) are given below. We can see that v15 gives a\nmeasurable performance improvement.\n\nmaster = 40084 +- 447 tps\npatchset v10 = 41761 +- 1117 tps\npatchset v11 = 41473 +- 773 tps\npatchset v14 = 40966 +- 1008 tps\npatchset v15 = 42855 +- 977 tps\n\nThe average results for low concurrency case (errors are given for a\n95% confidence level) are given below. It verifies that the patch\nintroduces no overhead in the low concurrency case.\n\nmaster = 50626 +- 784 tps\npatchset v15 = 51297 +- 876 tps\n\nSee attachments for raw experiment data and scripts.\n\nSo, as we can see, the patch gives a small performance improvement for the\nheap in the edge high-concurrency case. But also it improves table AM API\nfor future use cases [3][4].\n\nI'm going to push patchset v15 if no objections.\n\nLinks\n1. https://www.postgresql.org/message-id/CALT9ZEHKdCF_jCoK2ErUuUtCuYPf82%2BZr1XE5URzneSFxz3zqA%40mail.gmail.com\n2. 
https://www.postgresql.org/message-id/CAPpHfdu1dqqcTz9V9iG-ZRewYAFL2VhizwfiN5SW%3DZ%2B1rj99-g%40mail.gmail.com\n4. https://www.postgresql.org/message-id/167846860062.628976.2440696515718158538.pgcf%40coridan.postgresql.org\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 21 Mar 2023 01:25:11 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi,\n\nOn 2023-03-21 01:25:11 +0300, Alexander Korotkov wrote:\n> I'm going to push patchset v15 if no objections.\n\nJust saw that this went in - didn't catch up with the thread before,\nunfortunately. At the very least I'd like to see some more work on cleaning up\nthe lazy tuple slot stuff. It's replete with unnecessary multiple-evaluation\nhazards - I realize that there's some of those already, but I don't think we\nshould go further down that route. As far as I can tell there's no need for\nany of this to be macros.\n\n\n> From a8b4e8a7b27815e013ea07b8cc9ac68541a9ac07 Mon Sep 17 00:00:00 2001\n> From: Alexander Korotkov <akorotkov@postgresql.org>\n> Date: Tue, 21 Mar 2023 00:34:15 +0300\n> Subject: [PATCH 1/2] Evade extra table_tuple_fetch_row_version() in\n> ExecUpdate()/ExecDelete()\n>\n> When we lock tuple using table_tuple_lock() then we at the same time fetch\n> the locked tuple to the slot. 
In this case we can skip extra\n> table_tuple_fetch_row_version() thank to we've already fetched the 'old' tuple\n> and nobody can change it concurrently since it's locked.\n>\n> Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com\n> Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp\n> Reviewed-by: Andres Freund, Chris Travers\n> ---\n> src/backend/executor/nodeModifyTable.c | 48 +++++++++++++++++++-------\n> 1 file changed, 35 insertions(+), 13 deletions(-)\n>\n> diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c\n> index 3a673895082..93ebfdbb0d8 100644\n> --- a/src/backend/executor/nodeModifyTable.c\n> +++ b/src/backend/executor/nodeModifyTable.c\n> @@ -1559,6 +1559,22 @@ ldelete:\n> \t\t\t\t\t{\n> \t\t\t\t\t\tcase TM_Ok:\n> \t\t\t\t\t\t\tAssert(context->tmfd.traversed);\n> +\n> +\t\t\t\t\t\t\t/*\n> +\t\t\t\t\t\t\t * Save locked tuple for further processing of\n> +\t\t\t\t\t\t\t * RETURNING clause.\n> +\t\t\t\t\t\t\t */\n> +\t\t\t\t\t\t\tif (processReturning &&\n> +\t\t\t\t\t\t\t\tresultRelInfo->ri_projectReturning &&\n> +\t\t\t\t\t\t\t\t!resultRelInfo->ri_FdwRoutine)\n> +\t\t\t\t\t\t\t{\n> +\t\t\t\t\t\t\t\tTupleTableSlot *returningSlot;\n> +\n> +\t\t\t\t\t\t\t\treturningSlot = ExecGetReturningSlot(estate, resultRelInfo);\n> +\t\t\t\t\t\t\t\tExecCopySlot(returningSlot, inputslot);\n> +\t\t\t\t\t\t\t\tExecMaterializeSlot(returningSlot);\n> +\t\t\t\t\t\t\t}\n> +\n> \t\t\t\t\t\t\tepqslot = EvalPlanQual(context->epqstate,\n> \t\t\t\t\t\t\t\t\t\t\t\t resultRelationDesc,\n> \t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo->ri_RangeTableIndex,\n\nThis seems a bit byzantine. We use inputslot = EvalPlanQualSlot(...) to make\nEvalPlanQual() a bit cheaper, because that avoids a slot copy inside\nEvalPlanQual(). But now we copy and materialize that slot anyway - and we do\nso even if EPQ fails. 
And we afaics also do it when epqreturnslot is set, in\nwhich case we'll afaics never use the copied slot.\n\nRead the next paragraph below before replying to the above - I don't think\nthis is right for other reasons:\n\n> @@ -1673,12 +1689,17 @@ ldelete:\n> \t\t}\n> \t\telse\n> \t\t{\n> +\t\t\t/*\n> +\t\t\t * Tuple can be already fetched to the returning slot in case\n> +\t\t\t * we've previously locked it. Fetch the tuple only if the slot\n> +\t\t\t * is empty.\n> +\t\t\t */\n> \t\t\tslot = ExecGetReturningSlot(estate, resultRelInfo);\n> \t\t\tif (oldtuple != NULL)\n> \t\t\t{\n> \t\t\t\tExecForceStoreHeapTuple(oldtuple, slot, false);\n> \t\t\t}\n> -\t\t\telse\n> +\t\t\telse if (TupIsNull(slot))\n> \t\t\t{\n> \t\t\t\tif (!table_tuple_fetch_row_version(resultRelationDesc, tupleid,\n> \t\t\t\t\t\t\t\t\t\t\t\t SnapshotAny, slot))\n\n\nI don't think this is correct as-is - what if ExecDelete() is called with some\nolder tuple in the returning slot? If we don't enter the TM_Updated path, it\nwon't get updated, and we'll return the wrong tuple. It certainly looks\npossible to me - consider what happens if a first tuple enter the TM_Updated\npath but then fails EvalPlanQual(). If a second tuple is deleted without\nentering the TM_Updated path, the wrong tuple will be used for RETURNING.\n\n<plays around with isolationtester>\n\nYes, indeed. The attached isolationtest breaks with 764da7710bf.\n\n\nI think it's entirely sensible to avoid the tuple fetching in ExecDelete(),\nbut it needs a bit less localized work. 
Instead of using the presence of a\ntuple in the returning slot, ExecDelete() should track whether it already has\nfetched the deleted tuple.\n\nOr alternatively, do the work to avoid refetching the tuple for the much more\ncommon case of not needing EPQ at all.\n\nI guess this really is part of my issue with this change - it optimizes the\nrare case, while not addressing the same inefficiency in the common case.\n\n\n\n> @@ -299,14 +305,46 @@ heapam_tuple_complete_speculative(Relation relation, TupleTableSlot *slot,\n> static TM_Result\n> heapam_tuple_delete(Relation relation, ItemPointer tid, CommandId cid,\n> \t\t\t\t\tSnapshot snapshot, Snapshot crosscheck, bool wait,\n> -\t\t\t\t\tTM_FailureData *tmfd, bool changingPart)\n> +\t\t\t\t\tTM_FailureData *tmfd, bool changingPart,\n> +\t\t\t\t\tLazyTupleTableSlot *lockedSlot)\n> {\n> +\tTM_Result\tresult;\n> +\n> \t/*\n> \t * Currently Deleting of index tuples are handled at vacuum, in case if\n> \t * the storage itself is cleaning the dead tuples by itself, it is the\n> \t * time to call the index tuple deletion also.\n> \t */\n> -\treturn heap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);\n> +\tresult = heap_delete(relation, tid, cid, crosscheck, wait,\n> +\t\t\t\t\t\t tmfd, changingPart);\n> +\n> +\t/*\n> +\t * If the tuple has been concurrently updated, then get the lock on it.\n> +\t * (Do this if caller asked for tat by providing a 'lockedSlot'.) 
With the\n> +\t * lock held retry of delete should succeed even if there are more\n> +\t * concurrent update attempts.\n> +\t */\n> +\tif (result == TM_Updated && lockedSlot)\n> +\t{\n> +\t\tTupleTableSlot *evalSlot;\n> +\n> +\t\tAssert(wait);\n> +\n> +\t\tevalSlot = LAZY_TTS_EVAL(lockedSlot);\n> +\t\tresult = heapam_tuple_lock_internal(relation, tid, snapshot,\n> +\t\t\t\t\t\t\t\t\t\t\tevalSlot, cid, LockTupleExclusive,\n> +\t\t\t\t\t\t\t\t\t\t\tLockWaitBlock,\n> +\t\t\t\t\t\t\t\t\t\t\tTUPLE_LOCK_FLAG_FIND_LAST_VERSION,\n> +\t\t\t\t\t\t\t\t\t\t\ttmfd, true);\n\nOh, huh? As I mentioned before, the unconditional use of LockWaitBlock means\nthe wait parameter is ignored.\n\nI'm frankly getting annoyed here.\n\n\n> +/*\n> + * This routine does the work for heapam_tuple_lock(), but also support\n> + * `updated` argument to re-use the work done by heapam_tuple_update() or\n> + * heapam_tuple_delete() on figuring out that tuple was concurrently updated.\n> + */\n> +static TM_Result\n> +heapam_tuple_lock_internal(Relation relation, ItemPointer tid,\n> +\t\t\t\t\t\t Snapshot snapshot, TupleTableSlot *slot,\n> +\t\t\t\t\t\t CommandId cid, LockTupleMode mode,\n> +\t\t\t\t\t\t LockWaitPolicy wait_policy, uint8 flags,\n> +\t\t\t\t\t\t TM_FailureData *tmfd, bool updated)\n\nWhy is the new parameter named 'updated'?\n\n\n> {\n> \tBufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;\n> \tTM_Result\tresult;\n> -\tBuffer\t\tbuffer;\n> +\tBuffer\t\tbuffer = InvalidBuffer;\n> \tHeapTuple\ttuple = &bslot->base.tupdata;\n> \tbool\t\tfollow_updates;\n>\n> @@ -374,16 +455,26 @@ heapam_tuple_lock(Relation relation, ItemPointer tid, Snapshot snapshot,\n>\n> tuple_lock_retry:\n> \ttuple->t_self = *tid;\n> -\tresult = heap_lock_tuple(relation, tuple, cid, mode, wait_policy,\n> -\t\t\t\t\t\t\t follow_updates, &buffer, tmfd);\n> +\tif (!updated)\n> +\t\tresult = heap_lock_tuple(relation, tuple, cid, mode, wait_policy,\n> +\t\t\t\t\t\t\t\t follow_updates, &buffer, tmfd);\n> 
+\telse\n> +\t\tresult = TM_Updated;\n>\n> \tif (result == TM_Updated &&\n> \t\t(flags & TUPLE_LOCK_FLAG_FIND_LAST_VERSION))\n> \t{\n> -\t\t/* Should not encounter speculative tuple on recheck */\n> -\t\tAssert(!HeapTupleHeaderIsSpeculative(tuple->t_data));\n> +\t\tif (!updated)\n> +\t\t{\n> +\t\t\t/* Should not encounter speculative tuple on recheck */\n> +\t\t\tAssert(!HeapTupleHeaderIsSpeculative(tuple->t_data));\n\nHm, why is it ok to encounter speculative tuples in the updated case? Oh, I\nguess you got failures because slot doesn't point anywhere at this point.\n\n\n> -\t\tReleaseBuffer(buffer);\n> +\t\t\tReleaseBuffer(buffer);\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tupdated = false;\n> +\t\t}\n>\n> \t\tif (!ItemPointerEquals(&tmfd->ctid, &tuple->t_self))\n> \t\t{\n\nWhich means this is completely bogus now?\n\n\tHeapTuple\ttuple = &bslot->base.tupdata;\n\nIn the first iteration this just points to the newly created slot. Which\ndoesn't have a tuple stored in it. So the above checks some uninitialized\nmemory.\n\n\nGiving up at this point.\n\n\nThis doesn't seem ready to have been committed.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 22 Mar 2023 17:30:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi!\n\nOn Thu, Mar 23, 2023 at 3:30 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-03-21 01:25:11 +0300, Alexander Korotkov wrote:\n> > I'm going to push patchset v15 if no objections.\n>\n> Just saw that this went in - didn't catch up with the thread before,\n> unfortunately. At the very least I'd like to see some more work on cleaning up\n> the lazy tuple slot stuff. It's replete with unnecessary multiple-evaluation\n> hazards - I realize that there's some of those already, but I don't think we\n> should go further down that route. 
As far as I can tell there's no need for\n> any of this to be macros.\n\nThank you for taking a look at this, even post-commit. Regarding\nmarcos, do you think inline functions would be good instead?\n\n> > From a8b4e8a7b27815e013ea07b8cc9ac68541a9ac07 Mon Sep 17 00:00:00 2001\n> > From: Alexander Korotkov <akorotkov@postgresql.org>\n> > Date: Tue, 21 Mar 2023 00:34:15 +0300\n> > Subject: [PATCH 1/2] Evade extra table_tuple_fetch_row_version() in\n> > ExecUpdate()/ExecDelete()\n> >\n> > When we lock tuple using table_tuple_lock() then we at the same time fetch\n> > the locked tuple to the slot. In this case we can skip extra\n> > table_tuple_fetch_row_version() thank to we've already fetched the 'old' tuple\n> > and nobody can change it concurrently since it's locked.\n> >\n> > Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com\n> > Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp\n> > Reviewed-by: Andres Freund, Chris Travers\n> > ---\n> > src/backend/executor/nodeModifyTable.c | 48 +++++++++++++++++++-------\n> > 1 file changed, 35 insertions(+), 13 deletions(-)\n> >\n> > diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c\n> > index 3a673895082..93ebfdbb0d8 100644\n> > --- a/src/backend/executor/nodeModifyTable.c\n> > +++ b/src/backend/executor/nodeModifyTable.c\n> > @@ -1559,6 +1559,22 @@ ldelete:\n> > {\n> > case TM_Ok:\n> > Assert(context->tmfd.traversed);\n> > +\n> > + /*\n> > + * Save locked tuple for further processing of\n> > + * RETURNING clause.\n> > + */\n> > + if (processReturning &&\n> > + resultRelInfo->ri_projectReturning &&\n> > + !resultRelInfo->ri_FdwRoutine)\n> > + {\n> > + TupleTableSlot *returningSlot;\n> > +\n> > + returningSlot = ExecGetReturningSlot(estate, resultRelInfo);\n> > + ExecCopySlot(returningSlot, inputslot);\n> > + ExecMaterializeSlot(returningSlot);\n> > + }\n> > +\n> > epqslot = EvalPlanQual(context->epqstate,\n> > 
resultRelationDesc,\n> > resultRelInfo->ri_RangeTableIndex,\n>\n> This seems a bit byzantine. We use inputslot = EvalPlanQualSlot(...) to make\n> EvalPlanQual() a bit cheaper, because that avoids a slot copy inside\n> EvalPlanQual(). But now we copy and materialize that slot anyway - and we do\n> so even if EPQ fails. And we afaics also do it when epqreturnslot is set, in\n> which case we'll afaics never use the copied slot.\n\nYes, I agree that there is a redundancy we could avoid.\n\n> Read the next paragraph below before replying to the above - I don't think\n> this is right for other reasons:\n>\n> > @@ -1673,12 +1689,17 @@ ldelete:\n> > }\n> > else\n> > {\n> > + /*\n> > + * Tuple can be already fetched to the returning slot in case\n> > + * we've previously locked it. Fetch the tuple only if the slot\n> > + * is empty.\n> > + */\n> > slot = ExecGetReturningSlot(estate, resultRelInfo);\n> > if (oldtuple != NULL)\n> > {\n> > ExecForceStoreHeapTuple(oldtuple, slot, false);\n> > }\n> > - else\n> > + else if (TupIsNull(slot))\n> > {\n> > if (!table_tuple_fetch_row_version(resultRelationDesc, tupleid,\n> > SnapshotAny, slot))\n>\n>\n> I don't think this is correct as-is - what if ExecDelete() is called with some\n> older tuple in the returning slot? If we don't enter the TM_Updated path, it\n> won't get updated, and we'll return the wrong tuple. It certainly looks\n> possible to me - consider what happens if a first tuple enter the TM_Updated\n> path but then fails EvalPlanQual(). If a second tuple is deleted without\n> entering the TM_Updated path, the wrong tuple will be used for RETURNING.\n>\n> <plays around with isolationtester>\n>\n> Yes, indeed. The attached isolationtest breaks with 764da7710bf.\n\nThank you for cathing this! This is definitely a bug.\n\n> I think it's entirely sensible to avoid the tuple fetching in ExecDelete(),\n> but it needs a bit less localized work. 
Instead of using the presence of a\n> tuple in the returning slot, ExecDelete() should track whether it already has\n> fetched the deleted tuple.\n>\n> Or alternatively, do the work to avoid refetching the tuple for the much more\n> common case of not needing EPQ at all.\n>\n> I guess this really is part of my issue with this change - it optimizes the\n> rare case, while not addressing the same inefficiency in the common case.\n\nI'm going to fix this for ExecDelete(). Avoiding refetching the tuple\nin more common case is something I'm definitely very interested in.\nBut I would leave it for the future.\n\n> > @@ -299,14 +305,46 @@ heapam_tuple_complete_speculative(Relation relation, TupleTableSlot *slot,\n> > static TM_Result\n> > heapam_tuple_delete(Relation relation, ItemPointer tid, CommandId cid,\n> > Snapshot snapshot, Snapshot crosscheck, bool wait,\n> > - TM_FailureData *tmfd, bool changingPart)\n> > + TM_FailureData *tmfd, bool changingPart,\n> > + LazyTupleTableSlot *lockedSlot)\n> > {\n> > + TM_Result result;\n> > +\n> > /*\n> > * Currently Deleting of index tuples are handled at vacuum, in case if\n> > * the storage itself is cleaning the dead tuples by itself, it is the\n> > * time to call the index tuple deletion also.\n> > */\n> > - return heap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);\n> > + result = heap_delete(relation, tid, cid, crosscheck, wait,\n> > + tmfd, changingPart);\n> > +\n> > + /*\n> > + * If the tuple has been concurrently updated, then get the lock on it.\n> > + * (Do this if caller asked for tat by providing a 'lockedSlot'.) 
With the\n> > + * lock held retry of delete should succeed even if there are more\n> > + * concurrent update attempts.\n> > + */\n> > + if (result == TM_Updated && lockedSlot)\n> > + {\n> > + TupleTableSlot *evalSlot;\n> > +\n> > + Assert(wait);\n> > +\n> > + evalSlot = LAZY_TTS_EVAL(lockedSlot);\n> > + result = heapam_tuple_lock_internal(relation, tid, snapshot,\n> > + evalSlot, cid, LockTupleExclusive,\n> > + LockWaitBlock,\n> > + TUPLE_LOCK_FLAG_FIND_LAST_VERSION,\n> > + tmfd, true);\n>\n> Oh, huh? As I mentioned before, the unconditional use of LockWaitBlock means\n> the wait parameter is ignored.\n>\n> I'm frankly getting annoyed here.\n\nlockedSlot shoudln't be provided when wait == false. The assertion\nabove expresses this intention. However, the code lacking of comment\ndirectly expressing this idea.\n\nAnd sorry for getting you annoyed. The relevant comment should be\nalready there.\n\n> > +/*\n> > + * This routine does the work for heapam_tuple_lock(), but also support\n> > + * `updated` argument to re-use the work done by heapam_tuple_update() or\n> > + * heapam_tuple_delete() on figuring out that tuple was concurrently updated.\n> > + */\n> > +static TM_Result\n> > +heapam_tuple_lock_internal(Relation relation, ItemPointer tid,\n> > + Snapshot snapshot, TupleTableSlot *slot,\n> > + CommandId cid, LockTupleMode mode,\n> > + LockWaitPolicy wait_policy, uint8 flags,\n> > + TM_FailureData *tmfd, bool updated)\n>\n> Why is the new parameter named 'updated'?\n\nTo indicate that we know that we're locking the updated tuple.\nProbably not descriptive enough.\n\n> > {\n> > BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;\n> > TM_Result result;\n> > - Buffer buffer;\n> > + Buffer buffer = InvalidBuffer;\n> > HeapTuple tuple = &bslot->base.tupdata;\n> > bool follow_updates;\n> >\n> > @@ -374,16 +455,26 @@ heapam_tuple_lock(Relation relation, ItemPointer tid, Snapshot snapshot,\n> >\n> > tuple_lock_retry:\n> > tuple->t_self = *tid;\n> > - 
result = heap_lock_tuple(relation, tuple, cid, mode, wait_policy,\n> > - follow_updates, &buffer, tmfd);\n> > + if (!updated)\n> > + result = heap_lock_tuple(relation, tuple, cid, mode, wait_policy,\n> > + follow_updates, &buffer, tmfd);\n> > + else\n> > + result = TM_Updated;\n> >\n> > if (result == TM_Updated &&\n> > (flags & TUPLE_LOCK_FLAG_FIND_LAST_VERSION))\n> > {\n> > - /* Should not encounter speculative tuple on recheck */\n> > - Assert(!HeapTupleHeaderIsSpeculative(tuple->t_data));\n> > + if (!updated)\n> > + {\n> > + /* Should not encounter speculative tuple on recheck */\n> > + Assert(!HeapTupleHeaderIsSpeculative(tuple->t_data));\n>\n> Hm, why is it ok to encounter speculative tuples in the updated case? Oh, I\n> guess you got failures because slot doesn't point anywhere at this point.\n\nYes, given that tuple is not accessible, this assert can't work. I've\na couple ideas on how to replace it.\n1) As I get the primary point of of this assertion is to be sure that\ntmfd->ctid really points us to a correct tuple (while speculative\ntoken doesn't do). So, for the 'updated' case we can check tmfd->ctid\ndirectly (or even for both cases?). However, I can't find the\nrelevant macro for this, probably\n2) We can check that we don't return TM_Updated from heap_update() and\nheap_delete(), when old tuple is speculative token.\n\n> > - ReleaseBuffer(buffer);\n> > + ReleaseBuffer(buffer);\n> > + }\n> > + else\n> > + {\n> > + updated = false;\n> > + }\n> >\n> > if (!ItemPointerEquals(&tmfd->ctid, &tuple->t_self))\n> > {\n>\n> Which means this is completely bogus now?\n>\n> HeapTuple tuple = &bslot->base.tupdata;\n>\n> In the first iteration this just points to the newly created slot. Which\n> doesn't have a tuple stored in it. So the above checks some uninitialized\n> memory.\n\nNo, this is not so. tuple->t_self is unconditionally assigned few\nlines before. 
So, it can't be uninitialized memory given that *tid is\ninitialized.\n\nI seriously doubt this patch could pass the tests if that comparison\nused uninitialized memory.\n\n> Giving up at this point.\n>\n>\n> This doesn't seem ready to have been committed.\n\nYep, that could be better. Given that the item pointer comparison\ndoesn't really use uninitialized memory, it's probably not as bad as you\nthought at first glance.\n\nI'm going to post a patch to address the issues you've raised in the next 24 hours.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 23 Mar 2023 18:08:36 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi,\n\nOn 2023-03-23 18:08:36 +0300, Alexander Korotkov wrote:\n> On Thu, Mar 23, 2023 at 3:30 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-03-21 01:25:11 +0300, Alexander Korotkov wrote:\n> > > I'm going to push patchset v15 if no objections.\n> >\n> > Just saw that this went in - didn't catch up with the thread before,\n> > unfortunately. At the very least I'd like to see some more work on cleaning up\n> > the lazy tuple slot stuff. It's replete with unnecessary multiple-evaluation\n> > hazards - I realize that there's some of those already, but I don't think we\n> > should go further down that route. 
Instead of using the presence of a\n> > tuple in the returning slot, ExecDelete() should track whether it already has\n> > fetched the deleted tuple.\n> >\n> > Or alternatively, do the work to avoid refetching the tuple for the much more\n> > common case of not needing EPQ at all.\n> >\n> > I guess this really is part of my issue with this change - it optimizes the\n> > rare case, while not addressing the same inefficiency in the common case.\n> \n> I'm going to fix this for ExecDelete(). Avoiding refetching the tuple\n> in more common case is something I'm definitely very interested in.\n> But I would leave it for the future.\n\nIt doesn't seem like a good plan to start with the rare and then address the\ncommon case. The solution for the common case might solve the rare case as\nwell. One way to make to fix the common case would be to return a tuple\nsuitable for returning computation as part of the input plan - which would\nalso fix the EPQ case, since we could just use the EPQ output. 
Of course there\nare complications like triggers, but they seem like they could be dealt with.\n\n\n> > > @@ -299,14 +305,46 @@ heapam_tuple_complete_speculative(Relation relation, TupleTableSlot *slot,\n> > > static TM_Result\n> > > heapam_tuple_delete(Relation relation, ItemPointer tid, CommandId cid,\n> > > Snapshot snapshot, Snapshot crosscheck, bool wait,\n> > > - TM_FailureData *tmfd, bool changingPart)\n> > > + TM_FailureData *tmfd, bool changingPart,\n> > > + LazyTupleTableSlot *lockedSlot)\n> > > {\n> > > + TM_Result result;\n> > > +\n> > > /*\n> > > * Currently Deleting of index tuples are handled at vacuum, in case if\n> > > * the storage itself is cleaning the dead tuples by itself, it is the\n> > > * time to call the index tuple deletion also.\n> > > */\n> > > - return heap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);\n> > > + result = heap_delete(relation, tid, cid, crosscheck, wait,\n> > > + tmfd, changingPart);\n> > > +\n> > > + /*\n> > > + * If the tuple has been concurrently updated, then get the lock on it.\n> > > + * (Do this if caller asked for tat by providing a 'lockedSlot'.) With the\n> > > + * lock held retry of delete should succeed even if there are more\n> > > + * concurrent update attempts.\n> > > + */\n> > > + if (result == TM_Updated && lockedSlot)\n> > > + {\n> > > + TupleTableSlot *evalSlot;\n> > > +\n> > > + Assert(wait);\n> > > +\n> > > + evalSlot = LAZY_TTS_EVAL(lockedSlot);\n> > > + result = heapam_tuple_lock_internal(relation, tid, snapshot,\n> > > + evalSlot, cid, LockTupleExclusive,\n> > > + LockWaitBlock,\n> > > + TUPLE_LOCK_FLAG_FIND_LAST_VERSION,\n> > > + tmfd, true);\n> >\n> > Oh, huh? As I mentioned before, the unconditional use of LockWaitBlock means\n> > the wait parameter is ignored.\n> >\n> > I'm frankly getting annoyed here.\n> \n> lockedSlot shoudln't be provided when wait == false. The assertion\n> above expresses this intention. 
However, the code lacking of comment\n> directly expressing this idea.\n\nI don't think a comment here is going to fix things. You can't expect somebody\ntrying to use tableam to look into the guts of a heapam function to understand\nthe API constraints to this degree. And there's afaict no comments in tableam\nthat indicate any of this.\n\nI also just don't see why this is a sensible constraint? Why should this only\nwork if wait == false?\n\n\n> > > +/*\n> > > + * This routine does the work for heapam_tuple_lock(), but also support\n> > > + * `updated` argument to re-use the work done by heapam_tuple_update() or\n> > > + * heapam_tuple_delete() on figuring out that tuple was concurrently updated.\n> > > + */\n> > > +static TM_Result\n> > > +heapam_tuple_lock_internal(Relation relation, ItemPointer tid,\n> > > + Snapshot snapshot, TupleTableSlot *slot,\n> > > + CommandId cid, LockTupleMode mode,\n> > > + LockWaitPolicy wait_policy, uint8 flags,\n> > > + TM_FailureData *tmfd, bool updated)\n> >\n> > Why is the new parameter named 'updated'?\n> \n> To indicate that we know that we're locking the updated tuple.\n> Probably not descriptive enough.\n\nGiven it's used for deletions, I'd say so.\n\n\n> > > - ReleaseBuffer(buffer);\n> > > + ReleaseBuffer(buffer);\n> > > + }\n> > > + else\n> > > + {\n> > > + updated = false;\n> > > + }\n> > >\n> > > if (!ItemPointerEquals(&tmfd->ctid, &tuple->t_self))\n> > > {\n> >\n> > Which means this is completely bogus now?\n> >\n> > HeapTuple tuple = &bslot->base.tupdata;\n> >\n> > In the first iteration this just points to the newly created slot. Which\n> > doesn't have a tuple stored in it. So the above checks some uninitialized\n> > memory.\n> \n> No, this is not so. tuple->t_self is unconditionally assigned few\n> lines before. So, it can't be uninitialized memory given that *tid is\n> initialized.\n\nUgh. 
This means you're basically leaving uninitialized / not initialized state\nin the other portions of the tuple/slot, without even documenting that. The\ncode was ugly starting out, but this certainly makes it worse.\n\nThere's also no comment explaining that tmfd suddenly is load-bearing *input*\ninto heapam_tuple_lock_internal(), whereas previously it was purely an output\nparameter - and is documented as such:\n * Output parameters:\n *\t*slot: contains the target tuple\n *\t*tmfd: filled in failure cases (see below)\n\nThis is an *awful* API.\n\n\n> I seriously doubt this patch could pass the tests if that comparison\n> would use uninitialized memory.\n\nIDK about that - it's hard to exercise this code in the regression tests, and\nplenty of things are zero-initialized, which often makes things appear to work in\nthe first iteration.\n\n\n> > Giving up at this point.\n> >\n> >\n> > This doesn't seem ready to have been committed.\n> \n> Yep, that could be better. Given, that item pointer comparison\n> doesn't really use uninitialized memory, probably not as bad as you\n> thought at the first glance.\n\nThe details of how it's bad may differ slightly, but looking at it a bit\nlonger I also found new things. So I don't really think it's better than what\nI thought it was.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Mar 2023 09:50:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi,\n\nAn off-list conversation veered on-topic again. Reposting for posterity:\n\nOn 2023-03-23 23:24:19 +0300, Alexander Korotkov wrote:\n> On Thu, Mar 23, 2023 at 8:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > I seriously doubt that solving this at the tuple locking level is the right\n> > thing. 
If we want to avoid refetching tuples, why don't we add a parameter to\n> > delete/update to generally put the old tuple version into a slot, not just as\n> > an optimization for a subsequent lock_tuple()? Then we could remove all\n> > refetching tuples for triggers. It'd also provide the basis for adding support\n> > for referencing the OLD version in RETURNING, which'd be quite powerful.\n>\n> I spent some time thinking on this. Does our attempt to update/delete a\n> tuple imply that we've already fetched the old tuple version?\n\nYes, but somewhat \"far away\", below the ExecProcNode() in ExecModifyTable(). I\ndon't think we can rely on that. The old tuple is just identified via a junk\nattribute (c.f. \"For UPDATE/DELETE/MERGE, fetch the row identity info for the\ntuple...\"). The NEW tuple is computed in the target list of the source query.\nIt's possible that for some simpler cases we could figure out that the\nreturned slot is the \"old\" tuple, but it'd be hard to make that work.\n\nAlternatively we could evaluate returning as part of the source query\nplan. While that'd work nicely for the EPQ cases (the EPQ evaluation would\ncompute the new values), it could not be relied upon for before triggers.\n\nIt might or might not be a win to try to do so - if you have a selective\nquery, ferrying around the entire source tuple might cost more than it\nsaves.\n\n\n> We needed that at least to do initial qual check and calculation of the new\n> tuple (for update case).\n\nThe NEW tuple is computed in the source query, as I mentioned; I don't think\nwe can easily get access to the source row in the general case.\n\n\n> We currently may not have the old tuple at hand at the time we do\n> table_tuple_update()/table_tuple_delete(). But that seems to be just an\n> issue of our executor code. Is it worth making the table AM fetch the old\n> *unmodified* tuple given that we've already fetched it for sure?\n\nNot unconditionally (e.g. 
if you neither have triggers, nor RETURNING, there's\nnot much point, unless the query is simple enough that we could make it\nfree). But in the other cases it seems beneficial. The caller would reliably\nknow whether they want the source tuple to be fetched, or not.\n\nWe could make it so that iff we already have the \"old\" tuple in the slot,\nit'll not be put in there \"again\", but if it's not the right row version, it\nis.\n\nWe could use the same approach to make the \"happy path\" in update/delete\ncheaper. If the source tuple is provided, heap_delete(), heap_update() won't\nneed to do a ReadBuffer(), they could just IncrBufferRefCount(). That'd be quite a\nsubstantial win.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Mar 2023 17:39:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi!\n\nOn Fri, Mar 24, 2023 at 3:39 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-03-23 23:24:19 +0300, Alexander Korotkov wrote:\n> > On Thu, Mar 23, 2023 at 8:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I seriously doubt that solving this at the tuple locking level is the right\n> > > thing. If we want to avoid refetching tuples, why don't we add a parameter to\n> > > delete/update to generally put the old tuple version into a slot, not just as\n> > > an optimization for a subsequent lock_tuple()? Then we could remove all\n> > > refetching tuples for triggers. It'd also provide the basis for adding support\n> > > for referencing the OLD version in RETURNING, which'd be quite powerful.\n\nAfter some thoughts, I think I like the idea of fetching the old tuple version\nin update/delete. Everything that avoids extra tuple fetching and does\nmore of the related work in a single table AM call makes the table AM API\nmore flexible.\n\nI'm working on a patch implementing this. 
I'm going to post it later today.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 27 Mar 2023 13:49:22 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Andres,\n\nOn Mon, Mar 27, 2023 at 1:49 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Fri, Mar 24, 2023 at 3:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-03-23 23:24:19 +0300, Alexander Korotkov wrote:\n> > > On Thu, Mar 23, 2023 at 8:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > I seriously doubt that solving this at the tuple locking level is the right\n> > > > thing. If we want to avoid refetching tuples, why don't we add a parameter to\n> > > > delete/update to generally put the old tuple version into a slot, not just as\n> > > > an optimization for a subsequent lock_tuple()? Then we could remove all\n> > > > refetching tuples for triggers. It'd also provide the basis for adding support\n> > > > for referencing the OLD version in RETURNING, which'd be quite powerful.\n> >\n> > After some thoughts, I think I like idea of fetching old tuple version\n> > in update/delete. Everything that evades extra tuple fetching and do\n> > more of related work in a single table AM call, makes table AM API\n> > more flexible.\n> >\n> > I'm working on patch implementing this. I'm going to post it later today.\n\nHere is the patchset. I'm continuing to work on comments and refactoring.\n\nMy quick question is why do we need ri_TrigOldSlot for triggers?\nCan't we just pass the old tuple for the after-row trigger in\nri_oldTupleSlot?\n\nAlso, I wonder if we really need a LazyTupleSlot. It allows us to avoid an\nextra tuple slot allocation. But as I understand it, in the end the tuple slot\nallocation is just a single palloc. 
I bet the effect would be\ninvisible in the benchmarks.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Wed, 29 Mar 2023 20:34:10 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Wed, Mar 29, 2023 at 8:34 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Mon, Mar 27, 2023 at 1:49 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Fri, Mar 24, 2023 at 3:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2023-03-23 23:24:19 +0300, Alexander Korotkov wrote:\n> > > > On Thu, Mar 23, 2023 at 8:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > I seriously doubt that solving this at the tuple locking level is the right\n> > > > > thing. If we want to avoid refetching tuples, why don't we add a parameter to\n> > > > > delete/update to generally put the old tuple version into a slot, not just as\n> > > > > an optimization for a subsequent lock_tuple()? Then we could remove all\n> > > > > refetching tuples for triggers. It'd also provide the basis for adding support\n> > > > > for referencing the OLD version in RETURNING, which'd be quite powerful.\n> >\n> > After some thoughts, I think I like idea of fetching old tuple version\n> > in update/delete. Everything that evades extra tuple fetching and do\n> > more of related work in a single table AM call, makes table AM API\n> > more flexible.\n> >\n> > I'm working on patch implementing this. I'm going to post it later today.\n>\n> Here is the patchset. I'm continue to work on comments and refactoring.\n>\n> My quick question is why do we need ri_TrigOldSlot for triggers?\n> Can't we just pass the old tuple for after row trigger in\n> ri_oldTupleSlot?\n>\n> Also, I wonder if we really need a LazyTupleSlot. It allows to evade\n> extra tuple slot allocation. But as I get in the end the tuple slot\n> allocation is just a single palloc. 
I bet the effect would be\n> invisible in the benchmarks.\n\nSorry, previous patches don't even compile. The fixed version is attached.\nI'm going to post significantly revised patchset soon.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Fri, 31 Mar 2023 16:57:41 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi,\n\nOn 2023-03-31 16:57:41 +0300, Alexander Korotkov wrote:\n> On Wed, Mar 29, 2023 at 8:34 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Mon, Mar 27, 2023 at 1:49 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > On Fri, Mar 24, 2023 at 3:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > On 2023-03-23 23:24:19 +0300, Alexander Korotkov wrote:\n> > > > > On Thu, Mar 23, 2023 at 8:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > I seriously doubt that solving this at the tuple locking level is the right\n> > > > > > thing. If we want to avoid refetching tuples, why don't we add a parameter to\n> > > > > > delete/update to generally put the old tuple version into a slot, not just as\n> > > > > > an optimization for a subsequent lock_tuple()? Then we could remove all\n> > > > > > refetching tuples for triggers. It'd also provide the basis for adding support\n> > > > > > for referencing the OLD version in RETURNING, which'd be quite powerful.\n> > >\n> > > After some thoughts, I think I like idea of fetching old tuple version\n> > > in update/delete. Everything that evades extra tuple fetching and do\n> > > more of related work in a single table AM call, makes table AM API\n> > > more flexible.\n> > >\n> > > I'm working on patch implementing this. I'm going to post it later today.\n> >\n> > Here is the patchset. 
I'm continue to work on comments and refactoring.\n> >\n> > My quick question is why do we need ri_TrigOldSlot for triggers?\n> > Can't we just pass the old tuple for after row trigger in\n> > ri_oldTupleSlot?\n> >\n> > Also, I wonder if we really need a LazyTupleSlot. It allows to evade\n> > extra tuple slot allocation. But as I get in the end the tuple slot\n> > allocation is just a single palloc. I bet the effect would be\n> > invisible in the benchmarks.\n> \n> Sorry, previous patches don't even compile. The fixed version is attached.\n> I'm going to post significantly revised patchset soon.\n\nGiven that the in-tree state has been broken for a week, I think it probably\nis time to revert the commits that already went in.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 31 Mar 2023 22:21:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Andres!\n\nOn Sat, 1 Apr 2023, 09:21 Andres Freund, <andres@anarazel.de> wrote:\n\n> Given that the in-tree state has been broken for a week, I think it\n> probably\n> is time to revert the commits that already went in.\n>\n\nIt seems that although the patch addressing the issues is not a quick fix,\nthere is a big progress in it already. I propose to see it's status a week\nlater and if it is not ready then to revert existing. Hope there are no\nother patches in the existing branch complained to suffer this.\n\nKind regards,\nPavel Borisov,\nSupabase\n\n>\n\n", "msg_date": "Sat, 1 Apr 2023 11:24:09 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": ".Hi!\n\nOn Sat, Apr 1, 2023 at 8:21 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-03-31 16:57:41 +0300, Alexander Korotkov wrote:\n> > On Wed, Mar 29, 2023 at 8:34 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > On Mon, Mar 27, 2023 at 1:49 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > > On Fri, Mar 24, 2023 at 3:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > On 2023-03-23 23:24:19 +0300, Alexander Korotkov wrote:\n> > > > > > On Thu, Mar 23, 2023 at 8:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > > I seriously doubt that solving this at the tuple locking level is the right\n> > > > > > > thing. If we want to avoid refetching tuples, why don't we add a parameter to\n> > > > > > > delete/update to generally put the old tuple version into a slot, not just as\n> > > > > > > an optimization for a subsequent lock_tuple()? Then we could remove all\n> > > > > > > refetching tuples for triggers. It'd also provide the basis for adding support\n> > > > > > > for referencing the OLD version in RETURNING, which'd be quite powerful.\n> > > >\n> > > > After some thoughts, I think I like idea of fetching old tuple version\n> > > > in update/delete. Everything that evades extra tuple fetching and do\n> > > > more of related work in a single table AM call, makes table AM API\n> > > > more flexible.\n> > > >\n> > > > I'm working on patch implementing this. I'm going to post it later today.\n> > >\n> > > Here is the patchset. 
I'm continue to work on comments and refactoring.\n> > >\n> > > My quick question is why do we need ri_TrigOldSlot for triggers?\n> > > Can't we just pass the old tuple for after row trigger in\n> > > ri_oldTupleSlot?\n> > >\n> > > Also, I wonder if we really need a LazyTupleSlot. It allows to evade\n> > > extra tuple slot allocation. But as I get in the end the tuple slot\n> > > allocation is just a single palloc. I bet the effect would be\n> > > invisible in the benchmarks.\n> >\n> > Sorry, previous patches don't even compile. The fixed version is attached.\n> > I'm going to post significantly revised patchset soon.\n>\n> Given that the in-tree state has been broken for a week, I think it probably\n> is time to revert the commits that already went in.\n\nThe revised patch is attached. The most notable change is getting rid\nof LazyTupleTableSlot. Also get rid of complex computations to detect\nhow to initialize LazyTupleTableSlot. Instead just pass the oldSlot\nas an argument of ExecUpdate() and ExecDelete(). The price for this\nis just preallocation of ri_oldTupleSlot before calling ExecDelete().\nThe slot allocation is quite cheap. After all wrappers it's\ntable_slot_callbacks(), which is very cheap, single palloc() and few\nfields initialization. It doesn't seem reasonable to introduce an\ninfrastructure to evade this.\n\nI think patch resolves all the major issues you've highlighted. 
Even\nif there are some minor things missed, I'd prefer to push this rather\nthan reverting the whole work.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 2 Apr 2023 03:37:19 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi,\n\nOn 2023-04-02 03:37:19 +0300, Alexander Korotkov wrote:\n> On Sat, Apr 1, 2023 at 8:21 AM Andres Freund <andres@anarazel.de> wrote:\n> > Given that the in-tree state has been broken for a week, I think it probably\n> > is time to revert the commits that already went in.\n> \n> The revised patch is attached. The most notable change is getting rid\n> of LazyTupleTableSlot. Also get rid of complex computations to detect\n> how to initialize LazyTupleTableSlot. Instead just pass the oldSlot\n> as an argument of ExecUpdate() and ExecDelete(). The price for this\n> is just preallocation of ri_oldTupleSlot before calling ExecDelete().\n> The slot allocation is quite cheap. After all wrappers it's\n> table_slot_callbacks(), which is very cheap, single palloc() and few\n> fields initialization. It doesn't seem reasonable to introduce an\n> infrastructure to evade this.\n> \n> I think patch resolves all the major issues you've highlighted. Even\n> if there are some minor things missed, I'd prefer to push this rather\n> than reverting the whole work.\n\nShrug. You're designing new APIs, days before the feature freeze. This just\ndoesn't seem ready in time for 16. 
I certainly won't have time to look at it\nsufficiently in the next 5 days.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 1 Apr 2023 17:47:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Sun, Apr 2, 2023 at 3:47 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-04-02 03:37:19 +0300, Alexander Korotkov wrote:\n> > On Sat, Apr 1, 2023 at 8:21 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Given that the in-tree state has been broken for a week, I think it probably\n> > > is time to revert the commits that already went in.\n> >\n> > The revised patch is attached. The most notable change is getting rid\n> > of LazyTupleTableSlot. Also get rid of complex computations to detect\n> > how to initialize LazyTupleTableSlot. Instead just pass the oldSlot\n> > as an argument of ExecUpdate() and ExecDelete(). The price for this\n> > is just preallocation of ri_oldTupleSlot before calling ExecDelete().\n> > The slot allocation is quite cheap. After all wrappers it's\n> > table_slot_callbacks(), which is very cheap, single palloc() and few\n> > fields initialization. It doesn't seem reasonable to introduce an\n> > infrastructure to evade this.\n> >\n> > I think patch resolves all the major issues you've highlighted. Even\n> > if there are some minor things missed, I'd prefer to push this rather\n> > than reverting the whole work.\n>\n> Shrug. You're designing new APIs, days before the feature freeze. This just\n> doesn't seem ready in time for 16. I certainly won't have time to look at it\n> sufficiently in the next 5 days.\n\nOK. 
Reverted.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 3 Apr 2023 16:57:05 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Alexander!\n\nOn 2023-04-02 03:37:19 +0300, Alexander Korotkov wrote:\n> On Sat, Apr 1, 2023 at 8:21 AM Andres Freund <andres@anarazel.de> wrote:\n> > Given that the in-tree state has been broken for a week, I think it probably\n> > is time to revert the commits that already went in.\n>\n> The revised patch is attached. The most notable change is getting rid\n> of LazyTupleTableSlot. Also get rid of complex computations to detect\n> how to initialize LazyTupleTableSlot. Instead just pass the oldSlot\n> as an argument of ExecUpdate() and ExecDelete(). The price for this\n> is just preallocation of ri_oldTupleSlot before calling ExecDelete().\n> The slot allocation is quite cheap. After all wrappers it's\n> table_slot_callbacks(), which is very cheap, single palloc() and few\n> fields initialization. It doesn't seem reasonable to introduce an\n> infrastructure to evade this.\n>\n> I think patch resolves all the major issues you've highlighted. Even\n> if there are some minor things missed, I'd prefer to push this rather\n> than reverting the whole work.\n\nI looked into the latest patch v3.\nIn my view, it addresses all the issues discussed in [1]. Also, with\nthe pushing oldslot logic outside code becomes more transparent. I've\nadded some very minor modifications to the code and comments in patch\nv4-0001. Also, I'm for committing Andres' isolation test. I've added\nsome minor revisions to make the test run routinely among the other\nisolation tests. 
The test could also be made a part of the existing\neval-plan-qual.spec, but I have left it separate yet.\n\nAlso, I think that signatures of ExecUpdate() and ExecDelete()\nfunctions, especially the last one are somewhat overloaded with\ndifferent status bool variables added by different authors on\ndifferent occasions. If they are combined into some kind of status\nvariable, it would be nice. But as this doesn't touch API, is not\nrelated to the current update/delete optimization, it could be\nmodified anytime in the future as well.\n\nThe changes that indeed touch API are adding TupleTableSlot and\nconversion of bool wait flag into now four-state options variable for\ntuple_update(), tuple_delete(), heap_update(), heap_delete() and\nheap_lock_tuple() and a couple of Exec*DeleteTriggers(). I think they\nare justified.\n\nOne thing that is not clear to me is that we pass oldSlot into\nsimple_table_tuple_update() whereas as per the comment on this\nfunction \"concurrent updates of\nthe target tuple is not expected (for example, because we have a lock\non the relation associated with the tuple)\". It seems not to break\nanything but maybe this could be simplified.\n\nOverall I think the patch is good enough.\n\nRegards,\nPavel Borisov,\nSupabase.\n\n[1] https://www.postgresql.org/message-id/CAPpHfdtwKb5UVXKkDQZYW8nQCODy0fY_S7mV5Z%2Bcg7urL%3DzDEA%40mail.gmail.com", "msg_date": "Mon, 3 Apr 2023 17:57:37 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Upon Alexander reverting patches v15 from master, I've rebased what\nwas correction patches v4 in a message above on a fresh master\n(together with patches v15). 
The resulting patch v16 is attached.\n\nPavel.", "msg_date": "Mon, 3 Apr 2023 18:12:09 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Mon, Apr 3, 2023 at 5:12 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> Upon Alexander reverting patches v15 from master, I've rebased what\n> was correction patches v4 in a message above on a fresh master\n> (together with patches v15). The resulting patch v16 is attached.\n\nPavel, thank you for you review, revisions and rebase.\nWe'll reconsider this once v17 is branched.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 4 Apr 2023 01:25:46 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Tue, Apr 04, 2023 at 01:25:46AM +0300, Alexander Korotkov wrote:\n> Pavel, thank you for you review, revisions and rebase.\n> We'll reconsider this once v17 is branched.\n\nThe patch was still in the current CF, so I have moved it to the next\none based on the latest updates. \n--\nMichael", "msg_date": "Wed, 5 Apr 2023 11:54:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, hackers!\n\n> You're designing new APIs, days before the feature freeze.\nOn Wed, 5 Apr 2023 at 06:54, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Apr 04, 2023 at 01:25:46AM +0300, Alexander Korotkov wrote:\n> > Pavel, thank you for you review, revisions and rebase.\n> > We'll reconsider this once v17 is branched.\n\nI've looked through patches v16 once more and think they're good\nenough, and previous issues are all addressed. 
I see that there is\nnothing that blocks it from being committed except the last iteration\nwas days before v16 feature freeze.\n\nRecently in another thread [1] Alexander posted a new version of\npatches v16 (as 0001 and 0002) In 0001 only indenation, comments, and\ncommit messages changed from v16 in this thread. In 0002 new test\neval-plan-qual-2 was integrated into the existing eval-plan-qual test.\nFor maintaining the most recent versions in this thread I'm attaching\nthem under v17. I suppose that we can commit these patches to v17 if\nthere are no objections or additional reviews.\n\n[1] https://www.postgresql.org/message-id/flat/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com\n\nKind regards,\nPavel Borisov", "msg_date": "Tue, 28 Nov 2023 13:00:07 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "Hi, Pavel!\n\nOn Tue, Nov 28, 2023 at 11:00 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > You're designing new APIs, days before the feature freeze.\n> On Wed, 5 Apr 2023 at 06:54, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Apr 04, 2023 at 01:25:46AM +0300, Alexander Korotkov wrote:\n> > > Pavel, thank you for you review, revisions and rebase.\n> > > We'll reconsider this once v17 is branched.\n>\n> I've looked through patches v16 once more and think they're good\n> enough, and previous issues are all addressed. I see that there is\n> nothing that blocks it from being committed except the last iteration\n> was days before v16 feature freeze.\n>\n> Recently in another thread [1] Alexander posted a new version of\n> patches v16 (as 0001 and 0002) In 0001 only indenation, comments, and\n> commit messages changed from v16 in this thread. 
In 0002 new test\n> eval-plan-qual-2 was integrated into the existing eval-plan-qual test.\n> For maintaining the most recent versions in this thread I'm attaching\n> them under v17. I suppose that we can commit these patches to v17 if\n> there are no objections or additional reviews.\n>\n> [1] https://www.postgresql.org/message-id/flat/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com\n\nThe new revision of patches is attached.\n\nIt has updated commit messages, new comments, and some variables were\nrenamed to be more consistent with surroundings.\n\nI also think that all the design issues spoken before are resolved.\nIt would be nice to hear from Andres about this.\n\nI'll continue rechecking these patches myself.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 19 Mar 2024 17:20:18 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" }, { "msg_contents": "On Tue, Mar 19, 2024 at 5:20 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Tue, Nov 28, 2023 at 11:00 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > > You're designing new APIs, days before the feature freeze.\n> > On Wed, 5 Apr 2023 at 06:54, Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Tue, Apr 04, 2023 at 01:25:46AM +0300, Alexander Korotkov wrote:\n> > > > Pavel, thank you for you review, revisions and rebase.\n> > > > We'll reconsider this once v17 is branched.\n> >\n> > I've looked through patches v16 once more and think they're good\n> > enough, and previous issues are all addressed. I see that there is\n> > nothing that blocks it from being committed except the last iteration\n> > was days before v16 feature freeze.\n> >\n> > Recently in another thread [1] Alexander posted a new version of\n> > patches v16 (as 0001 and 0002) In 0001 only indenation, comments, and\n> > commit messages changed from v16 in this thread. 
In 0002 new test\n> > eval-plan-qual-2 was integrated into the existing eval-plan-qual test.\n> > For maintaining the most recent versions in this thread I'm attaching\n> > them under v17. I suppose that we can commit these patches to v17 if\n> > there are no objections or additional reviews.\n> >\n> > [1] https://www.postgresql.org/message-id/flat/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com\n>\n> The new revision of patches is attached.\n>\n> It has updated commit messages, new comments, and some variables were\n> renamed to be more consistent with surroundings.\n>\n> I also think that all the design issues spoken before are resolved.\n> It would be nice to hear from Andres about this.\n>\n> I'll continue rechecking these patches myself.\n\nI've re-read this thread. It still seems to me that the issues raised\nbefore are addressed now. Fingers crossed, I'm going to push this if\nthere are no objections.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 24 Mar 2024 03:12:11 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: POC: Lock updated tuples in tuple_update() and tuple_delete()" } ]
[ { "msg_contents": "Is it time to drop support for the oldest release ?\nAs in cf0cab868 30e7c175b e469f0aaf c03b7f526 492046fa9\n\nIf we also did the same thing next year, it'd be possible to use ::regnamespace\nwith impunity...\n\n\n", "msg_date": "Fri, 1 Jul 2022 08:23:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "drop support for v9.3 ?" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Is it time to drop support for the oldest release ?\n> As in cf0cab868 30e7c175b e469f0aaf c03b7f526 492046fa9\n\nI'm not really in favor of moving that goalpost forward when\nthere's no concrete reason to do so. The amount of work involved\nin a sweep for \"what code can be removed\" is more or less constant,\nso that doing this in (say) three years will be much less work\nthan doing it every year because the calendar says to.\n\nThe reason we pushed up the minimum to 9.2 was that we found that\nversions older than that don't build readily on modern toolchains.\nI've not yet heard of similar bit-rot in 9.2 ... and when it does\nhappen it'll probably affect a number of branches at once.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Jul 2022 09:50:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: drop support for v9.3 ?" } ]
[ { "msg_contents": "Hello!\n\nIt's been July everywhere on Earth for a few hours, so the July\ncommitfest is now in progress:\n\n https://commitfest.postgresql.org/38/\n\nNew patches may be registered for the next commitfest in September.\nPick some patches to review and have fun!\n\nHappy hacking,\n--Jacob\n\n\n", "msg_date": "Fri, 1 Jul 2022 08:08:06 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "[Commitfest 2022-07] Begins Now" }, { "msg_contents": "On 7/1/22 08:08, Jacob Champion wrote:\n> It's been July everywhere on Earth for a few hours, so the July\n> commitfest is now in progress:\n> \n> https://commitfest.postgresql.org/38/\nOne week down, three to go.\n\nI forgot to put the overall status in the last email. We started the\nmonth with the following stats:\n\n Needs review: 214\n Waiting on Author: 36\n Ready for Committer: 23\n Committed: 21\n Moved to next CF: 1\n Withdrawn: 5\n Rejected: 2\n Returned with Feedback: 3\n --\n Total: 305\n\nAnd as of this email, we're now at\n\n Needs review: 193\n Waiting on Author: 38\n Ready for Committer: 24\n Committed: 37\n Moved to next CF: 2\n Withdrawn: 6\n Rejected: 2\n Returned with Feedback: 3\n --\n Total: 305\n\nThat's sixteen patchsets committed in the first week.\n\nHave a good weekend,\n--Jacob\n\n\n", "msg_date": "Fri, 8 Jul 2022 16:42:53 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "Hi Jacob\n\nAbort Global temporary table\nhttps://commitfest.postgresql.org/36/2349/# <https://commitfest.postgresql.org/36/2349/#>\nPlease move the Global Temporary table to check next month, that is at 202208.\nI need more time to process the existing issue.\n\nThanks\nWenjing\n\n\n> 2022年7月9日 07:42,Jacob Champion <jchampion@timescale.com> 写道:\n> \n> On 7/1/22 08:08, Jacob Champion wrote:\n>> It's been July everywhere on Earth for a few hours, so the 
July\n>> commitfest is now in progress:\n>> \n>> https://commitfest.postgresql.org/38/\n> One week down, three to go.\n> \n> I forgot to put the overall status in the last email. We started the\n> month with the following stats:\n> \n> Needs review: 214\n> Waiting on Author: 36\n> Ready for Committer: 23\n> Committed: 21\n> Moved to next CF: 1\n> Withdrawn: 5\n> Rejected: 2\n> Returned with Feedback: 3\n> --\n> Total: 305\n> \n> And as of this email, we're now at\n> \n> Needs review: 193\n> Waiting on Author: 38\n> Ready for Committer: 24\n> Committed: 37\n> Moved to next CF: 2\n> Withdrawn: 6\n> Rejected: 2\n> Returned with Feedback: 3\n> --\n> Total: 305\n> \n> That's sixteen patchsets committed in the first week.\n> \n> Have a good weekend,\n> --Jacob\n> \n> \n\n\n", "msg_date": "Fri, 15 Jul 2022 16:37:16 +0800", "msg_from": "Wenjing Zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "[changing the subject line, was \"[Commitfest 2022-07] Begins Now\"]\n\nOn Fri, Jul 15, 2022 at 1:37 AM Wenjing Zeng <wjzeng2012@gmail.com> wrote:\n> Please move the Global Temporary table to check next month, that is at 202208.\n> I need more time to process the existing issue.\n\nHi Wenjing,\n\nMy current understanding is that RwF patches can't be moved (even by\nthe CFM). You can just reattach the existing thread to a new CF entry\nwhen you're ready, as discussed in [1]. (Reviewers can sign themselves\nback up; don't carry them over.)\n\nIt does seem annoying to have to reapply annotations, though. 
I would\nlike to see the CF app support this and I'll try to put a patch\ntogether sometime (there's a growing list).\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CALT9ZEGoVd6%2B5FTJ3d6DXRO4z%3DrvqK%2BdjrmgSiFo7dkKeMkyfQ%40mail.gmail.com\n\n\n", "msg_date": "Fri, 15 Jul 2022 16:16:46 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Moving RwF patches to a new CF" }, { "msg_contents": "On 7/8/22 16:42, Jacob Champion wrote:\n> On 7/1/22 08:08, Jacob Champion wrote:\n>> It's been July everywhere on Earth for a few hours, so the July\n>> commitfest is now in progress:\n>>\n>> https://commitfest.postgresql.org/38/\n\nHalfway through!\n\nWe are now at\n\n Needs review: 175\n Waiting on Author: 43\n Ready for Committer: 20\n Committed: 52\n Moved to next CF: 2\n Returned with Feedback: 4\n Rejected: 3\n Withdrawn: 6\n --\n Total: 305\n\nSince last week, that's fifteen more committed patchsets, with the Ready\nfor Committer queue holding fairly steady. Nice work, everyone.\n\nI started removing stale Reviewers fairly aggressively today, as\ndiscussed in [1], but there was some immediate feedback and I have\npaused that process for now. If you're wondering why you are no longer\nmarked as reviewer on a patch, I followed the following rule: if you\nwere signed up to review before June of this year, but you haven't\ninteracted with the patch in this commitfest, I removed you from the\nlist. 
If you have thoughts/comments on this approach, please share them!\n\nThanks,\n--Jacob\n\n[1]\nhttps://www.postgresql.org/message-id/flat/34b32cb2-a728-090a-00d5-067305874174%40timescale.com#3247e661b219f8736ae418c9b5452d63\n\n\n", "msg_date": "Fri, 15 Jul 2022 16:42:03 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "Hi,\n\nOn 2022-07-15 16:42:03 -0700, Jacob Champion wrote:\n> I started removing stale Reviewers fairly aggressively today, as\n> discussed in [1], but there was some immediate feedback and I have\n> paused that process for now. If you're wondering why you are no longer\n> marked as reviewer on a patch, I followed the following rule: if you\n> were signed up to review before June of this year, but you haven't\n> interacted with the patch in this commitfest, I removed you from the\n> list. If you have thoughts/comments on this approach, please share them!\n\nI'd make it dependent on whether there have been previous rounds of feedback\nor not. If somebody spent a good amount of time reviewing a patch previously,\nbut then didn't review the newest version in the last few weeks, it doesn't\nseem useful to remove them from the CF entry. The situation is different if\nsomebody has signed up but not done much.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 15 Jul 2022 16:51:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "On 7/15/22 16:51, Andres Freund wrote:\n> I'd make it dependent on whether there have been previous rounds of feedback\n> or not. If somebody spent a good amount of time reviewing a patch previously,\n> but then didn't review the newest version in the last few weeks, it doesn't\n> seem useful to remove them from the CF entry. 
The situation is different if\n> somebody has signed up but not done much.\n\nIf someone put a lot of review into a patchset a few months ago, they\nabsolutely deserve credit. But if that entry has been sitting with no\nfeedback this month, why is it useful to keep that Reviewer around?\n\n(We may want to join this with the other thread.)\n\n--Jacob\n\n\n", "msg_date": "Fri, 15 Jul 2022 17:28:06 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "Hi,\n\nOn 2022-07-15 17:28:06 -0700, Jacob Champion wrote:\n> On 7/15/22 16:51, Andres Freund wrote:\n> > I'd make it dependent on whether there have been previous rounds of feedback\n> > or not. If somebody spent a good amount of time reviewing a patch previously,\n> > but then didn't review the newest version in the last few weeks, it doesn't\n> > seem useful to remove them from the CF entry. The situation is different if\n> > somebody has signed up but not done much.\n> \n> If someone put a lot of review into a patchset a few months ago, they\n> absolutely deserve credit. But if that entry has been sitting with no\n> feedback this month, why is it useful to keep that Reviewer around?\n\nIDK, I've plenty times given feedback and it took months till it all was\nimplemented. What's the point of doing further rounds of review until then?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 15 Jul 2022 18:07:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "On 16.07.22 01:16, Jacob Champion wrote:\n> My current understanding is that RwF patches can't be moved (even by\n> the CFM). You can just reattach the existing thread to a new CF entry\n> when you're ready, as discussed in [1]. 
(Reviewers can sign themselves\n> back up; don't carry them over.)\n\nI think you can just change the entry back to \"needs review\" and then \nmove it forward.\n\n\n", "msg_date": "Sat, 16 Jul 2022 10:31:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Moving RwF patches to a new CF" }, { "msg_contents": "Hi hackers,\n\n> > If someone put a lot of review into a patchset a few months ago, they\n> > absolutely deserve credit. But if that entry has been sitting with no\n> > feedback this month, why is it useful to keep that Reviewer around?\n\nAs I recall, several committers reported before that they use\nReviewers field in the CF application when writing the commit message.\nI would argue that this is the reason.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:05:50 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "On 2022-Jul-18, Aleksander Alekseev wrote:\n\n> Hi hackers,\n> \n> > > If someone put a lot of review into a patchset a few months ago, they\n> > > absolutely deserve credit. 
But if that entry has been sitting with no\n> > > feedback this month, why is it useful to keep that Reviewer around?\n> \n> As I recall, several committers reported before that they use\n> Reviewers field in the CF application when writing the commit message.\n> I would argue that this is the reason.\n\nMaybe we need two separate reviewer columns -- one for credits\n(historical tracking) and one for people currently reviewing a patch.\nSo we definitely expect an email \"soon\" from someone in the second\ncolumn, but not from somebody who is only in the first column.\n\n-- \nÁlvaro Herrera\n\n\n", "msg_date": "Mon, 18 Jul 2022 11:53:04 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "On 7/15/22 16:42, Jacob Champion wrote:\n> If you have thoughts/comments on this approach, please share them!\n\nOkay, plenty of feedback to sift through here.\n\n[CFM hat]\n\nFirst of all: mea culpa. I unilaterally made a change that I had assumed\nwould be uncontroversial; it clearly was not, and I interrupted the flow\nof the CF for people when my goal was to be mostly invisible this month.\n (My single email to a single thread saying \"any objections?\" is, in\nretrospect, not nearly enough reach or mandate to have made this\nchange.) Big thank you to Justin for seeing it happen and speaking up\nimmediately.\n\nHere is a rough summary of opinions that have been shared so far; pulled\nfrom the other thread [1] as well:\n\nThere are at least three major use cases for the Reviewer field at the\nmoment.\n\n1) As a new reviewer, find a patch that needs help moving forward.\n2) As a committer, give credit to people who moved the patch forward.\n3) As an established reviewer, keep track of patches \"in flight.\"\n\nI had never realized the third case existed. To those of you who I've\ninterrupted by modifying your checklist without permission, I'm sorry. 
I\nsee that several of you have already added yourselves back, which is\ngreat; I will try to find the CF update stream that has been alluded to\nelsewhere and see if I can restore the original Reviewers lists that I\nnulled out on Friday.\n\nIt was suggested that we track historical reviewers and current reviews\nseparately from each other, to handle both cases 1 and 2.\n\nThere appears to be a need for people to be able to consider a patch\n\"blocked\" pending some action, so that further review cycles aren't\nburned on it. Some people use Waiting on Author for that, but others use\nWoA as soon as an email is sent. The two cases have similarities but, to\nme at least, aren't the same and may be working at cross purposes.\n\nIt is apparently possible to pull one of your closed patches from a\nprior commitfest into the new one, but you have to set it back to Needs\nReview first. I plan to work on a CF patch to streamline that, if\nsomeone does not beat me to it.\n\nOkay, I think those are the broad strokes. I will put my [dev hat] on\nnow and respond more granularly to threads, with stronger opinions.\n\nThanks,\n--Jacob\n\n[1]\nhttps://www.postgresql.org/message-id/flat/34b32cb2-a728-090a-00d5-067305874174%40timescale.com#3247e661b219f8736ae418c9b5452d63\n\n\n\n", "msg_date": "Mon, 18 Jul 2022 10:44:49 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "[dev hat]\n\nOn 7/15/22 18:07, Andres Freund wrote:\n> IDK, I've plenty times given feedback and it took months till it all was\n> implemented. What's the point of doing further rounds of review until then?\n\nI guess I would wonder why we're optimizing for that case. Is it helpful\nfor that patch to stick around in an active CF for months? 
There's an\nestablished need for keeping a \"TODO item\" around and not letting it\nfall off, but I think that should remain separate in an application\nwhich seems to be focused on organizing active volunteers.\n\nAnd if that's supposed to be what Waiting on Author is for, then I think\nwe need more guidance on how to use that status effectively. Some\nreviewers seem to use it as a \"replied\" flag. I think there's a\nmeaningful difference between soft-blocked on review feedback and\nhard-blocked on new implementation. And maybe there's even a middle\nstate, where the patch just needs someone to do a mindless rebase.\n\nI think you're in a better position than most to \"officially\" decide\nthat a patch can no longer benefit from review. Most of us can't do\nthat, I imagine -- nor should we.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:22:25 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "Hi,\n\nOn 2022-07-18 12:22:25 -0700, Jacob Champion wrote:\n> [dev hat]\n> \n> On 7/15/22 18:07, Andres Freund wrote:\n> > IDK, I've plenty times given feedback and it took months till it all was\n> > implemented. What's the point of doing further rounds of review until then?\n> \n> I guess I would wonder why we're optimizing for that case. Is it helpful\n> for that patch to stick around in an active CF for months?\n\nI'm not following - I'm talking about the patch author needing a while to\naddress the higher level feedback given by a reviewer. The author might put\nout a couple new versions, which each might still benefit from review. 
In that\n- pretty common imo - situation I don't think it's useful for the reviewer\nthat provided the higher level feedback to be removed from the patch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:32:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "On 7/18/22 12:32, Andres Freund wrote:\n> I'm not following - I'm talking about the patch author needing a while to\n> address the higher level feedback given by a reviewer. The author might put\n> out a couple new versions, which each might still benefit from review. In that\n> - pretty common imo - situation I don't think it's useful for the reviewer\n> that provided the higher level feedback to be removed from the patch.\n\nOkay, I think I get it now. Thanks.\n\nThere's still something off in that case that I can't quite\narticulate... Is it your intent to use Reviewer as a signal that \"I'll\ncome back to this eventually\"? As a signal to other prospective\nreviewers that you're handling the patch? How should a CFM move things\nforward when they come to a patch that's been responded to by the author\nbut the sole Reviewer has been silent?\n\n--Jacob\n\n\n", "msg_date": "Mon, 18 Jul 2022 13:34:52 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "Hi,\n\nOn 2022-07-18 13:34:52 -0700, Jacob Champion wrote:\n> On 7/18/22 12:32, Andres Freund wrote:\n> > I'm not following - I'm talking about the patch author needing a while to\n> > address the higher level feedback given by a reviewer. The author might put\n> > out a couple new versions, which each might still benefit from review. In that\n> > - pretty common imo - situation I don't think it's useful for the reviewer\n> > that provided the higher level feedback to be removed from the patch.\n> \n> Okay, I think I get it now. 
Thanks.\n> \n> There's still something off in that case that I can't quite\n> articulate... Is it your intent to use Reviewer as a signal that \"I'll\n> come back to this eventually\"?\n\nThat, and as a way to find out what I possibly should look at again.\n\n\n> As a signal to other prospective reviewers that you're handling the patch?\n\nDefinitely not. I think no reviewer on a patch should be taken as\nthat. There's often many angles to a patch, and leaving trivial patches aside,\nno reviewer is an expert in all of them.\n\n\n> How should a CFM move things forward when they come to a patch that's been\n> responded to by the author but the sole Reviewer has been silent?\n\nPing the reviewer and/or thread, ensure the patch is needs-review state. I\ndon't think removing reviewers in the CF app would help with that anyway -\noften some reviewers explicitly state that they're only reviewing a specific\npart of the patch, or that they looked at everything but lack expertise to be\nconfident in their positions etc. Such reviewers might do more rounds of\nfeedback to newer patches, but the patch might still need more feedback.\n\nISTM that you're trying to get patches to have zero reviewers if they need\nmore reviewers, because that can serve as a signal in the CF app. But to me\nthat's a bad proxy.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 18 Jul 2022 13:44:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "On 7/18/22 02:53, Alvaro Herrera wrote:\n> On 2022-Jul-18, Aleksander Alekseev wrote:\n> \n>> Hi hackers,\n>>\n>>>> If someone put a lot of review into a patchset a few months ago, they\n>>>> absolutely deserve credit. 
But if that entry has been sitting with no\n>>>> feedback this month, why is it useful to keep that Reviewer around?\n>>\n>> As I recall, several committers reported before that they use\n>> Reviewers field in the CF application when writing the commit message.\n>> I would argue that this is the reason.\n> \n> Maybe we need two separate reviewer columns -- one for credits\n> (historical tracking) and one for people currently reviewing a patch.\n> So we definitely expect an email \"soon\" from someone in the second\n> column, but not from somebody who is only in the first column.\n\n+1\n\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 19 Jul 2022 16:34:03 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "On 7/15/22 16:42, Jacob Champion wrote:\n> On 7/8/22 16:42, Jacob Champion wrote:\n>> On 7/1/22 08:08, Jacob Champion wrote:\n>>> It's been July everywhere on Earth for a few hours, so the July\n>>> commitfest is now in progress:\n>>>\n>>> https://commitfest.postgresql.org/38/\n\nWith one week remaining, we're now at\n\n Needs review: 162\n Waiting on Author: 42\n Ready for Committer: 21\n Committed: 60\n Moved to next CF: 2\n Returned with Feedback: 6\n Rejected: 3\n Withdrawn: 9\n --\n Total: 305\n\nAn additional eight patches committed and five closed, with the Ready\nfor Committer queue still steady.\n\nNext week I'll begin highlighting patches for help or closure.\n\nHave a good weekend,\n--Jacob\n\n\n", "msg_date": "Fri, 22 Jul 2022 16:03:07 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "On Mon, Jul 18, 2022 at 1:44 PM Andres Freund <andres@anarazel.de> wrote:\n> ISTM that you're trying to get patches to have zero reviewers if they need\n> more reviewers, because that can serve as 
a signal in the CF app. But to me\n> that's a bad proxy.\n\nOkay. I need to put some more thought into what it is that I really\nwant (and the wiki needs to be updated, because it's suggesting that\nthe CFM use the Reviewers field in this way as well). Thanks for the\nfeedback!\n\n--Jacob\n\n\n", "msg_date": "Fri, 22 Jul 2022 16:04:01 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" }, { "msg_contents": "On 7/22/22 16:03, Jacob Champion wrote:\n> On 7/15/22 16:42, Jacob Champion wrote:\n>> On 7/8/22 16:42, Jacob Champion wrote:\n>>> On 7/1/22 08:08, Jacob Champion wrote:\n>>>> It's been July everywhere on Earth for a few hours, so the July\n>>>> commitfest is now in progress:\n>>>>\n>>>> https://commitfest.postgresql.org/38/\n\nIt's the final weekend!\n\nThe July CF will officially close after 11:59p, July 31, anywhere on\nEarth. (There will be a grace period of a few hours, because, well, I'll\nbe asleep when the deadline passes.) At that point I'll begin moving\nactive patches to the next CF, and closing out others as discussed in\nthe triage threads.\n\nOur statistics are now at\n\n Needs review: 150\n Waiting on Author: 42\n Ready for Committer: 22\n Committed: 68\n Moved to next CF: 5\n Returned with Feedback: 7\n Rejected: 3\n Withdrawn: 11\n --\n Total: 308\n\nThat's an additional eight committed, three closed, and nine Ready for\nCommitter.\n\n--Jacob\n\n\n", "msg_date": "Fri, 29 Jul 2022 14:02:03 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-07] Begins Now" } ]
[ { "msg_contents": "Nicola Contu reported two years ago to pgsql-general[1] that they were\nhaving sporadic query failures, because EINTR is reported on some system\ncall. I have been told that the problem persists, though it is very\ninfrequent. I propose the attached patch. Kyotaro proposed a slightly\ndifferent patch which also protects write(), but I think that's not\nnecessary.\n\nThomas M. produced some more obscure theories for other things that\ncould fail, but I think we should patch this problem first, which seems\nthe most obvious one, and deal with others if and when they are\nreported.\n\n[1] https://www.postgresql.org/message-id/CAMTZZh2V%2B0wJVgSqTVvXUAVMduF57Uxubvvw58%3DkbOae%2B53%2BQQ%40mail.gmail.com\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Use it up, wear it out, make it do, or do without\"", "msg_date": "Fri, 1 Jul 2022 17:41:05 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "EINTR in ftruncate()" }, { "msg_contents": "Hi,\n\nOn 2022-07-01 17:41:05 +0200, Alvaro Herrera wrote:\n> Nicola Contu reported two years ago to pgsql-general[1] that they were\n> having sporadic query failures, because EINTR is reported on some system\n> call. I have been told that the problem persists, though it is very\n> infrequent. I propose the attached patch. Kyotaro proposed a slightly\n> different patch which also protects write(), but I think that's not\n> necessary.\n\nWhat is the reason for the || ProcDiePending || QueryCancelPending bit? 
What\nif there's dsm operations intentionally done while QueryCancelPending?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Jul 2022 10:30:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On 2022-Jul-01, Andres Freund wrote:\n\n> On 2022-07-01 17:41:05 +0200, Alvaro Herrera wrote:\n> > Nicola Contu reported two years ago to pgsql-general[1] that they were\n> > having sporadic query failures, because EINTR is reported on some system\n> > call. I have been told that the problem persists, though it is very\n> > infrequent. I propose the attached patch. Kyotaro proposed a slightly\n> > different patch which also protects write(), but I think that's not\n> > necessary.\n> \n> What is the reason for the || ProcDiePending || QueryCancelPending bit? What\n> if there's dsm operations intentionally done while QueryCancelPending?\n\nThat mirrors the test for the other block in that function, which was\nadded by 63efab4ca139, whose commit message explains:\n\n Allow DSM allocation to be interrupted.\n \n Chris Travers reported that the startup process can repeatedly try to\n cancel a backend that is in a posix_fallocate()/EINTR loop and cause it\n to loop forever. Teach the retry loop to give up if an interrupt is\n pending. 
Don't actually check for interrupts in that loop though,\n because a non-local exit would skip some clean-up code in the caller.\n\nThanks for looking!\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 1 Jul 2022 19:55:16 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Hi,\n\nOn 2022-07-01 19:55:16 +0200, Alvaro Herrera wrote:\n> On 2022-Jul-01, Andres Freund wrote:\n> \n> > On 2022-07-01 17:41:05 +0200, Alvaro Herrera wrote:\n> > > Nicola Contu reported two years ago to pgsql-general[1] that they were\n> > > having sporadic query failures, because EINTR is reported on some system\n> > > call. I have been told that the problem persists, though it is very\n> > > infrequent. I propose the attached patch. Kyotaro proposed a slightly\n> > > different patch which also protects write(), but I think that's not\n> > > necessary.\n> > \n> > What is the reason for the || ProcDiePending || QueryCancelPending bit? What\n> > if there's dsm operations intentionally done while QueryCancelPending?\n> \n> That mirrors the test for the other block in that function, which was\n> added by 63efab4ca139, whose commit message explains:\n> \n> Allow DSM allocation to be interrupted.\n> \n> Chris Travers reported that the startup process can repeatedly try to\n> cancel a backend that is in a posix_fallocate()/EINTR loop and cause it\n> to loop forever. Teach the retry loop to give up if an interrupt is\n> pending. Don't actually check for interrupts in that loop though,\n> because a non-local exit would skip some clean-up code in the caller.\n\nThat whole approach seems quite wrong to me. 
At the absolute very least the\ncode needs to check if interrupts are being processed in the current context\nbefore just giving up due to ProcDiePending || QueryCancelPending.\n\nI'm very unconvinced this ought to be fixed in dsm_impl_posix_resize(), rather\nthan the startup process signalling.\n\nThere is an argument for allowing more things to be cancelled, but we'd need a\nretry loop for the !INTERRUPTS_CAN_BE_PROCESSED() case.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Jul 2022 13:29:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Hi Chris,\n\nOn 2022-07-01 13:29:44 -0700, Andres Freund wrote:\n> On 2022-07-01 19:55:16 +0200, Alvaro Herrera wrote:\n> > On 2022-Jul-01, Andres Freund wrote:\n> > \n> > > On 2022-07-01 17:41:05 +0200, Alvaro Herrera wrote:\n> > > > Nicola Contu reported two years ago to pgsql-general[1] that they were\n> > > > having sporadic query failures, because EINTR is reported on some system\n> > > > call. I have been told that the problem persists, though it is very\n> > > > infrequent. I propose the attached patch. Kyotaro proposed a slightly\n> > > > different patch which also protects write(), but I think that's not\n> > > > necessary.\n> > > \n> > > What is the reason for the || ProcDiePending || QueryCancelPending bit? What\n> > > if there's dsm operations intentionally done while QueryCancelPending?\n> > \n> > That mirrors the test for the other block in that function, which was\n> > added by 63efab4ca139, whose commit message explains:\n> > \n> > Allow DSM allocation to be interrupted.\n> > \n> > Chris Travers reported that the startup process can repeatedly try to\n> > cancel a backend that is in a posix_fallocate()/EINTR loop and cause it\n> > to loop forever. Teach the retry loop to give up if an interrupt is\n> > pending. 
Don't actually check for interrupts in that loop though,\n> > because a non-local exit would skip some clean-up code in the caller.\n> \n> That whole approach seems quite wrong to me. At the absolute very least the\n> code needs to check if interrupts are being processed in the current context\n> before just giving up due to ProcDiePending || QueryCancelPending.\n> \n> I'm very unconvinced this ought to be fixed in dsm_impl_posix_resize(), rather\n> than the startup process signalling.\n> \n> There is an argument for allowing more things to be cancelled, but we'd need a\n> retry loop for the !INTERRUPTS_CAN_BE_PROCESSED() case.\n\nChris, do you have any additional details about the machine that lead to this\nchange? OS version, whether it might have been swapping, etc?\n\nI wonder if what happened is that posix_fallocate() used glibc's fallback\nimplementation because the kernel was old enough to not support fallocate()\nfor tmpfs. Looks like support for fallocate() for tmpfs was added in 3.5\n([1]). So e.g. a rhel 6 wouldn't have had that.\n\nGreetings,\n\nAndres Freund\n\n[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e2d12e22c59ce714008aa5266d769f8568d74eac\n\n\n", "msg_date": "Fri, 1 Jul 2022 14:06:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Sat, Jul 2, 2022 at 9:06 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-01 13:29:44 -0700, Andres Freund wrote:\n> > On 2022-07-01 19:55:16 +0200, Alvaro Herrera wrote:\n> > > Allow DSM allocation to be interrupted.\n> > >\n> > > Chris Travers reported that the startup process can repeatedly try to\n> > > cancel a backend that is in a posix_fallocate()/EINTR loop and cause it\n> > > to loop forever. Teach the retry loop to give up if an interrupt is\n> > > pending. 
Don't actually check for interrupts in that loop though,\n> > > because a non-local exit would skip some clean-up code in the caller.\n> >\n> > That whole approach seems quite wrong to me. At the absolute very least the\n> > code needs to check if interrupts are being processed in the current context\n> > before just giving up due to ProcDiePending || QueryCancelPending.\n> >\n> > I'm very unconvinced this ought to be fixed in dsm_impl_posix_resize(), rather\n> > than the startup process signalling.\n\nI agree it's not great. It was a back-patchable bandaid in need of a\nbetter solution.\n\n> Chris, do you have any additional details about the machine that lead to this\n> change? OS version, whether it might have been swapping, etc?\n>\n> I wonder if what happened is that posix_fallocate() used glibc's fallback\n> implementation because the kernel was old enough to not support fallocate()\n> for tmpfs. Looks like support for fallocate() for tmpfs was added in 3.5\n> ([1]). So e.g. a rhel 6 wouldn't have had that.\n\nWith a quick test program on my Linux 5.10 kernel I see that an\nSA_RESTART signal handler definitely causes posix_fallocate() to\nreturn EINTR (can post trivial program).\n\nA drive-by look at the current/modern kernel source supports this:\nshmem_fallocate returns -EINTR directly (not -ERESTARTSYS, which seems\nto be the Linux-y way to say you want EINTR or restart as\nappropriate?), and it also undoes all partial progress too (not too\nsurprising), which would explain why a perfectly timed machine gun\nstream of signals from our recovery conflict system can make an\nfallocate retry loop never terminate, for large enough sizes.\n\n\n", "msg_date": "Sat, 2 Jul 2022 09:52:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Hi,\n\nOn 2022-07-02 09:52:33 +1200, Thomas Munro wrote:\n> On Sat, Jul 2, 2022 at 9:06 AM Andres Freund <andres@anarazel.de> wrote:\n> 
> On 2022-07-01 13:29:44 -0700, Andres Freund wrote:\n> > Chris, do you have any additional details about the machine that lead to this\n> > change? OS version, whether it might have been swapping, etc?\n> >\n> > I wonder if what happened is that posix_fallocate() used glibc's fallback\n> > implementation because the kernel was old enough to not support fallocate()\n> > for tmpfs. Looks like support for fallocate() for tmpfs was added in 3.5\n> > ([1]). So e.g. a rhel 6 wouldn't have had that.\n> \n> With a quick test program on my Linux 5.10 kernel I see that an\n> SA_RESTART signal handler definitely causes posix_fallocate() to\n> return EINTR (can post trivial program).\n> \n> A drive-by look at the current/modern kernel source supports this:\n> shmem_fallocate returns -EINTR directly (not -ERESTARTSYS, which seems\n> to be the Linux-y way to say you want EINTR or restart as\n> appropriate?), and it also undoes all partial progress too (not too\n> surprising), which would explain why a perfectly timed machine gun\n> stream of signals from our recovery conflict system can make an\n> fallocate retry loop never terminate, for large enough sizes.\n\nYea :(\n\nAnd even if we fix recovery to not douse other processes in signals quite\nthat badly, there are plenty of other sources of signals that can arrive at a\nsteady clip. So I think we need to do something to defuse this another way.\n\nIdeas:\n\n1) do the fallocate in smaller chunks, thereby making it much more likely to\n complete between two signal deliveries\n2) block signals while calling posix_fallocate(). That won't work for\n everything (e.g. 
rapid SIGSTOP/SIGCONT), but that's not something we'd send\n ourselves, so whatever.\n3) 1+2\n4) ?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Jul 2022 15:17:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On 2022-Jul-01, Andres Freund wrote:\n\n> On 2022-07-01 19:55:16 +0200, Alvaro Herrera wrote:\n> > On 2022-Jul-01, Andres Freund wrote:\n\n> > > What is the reason for the || ProcDiePending || QueryCancelPending bit? What\n> > > if there's dsm operations intentionally done while QueryCancelPending?\n> > \n> > That mirrors the test for the other block in that function, which was\n> > added by 63efab4ca139, whose commit message explains:\n\n> That whole approach seems quite wrong to me. At the absolute very least the\n> code needs to check if interrupts are being processed in the current context\n> before just giving up due to ProcDiePending || QueryCancelPending.\n\nFor the time being, I can just push the addition of the EINTR retry\nwithout testing ProcDiePending || QueryCancelPending.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El sudor es la mejor cura para un pensamiento enfermo\" (Bardia)", "msg_date": "Mon, 4 Jul 2022 13:07:50 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Hi,\n\nOn 2022-07-04 13:07:50 +0200, Alvaro Herrera wrote:\n> On 2022-Jul-01, Andres Freund wrote:\n> \n> > On 2022-07-01 19:55:16 +0200, Alvaro Herrera wrote:\n> > > On 2022-Jul-01, Andres Freund wrote:\n> \n> > > > What is the reason for the || ProcDiePending || QueryCancelPending bit? 
What\n> > > > if there's dsm operations intentionally done while QueryCancelPending?\n> > > \n> > > That mirrors the test for the other block in that function, which was\n> > > added by 63efab4ca139, whose commit message explains:\n> \n> > That whole approach seems quite wrong to me. At the absolute very least the\n> > code needs to check if interrupts are being processed in the current context\n> > before just giving up due to ProcDiePending || QueryCancelPending.\n> \n> For the time being, I can just push the addition of the EINTR retry\n> without testing ProcDiePending || QueryCancelPending.\n\nI think we'd be better off disabling at least some signals during\ndsm_impl_posix_resize(). I'm afraid we'll otherwise just find another\nvariation of these problems. I haven't checked the source of ftruncate, but\nwhat Thomas dug up for fallocate makes it pretty clear that our current\napproach of just retrying again and again isn't good enough. It's a bit more\nobvious that it's a problem for fallocate, but I don't think it's worth having\ndifferent solutions for the two.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Jul 2022 17:20:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On 2022-Jul-05, Andres Freund wrote:\n\n> I think we'd be better off disabling at least some signals during\n> dsm_impl_posix_resize(). I'm afraid we'll otherwise just find another\n> variation of these problems. I haven't checked the source of ftruncate, but\n> what Thomas dug up for fallocate makes it pretty clear that our current\n> approach of just retrying again and again isn't good enough. It's a bit more\n> obvious that it's a problem for fallocate, but I don't think it's worth having\n> different solutions for the two.\n\nSo what if we move the retry loop one level up? 
As in the attached.\nHere, if we get EINTR then we retry both syscalls.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No hay hombre que no aspire a la plenitud, es decir,\nla suma de experiencias de que un hombre es capaz\"", "msg_date": "Wed, 6 Jul 2022 21:29:41 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Hi,\n\nOn 2022-07-06 21:29:41 +0200, Alvaro Herrera wrote:\n> On 2022-Jul-05, Andres Freund wrote:\n> \n> > I think we'd be better off disabling at least some signals during\n> > dsm_impl_posix_resize(). I'm afraid we'll otherwise just find another\n> > variation of these problems. I haven't checked the source of ftruncate, but\n> > what Thomas dug up for fallocate makes it pretty clear that our current\n> > approach of just retrying again and again isn't good enough. It's a bit more\n> > obvious that it's a problem for fallocate, but I don't think it's worth having\n> > different solutions for the two.\n> \n> So what if we move the retry loop one level up? As in the attached.\n> Here, if we get EINTR then we retry both syscalls.\n\nDoesn't really seem to address the problem to me. posix_fallocate()\ntakes some time (~1s for 3GB roughly), so if we signal at a higher rate,\nwe'll just get stuck.\n\nI hacked a bit on a test program from Thomas, and it's pretty clear\nthat with a 5ms timer interval you'll pretty much not make\nprogress. It's much easier to get fallocate() to get interrupted than\nftruncate(), but the latter gets interrupted e.g. 
when you do a strace\nin the \"wrong\" moment (afaics SIGSTOP/SIGCONT trigger EINTR in\nsituations that are retried otherwise).\n\nSo I think we need: 1) block most signals, 2) a retry loop *without*\ninterrupt checks.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 6 Jul 2022 13:38:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Thu, Jul 7, 2022 at 8:39 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-06 21:29:41 +0200, Alvaro Herrera wrote:\n> > On 2022-Jul-05, Andres Freund wrote:\n> >\n> > > I think we'd be better off disabling at least some signals during\n> > > dsm_impl_posix_resize(). I'm afraid we'll otherwise just find another\n> > > variation of these problems. I haven't checked the source of ftruncate, but\n> > > what Thomas dug up for fallocate makes it pretty clear that our current\n> > > approach of just retrying again and again isn't good enough. It's a bit more\n> > > obvious that it's a problem for fallocate, but I don't think it's worth having\n> > > different solutions for the two.\n> >\n> > So what if we move the retry loop one level up? As in the attached.\n> > Here, if we get EINTR then we retry both syscalls.\n>\n> Doesn't really seem to address the problem to me. posix_fallocate()\n> takes some time (~1s for 3GB roughly), so if we signal at a higher rate,\n> we'll just get stuck.\n>\n> I hacked a bit on a test program from Thomas, and it's pretty clearly\n> that with a 5ms timer interval you'll pretty much not make\n> progress. It's much easier to get fallocate() to get interrupted than\n> ftruncate(), but the latter gets interrupted e.g. when you do a strace\n> in the \"wrong\" moment (afaics SIGSTOP/SIGCONT trigger EINTR in\n> situations that are retried otherwise).\n>\n> So I think we need: 1) block most signals, 2) a retry loop *without*\n> interrupt checks.\n\nYeah. 
I was also wondering about wrapping the whole function in\nPG_SETMASK(&BlockSig), PG_SETMASK(&UnBlockSig), but also leaving the\nwhile (rc == EINTR) loop there (without the check for *Pending\nvariables), only because otherwise when you attach a debugger and\ncontinue you'll get a spurious EINTR and it'll interfere with program\nexecution. All blockable signals would be blocked *except* SIGQUIT,\nwhich means that fast shutdown/crash will still work. It seems nice\nto leave that way to interrupt it without resorting to SIGKILL.\n\n\n", "msg_date": "Thu, 7 Jul 2022 08:56:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Hi,\n\nOn 2022-07-07 08:56:33 +1200, Thomas Munro wrote:\n> On Thu, Jul 7, 2022 at 8:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > So I think we need: 1) block most signals, 2) a retry loop *without*\n> > interrupt checks.\n> \n> Yeah. I was also wondering about wrapping the whole function in\n> PG_SETMASK(&BlockSig), PG_SETMASK(&UnBlockSig), but also leaving the\n> while (rc == EINTR) loop there (without the check for *Pending\n> variables), only because otherwise when you attach a debugger and\n> continue you'll get a spurious EINTR and it'll interfere with program\n> execution. All blockable signals would be blocked *except* SIGQUIT,\n> which means that fast shutdown/crash will still work. It seems nice\n> to leave that way to interrupt it without resorting to SIGKILL.\n\nFast shutdown shouldn't use SIGQUIT - did you mean immediate? 
I think\nit's fine to allow immediate shutdowns, but I don't think we should\nallow fast shutdowns to interrupt it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 6 Jul 2022 14:03:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Thu, Jul 7, 2022 at 9:03 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-07 08:56:33 +1200, Thomas Munro wrote:\n> > On Thu, Jul 7, 2022 at 8:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > > So I think we need: 1) block most signals, 2) a retry loop *without*\n> > > interrupt checks.\n> >\n> > Yeah. I was also wondering about wrapping the whole function in\n> > PG_SETMASK(&BlockSig), PG_SETMASK(&UnBlockSig), but also leaving the\n> > while (rc == EINTR) loop there (without the check for *Pending\n> > variables), only because otherwise when you attach a debugger and\n> > continue you'll get a spurious EINTR and it'll interfere with program\n> > execution. All blockable signals would be blocked *except* SIGQUIT,\n> > which means that fast shutdown/crash will still work. It seems nice\n> > to leave that way to interrupt it without resorting to SIGKILL.\n>\n> Fast shutdown shouldn't use SIGQUIT - did you mean immediate? 
I think\n> it's fine to allow immediate shutdowns, but I don't think we should\n> allow fast shutdowns to interrupt it.\n\nErr, yeah, that one.\n\n\n", "msg_date": "Thu, 7 Jul 2022 09:05:27 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Thu, Jul 7, 2022 at 9:05 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jul 7, 2022 at 9:03 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-07-07 08:56:33 +1200, Thomas Munro wrote:\n> > > On Thu, Jul 7, 2022 at 8:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > So I think we need: 1) block most signals, 2) a retry loop *without*\n> > > > interrupt checks.\n\nHere's a draft patch that tries to explain all this in the commit\nmessage and comments.\n\nEven if we go with this approach now, I think it's plausible that we\nmight want to reconsider this yet again one day, perhaps allocating\nmemory with some future asynchronous infrastructure while still\nprocessing interrupts. It's not very nice to hold up recovery or\nProcSignalBarrier for long operations.\n\nI'm a little unclear about ftruncate() here. I don't expect it to\nreport EINTR in other places where we use it (ie to make a local file\non a non-\"slow device\" smaller), because I expect that to be like\nread(), write() etc which we don't wrap in EINTR loops. Here you've\nobserved EINTR when messing around with a debugger*. It seems\ninconsistent to put posix_fallocate() in an EINTR retry loop for the\nbenefit of debuggers, but not ftruncate(). But perhaps that's good\nenough, on the theory that posix_fallocate(1GB) is a very large target\nand you have a decent chance of hitting it.\n\nAnother observation while staring at that ftruncate(): It's entirely\nredundant on Linux, because we only ever call dsm_impl_posix_resize()\nto make the segment bigger. 
Before commit 3c60d0fa (v12) it was\npossible to resize a segment to be smaller with dsm_resize(), so you\nneeded one or t'other depending on the requested size and we just\ncalled both, but dsm_resize() wasn't ever used AFAIK and didn't even\nwork on all DSM implementations, among other problems, so we ripped it\nout. So... on master at least, we could also change the #ifdef to be\neither-or. While refactoring like that, I think we might as well also\nrearrange the code so that the wait event is reported also for other\nOSes, just in case it takes a long time. See 0002 patch.\n\n*It's funny that ftruncate() apparently doesn't automatically restart\nfor ptrace SIGCONT on Linux according to your report, while poll()\ndoes according to my experiments, even though the latter is one of the\nnever-restart functions (it doesn't on other OSes I hack on, and you\nfeel the difference when debugging missing wakeup type bugs...).\nRandom implementation details...", "msg_date": "Thu, 7 Jul 2022 17:58:10 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On 2022-Jul-07, Thomas Munro wrote:\n\n> On Thu, Jul 7, 2022 at 9:05 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Thu, Jul 7, 2022 at 9:03 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2022-07-07 08:56:33 +1200, Thomas Munro wrote:\n> > > > On Thu, Jul 7, 2022 at 8:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > So I think we need: 1) block most signals, 2) a retry loop *without*\n> > > > > interrupt checks.\n> \n> Here's a draft patch that tries to explain all this in the commit\n> message and comments.\n\nI gave 0001 a try. 
I agree with the approach, and it seems to work as\nintended; or at least I couldn't break it under GDB.\n\nI didn't look at 0002, but I wish that the pgstat_report_wait calls were\nmoved to cover both syscalls in a separate commit, just so we still have\nthem even if we decide not to do 0002.\n\nDo you intend to get it pushed before the next minors? I have an\ninterest in that happening. Thanks for working on this.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Here's a general engineering tip: if the non-fun part is too complex for you\nto figure out, that might indicate the fun part is too ambitious.\" (John Naylor)\nhttps://postgr.es/m/CAFBsxsG4OWHBbSDM%3DsSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g%40mail.gmail.com\n\n\n", "msg_date": "Mon, 11 Jul 2022 11:30:07 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Hi,\n\nOn 2022-07-07 17:58:10 +1200, Thomas Munro wrote:\n> Even if we go with this approach now, I think it's plausible that we\n> might want to reconsider this yet again one day, perhaps allocating\n> memory with some future asynchronous infrastructure while still\n> processing interrupts. It's not very nice to hold up recovery or\n> ProcSignalBarrier for long operations.\n\nI think the next improvement would be to do the fallocate in smaller chunks,\nand accept interrupts inbetween.\n\n\n> I'm a little unclear about ftruncate() here. I don't expect it to\n> report EINTR in other places where we use it (ie to make a local file\n> on a non-\"slow device\" smaller), because I expect that to be like\n> read(), write() etc which we don't wrap in EINTR loops. Here you've\n> observed EINTR when messing around with a debugger*. It seems\n> inconsistent to put posix_fallocate() in an EINTR retry loop for the\n> benefit of debuggers, but not ftruncate(). 
But perhaps that's good\n> enough, on the theory that posix_fallocate(1GB) is a very large target\n> and you have a decent chance of hitting it.\n\n> *It's funny that ftruncate() apparently doesn't automatically restart\n> for ptrace SIGCONT on Linux according to your report, while poll()\n> does according to my experiments, even though the latter is one of the\n> never-restart functions (it doesn't on other OSes I hack on, and you\n> feel the difference when debugging missing wakeup type bugs...).\n> Random implementation details...\n\nMy test was basically while (true); {if (!ftruncate()) bleat(); if\n(!fallocate()) bleat();} with a SIGALRM triggering regularly in the\nbackground. The ftruncate failed, rarely, when attaching / detaching strace\n-p. Rarely enough that I had already written you an IM saying that I couldn't\nmake it fail... So it's hard to be confident this can't otherwise be\nhit. With that caveat: I didn't hit it with a \"real\" file on a \"real\"\nfilesystem in a few minutes of trying. But unsurprisingly it triggers when\nputting the file on a tmpfs.\n\n\n> @@ -362,6 +355,14 @@ dsm_impl_posix_resize(int fd, off_t size)\n> {\n> \tint\t\t\trc;\n> \n> +\t/*\n> +\t * Block all blockable signals, except SIGQUIT. posix_fallocate() can run\n> +\t * for quite a long time, and is an all-or-nothing operation. If we\n> +\t * allowed SIGUSR1 to interrupt us repeatedly (for example, due to recovery\n> +\t * conflicts), the retry loop might never succeed.\n> +\t */\n> +\tPG_SETMASK(&BlockSig);\n> +\n> \t/* Truncate (or extend) the file to the requested size. */\n> \trc = ftruncate(fd, size);\n>\n\nHm - given that we've observed ftruncate failing with strace, and that\nstracing to find problems isn't insane, shouldn't we retry the ftruncate too?\nIt's kind of obsoleted by your next patch, but not really, because it's not\nunconceivable that other OSs behave similarly? 
And IIUC you're planning to not\nbackpatch 0002?\n\n\n> +\tpgstat_report_wait_start(WAIT_EVENT_DSM_FILL_ZERO_WRITE);\n\n(not new in this patch, just moved around) - FILL_ZERO_WRITE is imo an odd\ndescription of what's happening... In my understanding this isn't doing any\nwriting, just allocating. But whatever...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 10:45:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Tue, Jul 12, 2022 at 5:45 AM Andres Freund <andres@anarazel.de> wrote:\n> Hm - given that we've observed ftruncate failing with strace, and that\n> stracing to find problems isn't insane, shouldn't we retry the ftruncate too?\n> It's kind of obsoleted by your next patch, but not really, because it's not\n> unconceivable that other OSs behave similarly? And IIUC you're planning to not\n> backpatch 0002?\n\nYeah. Done, and pushed. 0002 not back-patched.\n\n> > + pgstat_report_wait_start(WAIT_EVENT_DSM_FILL_ZERO_WRITE);\n>\n> (not new in this patch, just moved around) - FILL_ZERO_WRITE is imo an odd\n> description of what's happening... In my understanding this isn't doing any\n> writing, just allocating. But whatever...\n\nTrue. Fixed in master.\n\n\n", "msg_date": "Fri, 15 Jul 2022 00:15:22 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Fri, Jul 15, 2022 at 12:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Yeah. Done, and pushed. 0002 not back-patched.\n\nHmm, there were a couple of hard to understand build farm failures.\nMy first thought is that the signal mask stuff should only be done if\nIsUnderPostmaster, otherwise it clobbers the postmaster's signal mask\nwhen reached from dsm_postmaster_startup(). 
Looking into that.\n\n\n", "msg_date": "Fri, 15 Jul 2022 01:02:29 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Fri, Jul 15, 2022 at 1:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jul 15, 2022 at 12:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Yeah. Done, and pushed. 0002 not back-patched.\n>\n> Hmm, there were a couple of hard to understand build farm failures.\n> My first thought is that the signal mask stuff should only be done if\n> IsUnderPostmaster, otherwise it clobbers the postmaster's signal mask\n> when reached from dsm_postmaster_startup(). Looking into that.\n\nI pushed that change, and I hope that clears up the problems seen on\neg curculio. It does raise the more general question of why it's safe\nto assume the signal mask is UnBlockSig on entry in regular backends.\nI expect it to be in released branches, but things get more\ncomplicated as we use DSM in more ways and it's not ideal to bet on\nthat staying true. I checked that this throw-away assertion doesn't\nfail currently:\n\n if (IsUnderPostmaster)\n+ {\n+ sigset_t old;\n+ sigprocmask(SIG_SETMASK, NULL, &old);\n+ Assert(memcmp(&old, &UnBlockSig, sizeof(UnBlockSig)) == 0);\n PG_SETMASK(&BlockSig);\n+ }\n\n... but now I'm wondering if we should be more defensive and possibly\neven save/restore the mask. 
Originally I discounted that because I\nthought I had to go through PG_SETMASK for portability reasons, but on\ncloser inspection, I don't see any reason not to use sigprocmask\ndirectly in Unix-only code.\n\n\n", "msg_date": "Fri, 15 Jul 2022 02:26:10 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On 2022-Jul-15, Thomas Munro wrote:\n\n> I checked that this throw-away assertion doesn't fail currently:\n> \n> if (IsUnderPostmaster)\n> + {\n> + sigset_t old;\n> + sigprocmask(SIG_SETMASK, NULL, &old);\n> + Assert(memcmp(&old, &UnBlockSig, sizeof(UnBlockSig)) == 0);\n> PG_SETMASK(&BlockSig);\n> + }\n> \n> ... but now I'm wondering if we should be more defensive and possibly\n> even save/restore the mask.\n\nYeah, that sounds better to me.\n\n> Originally I discounted that because I thought I had to go through\n> PG_SETMASK for portability reasons, but on closer inspection, I don't\n> see any reason not to use sigprocmask directly in Unix-only code.\n\nISTM it would be cleaner to patch PG_SETMASK to have a second argument\nand to return the original mask if that's not NULL. This is more\ninvasive, but there aren't that many callsites of that macro.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 14 Jul 2022 16:46:52 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> ... 
but now I'm wondering if we should be more defensive and possibly\n> even save/restore the mask.\n\n+1, sounds like a more future-proof solution.\n\n> Originally I discounted that because I\n> thought I had to go through PG_SETMASK for portability reasons, but on\n> closer inspection, I don't see any reason not to use sigprocmask\n> directly in Unix-only code.\n\nSeems like the thing to do is to add a suitable operation to the\npqsignal.h API. We could leave it unimplemented for now on Windows,\nand then worry what to do if we ever need it in that context.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Jul 2022 11:24:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> ISTM it would be cleaner to patch PG_SETMASK to have a second argument\n> and to return the original mask if that's not NULL. This is more\n> invasive, but there aren't that many callsites of that macro.\n\n[ shoulda read your message before replying ]\n\nGiven that this needs back-patched, I think changing PG_SETMASK\nis a bad idea: there might be outside callers. However, we could\nadd another macro with the additional argument. PG_GET_AND_SET_MASK?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Jul 2022 11:27:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Fri, Jul 15, 2022 at 3:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > ISTM it would be cleaner to patch PG_SETMASK to have a second argument\n> > and to return the original mask if that's not NULL. This is more\n> > invasive, but there aren't that many callsites of that macro.\n>\n> [ shoulda read your message before replying ]\n>\n> Given that this needs back-patched, I think changing PG_SETMASK\n> is a bad idea: there might be outside callers. 
However, we could\n> add another macro with the additional argument. PG_GET_AND_SET_MASK?\n\nIt's funny though, the reason we had PG_SETMASK in the first place is\nnot for Windows. Ancient commit 47937403676 added that for long gone\npre-POSIX systems like NeXTSTEP which only had single-argument\nsigsetmask(), not sigprocmask(). In general on Windows we're\nemulating POSIX signal interfaces with normal names like sigemptyset()\netc, it just so happens that we chose to emulate that pre-standard\nsigsetmask() interface (as you complained about in the commit message\nfor a65e0864).\n\nSo why would I add another wrapper like PG_SETMASK and leave it\nunimplemented for now on Windows, when I could just use sigprocmask()\ndirectly and leave it unimplemented for now on Windows?\n\nThe only reason I can think of for a wrapper is to provide a place to\ncheck the return code and ereport (panic?). That seems to be of\nlimited value (how can it fail ... bad \"how\" value, or a sandbox\ndenying some system calls, ...?). I did make sure to preserve the\nerrno though; even though we're assuming these calls can't fail by\nlong standing tradition, I didn't feel like additionally assuming that\nsuccessful calls don't clobber errno.\n\nI guess, coded like this, it should also be safe to do it in the\npostmaster, but I think maybe we should leave it conditional, rather\nthan relying on BlockSig being initialised and sane during early\npostmaster initialisation.", "msg_date": "Fri, 15 Jul 2022 09:22:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> So why would I add another wrapper like PG_SETMASK and leave it\n> unimplemented for now on Windows, when I could just use sigprocmask()\n> directly and leave it unimplemented for now on Windows?\n\nFair enough, I guess. 
No objection to this patch.\n\n(Someday we oughta go ahead and make our Windows signal API look more\nlike POSIX, as I suggested back in 2015. I'm still not taking\npoint on that, though.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Jul 2022 17:34:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Fri, Jul 15, 2022 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (Someday we oughta go ahead and make our Windows signal API look more\n> like POSIX, as I suggested back in 2015. I'm still not taking\n> point on that, though.)\n\nFor the sigprocmask() part, here's a patch that passes CI. Only the\nSIG_SETMASK case is actually exercised by our current code, though.\n\nOne weird thing about our PG_SETMASK() macro is that you couldn't have\nused its return value portably: on Windows we were returning the old\nmask (like sigsetmask(), which has no way to report errors), and on\nUnix we were returning 0/-1 (from setprocmask(), ie the error we never\nchecked).", "msg_date": "Fri, 15 Jul 2022 16:19:10 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Jul 15, 2022 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (Someday we oughta go ahead and make our Windows signal API look more\n>> like POSIX, as I suggested back in 2015. I'm still not taking\n>> point on that, though.)\n\n> For the sigprocmask() part, here's a patch that passes CI. 
Only the\n> SIG_SETMASK case is actually exercised by our current code, though.\n\nPasses an eyeball check, but I can't actually test it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 Jul 2022 09:28:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Sat, Jul 16, 2022 at 1:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Fri, Jul 15, 2022 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> (Someday we oughta go ahead and make our Windows signal API look more\n> >> like POSIX, as I suggested back in 2015. I'm still not taking\n> >> point on that, though.)\n>\n> > For the sigprocmask() part, here's a patch that passes CI. Only the\n> > SIG_SETMASK case is actually exercised by our current code, though.\n>\n> Passes an eyeball check, but I can't actually test it.\n\nThanks. Pushed.\n\nI'm not brave enough to try to write a replacement sigaction() yet,\nbut it does appear that we could rip more ugliness and inconsistencies\nthat way, eg sa_mask.\n\n\n", "msg_date": "Sat, 16 Jul 2022 17:18:25 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Sat, Jul 16, 2022 at 5:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Jul 16, 2022 at 1:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > On Fri, Jul 15, 2022 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> (Someday we oughta go ahead and make our Windows signal API look more\n> > >> like POSIX, as I suggested back in 2015. I'm still not taking\n> > >> point on that, though.)\n> >\n> > > For the sigprocmask() part, here's a patch that passes CI. Only the\n> > > SIG_SETMASK case is actually exercised by our current code, though.\n> >\n> > Passes an eyeball check, but I can't actually test it.\n>\n> Thanks. 
Pushed.\n>\n> I'm not brave enough to try to write a replacement sigaction() yet,\n> but it does appear that we could rip more ugliness and inconsistencies\n> that way, eg sa_mask.\n\nHere's a draft patch that adds a minimal sigaction() implementation\nfor Windows. It doesn't implement stuff we're not using, for sample\nsa_sigaction functions, but it has the sa_mask feature so we can\nharmonize the stuff that I believe you were talking about.", "msg_date": "Wed, 17 Aug 2022 07:51:34 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Wed, Aug 17, 2022 at 07:51:34AM +1200, Thomas Munro wrote:\n> Here's a draft patch that adds a minimal sigaction() implementation\n> for Windows. It doesn't implement stuff we're not using, for sample\n> sa_sigaction functions, but it has the sa_mask feature so we can\n> harmonize the stuff that I believe you were talking about.\n\nDid you see that this paniced ?\n\nhttps://cirrus-ci.com/task/4975957546106880\nhttps://api.cirrus-ci.com/v1/artifact/task/4975957546106880/testrun/build/testrun/recovery/027_stream_regress/log/027_stream_regress_standby_1.log\n\n2022-09-30 09:13:03.496 GMT [7312][startup] PANIC: hash_xlog_split_allocate_page: failed to acquire cleanup lock\n2022-09-30 09:13:03.496 GMT [7312][startup] CONTEXT: WAL redo at 0/7AF6FA8 for Hash/SPLIT_ALLOCATE_PAGE: new_bucket 63, meta_page_masks_updated F, issplitpoint_changed F; blkref #0: rel 1663/16384/23784, blk 45; blkref #1: rel 1663/16384/23784, blk 78; blkref #2: rel 1663/16384/23784, blk 0\n\n\n", "msg_date": "Fri, 30 Sep 2022 13:53:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 13:53:45 -0500, Justin Pryzby wrote:\n> On Wed, Aug 17, 2022 at 07:51:34AM +1200, Thomas Munro wrote:\n> > Here's a draft patch that adds a minimal sigaction() implementation\n> 
> for Windows. It doesn't implement stuff we're not using, for sample\n> > sa_sigaction functions, but it has the sa_mask feature so we can\n> > harmonize the stuff that I believe you were talking about.\n> \n> Did you see that this paniced ?\n> \n> https://cirrus-ci.com/task/4975957546106880\n> https://api.cirrus-ci.com/v1/artifact/task/4975957546106880/testrun/build/testrun/recovery/027_stream_regress/log/027_stream_regress_standby_1.log\n> \n> 2022-09-30 09:13:03.496 GMT [7312][startup] PANIC: hash_xlog_split_allocate_page: failed to acquire cleanup lock\n> 2022-09-30 09:13:03.496 GMT [7312][startup] CONTEXT: WAL redo at 0/7AF6FA8 for Hash/SPLIT_ALLOCATE_PAGE: new_bucket 63, meta_page_masks_updated F, issplitpoint_changed F; blkref #0: rel 1663/16384/23784, blk 45; blkref #1: rel 1663/16384/23784, blk 78; blkref #2: rel 1663/16384/23784, blk 0\n\nThis looks like broken code in hash, independent of any recent changes:\nhttps://www.postgresql.org/message-id/20220817193032.z35vdjhpzkgldrd3%40awork3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Sep 2022 11:59:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" }, { "msg_contents": "On Wed, Aug 17, 2022 at 7:51 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Jul 16, 2022 at 5:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Sat, Jul 16, 2022 at 1:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > > On Fri, Jul 15, 2022 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >> (Someday we oughta go ahead and make our Windows signal API look more\n> > > >> like POSIX, as I suggested back in 2015. I'm still not taking\n> > > >> point on that, though.)\n> > >\n> > > > For the sigprocmask() part, here's a patch that passes CI. 
Only the\n> > > > SIG_SETMASK case is actually exercised by our current code, though.\n> > >\n> > > Passes an eyeball check, but I can't actually test it.\n> >\n> > Thanks. Pushed.\n> >\n> > I'm not brave enough to try to write a replacement sigaction() yet,\n> > but it does appear that we could rip more ugliness and inconsistencies\n> > that way, eg sa_mask.\n>\n> Here's a draft patch that adds a minimal sigaction() implementation\n> for Windows. It doesn't implement stuff we're not using, for sample\n> sa_sigaction functions, but it has the sa_mask feature so we can\n> harmonize the stuff that I believe you were talking about.\n\nPushed.\n\nAs discussed before, a much better idea would probably be to use\nlatch-based wakeup instead of putting postmaster logic in signal\nhandlers, but in the meantime this gets rid of the extra\nWindows-specific noise.\n\n\n", "msg_date": "Wed, 9 Nov 2022 13:11:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR in ftruncate()" } ]
[ { "msg_contents": "[ Before settling into commitfest mode, I wanted to put out a snapshot\nof what I've been working on for the past few weeks. This is not\nanywhere near committable, but I think people might be interested\nin looking at it now anyway. ]\n\nWe've had many discussions (eg [1][2]) about the need to treat outer\njoins more honestly in parsed queries, so that the planner's reasoning\nabout things like equivalence classes can stand on a firmer foundation.\nThe attached patch series makes a start at doing that, and carries the\nidea as far as a working system in which all Vars are labeled as to\nwhich outer joins can null them. I have not yet gotten to the fun part\nof fixing or ripping out all the higher-level planner logic that could\nnow be simplified or removed entirely --- but as examples, I believe\nthat reconsider_outer_join_clause no longer does anything useful, and\na lot of the logic in deconstruct_jointree and distribute_qual_to_rels\ncould be simplified, and we shouldn't need the notion of\nsecond-class-citizen EquivalenceClasses for \"below outer join\" cases.\n\nAnother thing that could be built on this infrastructure, but I've\nnot tackled it yet, is fixing the known semantic bugs in grouping sets\n[3][4]. What I have in mind there is to invent a dummy RTE representing\nthe action of grouping, and use Vars that are marked as nullable by that\nRTE to represent possibly-nullable grouping-set expressions.\n\nThe main thing here that differs from my previous ideas is that the\nnulling-rel labeling is placed directly on Vars or PlaceHolderVars,\nwhereas I had been advocating to use some sort of wrapper node instead.\nAfter several failed attempts I decided that it was too complicated\nto separate the labeling from the Var itself. 
(I'll just mention one\nweak spot in that idea: the entire API concept of replace_rte_variables\nbreaks down, because many of the callbacks using it need to manipulate\nnulling-rel labeling along the way, which they can only do if they\nsee it on the Var they're passed.) Of course, the objection to doing it\nlike this is that it bloats struct Var, which is a mighty common node\ntype, even in cases where there's no outer join anywhere. However, on\na 64-bit machine struct Var would widen from 40 to 48 bytes, which is\nbasically free considering that palloc will round the allocation up to\n64 bytes. There's a more valid consideration that the pg_node_tree\nrepresentation of a Var will get longer; but really, if you're worried\nabout that you should be designing a more compact storage representation\nfor node trees. There's also reason to fear that the distributed cost\nof maintaining these extra Bitmapsets will pose a noticeable drag on\nparsing or planning speed. However, I see little point in doing\nperformance measurements when we've not yet reaped any of the\nforeseeable planner improvements.\n\nAnyway, on to the patch series. I've broken it up a little bit\nfor review, but note I'm not claiming that the intermediate states\nwould compile or pass regression testing.\n\n0000: Write some overview documentation in optimizer/README.\nThis might be worth reading even if you lack time to look at the code.\nI went into some detail about Var semantics, and also added a discussion\nof PlaceHolderVars, which Robert has rightfully complained are badly\nunderdocumented. (At one point I'd thought maybe we could get rid of\nPlaceHolderVars, but now I see them as complementary to this work ---\nindeed, arguably the reason for them to exist is so that a Var's\nnullingrels markers are not lost when replacing it with a pulled-up\nexpression from a subquery.) 
The changes in the section about\nEquivalenceClasses are pretty rough and speculative, since I've not\nactually coded those changes yet.\n\n0001: add infrastructure, namely add new fields to assorted data\nstructures and update places like backend/nodes/*.c. This is mostly\npretty boring, except for the commentary changes in *nodes.h.\n\n0002: change the parser to correctly label emitted Vars with the\nsets of outer joins that can null them, according to the query text\nas-written. (That is, we don't account here for the possibility\nof join strength reduction or anything like that.)\n\n0003: fix the planner to cope, including adjusting nullingrel labeling\nfor join elimination, join strength reduction, join reordering, etc.\nThis is still WIP to some extent. In particular note all the XXX\ncomments in setrefs.c complaining about how we're not verifying that the\nnullingrel states agree when matching upper-level Vars to lower-level\nones. This is partly due to setrefs.c not having readily-available info\nabout which outer joins are applied at which plan nodes (should we\nexpend the space to mark them?), and partly because I'm not sure\nthat we can enforce 100% consistency there anyway. Because of the\ncompromises involved in implementing outer-join identity 3 (see 0000),\nthere will be cases where an upper Var that \"should\" have a nullingrel\nbit set will not. I don't know how to make a hole in the check that\nwill allow those cases without rendering such checking mostly useless.\n\nIs there a way that we can do the identity-3 transformation without\nbeing squishy about the nullability state of Vars in the moved qual?\nI've not thought of one, but it's very annoying considering that the\nwhole point of this patch series is to not be squishy about that.\nI guess the good news is that the squishiness only seems to be needed\nduring final transformation of the plan, where all we are losing is\nthe ability to detect bugs in earlier planner stages. 
All of the\ndecisions that actually count seem to work fine without compromises.\n\nSo far the patch series changes no regression test results, and I've\nnot added any new tests either. The next steps will probably have\nvisible effects in the form of improved plans for some test queries.\n\nAnyway, even though this is far from done, I'm pretty pleased with\nthe results so far, so I thought I'd put it out for review by\nanyone who cares to take a look. I'll add it to the September CF\nin hopes that it might be more or less finished by then, and so\nthat the cfbot will check it out.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/7771.1576452845%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/flat/15848.1576515643%40sss.pgh.pa.us\n[3] https://www.postgresql.org/message-id/17071-24dc13fbfa29672d%40postgresql.org\n[4] https://www.postgresql.org/message-id/CAMbWs48AtQTQGk37MSyDk_EAgDO3Y0iA_LzvuvGQ2uO_Wh2muw%40mail.gmail.com", "msg_date": "Fri, 01 Jul 2022 12:42:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Making Vars outer-join aware" }, { "msg_contents": "Tom, two quick questions before attempting to read the patch:\n\n Given that views are represented in a parsed representation, does anything need to happen to the Vars inside a view when that view is outer-joined to? \n\n If an outer join is converted to an inner join, must this information get propagated to all the affected Vars, potentially across query block levels?\n\n\n", "msg_date": "Fri, 1 Jul 2022 20:19:48 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "\"Finnerty, Jim\" <jfinnert@amazon.com> writes:\n> Given that views are represented in a parsed representation, does anything need to happen to the Vars inside a view when that view is outer-joined to? \n\nNo. 
The markings only refer to what is in the same Query tree as the Var\nitself.\n\nSubquery flattening during planning does deal with this: if we pull up a\nsubquery (possibly inserted from a view) that was underneath an outer\njoin, the nullingrel marks on the upper-level Vars referring to subquery\noutputs will get merged into what is pulled up, either by unioning the\nvarnullingrel bitmaps if what is pulled up is just a Var, or if what is\npulled up isn't a Var, by wrapping it in a PlaceHolderVar that carries\nthe old outer Var's markings. We had essentially this same behavior\nwith PlaceHolderVars before, but I think this way makes it a lot more\nprincipled and intelligible (and I suspect there are now cases where we\nmanage to avoid inserting unnecessary PlaceHolderVars that the old code\ncouldn't avoid).\n\n> If an outer join is converted to an inner join, must this information get propagated to all the affected Vars, potentially across query block levels?\n\nYes. The code is there in the patch to run around and remove nullingrel\nbits from affected Vars.\n\nOne thing that doesn't happen (and didn't before, so this is not a\nregression) is that if we strength-reduce a FULL JOIN USING to an outer\nor plain join, it'd be nice if the \"COALESCE\" hack we represent the\nmerged USING column with could be replaced with the same lower-relation\nVar that the parser would have used if the join weren't FULL to begin\nwith. Without that, we're leaving optimization opportunities on the\ntable. I'm hesitant to try to do that though as long as the COALESCE\nstructures look exactly like something a user could write. 
It'd be\nsafer if we used some bespoke node structure for this purpose ...\nbut nobody's bothered to invent that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Jul 2022 16:40:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Sat, Jul 2, 2022 at 12:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Anyway, even though this is far from done, I'm pretty pleased with\n> the results so far, so I thought I'd put it out for review by\n> anyone who cares to take a look. I'll add it to the September CF\n> in hopes that it might be more or less finished by then, and so\n> that the cfbot will check it out.\n>\n\nThanks for the work! I have a question about qual clause placement.\n\nFor the query in the example\n\n SELECT * FROM t1 LEFT JOIN t2 ON (t1.x = t2.y) WHERE foo(t2.z)\n\n(foo() is not strict.) We want to avoid pushing foo(t2.z) down to the t2\nscan level. Previously we do that with check_outerjoin_delay() by\nscanning all the outer joins below and check if the qual references any\nnullable rels of the OJ, and if so include the OJ's rels into the qual.\nSo as a result we'd get that foo(t2.z) is referencing t1 and t2, and\nwe'd put the qual into the join lists of t1 and t2.\n\nNow there is the 'varnullingrels' marker in the t2.z, which is the LEFT\nJOIN below (with RTI 3). So we consider the qual is referencing RTE 2\n(which is t2) and RTE 3 (which is the OJ). Do we still need to include\nRTE 1, i.e. t1 into the qual's required relids? How should we do that?\n\nThanks\nRichard\n\nOn Sat, Jul 2, 2022 at 12:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nAnyway, even though this is far from done, I'm pretty pleased with\nthe results so far, so I thought I'd put it out for review by\nanyone who cares to take a look.  I'll add it to the September CF\nin hopes that it might be more or less finished by then, and so\nthat the cfbot will check it out.Thanks for the work! 
I have a question about qual clause placement.For the query in the example    SELECT * FROM t1 LEFT JOIN t2 ON (t1.x = t2.y) WHERE foo(t2.z)(foo() is not strict.) We want to avoid pushing foo(t2.z) down to the t2scan level. Previously we do that with check_outerjoin_delay() byscanning all the outer joins below and check if the qual references anynullable rels of the OJ, and if so include the OJ's rels into the qual.So as a result we'd get that foo(t2.z) is referencing t1 and t2, andwe'd put the qual into the join lists of t1 and t2.Now there is the 'varnullingrels' marker in the t2.z, which is the LEFTJOIN below (with RTI 3). So we consider the qual is referencing RTE 2(which is t2) and RTE 3 (which is the OJ). Do we still need to includeRTE 1, i.e. t1 into the qual's required relids? How should we do that?ThanksRichard", "msg_date": "Tue, 5 Jul 2022 19:02:38 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> For the query in the example\n\n> SELECT * FROM t1 LEFT JOIN t2 ON (t1.x = t2.y) WHERE foo(t2.z)\n\n> (foo() is not strict.) We want to avoid pushing foo(t2.z) down to the t2\n> scan level. Previously we do that with check_outerjoin_delay() by\n> scanning all the outer joins below and check if the qual references any\n> nullable rels of the OJ, and if so include the OJ's rels into the qual.\n> So as a result we'd get that foo(t2.z) is referencing t1 and t2, and\n> we'd put the qual into the join lists of t1 and t2.\n\n> Now there is the 'varnullingrels' marker in the t2.z, which is the LEFT\n> JOIN below (with RTI 3). So we consider the qual is referencing RTE 2\n> (which is t2) and RTE 3 (which is the OJ). Do we still need to include\n> RTE 1, i.e. t1 into the qual's required relids? 
How should we do that?\n\nIt seems likely to me that we could leave the qual's required_relids\nas just {2,3} and not have to bother ORing any additional bits into\nthat. However, in the case of a Var-free JOIN/ON clause it'd still\nbe necessary to artificially add some relids to its initially empty\nrelids. Since I've not yet tried to rewrite distribute_qual_to_rels\nI'm not sure how the details will shake out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Jul 2022 10:24:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Here's v2 of this patch series. It's functionally identical to v1,\nbut I've rebased it over the recent auto-node-support-generation\nchanges, and also extracted a few separable bits in hopes of making\nthe main planner patch smaller. (It's still pretty durn large,\nunfortunately.) Unlike the original submission, each step will\ncompile on its own, though the intermediate states mostly don't\npass all regression tests.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 10 Jul 2022 15:38:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Sun, Jul 10, 2022 at 12:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Here's v2 of this patch series. It's functionally identical to v1,\n> but I've rebased it over the recent auto-node-support-generation\n> changes, and also extracted a few separable bits in hopes of making\n> the main planner patch smaller. (It's still pretty durn large,\n> unfortunately.) 
Unlike the original submission, each step will\n> compile on its own, though the intermediate states mostly don't\n> pass all regression tests.\n>\n> regards, tom lane\n>\n> Hi,\nFor v2-0004-cope-with-nullability-in-planner.patch.\nIn remove_unneeded_nulling_relids():\n\n+ if (removable_relids == NULL)\n\nWhy is bms_is_empty() not used in the above check ?\nEarlier there is `if (bms_is_empty(old_nulling_relids))`\n\n+typedef struct reduce_outer_joins_partial_state\n\nSince there are already reduce_outer_joins_pass1_state\nand reduce_outer_joins_pass2_state, a comment\nabove reduce_outer_joins_partial_state would help other people follow its\npurpose.\n\n+ if (j->rtindex)\n+ {\n+ if (j->jointype == JOIN_INNER)\n+ {\n+ if (include_inner_joins)\n+ result = bms_add_member(result, j->rtindex);\n+ }\n+ else\n+ {\n+ if (include_outer_joins)\n\nSince there are other join types beside JOIN_INNER, should there be an\nassertion in the else block ? e.g. jointype wouldn't be JOIN_UNIQUE_INNER.\n\nCheers", "msg_date": "Sun, 10 Jul 2022 14:04:41 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> In remove_unneeded_nulling_relids():\n\n> + if (removable_relids == NULL)\n\n> Why is bms_is_empty() not used in the above check ?\n\nWe initialized that to NULL just a few lines above, and then did\nnothing to it other than perhaps bms_add_member, so it's impossible\nfor it to be empty-and-yet-not-NULL.\n\n> +typedef struct reduce_outer_joins_partial_state\n\n> Since there are already reduce_outer_joins_pass1_state\n> and reduce_outer_joins_pass2_state, a comment\n> above reduce_outer_joins_partial_state would help other people follow its\n> purpose.\n\nWe generally document these sorts of structs in the using code,\nnot at the struct declaration.\n\n> + if (j->rtindex)\n> + {\n> + if (j->jointype == JOIN_INNER)\n> + {\n> + if (include_inner_joins)\n> + result = bms_add_member(result, j->rtindex);\n> + }\n> + else\n> + {\n> + if (include_outer_joins)\n\n> Since there are other join types beside JOIN_INNER, should there be an\n> assertion in the else block?\n\nLike what? We don't particularly care what the join type is here,\nas long as it's not INNER. In any case there is plenty of nearby\ncode checking for unsupported join types.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 10 Jul 2022 19:51:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Mon, Jul 11, 2022 at 3:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Here's v2 of this patch series. It's functionally identical to v1,\n> but I've rebased it over the recent auto-node-support-generation\n> changes, and also extracted a few separable bits in hopes of making\n> the main planner patch smaller. 
(It's still pretty durn large,\n> unfortunately.) Unlike the original submission, each step will\n> compile on its own, though the intermediate states mostly don't\n> pass all regression tests.\n\n\nNoticed a different behavior from previous regarding PlaceHolderVar.\nTake the query below as an example:\n\nselect a.i, ss.jj from a left join (select b.i, b.j + 1 as jj from b) ss\non a.i = ss.i;\n\nPreviously the expression 'b.j + 1' would not be wrapped in a\nPlaceHolderVar, since it contains a Var of the subquery and meanwhile it\ndoes not contain any non-strict constructs. And now in the patch, we\nwould insert a PlaceHolderVar for it, in order to have a place to store\nvarnullingrels. So the plan for the above query now becomes:\n\n# explain (verbose, costs off) select a.i, ss.jj from a left join\n(select b.i, b.j + 1 as jj from b) ss on a.i = ss.i;\n QUERY PLAN\n----------------------------------\n Hash Right Join\n Output: a.i, ((b.j + 1))\n Hash Cond: (b.i = a.i)\n -> Seq Scan on public.b\n Output: b.i, (b.j + 1)\n -> Hash\n Output: a.i\n -> Seq Scan on public.a\n Output: a.i\n(9 rows)\n\nNote that the evaluation of expression 'b.j + 1' now occurs below the\nouter join. Is this something we need to be concerned about?\n\nThanks\nRichard", "msg_date": "Tue, 12 Jul 2022 15:20:37 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Note that the evaluation of expression 'b.j + 1' now occurs below the\n> outer join. Is this something we need to be concerned about?\n\nIt seems more formally correct to me, but perhaps somebody would\ncomplain about possibly-useless expression evals. We could likely\nre-complicate the logic that inserts PHVs during pullup so that it\nlooks for Vars it can apply the nullingrels to. Maybe there's an
Maybe there's an\nopportunity to share code with flatten_join_alias_vars?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Jul 2022 09:37:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Tue, Jul 12, 2022 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > Note that the evaluation of expression 'b.j + 1' now occurs below the\n> > outer join. Is this something we need to be concerned about?\n>\n> It seems more formally correct to me, but perhaps somebody would\n> complain about possibly-useless expression evals. We could likely\n> re-complicate the logic that inserts PHVs during pullup so that it\n> looks for Vars it can apply the nullingrels to. Maybe there's an\n> opportunity to share code with flatten_join_alias_vars?\n\n\nYeah, maybe we can extend and leverage the codes in\nadjust_standard_join_alias_expression() to do that?\n\nBut I'm not sure which is better, to evaluate the expression below or\nabove the outer join. It seems to me that if the size of base rel is\nlarge and somehow the size of the joinrel is small, evaluation above the\nouter join would win. And in the opposite case evaluation below the\nouter join would be better.\n\nThanks\nRichard\n\nOn Tue, Jul 12, 2022 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Richard Guo <guofenglinux@gmail.com> writes:\n> Note that the evaluation of expression 'b.j + 1' now occurs below the\n> outer join. Is this something we need to be concerned about?\n\nIt seems more formally correct to me, but perhaps somebody would\ncomplain about possibly-useless expression evals.  We could likely\nre-complicate the logic that inserts PHVs during pullup so that it\nlooks for Vars it can apply the nullingrels to.  
Maybe there's an\nopportunity to share code with flatten_join_alias_vars?Yeah, maybe we can extend and leverage the codes inadjust_standard_join_alias_expression() to do that?But I'm not sure which is better, to evaluate the expression below orabove the outer join. It seems to me that if the size of base rel islarge and somehow the size of the joinrel is small, evaluation above theouter join would win. And in the opposite case evaluation below theouter join would be better.ThanksRichard", "msg_date": "Wed, 13 Jul 2022 15:48:41 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> But I'm not sure which is better, to evaluate the expression below or\n> above the outer join. It seems to me that if the size of base rel is\n> large and somehow the size of the joinrel is small, evaluation above the\n> outer join would win. And in the opposite case evaluation below the\n> outer join would be better.\n\nReasonable question. But I think for the purposes of this patch,\nit's better to keep the old behavior as much as we can. People\nhave probably relied on it while tuning queries. 
(I'm not saying\nit has to be *exactly* bug-compatible, but simple cases like your\nexample probably should work the same.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Jul 2022 10:09:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Here's a rebase up to HEAD, mostly to placate the cfbot.\nI accounted for d8e34fa7a (s/all_baserels/all_query_rels/\nin those places) and made one tiny bug-fix change.\nNothing substantive as yet.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 01 Aug 2022 15:51:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> For v3-0003-label-Var-nullability-in-parser.patch :\n\n> + if (rtindex > 0 && rtindex <= list_length(pstate->p_nullingrels))\n> + relids = (Bitmapset *) list_nth(pstate->p_nullingrels, rtindex - 1);\n> + else\n> + relids = NULL;\n> +\n> + /*\n> + * Merge with any already-declared nulling rels. (Typically there won't\n> + * be any, but let's get it right if there are.)\n> + */\n> + if (relids != NULL)\n\n> It seems the last if block can be merged into the previous if block. 
That\n> way `relids = NULL` can be omitted.\n\nNo, because the list entry we fetch could be NULL.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Aug 2022 16:26:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Mon, Aug 1, 2022 at 12:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Here's a rebase up to HEAD, mostly to placate the cfbot.\n> I accounted for d8e34fa7a (s/all_baserels/all_query_rels/\n> in those places) and made one tiny bug-fix change.\n> Nothing substantive as yet.\n>\n> regards, tom lane\n>\n> Hi,\nFor v3-0003-label-Var-nullability-in-parser.patch :\n\n+ if (rtindex > 0 && rtindex <= list_length(pstate->p_nullingrels))\n+ relids = (Bitmapset *) list_nth(pstate->p_nullingrels, rtindex - 1);\n+ else\n+ relids = NULL;\n+\n+ /*\n+ * Merge with any already-declared nulling rels. (Typically there won't\n+ * be any, but let's get it right if there are.)\n+ */\n+ if (relids != NULL)\n\nIt seems the last if block can be merged into the previous if block. That\nway `relids = NULL` can be omitted.\n\nCheers", "msg_date": "Mon, 1 Aug 2022 13:28:37 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Tue, Aug 2, 2022 at 3:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Here's a rebase up to HEAD, mostly to placate the cfbot.\n> I accounted for d8e34fa7a (s/all_baserels/all_query_rels/\n> in those places) and made one tiny bug-fix change.\n> Nothing substantive as yet.\n\n\nWhen we add required PlaceHolderVars to a join rel's targetlist, if the\nPHV can be computed in the nullable-side of the join, we would add the\njoin's RT index to phnullingrels. This is correct as we know the PHV's\nvalue can be nulled at this join. But I'm wondering if it is necessary\nsince we have already propagated any varnullingrels into the PHV when we\napply pullup variable replacement in perform_pullup_replace_vars().\n\nOn the other hand, the phnullingrels may contain RT indexes of outer\njoins above this join level. It seems not good to add such a PHV to the\njoinrel's targetlist. Below is an example:\n\n# explain (verbose, costs off) select a.i, ss.jj from a left join (b left\njoin (select c.i, coalesce(c.j, 1) as jj from c) ss on b.i = ss.i) on true;\n QUERY PLAN\n---------------------------------------------------------\n Nested Loop Left Join\n Output: a.i, (COALESCE(c.j, 1))\n -> Seq Scan on public.a\n Output: a.i, a.j\n -> Materialize\n Output: (COALESCE(c.j, 1))\n -> Hash Left Join\n Output: (COALESCE(c.j, 1))\n Hash Cond: (b.i = c.i)\n -> Seq Scan on public.b\n Output: b.i, b.j\n -> Hash\n Output: c.i, (COALESCE(c.j, 1))\n -> Seq Scan on public.c\n Output: c.i, COALESCE(c.j, 1)\n(15 rows)\n\nIn this query, for the joinrel {B, C}, the PHV in its targetlist has a\nphnullingrels that contains the join of {A} and {BC}, which is confusing\nbecause we have not reached that join level.\n\nI tried the changes below to illustrate the two issues. 
The assertion\nholds true during regression tests and the error pops up for the query\nabove.\n\n--- a/src/backend/optimizer/util/placeholder.c\n+++ b/src/backend/optimizer/util/placeholder.c\n@@ -464,18 +464,20 @@ add_placeholders_to_joinrel(PlannerInfo *root, RelOptInfo *joinrel,\n {\n if (sjinfo->jointype == JOIN_FULL && sjinfo->ojrelid != 0)\n {\n- /* PHV's value can be nulled at this join */\n- phv->phnullingrels = bms_add_member(phv->phnullingrels,\n- sjinfo->ojrelid);\n+ Assert(bms_is_member(sjinfo->ojrelid, phv->phnullingrels));\n+\n+ if (!bms_is_subset(phv->phnullingrels, joinrel->relids))\n+ elog(ERROR, \"phnullingrels is not subset of joinrel's relids\");\n }\n }\n else if (bms_is_subset(phinfo->ph_eval_at, inner_rel->relids))\n {\n if (sjinfo->jointype != JOIN_INNER && sjinfo->ojrelid != 0)\n {\n- /* PHV's value can be nulled at this join */\n- phv->phnullingrels = bms_add_member(phv->phnullingrels,\n- sjinfo->ojrelid);\n+ Assert(bms_is_member(sjinfo->ojrelid, phv->phnullingrels));\n+\n+ if (!bms_is_subset(phv->phnullingrels, joinrel->relids))\n+ elog(ERROR, \"phnullingrels is not subset of joinrel's relids\");\n }\n }\n\n\nIf the two issues are indeed something we need to fix, maybe we can\nchange add_placeholders_to_joinrel() to search the PHVs from\nouter_rel/inner_rel's targetlist, and add the ojrelid to phnullingrels\nif needed, just like what we do in build_joinrel_tlist(). 
The PHVs there\nshould have the correct phnullingrels (at least the PHVs in base rels'\ntargetlists have correctly set phnullingrels to NULL).\n\nThanks\nRichard", "msg_date": "Mon, 15 Aug 2022 16:48:23 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> When we add required PlaceHolderVars to a join rel's targetlist, if the\n> PHV can be computed in the nullable-side of the join, we would add the\n> join's RT index to phnullingrels. This is correct as we know the PHV's\n> value can be nulled at this join. But I'm wondering if it is necessary\n> since we have already propagated any varnullingrels into the PHV when we\n> apply pullup variable replacement in perform_pullup_replace_vars().\n\nI'm not seeing the connection there? Any nullingrels that are set\nduring perform_pullup_replace_vars would refer to outer joins within the\npulled-up subquery, whereas what you are talking about here is what\nhappens when the PHV's value propagates up through outer joins of the\nparent query. There's no overlap between those relid sets, or if there\nis we have some fault in the logic that constrains join order to ensure\nthat there's a valid place to compute each PHV.\n\n> On the other hand, the phnullingrels may contain RT indexes of outer\n> joins above this join level. It seems not good to add such a PHV to the\n> joinrel's targetlist.\n\nHmm, yeah, add_placeholders_to_joinrel is doing this wrong. The\nintent was to match what build_joinrel_tlist does with plain Vars,\nbut in that case we're adding the join's relid to what we find in\nvarnullingrels in the input tlist. 
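(Schematically, that rule looks like the following --- a plain-Python stand-in for illustration only, with relid sets modeled as frozensets rather than Bitmapsets, and with an invented function name and tuple layout rather than the actual build_joinrel_tlist code:)

```python
# Illustrative sketch of the build_joinrel_tlist rule being described:
# when a Var from the nullable side crosses an outer join, the join's
# relid is added to the nulling marks already found in the input
# targetlist entry.  Entries are (varno, varattno, varnullingrels).
def add_to_joinrel_tlist(input_tlist, nullable_side_relids, ojrelid):
    out = []
    for relid, attno, nullingrels in input_tlist:
        if ojrelid is not None and relid in nullable_side_relids:
            # this Var can be nulled by the outer join being formed
            nullingrels = nullingrels | {ojrelid}
        out.append((relid, attno, nullingrels))
    return out

# t2 (relid 2) is on the nullable side of the outer join with RTI 3;
# t1 (relid 1) is not, so its entry passes through unchanged.
tlist = [(1, 1, frozenset()), (2, 1, frozenset())]
joined = add_to_joinrel_tlist(tlist, {2}, 3)
assert joined == [(1, 1, frozenset()), (2, 1, frozenset({3}))]
```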
Using the phnullingrels from\nthe placeholder_list entry is wrong. (I wonder whether a\nplaceholder_list entry's phnullingrels is meaningful at all?\nMaybe it isn't.) I think it might work to take the intersection\nof the join's relids with root->outer_join_rels.\n\n> If the two issues are indeed something we need to fix, maybe we can\n> change add_placeholders_to_joinrel() to search the PHVs from\n> outer_rel/inner_rel's targetlist, and add the ojrelid to phnullingrels\n> if needed, just like what we do in build_joinrel_tlist(). The PHVs there\n> should have the correct phnullingrels (at least the PHVs in base rels'\n> targetlists have correctly set phnullingrels to NULL).\n\nYeah, maybe we should do something more invasive and make use of the\ninput targetlists rather than doing this from scratch. Not sure.\nI'm worried that doing it that way would increase the risk of getting\ndifferent join tlist contents depending on which pair of input rels\nwe happen to consider first.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Aug 2022 12:08:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "I wrote:\n> Richard Guo <guofenglinux@gmail.com> writes:\n>> If the two issues are indeed something we need to fix, maybe we can\n>> change add_placeholders_to_joinrel() to search the PHVs from\n>> outer_rel/inner_rel's targetlist, and add the ojrelid to phnullingrels\n>> if needed, just like what we do in build_joinrel_tlist(). The PHVs there\n>> should have the correct phnullingrels (at least the PHVs in base rels'\n>> targetlists have correctly set phnullingrels to NULL).\n\n> Yeah, maybe we should do something more invasive and make use of the\n> input targetlists rather than doing this from scratch. 
Not sure.\n> I'm worried that doing it that way would increase the risk of getting\n> different join tlist contents depending on which pair of input rels\n> we happen to consider first.\n\nAfter chewing on that for awhile, I've concluded that that is the way\nto go. 0001 attached is a standalone patch to rearrange the way that\nPHVs are added to joinrel targetlists. It results in one cosmetic\nchange in the regression tests, where the targetlist order for an\nintermediate join node changes. I think that's fine; if anything,\nthe new order is more sensible than the old because it matches the\ninputs' targetlist orders better.\n\nI believe the reason I didn't do it like this to start with is that\nI was concerned about the cost of searching the placeholder_list\nrepeatedly. With a lot of PHVs, build_joinrel_tlist becomes O(N^2)\njust from the repeated find_placeholder_info lookups. We can fix\nthat by adding an index array to go straight from phid to the\nPlaceHolderInfo. While thinking about where to construct such\nan index array, I decided it'd be a good idea to have an explicit\nstep to \"freeze\" the set of PlaceHolderInfos, at the start of\ndeconstruct_jointree. This allows getting rid of the create_new_ph\nflags for find_placeholder_info and add_vars_to_targetlist, which\nI've always feared were bugs waiting to happen: they require callers\nto have a very clear understanding of when they're invoked. There\nmight be some speed gain over existing code too, but I've not really\ntried to measure it. 
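To put the complexity point above in concrete terms, here is a toy Python sketch (invented names, not PostgreSQL source code) contrasting the old linear placeholder_list search with a phid-indexed array that is enlarged on demand, as described in the message:

```python
# Toy model of looking up placeholder info by ID.
# A linear scan over a list is O(N) per lookup, so N lookups cost O(N^2);
# keeping a parallel array indexed directly by phid makes each lookup O(1).

class PlaceHolderInfo:
    def __init__(self, phid):
        self.phid = phid

def find_linear(ph_list, phid):
    # the old approach: scan the whole list for a matching phid
    for phinfo in ph_list:
        if phinfo.phid == phid:
            return phinfo
    return None

class PlaceHolderIndex:
    # the new approach: an array indexed by phid, grown as needed
    # (analogous to how simple_rel_array is enlarged)
    def __init__(self):
        self.array = []

    def add(self, phinfo):
        if phinfo.phid >= len(self.array):
            self.array.extend([None] * (phinfo.phid + 1 - len(self.array)))
        self.array[phinfo.phid] = phinfo

    def find(self, phid):
        return self.array[phid] if phid < len(self.array) else None

phs = [PlaceHolderInfo(i) for i in range(100)]
idx = PlaceHolderIndex()
for ph in phs:
    idx.add(ph)

assert find_linear(phs, 42) is idx.find(42)
assert idx.find(999) is None
```

This is only an illustration of the data-structure change being discussed; the actual patch operates on PlannerInfo and PlaceHolderInfo nodes in C.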
I did drop a couple of hacks that were only\nmeant to short-circuit find_placeholder_info calls; that function\nshould now be cheap enough to not matter.\n\nBarring objections, I'd like to push these soon and then rebase\nthe main outer-join-vars patch set over them.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 16 Aug 2022 15:24:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "I wrote:\n> ... We can fix\n> that by adding an index array to go straight from phid to the\n> PlaceHolderInfo. While thinking about where to construct such\n> an index array, I decided it'd be a good idea to have an explicit\n> step to \"freeze\" the set of PlaceHolderInfos, at the start of\n> deconstruct_jointree.\n\nOn further thought, it seems better to maintain the index array\nfrom the start, allowing complete replacement of the linear list\nsearches. We can add a separate bool flag to denote frozen-ness.\nThis does have minor downsides:\n\n* Some fiddly code is needed to enlarge the index array at need.\nBut it's not different from that for, say, simple_rel_array.\n\n* If we ever have a need to mutate the placeholder_list as a whole,\nwe'd have to reconstruct the index array to point to the new\nobjects. We don't do that at present, except in one place in\nanalyzejoins.c, which is easily fixed. 
While the same argument\ncould be raised against the v1 patch, it's not very likely that\nwe'd be doing such mutation beyond the start of deconstruct_jointree.\n\nHence, see v2 attached.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 16 Aug 2022 16:57:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Wed, Aug 17, 2022 at 4:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> On further thought, it seems better to maintain the index array\n> from the start, allowing complete replacement of the linear list\n> searches. We can add a separate bool flag to denote frozen-ness.\n\n\n+1 for 0001 patch. Now we process plain Vars and PlaceHolderVars in a\nmore consistent way when building joinrel's tlist. And this change would\nmake it easier to build up phnullingrels for PHVs as we climb up the\njoin tree.\n\nBTW, the comment just above the two calls to build_joinrel_tlist says:\n\n * Create a new tlist containing just the vars that need to be output from\n\nHere by 'vars' it means both plain Vars and PlaceHolderVars, right? If\nnot we may need to adjust this comment to also include PlaceHolderVars.\n\n\n0002 patch looks good to me. Glad we can get rid of create_new_ph flag.\nA minor comment is that seems we can get rid of phid inside\nPlaceHolderInfo, since we do not do linear list searches any more. It's\nsome duplicate to the phid inside PlaceHolderVar. Currently there are\ntwo places referencing PlaceHolderInfo->phid, remove_rel_from_query and\nfind_placeholder_info. We can use PlaceHolderVar->phid instead in both\nthe two places.\n\nThanks\nRichard", "msg_date": "Wed, 17 Aug 2022 17:25:53 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> BTW, the comment just above the two calls to build_joinrel_tlist says:\n> * Create a new tlist containing just the vars that need to be output from\n> Here by 'vars' it means both plain Vars and PlaceHolderVars, right? If\n> not we may need to adjust this comment to also include PlaceHolderVars.\n\nI think it did intend just Vars because that's all that\nbuild_joinrel_tlist did; but we really should have updated it when we\ninvented PlaceHolderVars, and even more so now that build_joinrel_tlist\nadds PHVs too. I changed the wording.\n\n> A minor comment is that seems we can get rid of phid inside\n> PlaceHolderInfo, since we do not do linear list searches any more. It's\n> some duplicate to the phid inside PlaceHolderVar. 
Currently there are\n> two places referencing PlaceHolderInfo->phid, remove_rel_from_query and\n> find_placeholder_info. We can use PlaceHolderVar->phid instead in both\n> the two places.\n\nMeh, I'm not excited about that. I don't think that the phid field\nis only there to make the search loops faster; it's the basic\nidentity of the PlaceHolderInfo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Aug 2022 16:17:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Here's a rebase up to HEAD, mainly to get the cfbot back in sync\nas to what's the live patch.\n\nI went ahead and pushed improve-adjust_appendrel_attrs_multilevel.patch,\nas that seemed uncontroversial and independently useful. So that's\ngone from this patchset. I also cleaned up the mess with phnullingrels\nin PHVs created for join tlists, as we discussed. No other interesting\nchanges.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 18 Aug 2022 14:45:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Progress report on this ...\n\nI've been trying to remove some of the cruftier aspects of\nEquivalenceClasses (which really is the main point of this whole\neffort), and gotten repeatedly blocked by the fact that the semantics\nare still a bit crufty thanks to the hacking associated with outer\njoin identity 3. I think I see a path forward though.\n\nTo recap, the thing about identity 3 is that it says you can get\nequivalent results from\n\n\t(A leftjoin B on (Pab)) leftjoin C on (Pb*c)\n\n\tA leftjoin (B leftjoin C on (Pbc)) on (Pab)\n\nif Pbc is strict for B. Unlike what it says in optimizer/README,\nI've written the first form as \"Pb*c\" to indicate that any B Vars\nappearing in that clause will be marked as possibly nulled by\nthe A/B join. 
This makes the problem apparent: we cannot use\nthe same representation of Pbc for both join orders, because\nin the second variant B's Vars are not nulled by anything.\nWe've been trying to get away with writing Pbc just one way,\nand that leads directly to the semantic squishiness I've been\nfighting.\n\nWhat I'm thinking we should do about this, once we detect that\nthis identity is applicable, is to generate *both* forms of Pbc,\neither adding or removing the varnullingrels bits depending on\nwhich form we got from the parser. Then, when we come to forming\njoin paths, use the appropriate variant depending on which join\norder we're considering. build_join_rel() already has the concept\nthat the join restriction list varies depending on which input\nrelations we're trying to join, so this doesn't require any\nfundamental restructuring -- only a way to identify which\nRestrictInfos to use or ignore for a particular join. That will\nprobably require some new field in RestrictInfo, but I'm not\nfussed about that because there are other fields we'll be able\nto remove as this work progresses.\n\nSimilarly, generate_join_implied_equalities() will need to generate\nEquivalenceClass-derived join clauses with or without varnullingrels\nmarks, as appropriate. 
I'm not quite sure how to do that, but it\nfeels like just a small matter of programming, not a fundamental\nproblem with the model which is where things are right now.\nWe'll only need this for ECs that include source clauses coming\nfrom a movable outer join clause (i.e., Pbc in identity 3).\n\nAn interesting point is that I think we want to force movable\nouter joins into the second format for the purpose of generating\nECs, that is we want to use Pbc not Pb*c as the EC source form.\nThe reason for that is to allow generation of relation-scan-level\nclauses from an EC, particularly an EC that includes a constant.\nAs an example, given\n\n\tA leftjoin (B leftjoin C on (B.b = C.c)) on (A.a = B.b)\n\twhere A.a = constant\n\nwe can decide unconditionally that A.a, B.b, C.c, and the constant all\nbelong to the same equivalence class, and thereby generate relation\nscan restrictions A.a = constant, B.b = constant, and C.c = constant.\nIf we start with the other join order, which will include \"B.b* = C.c\"\n(ie Pb*c) then we'd have two separate ECs: {A.a, B.b, constant} and\n{B.b*, C.c}. So we'll fail to produce any scan restriction for C, or\nat least we can't do so in any principled way.\n\nFurthermore, if the joins are done in the second order then we don't\nneed any additional join clauses -- both joins can act like \"LEFT JOIN\nON TRUE\". (Right now, we'll emit redundant B.b = C.c and A.a = B.b\njoin clauses in addition to the scan-level clauses, which is\ninefficient.) 
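As an illustration of the constant-propagation deduction described above (a toy Python model using union-find over expression names, not the planner's actual EquivalenceClass machinery, and deliberately ignoring the outer-join ordering subtleties discussed in this thread), merging the three equality clauses into one class yields a scan-level restriction for each base relation:

```python
# Toy equivalence-class model: union-find over expression names.
# Merging a.a = b.b, b.b = c.c, and a.a = 42 puts all four items in one
# class, so each base relation can get its own scan-level filter.

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def merge(x, y):
    parent[find(x)] = find(y)

merge("a.a", "b.b")      # from A leftjoin B on (A.a = B.b)
merge("b.b", "c.c")      # from B leftjoin C on (B.b = C.c)
merge("a.a", "const42")  # from WHERE A.a = 42

members = ["a.a", "b.b", "c.c"]
scan_quals = [f"{m} = 42" for m in members if find(m) == find("const42")]
assert scan_quals == ["a.a = 42", "b.b = 42", "c.c = 42"]
```

Again, this only sketches why a single merged class lets every member be tested against the constant at relation-scan level; whether each deduction is actually legal at a given plan level is exactly the outer-join problem the surrounding messages are working through.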
However, if we make use of identity 3 to do the\njoins in the other order, then we do need an extra join clause, like\n\n\t(A leftjoin B on (true)) leftjoin C on (B.b* = C.c)\n\n(or maybe we could just emit \"B.b* IS NOT NULL\" for Pb*c?)\nWithout any Pb*c join constraint we get wrong answers because\nnulling of B fails to propagate to C.\n\nSo while there are lots of details to work out, it feels like\nthis line of thought can lead to something where setrefs.c\ndoesn't have to ignore varnullingrels mismatches (yay) and\nthere is no squishiness in EquivalenceClass semantics.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Aug 2022 18:52:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Sun, Aug 21, 2022 at 6:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> What I'm thinking we should do about this, once we detect that\n> this identity is applicable, is to generate *both* forms of Pbc,\n> either adding or removing the varnullingrels bits depending on\n> which form we got from the parser. Then, when we come to forming\n> join paths, use the appropriate variant depending on which join\n> order we're considering. build_join_rel() already has the concept\n> that the join restriction list varies depending on which input\n> relations we're trying to join, so this doesn't require any\n> fundamental restructuring -- only a way to identify which\n> RestrictInfos to use or ignore for a particular join. That will\n> probably require some new field in RestrictInfo, but I'm not\n> fussed about that because there are other fields we'll be able\n> to remove as this work progresses.\n\n\nDo you mean we generate two RestrictInfos for Pbc in the case of\nidentity 3, one with varnullingrels and one without varnullingrels, and\nchoose the appropriate one when forming join paths? 
Do we need to also\ngenerate two SpecialJoinInfos for the B/C join in the first order, with\nand without the A/B join in its min_lefthand?\n\nThanks\nRichard\n\nOn Sun, Aug 21, 2022 at 6:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nWhat I'm thinking we should do about this, once we detect that\nthis identity is applicable, is to generate *both* forms of Pbc,\neither adding or removing the varnullingrels bits depending on\nwhich form we got from the parser.  Then, when we come to forming\njoin paths, use the appropriate variant depending on which join\norder we're considering.  build_join_rel() already has the concept\nthat the join restriction list varies depending on which input\nrelations we're trying to join, so this doesn't require any\nfundamental restructuring -- only a way to identify which\nRestrictInfos to use or ignore for a particular join.  That will\nprobably require some new field in RestrictInfo, but I'm not\nfussed about that because there are other fields we'll be able\nto remove as this work progresses. Do you mean we generate two RestrictInfos for Pbc in the case ofidentity 3, one with varnullingrels and one without varnullingrels, andchoose the appropriate one when forming join paths? 
Do we need to alsogenerate two SpecialJoinInfos for the B/C join in the first order, withand without the A/B join in its min_lefthand?ThanksRichard", "msg_date": "Wed, 24 Aug 2022 18:26:46 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Sun, Aug 21, 2022 at 6:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'm thinking we should do about this, once we detect that\n>> this identity is applicable, is to generate *both* forms of Pbc,\n>> either adding or removing the varnullingrels bits depending on\n>> which form we got from the parser.\n\n> Do you mean we generate two RestrictInfos for Pbc in the case of\n> identity 3, one with varnullingrels and one without varnullingrels, and\n> choose the appropriate one when forming join paths?\n\nRight.\n\n> Do we need to also\n> generate two SpecialJoinInfos for the B/C join in the first order, with\n> and without the A/B join in its min_lefthand?\n\nNo, the SpecialJoinInfos would stay as they are now. It's already the\ncase that the first join's min_righthand would contain only B, and\nthe second one's min_righthand would contain only C.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Aug 2022 17:18:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Thu, Aug 25, 2022 at 5:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > Do we need to also\n> > generate two SpecialJoinInfos for the B/C join in the first order, with\n> > and without the A/B join in its min_lefthand?\n>\n> No, the SpecialJoinInfos would stay as they are now. It's already the\n> case that the first join's min_righthand would contain only B, and\n> the second one's min_righthand would contain only C.\n\n\nI'm not sure if I understand it correctly. 
If we are given the first\norder from the parser, the SpecialJoinInfo for the B/C join would have\nmin_lefthand as containing both B and the A/B join. And this\nSpecialJoinInfo would make the B/C join be invalid, which is not what we\nwant. Currently the patch resolves this by explicitly running\nremove_unneeded_nulling_relids, and the A/B join would be removed from\nB/C join's min_lefthand, if Pbc is strict for B.\n\nDo we still need this kind of fixup if we are to keep just one form of\nSpecialJoinInfo and two forms of RestrictInfos?\n\nThanks\nRichard", "msg_date": "Thu, 25 Aug 2022 18:27:38 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Fri, Aug 19, 2022 at 2:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Here's a rebase up to HEAD, mainly to get the cfbot back in sync\n> as to what's the live patch.\n\n\nNoticed another different behavior from previous. When we try to reduce\nJOIN_LEFT to JOIN_ANTI, we want to know if the join's own quals are\nstrict for any Var that was forced null by higher qual levels. We do\nthat by checking whether local_nonnullable_vars and forced_null_vars\noverlap. However, the same Var from local_nonnullable_vars and\nforced_null_vars may be labeled with different varnullingrels. If that\nis the case, currently we would fail to tell they actually overlap. As\nan example, consider 'b.i' in the query below\n\n# explain (costs off) select * from a left join b on a.i = b.i where b.i is\nnull;\n        QUERY PLAN\n---------------------------\n Hash Left Join\n   Hash Cond: (a.i = b.i)\n   Filter: (b.i IS NULL)\n   ->  Seq Scan on a\n   ->  Hash\n         ->  Seq Scan on b\n(6 rows)\n\nThanks\nRichard", "msg_date": "Mon, 29 Aug 2022 14:30:23 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Mon, Aug 29, 2022 at 2:30 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Fri, Aug 19, 2022 at 2:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Here's a rebase up to HEAD, mainly to get the cfbot back in sync\n>> as to what's the live patch.\n>\n>\n> Noticed another different behavior from previous. When we try to reduce\n> JOIN_LEFT to JOIN_ANTI, we want to know if the join's own quals are\n> strict for any Var that was forced null by higher qual levels. We do\n> that by checking whether local_nonnullable_vars and forced_null_vars\n> overlap. However, the same Var from local_nonnullable_vars and\n> forced_null_vars may be labeled with different varnullingrels. If that\n> is the case, currently we would fail to tell they actually overlap.\n>\n\nI wonder why this is not noticed by regression tests. So I did some\nsearch and it seems we do not have any test cases covering the\ntransformation we apply to reduce outer joins. I think maybe we should\nadd such cases in regression tests.\n\nThanks\nRichard", "msg_date": "Tue, 30 Aug 2022 11:21:42 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> I wonder why this is not noticed by regression tests. So I did some\n> search and it seems we do not have any test cases covering the\n> transformation we apply to reduce outer joins. I think maybe we should\n> add such cases in regression tests.\n\nRight, done at 0043aa6b8. The actual fix is in 0010 below (it would\nhave been earlier, except I'd forgotten about this issue).\n\nI've been working away at this patch series, and here is an up-to-date\nversion. I've mostly fixed the inability to check in setrefs.c that\nvarnullingrels match up at different join levels, and I've found a\nsolution that I feel reasonably happy about for variant join quals\ndepending on application of outer-join identity 3. There's certainly\nbits of this that could be done in other ways, but overall I'm pleased\nwith the state of these patches.\n\nI think that the next step is to change things so that the \"push\na constant through outer-join quals\" hacks are replaced by\nEquivalenceClass-based logic. 
That turns out to be harder than\nI'd supposed initially, because labeling Vars with nullingrels\nfixes only part of the problem there. The other part is that\ndeductions we make from an outer-join qual can only be applied\nbelow the nullable side of that join --- we can't use them to\nremove rows from the non-nullable side. I have an idea how to\nmake that work, but it's not passing regression tests yet :-(.\n\nAnyway, there's much more to do, but here's what I've got today.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 31 Oct 2022 20:19:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "I wrote:\n> I've been working away at this patch series, and here is an up-to-date\n> version.\n\nThis needs a rebase after ff8fa0bf7 and b0b72c64a. I also re-ordered\nthe patches so that the commit messages' claims about when regression\ntests start to pass are true again. No interesting changes, though.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 05 Nov 2022 17:53:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Thu, Aug 25, 2022 at 6:27 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> I'm not sure if I understand it correctly. If we are given the first\n> order from the parser, the SpecialJoinInfo for the B/C join would have\n> min_lefthand as containing both B and the A/B join. And this\n> SpecialJoinInfo would make the B/C join be invalid, which is not what we\n> want.\n>\n\nNow I see how this works from v6 patch. Once we notice identity 3\napplies, we will remove the lower OJ's ojrelid from the min_lefthand or\nmin_righthand so that the commutation is allowed. 
So in this case, the\nA/B join would be removed from B/C join's min_lefthand when we build the\nSpecialJoinInfo for B/C join, and that makes the B/C join to be legal.\n\nBTW, inner_join_rels can contain base Relids and OJ Relids. Maybe we\ncan revise the comments a bit for it atop deconstruct_recurse and\nmake_outerjoininfo. The same for the comments of qualscope, ojscope and\nouterjoin_nonnullable atop distribute_qual_to_rels.\n\nThe README mentions restriction_is_computable_at(), I think it should be\nclause_is_computable_at().\n\nThanks\nRichard", "msg_date": "Thu, 10 Nov 2022 18:13:54 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Sun, Nov 6, 2022 at 5:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > I've been working away at this patch series, and here is an up-to-date\n> > version.\n>\n> This needs a rebase after ff8fa0bf7 and b0b72c64a. I also re-ordered\n> the patches so that the commit messages' claims about when regression\n> tests start to pass are true again. No interesting changes, though.\n\n\nI'm reviewing the part about multiple version clauses, and I find a case\nthat may not work as expected. I tried with some query as below\n\n (A leftjoin (B leftjoin C on (Pbc)) on (Pab)) left join D on (Pcd)\n\nAssume Pbc is strict for B and Pcd is strict for C.\n\nAccording to identity 3, we know one of its equivalent form is\n\n ((A leftjoin B on (Pab)) leftjoin C on (Pbc)) left join D on (Pcd)\n\nFor outer join clause Pcd, we would generate two versions from the first\nform\n\n    Version 1: C Vars with nullingrels as {A/B}\n    Version 2: C Vars with nullingrels as {B/C, A/B}\n\nI understand version 2 is reasonable as the nullingrels from parser\nwould be set as that. 
But it seems version 1 is not applicable in\neither form.\n\nLooking at the two forms again, it seems the expected two versions for\nPcd should be\n\n    Version 1: C Vars with nullingrels as {B/C}\n    Version 2: C Vars with nullingrels as {B/C, A/B}\n\nWith this we may have another problem that the two versions are both\napplicable at the C/D join according to clause_is_computable_at(), in\nboth forms.\n\nAnother thing is I believe we have another equivalent form as\n\n (A left join B on (Pab)) left join (C left join D on (Pcd)) on (Pbc)\n\nCurrently this form cannot be generated because of the issue discussed\nin [1]. But someday when we can do that, I think we should have a third\nversion for Pcd\n\n    Version 3: C Vars with empty nullingrels\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAMbWs4_8n5ANh_aX2PinRZ9V9mtBguhnRd4DOVt9msPgHmEMOQ%40mail.gmail.com\n\nThanks\nRichard", "msg_date": "Tue, 15 Nov 2022 16:59:27 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> I'm reviewing the part about multiple version clauses, and I find a case\n> that may not work as expected. I tried with some query as below\n> (A leftjoin (B leftjoin C on (Pbc)) on (Pab)) left join D on (Pcd)\n> Assume Pbc is strict for B and Pcd is strict for C.\n> According to identity 3, we know one of its equivalent form is\n> ((A leftjoin B on (Pab)) leftjoin C on (Pbc)) left join D on (Pcd)\n> For outer join clause Pcd, we would generate two versions from the first\n> form\n> Version 1: C Vars with nullingrels as {A/B}\n> Version 2: C Vars with nullingrels as {B/C, A/B}\n> I understand version 2 is reasonable as the nullingrels from parser\n> would be set as that. But it seems version 1 is not applicable in\n> either form.\n\nHmm. 
Looking at the data structures generated for the first form,\nwe have\n\nB/C join:\n\n {SPECIALJOININFO \n :min_lefthand (b 2)\n :min_righthand (b 3)\n :syn_lefthand (b 2)\n :syn_righthand (b 3)\n :jointype 1 \n :ojrelid 4 \n :commute_above_l (b 7)\n :commute_above_r (b 5)\n :commute_below (b)\n\nA/B join:\n\n {SPECIALJOININFO \n :min_lefthand (b 1)\n :min_righthand (b 2)\n :syn_lefthand (b 1)\n :syn_righthand (b 2 3 4)\n :jointype 1 \n :ojrelid 5 \n :commute_above_l (b)\n :commute_above_r (b)\n :commute_below (b 4)\n\neverything-to-D join:\n\n {SPECIALJOININFO \n :min_lefthand (b 1 2 3 4 5)\n :min_righthand (b 6)\n :syn_lefthand (b 1 2 3 4 5)\n :syn_righthand (b 6)\n :jointype 1 \n :ojrelid 7 \n :commute_above_l (b)\n :commute_above_r (b)\n :commute_below (b 4)\n\nSo we've marked the 4 and 7 joins as possibly commuting, but they\ncannot commute according to 7's min_lefthand set. I don't think\nthe extra clone condition is terribly harmful --- it's useless\nbut shouldn't cause any problems. However, if these joins should be\nable to commute then the min_lefthand marking is preventing us\nfrom considering legal join orders (and has been doing so all along,\nthat's not new in this patch). 
It looks to me like they should be\nable to commute (giving your third form), so this is a pre-existing\nplanning deficiency.\n\nWithout having looked too closely, I suspect this is coming from\nthe delay_upper_joins/check_outerjoin_delay stuff in initsplan.c.\nThat's a chunk of logic that I'd like to nuke altogether, and maybe\nwe will be able to do so once this patchset is a bit further along.\nBut I've not had time to look at it yet.\n\nI'm not entirely clear on whether the strange selection of clone\nclauses for this example is a bug in process_postponed_left_join_quals\nor if that function is just getting misled by the bogus min_lefthand\nvalue.\n\n> Looking at the two forms again, it seems the expected two versions for\n> Pcd should be\n> Version 1: C Vars with nullingrels as {B/C}\n> Version 2: C Vars with nullingrels as {B/C, A/B}\n> With this we may have another problem that the two versions are both\n> applicable at the C/D join according to clause_is_computable_at(), in\n> both forms.\n\nAt least when I tried it just now, clause_is_computable_at correctly\nrejected the first version, because we've already computed A/B when\nwe are trying to form the C/D join so we expect it to be listed in\nvarnullingrels.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Nov 2022 15:46:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> BTW, inner_join_rels can contain base Relids and OJ Relids. Maybe we\n> can revise the comments a bit for it atop deconstruct_recurse and\n> make_outerjoininfo. The same for the comments of qualscope, ojscope and\n> outerjoin_nonnullable atop distribute_qual_to_rels.\n\nYeah. I had an XXX comment about whether or not it was okay to\ninclude OJs in inner_join_rels. 
I took a second look and decided it's\nfine, so I removed the XXX and updated these comments.\n\n> The README mentions restriction_is_computable_at(), I think it should be\n> clause_is_computable_at().\n\nRight.  I think when I wrote that I was imagining that there'd be a\nwrapper function specifically concerned with RestrictInfos, but in the\nevent it didn't seem useful.  There's only one place that uses this,\nnamely subbuild_joinrel_restrictlist.\n\nThe cfbot is about to start complaining that this patchset doesn't apply\nover e9e26b5e7, so here's a rebase.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 16 Nov 2022 17:02:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Thu, Nov 17, 2022 at 4:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> So we've marked the 4 and 7 joins as possibly commuting, but they\n> cannot commute according to 7's min_lefthand set.  I don't think\n> the extra clone condition is terribly harmful --- it's useless\n> but shouldn't cause any problems.  However, if these joins should be\n> able to commute then the min_lefthand marking is preventing us\n> from considering legal join orders (and has been doing so all along,\n> that's not new in this patch).  It looks to me like they should be\n> able to commute (giving your third form), so this is a pre-existing\n> planning deficiency.\n\n\nYeah.  This is an issue that can also be seen on HEAD and is discussed\nin [1].  It happens because when building SpecialJoinInfo for 7, we find\nA/B join 5 is in our LHS, and our join condition (Pcd) uses 5's\nsyn_righthand but is not strict for 5's min_righthand.  So we decide\nto preserve the ordering of 7 and 5, by adding 5's full syntactic relset\nto 7's min_lefthand. 
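To see why the over-broad min_lefthand forecloses the commuted join order, here is a toy model (hypothetical Python, not the planner's real bitmapset code; relid numbering taken from the SPECIALJOININFO dumps above: A=1, B=2, C=3, B/C=4, A/B=5):

```python
# Toy legality rule: a join can be formed only once every relid in its
# min_lefthand is already available on its left side.
def can_form_join(available_relids, min_lefthand):
    return min_lefthand <= available_relids

# As built above, the join to D absorbed A/B's full syntactic relset:
min_lefthand_head = {1, 2, 3, 4, 5}

# The commuted shape (A leftjoin B) leftjoin (C leftjoin D on (Pcd))
# would join D to C alone, i.e. with only relid 3 available -- rejected:
assert not can_form_join({3}, min_lefthand_head)

# A minimal marking (Pcd mentions only C) would permit that shape:
min_lefthand_minimal = {3}
assert can_form_join({3}, min_lefthand_minimal)
```

This only illustrates the ordering constraint, not how make_outerjoininfo actually computes the sets.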
As discussed in [1], maybe we should consider 5's\nmin_righthand rather than syn_righthand when checking if Pcd uses 5's\nRHS.\n\n\n> > Looking at the two forms again, it seems the expected two versions for\n> > Pcd should be\n> >     Version 1: C Vars with nullingrels as {B/C}\n> >     Version 2: C Vars with nullingrels as {B/C, A/B}\n> > With this we may have another problem that the two versions are both\n> > applicable at the C/D join according to clause_is_computable_at(), in\n> > both forms.\n>\n> At least when I tried it just now, clause_is_computable_at correctly\n> rejected the first version, because we've already computed A/B when\n> we are trying to form the C/D join so we expect it to be listed in\n> varnullingrels.\n\n\nFor the first version of Pcd, which is C Vars with nullingrels as {B/C}\nhere, although at the C/D join level A/B join has been computed and\nmeanwhile is not listed in varnullingrels, but since Pcd does not\nmention any nullable Vars in A/B's min_righthand, it seems to me this\nversion cannot be rejected by clause_is_computable_at().  But maybe I'm\nmissing something.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAMbWs4_8n5ANh_aX2PinRZ9V9mtBguhnRd4DOVt9msPgHmEMOQ%40mail.gmail.com\n\nThanks\nRichard\n\n
", "msg_date": "Thu, 17 Nov 2022 16:56:34 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Here's a new edition of this patch series.\n\nI shoved some preliminary refactoring into the 0001 patch,\nnotably splitting deconstruct_jointree into two passes.\n0002-0009 cover the same ground as they did before, though\nwith some differences in detail. 
0010-0012 are new work\nmostly aimed at removing kluges we no longer need.\n\nThere are two big areas that I would still like to improve, but\nI think we've run out of time to address them in the v16 cycle:\n\n* It'd be nice to apply the regular EquivalenceClass deduction\nmechanisms to outer-join equalities, instead of the\nreconsider_outer_join_clauses kluge. I've made several stabs at that\nwithout much success. I think that the \"join domain\" framework added\nin 0012 is likely to provide a workable foundation, but some more\neffort is needed.\n\n* I really want to get rid of RestrictInfo.is_pushed_down and\nRINFO_IS_PUSHED_DOWN(), because those seem exceedingly awkward\nand squishy. I've not gotten far with finding a better\nreplacement there, either.\n\nDespite the work being unfinished, I feel that this has moved us a\nlong way towards outer-join handling being less of a jury-rigged\naffair.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 23 Dec 2022 13:20:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Fri, Dec 23, 2022 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Here's a new edition of this patch series.\n>\n> I shoved some preliminary refactoring into the 0001 patch,\n> notably splitting deconstruct_jointree into two passes.\n> 0002-0009 cover the same ground as they did before, though\n> with some differences in detail. 0010-0012 are new work\n> mostly aimed at removing kluges we no longer need.\n>\n> There are two big areas that I would still like to improve, but\n> I think we've run out of time to address them in the v16 cycle:\n>\n> * It'd be nice to apply the regular EquivalenceClass deduction\n> mechanisms to outer-join equalities, instead of the\n> reconsider_outer_join_clauses kluge. I've made several stabs at that\n> without much success. 
I think that the \"join domain\" framework added\n> in 0012 is likely to provide a workable foundation, but some more\n> effort is needed.\n>\n> * I really want to get rid of RestrictInfo.is_pushed_down and\n> RINFO_IS_PUSHED_DOWN(), because those seem exceedingly awkward\n> and squishy. I've not gotten far with finding a better\n> replacement there, either.\n>\n> Despite the work being unfinished, I feel that this has moved us a\n> long way towards outer-join handling being less of a jury-rigged\n> affair.\n>\n> regards, tom lane\n>\n> Hi,\nFor v8-0012-invent-join-domains.patch, in `distribute_qual_to_rels`, it\nseems that `pseudoconstant` and `root->hasPseudoConstantQuals` carry the\nsame value.\nCan the variable `pseudoconstant` be omitted ?\n\nCheers\n\nOn Fri, Dec 23, 2022 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Here's a new edition of this patch series.\n\nI shoved some preliminary refactoring into the 0001 patch,\nnotably splitting deconstruct_jointree into two passes.\n0002-0009 cover the same ground as they did before, though\nwith some differences in detail.  0010-0012 are new work\nmostly aimed at removing kluges we no longer need.\n\nThere are two big areas that I would still like to improve, but\nI think we've run out of time to address them in the v16 cycle:\n\n* It'd be nice to apply the regular EquivalenceClass deduction\nmechanisms to outer-join equalities, instead of the\nreconsider_outer_join_clauses kluge.  I've made several stabs at that\nwithout much success.  I think that the \"join domain\" framework added\nin 0012 is likely to provide a workable foundation, but some more\neffort is needed.\n\n* I really want to get rid of RestrictInfo.is_pushed_down and\nRINFO_IS_PUSHED_DOWN(), because those seem exceedingly awkward\nand squishy.  
I've not gotten far with finding a better\nreplacement there, either.\n\nDespite the work being unfinished, I feel that this has moved us a\nlong way towards outer-join handling being less of a jury-rigged\naffair.\n\n                        regards, tom lane\nHi,For v8-0012-invent-join-domains.patch, in `distribute_qual_to_rels`, it seems that `pseudoconstant` and `root->hasPseudoConstantQuals` carry the same value.Can the variable `pseudoconstant` be omitted ?Cheers", "msg_date": "Fri, 23 Dec 2022 12:59:07 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Ted Yu <yuzhihong@gmail.com> writes:\n> For v8-0012-invent-join-domains.patch, in `distribute_qual_to_rels`, it\n> seems that `pseudoconstant` and `root->hasPseudoConstantQuals` carry the\n> same value.\n> Can the variable `pseudoconstant` be omitted ?\n\nSurely not. 'pseudoconstant' tells whether the current qual clause\nis pseudoconstant. root->hasPseudoConstantQuals remembers whether\nwe have found any pseudoconstant qual in the query.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Dec 2022 16:09:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Sat, Dec 24, 2022 at 2:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I shoved some preliminary refactoring into the 0001 patch,\n> notably splitting deconstruct_jointree into two passes.\n> 0002-0009 cover the same ground as they did before, though\n> with some differences in detail. 0010-0012 are new work\n> mostly aimed at removing kluges we no longer need.\n\n\nI'm looking at 0010-0012 and I really like the changes and removals\nthere. Thanks for the great work!\n\nFor 0010, the change seems quite independent. 
I think maybe we can\napply it to HEAD directly.\n\nFor 0011, I found that some clauses that were outerjoin_delayed and thus\nnot equivalent before might be treated as being equivalent now.  For\nexample\n\nexplain (costs off)\nselect * from a left join b on a.i = b.i where coalesce(b.j, 0) = 0 and\ncoalesce(b.j, 0) = a.j;\n            QUERY PLAN\n----------------------------------\n Hash Right Join\n   Hash Cond: (b.i = a.i)\n   Filter: (COALESCE(b.j, 0) = 0)\n   ->  Seq Scan on b\n   ->  Hash\n         ->  Seq Scan on a\n               Filter: (j = 0)\n(7 rows)\n\nThis is different behavior from HEAD.  But I think it's an improvement.\n\nFor 0012, I'm still trying to understand JoinDomain.  AFAIU all EC\nmembers of the same EC should have the same JoinDomain, because for\nconstants we match EC members only within the same JoinDomain, and for\nVars if they come from different join domains they will have different\nnullingrels and thus will not match.  So I wonder if we can have the\nJoinDomain kept in EquivalenceClass rather than in each\nEquivalenceMembers.\n\nThanks\nRichard\n\n
", "msg_date": "Tue, 27 Dec 2022 16:27:49 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> For 0012, I'm still trying to understand JoinDomain.  AFAIU all EC\n> members of the same EC should have the same JoinDomain, because for\n> constants we match EC members only within the same JoinDomain, and for\n> Vars if they come from different join domains they will have different\n> nullingrels and thus will not match.  So I wonder if we can have the\n> JoinDomain kept in EquivalenceClass rather than in each\n> EquivalenceMembers.\n\nYeah, I tried to do it like that at first, and failed.  There is\nsome sort of association between ECs and join domains, for sure,\nbut exactly what it is seems to need more elucidation.\n\nThe thing that I couldn't get around before is that if you have,\nsay, a mergejoinable equality clause in an outer join:\n\n    select ... 
from a left join b on a.x = b.y;\n\nthat equality clause can only be associated with the join domain\nfor B, because it certainly can't be enforced against A. However,\nyou'd still wish to be able to do a mergejoin using indexes on\na.x and b.y, and this means that we have to understand the ordering\ninduced by a PathKey based on this EC as applicable to A, even\nthough that relation is not in the same join domain. So there are\nsituations where sort orderings apply across domain boundaries even\nthough equalities don't. We might have to split the notion of\nEquivalenceClass into two sorts of objects, and somewhere right\nabout here is where I realized that this wasn't getting finished\nfor v16 :-(.\n\nSo the next pass at this is likely going to involve some more\nrefactoring, and maybe we'll end up saying that an EquivalenceClass\nis tightly bound to a join domain or maybe we won't. For the moment\nit seemed to work better to associate domains with only the const\nmembers of ECs. (As written, the patch does fill em_jdomain even\nfor non-const members, but that was just for simplicity. I'd\noriginally meant to make it NULL for non-const members, but that\nturned out to be a bit too tedious because the responsibility for\nmarking a member as const or not is split among several places.)\n\nAnother part of the motivation for doing it like that is that\nI've been thinking about having just a single common pool of\nEquivalenceMember objects, and turning EquivalenceClasses into\nbitmapsets of indexes into the shared EquivalenceMember list.\nThis would support having ECs that share some member(s) without\nbeing exactly the same thing, which I think might be necessary\nto get to the point of treating outer-join clauses as creating\nEC equalities.\n\nBTW, I can't escape the suspicion that I've reinvented an idea\nthat's already well known in the literature. 
Has anyone seen\nsomething like this \"join domain\" concept before, and if so\nwhat was it called?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Dec 2022 10:31:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Tue, Dec 27, 2022 at 11:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> The thing that I couldn't get around before is that if you have,\n> say, a mergejoinable equality clause in an outer join:\n>\n> select ... from a left join b on a.x = b.y;\n>\n> that equality clause can only be associated with the join domain\n> for B, because it certainly can't be enforced against A. However,\n> you'd still wish to be able to do a mergejoin using indexes on\n> a.x and b.y, and this means that we have to understand the ordering\n> induced by a PathKey based on this EC as applicable to A, even\n> though that relation is not in the same join domain. So there are\n> situations where sort orderings apply across domain boundaries even\n> though equalities don't. We might have to split the notion of\n> EquivalenceClass into two sorts of objects, and somewhere right\n> about here is where I realized that this wasn't getting finished\n> for v16 :-(.\n\n\nI think I see where the problem is. And I can see currently in\nget_eclass_for_sort_expr we always use the top JoinDomain. So although\nthe equality clause 'a.x = b.y' belongs to JoinDomain {B}, we set up ECs\nfor 'a.x' and 'b.y' that belong to the top JoinDomain {A, B, A/B}.\n\nBut doing so would lead to a situation where the \"same\" Vars from\ndifferent join domains might have the same varnullingrels and thus would\nmatch by equal(). As an example, consider\n\n select ... 
from a left join b on a.x = b.y where a.x = 1;\n\nAs said we would set up EC for 'b.y' as belonging to the top JoinDomain.\nThen when reconsider_outer_join_clause generates the equality clause\n'b.y = 1', we figure out that the new clause belongs to JoinDomain {B}.\nNote that the two 'b.y' here belong to different join domains but they\nhave the same varnullingrels (empty varnullingrels actually).  As a\nresult, the equality 'b.y = 1' would be merged into the existing EC for\n'b.y', because the two 'b.y' matches by equal() and we do not check\nJoinDomain for non-const EC members.  So we would end up with an EC\ncontaining EC members of different join domains.\n\nAnd it seems this would make the following statement in README not hold\nany more.\n\n    We don't have to worry about this for Vars (or expressions\n    containing Vars), because references to the "same" column from\n    different join domains will have different varnullingrels and thus\n    won't be equal() anyway.\n\nThanks\nRichard\n\n
", "msg_date": "Wed, 28 Dec 2022 16:49:23 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> I think I see where the problem is.  And I can see currently in\n> get_eclass_for_sort_expr we always use the top JoinDomain. 
So although\n> the equality clause 'a.x = b.y' belongs to JoinDomain {B}, we set up ECs\n> for 'a.x' and 'b.y' that belong to the top JoinDomain {A, B, A/B}.\n\nYeah, that's a pretty squishy point, and likely wrong in detail.\nIf we want to associate an EC with the sort order of an index on\nb.y, we almost certainly want that EC to belong to join domain {B}.\nI had tried to do that in an earlier iteration of 0012, by dint of\nadding a JoinDomain argument to get_eclass_for_sort_expr and then\nhaving build_index_pathkeys specify the lowest join domain containing\nthe index's relation. It did not work very well: it couldn't generate\nmergejoins on full-join clauses, IIRC.\n\nMaybe some variant on that plan can be made to fly, but I'm not at\nall clear on what needs to be adjusted. Anyway, that's part of why\nI backed off on the notion of explicitly associating ECs with join\ndomains. As long as we only pay attention to the join domain in\nconnection with constants, get_eclass_for_sort_expr can get away with\njust using the top join domain, because we'd never apply it to a\nconstant unless perhaps somebody manages to ORDER BY or GROUP BY a\nconstant, and in those cases the top domain really is the right one.\n(It's syntactically difficult to write such a thing anyway, but not\nimpossible.)\n\nI think this is sort of an intermediate state, and hopefully a\nyear from now we'll have a better idea of how to manage all that.\nWhat I mainly settled for doing in 0012 was getting rid of the\n\"below outer join\" concept for ECs, because having to identify\na value for that had been giving me fits in several previous\nattempts at extending ECs to cover outer-join equalities.\nI think that the JoinDomain concept will offer a better-defined\nreplacement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Dec 2022 10:36:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "The cfbot 
shows that this needs to be rebased over 8eba3e3f0.\n(Just code motion, no interesting changes.)\n\nRichard, are you planning to review this any more?  I'm getting\na little antsy to get it committed.  For such a large patch,\nit's surprising it's had so few conflicts to date.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 23 Jan 2023 15:38:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Hello Tom\n\n\nI just noticed your new efforts in this area.\n\n\nI wanted to recur to my old thread [1] considering constant propagation of quals.\n\n\nYou gave an elaborate explanation at that time, but my knowledge was/is not yet sufficient to reveal the technical details.\n\n\nIn our application the described method is widely used with much success (now at pg15.1 Fedora), but for inexperienced SQL authors this is not really obvious to choose (i.e. using the explicit constant xx_season=3 as qual). 
This always requires a "Macro" processor to compose the queries (in my case php) and a lot of programmer effort in the source code.\n\n\nI can't review/understand your patchset for the planner, but since it covers the same area, the aforementioned optimization could perhaps be addressed too.\n\n\nWith respect to the nullability of these quals I immediately changed all of them to NOT NULL, which seems the most natural way when these quals are also used for partitioning.\n\n\n[1] https://www.postgresql.org/message-id/1571413123735.26467@nidsa.net\n\n\nThanks for looking\n\n\nHans Buschmann\n
", "msg_date": "Tue, 24 Jan 2023 10:11:00 +0000", "msg_from": "Hans Buschmann <buschmann@nidsa.net>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Hans Buschmann <buschmann@nidsa.net> writes:\n> I just noticed your new efforts in this area.\n> I wanted to recurr to my old thread [1] considering constant propagation of quals.\n> [1] https://www.postgresql.org/message-id/1571413123735.26467@nidsa.net\n\nYeah, this patch series is not yet quite up to the point of improving\nthat.  That area is indeed the very next thing I want to work on, and\nI did spend some effort on it last month, but I ran out of time to get\nit working.  Maybe we'll have something there for v17.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 10:30:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "I wrote:\n> Hans Buschmann <buschmann@nidsa.net> writes:\n>> I just noticed your new efforts in this area.\n>> I wanted to recurr to my old thread [1] considering constant propagation of quals.\n>> [1] https://www.postgresql.org/message-id/1571413123735.26467@nidsa.net\n\n> Yeah, this patch series is not yet quite up to the point of improving\n> that. 
That area is indeed the very next thing I want to work on, and\n> I did spend some effort on it last month, but I ran out of time to get\n> it working. Maybe we'll have something there for v17.\n\nBTW, to clarify what's going on there: what I want to do is allow\nthe regular equivalence-class machinery to handle deductions from\nequality operators appearing in LEFT JOIN ON clauses (maybe full\njoins too, but I'd be satisfied if it works for one-sided outer\njoins). I'd originally hoped that distinguishing pre-nulled from\npost-nulled variables would be enough to make that safe, but it's\nnot. Here's an example:\n\n\tselect ... from t1 left join t2 on (t1.x = t2.y and t1.x = 1);\n\nIf we turn the generic equivclass.c logic loose on these clauses,\nit will deduce t2.y = 1, which is good, and then apply t2.y = 1 at\nthe scan of t2, which is even better (since we might be able to turn\nthat into an indexscan qual). However, it will also try to apply\nt1.x = 1 at the scan of t1, and that's just wrong, because that\nwill eliminate t1 rows that should come through with null extension.\n\nMy current plan for making this work is to define\nEquivalenceClass-generated clauses as applying within \"join domains\",\nwhich are sets of inner-joined relations, and in the case of a one-sided\nouter join then the join itself belongs to the same join domain as its\nright-hand side --- but not to the join domain of its left-hand side.\nThis would allow us to push EC clauses from an outer join's qual down\ninto the RHS, but not into the LHS, and then anything leftover would\nstill have to be applied at the join. In this example we'd have to\napply t1.x = t2.y or t1.x = 1, but not both, at the join.\n\nI got as far as inventing join domains, in the 0012 patch of this\nseries, but I haven't quite finished puzzling out the clause application\nrules that would be needed for this scenario. 
Ordinarily an EC\ncontaining a constant would be fully enforced at the scan level\n(i.e., apply t1.x = 1 and t2.y = 1 at scan level) and generate no\nadditional clauses at join level; but that clearly doesn't work\nanymore when some of the scans are outside the join domain.\nI think that the no-constant case might need to be different too.\nI have some WIP code but nothing I can show.\n\nAlso, this doesn't seem to help for full joins. We can treat the\ntwo sides as each being their own join domains, but then the join's\nown ON clause doesn't belong to either one, since we can't throw\naway rows from either side on the basis of a restriction from ON.\nSo it seems like we'll still need ad-hoc logic comparable to\nreconsider_full_join_clause, if we want to preserve that optimization.\nI'm only mildly discontented with that, but still discontented.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 14:31:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Tue, Jan 24, 2023 at 12:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Hans Buschmann <buschmann@nidsa.net> writes:\n> >> I just noticed your new efforts in this area.\n> >> I wanted to recurr to my old thread [1] considering constant\n> propagation of quals.\n> >> [1]\n> https://www.postgresql.org/message-id/1571413123735.26467@nidsa.net\n>\n> > Yeah, this patch series is not yet quite up to the point of improving\n> > that. That area is indeed the very next thing I want to work on, and\n> > I did spend some effort on it last month, but I ran out of time to get\n> > it working. 
Maybe we'll have something there for v17.\n>\n> BTW, to clarify what's going on there: what I want to do is allow\n> the regular equivalence-class machinery to handle deductions from\n> equality operators appearing in LEFT JOIN ON clauses (maybe full\n> joins too, but I'd be satisfied if it works for one-sided outer\n> joins). I'd originally hoped that distinguishing pre-nulled from\n> post-nulled variables would be enough to make that safe, but it's\n> not. Here's an example:\n>\n> select ... from t1 left join t2 on (t1.x = t2.y and t1.x = 1);\n>\n> If we turn the generic equivclass.c logic loose on these clauses,\n> it will deduce t2.y = 1, which is good, and then apply t2.y = 1 at\n> the scan of t2, which is even better (since we might be able to turn\n> that into an indexscan qual). However, it will also try to apply\n> t1.x = 1 at the scan of t1, and that's just wrong, because that\n> will eliminate t1 rows that should come through with null extension.\n>\n>\nIs there a particular comment or README where that last conclusion is\nexplained so that it makes sense. Intuitively, I would expect t1.x = 1 to\nbe applied during the scan of t1 - it isn't like the output of the join is\nallowed to include t1 rows not matching that condition anyway.\n\nIOW, I thought the more verbose but equivalent syntax for that was:\n\nselect ... from (select * from t1 as insub where insub.x = 1) as t1 left\njoin t2 on (t1.x  = t2.y)\n\nThanks!\n\nDavid J.\n", "msg_date": "Tue, 24 Jan 2023 12:47:53 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Tue, Jan 24, 2023 at 12:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> select ... 
from t1 left join t2 on (t1.x = t2.y and t1.x = 1);\n>> \n>> If we turn the generic equivclass.c logic loose on these clauses,\n>> it will deduce t2.y = 1, which is good, and then apply t2.y = 1 at\n>> the scan of t2, which is even better (since we might be able to turn\n>> that into an indexscan qual). However, it will also try to apply\n>> t1.x = 1 at the scan of t1, and that's just wrong, because that\n>> will eliminate t1 rows that should come through with null extension.\n\n> Is there a particular comment or README where that last conclusion is\n> explained so that it makes sense.\n\nHm? It's a LEFT JOIN, so it must not eliminate any rows from t1.\nA row that doesn't have t1.x = 1 will appear in the output with\nnull columns for t2 ... but it must still appear, so we cannot\nfilter on t1.x = 1 in the scan of t1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 15:25:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Tue, Jan 24, 2023 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Tue, Jan 24, 2023 at 12:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> select ... from t1 left join t2 on (t1.x = t2.y and t1.x = 1);\n> >>\n> >> If we turn the generic equivclass.c logic loose on these clauses,\n> >> it will deduce t2.y = 1, which is good, and then apply t2.y = 1 at\n> >> the scan of t2, which is even better (since we might be able to turn\n> >> that into an indexscan qual). However, it will also try to apply\n> >> t1.x = 1 at the scan of t1, and that's just wrong, because that\n> >> will eliminate t1 rows that should come through with null extension.\n>\n> > Is there a particular comment or README where that last conclusion is\n> > explained so that it makes sense.\n>\n> Hm? 
It's a LEFT JOIN, so it must not eliminate any rows from t1.\n> A row that doesn't have t1.x = 1 will appear in the output with\n> null columns for t2 ... but it must still appear, so we cannot\n> filter on t1.x = 1 in the scan of t1.\n>\n>\nRan some queries, figured it out. Sorry for the noise. I had turned the\nbehavior of the RHS side appearing in the ON clause into a personal general\nrule then tried to apply it to the LHS (left join mental model) without\nworking through the rules from first principles.\n\nDavid J.\n", "msg_date": "Tue, 24 Jan 2023 13:39:35 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Tue, Jan 24, 2023 at 4:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard, are you planning to review this any more? I'm getting\n> a little antsy to get it committed. For such a large patch,\n> it's surprising it's had so few conflicts to date.\n\n\nSorry for the delayed reply. I don't have any more review comments at\nthe moment, except a nitpicking one.\n\nIn optimizer/README at line 729 there is a query as\n\n SELECT * FROM a\n LEFT JOIN (SELECT * FROM b WHERE b.z = 1) ss ON (a.x = b.y)\n WHERE a.x = 1;\n\nI think it should be\n\n SELECT * FROM a\n LEFT JOIN (SELECT * FROM b WHERE b.z = 1) ss ON (a.x = ss.y)\n WHERE a.x = 1;\n\nI have no objection to get it committed.\n\nThanks\nRichard\n", "msg_date": "Mon, 30 Jan 2023 17:56:38 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Sorry for the delayed reply. 
I don't have any more review comments at\n> the moment, except a nitpicking one.\n\n> In optimizer/README at line 729 there is a query as\n\n> SELECT * FROM a\n> LEFT JOIN (SELECT * FROM b WHERE b.z = 1) ss ON (a.x = b.y)\n> WHERE a.x = 1;\n\n> I think it should be\n\n> SELECT * FROM a\n> LEFT JOIN (SELECT * FROM b WHERE b.z = 1) ss ON (a.x = ss.y)\n> WHERE a.x = 1;\n\nOh, good catch, thanks.\n\n> I have no objection to get it committed.\n\nI'll push forward then. Thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Jan 2023 11:45:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Mon, Jan 23, 2023 at 03:38:06PM -0500, Tom Lane wrote:\n> Richard, are you planning to review this any more? I'm getting\n> a little antsy to get it committed. For such a large patch,\n> it's surprising it's had so few conflicts to date.\n\nThe patch broke this query:\n\nselect from pg_inherits inner join information_schema.element_types\nright join (select from pg_constraint as sample_2) on true\non false, lateral (select scope_catalog, inhdetachpending from pg_publication_namespace limit 3);\nERROR: could not devise a query plan for the given query\n\n\n\n", "msg_date": "Sun, 12 Feb 2023 17:58:23 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Mon, Feb 13, 2023 at 7:58 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> The patch broke this query:\n>\n> select from pg_inherits inner join information_schema.element_types\n> right join (select from pg_constraint as sample_2) on true\n> on false, lateral (select scope_catalog, inhdetachpending from\n> pg_publication_namespace limit 3);\n> ERROR: could not devise a query plan for the given query\n\n\nThanks for the report! I've looked at it a little bit and traced down\nto function have_unsafe_outer_join_ref(). 
The comment there says\n\n * In practice, this test never finds a problem ...\n * ...\n * It still seems worth checking\n * as a backstop, but we don't go to a lot of trouble: just reject if the\n * unsatisfied part includes any outer-join relids at all.\n\nThis seems not correct as showed by the counterexample. ISTM that we\nneed to do the check honestly as what the other comment says\n\n * If the parameterization is only partly satisfied by the outer rel,\n * the unsatisfied part can't include any outer-join relids that could\n * null rels of the satisfied part.\n\nThe NOT_USED part of code is doing this check. But I think we need a\nlittle tweak. We should check the nullable side of related outer joins\nagainst the satisfied part, rather than inner_paramrels. Maybe\nsomething like attached.\n\nHowever, this test seems to cost some cycles after the change. So I\nwonder if it's worthwhile to perform it, considering that join order\nrestrictions should be able to guarantee there is no problem here.\n\nBTW, here is a simplified query that can trigger this issue on HEAD.\n\nselect * from t1 inner join t2 left join (select null as c from t3 left\njoin t4 on true) as sub on true on true, lateral (select c, t1.a from t5\noffset 0 ) ss;\n\nThanks\nRichard", "msg_date": "Mon, 13 Feb 2023 15:33:15 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Thanks for the report! I've looked at it a little bit and traced down\n> to function have_unsafe_outer_join_ref(). The comment there says\n> * In practice, this test never finds a problem ...\n> This seems not correct as showed by the counterexample.\n\nRight. 
I'd noticed that the inner loop of that function was not\nreached in our regression tests, and incorrectly concluded that it\nwas not reachable --- but I failed to consider cases where the\ninner rel's parameterization depends on Vars from multiple places.\n\n> The NOT_USED part of code is doing this check. But I think we need a\n> little tweak. We should check the nullable side of related outer joins\n> against the satisfied part, rather than inner_paramrels. Maybe\n> something like attached.\n\nAgreed.\n\n> However, this test seems to cost some cycles after the change. So I\n> wonder if it's worthwhile to perform it, considering that join order\n> restrictions should be able to guarantee there is no problem here.\n\nYeah, I think we should reduce it to an Assert check. It shouldn't be\nworth the cycles to run in production, and that will also make it easier\nto notice in sqlsmith testing if anyone happens across another\ncounterexample.\n\nPushed that way. Thanks for the report and the patch!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Feb 2023 11:50:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On Mon, Feb 13, 2023 at 03:33:15PM +0800, Richard Guo wrote:\n> On Mon, Feb 13, 2023 at 7:58 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > The patch broke this query:\n> >\n> > select from pg_inherits inner join information_schema.element_types\n> > right join (select from pg_constraint as sample_2) on true\n> > on false, lateral (select scope_catalog, inhdetachpending from\n> > pg_publication_namespace limit 3);\n> > ERROR: could not devise a query plan for the given query\n\n> BTW, here is a simplified query that can trigger this issue on HEAD.\n> \n> select * from t1 inner join t2 left join (select null as c from t3 left\n> join t4 on true) as sub on true on true, lateral (select c, t1.a from t5\n> offset 0 ) ss;\n\nIt probably doesn't need to be said 
that the original query was reduced\nfrom sqlsmith... But I mention that now to make it searchable.\n\nThanks,\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Feb 2023 12:48:07 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Hello!\n\nI'm having doubts about this fix but most likely i don't understand something.\nCould you help me to figure it out, please.\n\nThe thing is that for custom scan nodes as readme says:\n\"INDEX_VAR is abused to signify references to columns of a custom scan tuple type\"\nBut INDEX_VAR has a negative value, so it can not be used in varnullingrels bitmapset.\nAnd therefore this improvement seems will not work with custom scan nodes and some\nextensions that use such nodes.\n\nIf i'm wrong in my doubts and bitmapset for varnullingrels is ok, may be add a check before\nadjust_relid_set() call like this:\n\n@@ -569,9 +569,10 @@ ChangeVarNodes_walker(Node *node, ChangeVarNodes_context *context)\n {\n if (var->varno == context->rt_index)\n var->varno = context->new_index;\n- var->varnullingrels = adjust_relid_set(var->varnullingrels,\n- context->rt_index,\n- context->new_index);\n+ if (context->rt_index >= 0 && context->new_index >= 0)\n+ var->varnullingrels = adjust_relid_set(var->varnullingrels,\n+ context->rt_index,\n+\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 4 May 2023 10:22:36 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "\"Anton A. 
Melnikov\" <aamelnikov@inbox.ru> writes:\n> The thing is that for custom scan nodes as readme says:\n> \"INDEX_VAR is abused to signify references to columns of a custom scan tuple type\"\n> But INDEX_VAR has a negative value, so it can not be used in varnullingrels bitmapset.\n> And therefore this improvement seems will not work with custom scan nodes and some\n> extensions that use such nodes.\n\nUnder what circumstances would you be trying to inject INDEX_VAR\ninto a nullingrel set? Only outer-join relids should ever appear there.\nAFAICS the change you propose would serve only to mask bugs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 May 2023 08:22:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "Hello and sorry for the big delay in reply!\n\nOn 04.05.2023 15:22, Tom Lane wrote:\n> AFAICS the change you propose would serve only to mask bugs.\n\nYes, it's a bad decision.\n\n> Under what circumstances would you be trying to inject INDEX_VAR\n> into a nullingrel set? Only outer-join relids should ever appear there.\n\nThe thing is that i don't try to push INDEX_VAR into a nullingrel set at all,\ni just try to replace the existing rt_index that equals to INDEX_VAR in Var nodes with\nthe defined positive indexes by using ChangeVarNodes_walker() function call. 
It checks\nif the nullingrel contains the existing rt_index and does it need to be updated too.\nIt calls bms_is_member() for that, but the latter immediately throws an error\nif the value to be checked is negative like INDEX_VAR.\n\nBut we are not trying to corrupt the existing nullingrel with this bad index,\nso it doesn't seem like a serious error.\nAnd this index certainly cannot be a member of the Bitmapset.\n\nTherefore it also seems better and more logical to me in the case of an index that\ncannot possibly be a member of the Bitmapset, immediately return false.\n\nHere is a patch like that.\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 29 May 2023 15:01:03 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "[ back from PGCon ... ]\n\n\"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n> On 04.05.2023 15:22, Tom Lane wrote:\n>> Under what circumstances would you be trying to inject INDEX_VAR\n>> into a nullingrel set? Only outer-join relids should ever appear there.\n\n> The thing is that i don't try to push INDEX_VAR into a nullingrel set at all,\n> i just try to replace the existing rt_index that equals to INDEX_VAR in Var nodes with\n> the defined positive indexes by using ChangeVarNodes_walker() function call.\n\nHmm. 
That implies that you're changing plan data structures around after\nsetrefs.c, which doesn't seem like a great design to me --- IMO that ought\nto happen in PlanCustomPath, which will still see the original varnos.\nHowever, it's probably not worth breaking existing code for this, so\nnow I agree that ChangeVarNodes ought to (continue to) allow negative\nrt_index.\n\n> Therefore it also seems better and more logical to me in the case of an index that\n> cannot possibly be a member of the Bitmapset, immediately return false.\n> Here is a patch like that.\n\nI do not like the blast radius of this patch. Yes, I know about that\ncomment in bms_is_member --- I wrote it, if memory serves. But it's\nstood like that for more than two decades, and I believe it's caught\nits share of mistakes. This issue doesn't seem like a sufficient\nreason to change a globally-visible behavior.\n\nI think the right thing here is not either of your patches, but\nto tweak adjust_relid_set() to not fail on negative oldrelid.\nI'll go make it so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Jun 2023 12:58:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Vars outer-join aware" }, { "msg_contents": "On 08.06.2023 19:58, Tom Lane wrote:\n> I think the right thing here is not either of your patches, but\n> to tweak adjust_relid_set() to not fail on negative oldrelid.\n> I'll go make it so.\n\nThanks! This fully solves the problem with ChangeVarNodes() that i wrote above.\n\n\n> Hmm. 
That implies that you're changing plan data structures around after\n> setrefs.c, which doesn't seem like a great design to me --- IMO that ought\n> to happen in PlanCustomPath, which will still see the original varnos.\n\nMy further searches led to the fact that it is possible to immediately set the\nnecessary varnos during custom_scan->scan.plan.targetlist creation and leave the\ncustom_scan->custom_scan_tlist = NIL rather than changing them later using ChangeVarNodes().\nThis resulted in a noticeable code simplification.\nThanks a lot for pointing it out!\n\nSincerely yours,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 23 Jul 2023 15:20:21 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: Making Vars outer-join aware" } ]
[ { "msg_contents": "Hi hackers,\n\nThe unparenthesized syntax for VACUUM has been marked deprecated since v9.1\n(ad44d50). Should it be removed in v16? If not, should we start emitting\nWARNINGs when it is used?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 14:56:42 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Time to remove unparenthesized syntax for VACUUM?" }, { "msg_contents": "Hi,\n\nOn 2022-07-01 14:56:42 -0700, Nathan Bossart wrote:\n> The unparenthesized syntax for VACUUM has been marked deprecated since v9.1\n> (ad44d50). Should it be removed in v16? If not, should we start emitting\n> WARNINGs when it is used?\n\nWhat would we gain? ISTM that the number of scripts and typing habits that'd\nbe broken would vastly exceed the benefit.\n\n- Andres\n\n\n", "msg_date": "Fri, 1 Jul 2022 15:05:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Time to remove unparenthesized syntax for VACUUM?" }, { "msg_contents": "On Fri, Jul 01, 2022 at 03:05:55PM -0700, Andres Freund wrote:\n> On 2022-07-01 14:56:42 -0700, Nathan Bossart wrote:\n>> The unparenthesized syntax for VACUUM has been marked deprecated since v9.1\n>> (ad44d50). Should it be removed in v16? If not, should we start emitting\n>> WARNINGs when it is used?\n> \n> What would we gain? ISTM that the number of scripts and typing habits that'd\n> be broken would vastly exceed the benefit.\n\nBeyond removing a few lines from gram.y and vacuum.sgml, probably not much.\nIf it isn't going to be removed, IMO we should consider removing the\ndeprecation notice in the docs.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 15:13:16 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Time to remove unparenthesized syntax for VACUUM?" 
}, { "msg_contents": "Hi,\n\nOn 2022-07-01 15:13:16 -0700, Nathan Bossart wrote:\n> On Fri, Jul 01, 2022 at 03:05:55PM -0700, Andres Freund wrote:\n> > On 2022-07-01 14:56:42 -0700, Nathan Bossart wrote:\n> >> The unparenthesized syntax for VACUUM has been marked deprecated since v9.1\n> >> (ad44d50). Should it be removed in v16? If not, should we start emitting\n> >> WARNINGs when it is used?\n> > \n> > What would we gain? ISTM that the number of scripts and typing habits that'd\n> > be broken would vastly exceed the benefit.\n> \n> Beyond removing a few lines from gram.y and vacuum.sgml, probably not much.\n> If it isn't going to be removed, IMO we should consider removing the\n> deprecation notice in the docs.\n\nStill serves as an explanation as to why newer options haven't been / won't be\nadded in an unparenthesized manner. And maybe there one day will be reason to\nremove them, e.g. grammar ambiguities.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Jul 2022 15:19:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Time to remove unparenthesized syntax for VACUUM?" }, { "msg_contents": "On Fri, Jul 01, 2022 at 03:19:28PM -0700, Andres Freund wrote:\n> On 2022-07-01 15:13:16 -0700, Nathan Bossart wrote:\n>> On Fri, Jul 01, 2022 at 03:05:55PM -0700, Andres Freund wrote:\n>> > On 2022-07-01 14:56:42 -0700, Nathan Bossart wrote:\n>> >> The unparenthesized syntax for VACUUM has been marked deprecated since v9.1\n>> >> (ad44d50). Should it be removed in v16? If not, should we start emitting\n>> >> WARNINGs when it is used?\n>> > \n>> > What would we gain? 
ISTM that the number of scripts and typing habits that'd\n>> > be broken would vastly exceed the benefit.\n>> \n>> Beyond removing a few lines from gram.y and vacuum.sgml, probably not much.\n>> If it isn't going to be removed, IMO we should consider removing the\n>> deprecation notice in the docs.\n> \n> Still serves as an explanation as to why newer options haven't been / won't be\n> added in an unparenthesized manner. And maybe there one day will be reason to\n> remove them, e.g. grammar ambiguities.\n\nFair point. Thanks for the discussion.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 15:23:26 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Time to remove unparenthesized syntax for VACUUM?" }, { "msg_contents": "On Fri, Jul 01, 2022 at 03:13:16PM -0700, Nathan Bossart wrote:\n> On Fri, Jul 01, 2022 at 03:05:55PM -0700, Andres Freund wrote:\n> > On 2022-07-01 14:56:42 -0700, Nathan Bossart wrote:\n> >> The unparenthesized syntax for VACUUM has been marked deprecated since v9.1\n> >> (ad44d50). Should it be removed in v16? If not, should we start emitting\n> >> WARNINGs when it is used?\n> > \n> > What would we gain? ISTM that the number of scripts and typing habits that'd\n> > be broken would vastly exceed the benefit.\n> \n> Beyond removing a few lines from gram.y and vacuum.sgml, probably not much.\n> If it isn't going to be removed, IMO we should consider removing the\n> deprecation notice in the docs.\n\nDeprecation doesn't imply eventual removal. java.io.StringBufferInputStream\nhas been deprecated for 25 years. One should not expect it or the old VACUUM\nsyntax to go away.\n\n\n", "msg_date": "Fri, 1 Jul 2022 18:45:53 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Time to remove unparenthesized syntax for VACUUM?" } ]
[ { "msg_contents": "I noticed this during beta1, but dismissed the issue when it wasn't easily\nreproducible. Now, I saw the same problem while upgrading from beta1 to beta2,\nso couldn't dismiss it. It turns out that LOs are lost if VACUUM FULL was run.\n\n| /usr/pgsql-15b1/bin/initdb --no-sync -D pg15b1.dat -k\n| /usr/pgsql-15b1/bin/postgres -D pg15b1.dat -c logging_collector=no -p 5678 -k /tmp&\n| psql -h /tmp postgres -p 5678 -c '\\lo_import /etc/shells' -c 'VACUUM FULL pg_largeobject'\n| rm -fr pg15b2.dat && /usr/pgsql-15b2/bin/initdb --no-sync -k -D pg15b2.dat && /usr/pgsql-15b2/bin/pg_upgrade -d pg15b1.dat -D pg15b2.dat -b /usr/pgsql-15b1/bin\n| /usr/pgsql-15b2/bin/postgres -D pg15b2.dat -c logging_collector=no -p 5678 -k /tmp&\n\nOr, for your convenience, with paths in tmp_install:\n| ./tmp_install/usr/local/pgsql/bin/initdb --no-sync -D pg15b1.dat -k\n| ./tmp_install/usr/local/pgsql/bin/postgres -D pg15b1.dat -c logging_collector=no -p 5678 -k /tmp&\n| psql -h /tmp postgres -p 5678 -c '\\lo_import /etc/shells' -c 'VACUUM FULL pg_largeobject'\n| rm -fr pg15b2.dat && ./tmp_install/usr/local/pgsql/bin/initdb --no-sync -k -D pg15b2.dat && ./tmp_install/usr/local/pgsql/bin/pg_upgrade -d pg15b1.dat -D pg15b2.dat -b ./tmp_install/usr/local/pgsql/bin\n| ./tmp_install/usr/local/pgsql/bin/postgres -D pg15b2.dat -c logging_collector=no -p 5678 -k /tmp&\n\npostgres=# table pg_largeobject_metadata ;\n 16384 | 10 | \n\npostgres=# \\lo_list \n 16384 | pryzbyj | \n\npostgres=# \\lo_export 16384 /dev/stdout\nlo_export\npostgres=# table pg_largeobject;\n\npostgres=# \\dt+ pg_largeobject\n pg_catalog | pg_largeobject | table | pryzbyj | permanent | heap | 0 bytes | \n\nI reproduced the problem at 9a974cbcba but not its parent commit.\n\ncommit 9a974cbcba005256a19991203583a94b4f9a21a9\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Mon Jan 17 13:32:44 2022 -0500\n\n pg_upgrade: Preserve relfilenodes and tablespace OIDs.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 1 Jul 2022 
18:14:13 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 01, 2022 at 06:14:13PM -0500, Justin Pryzby wrote:\n> I reproduced the problem at 9a974cbcba but not its parent commit.\n> \n> commit 9a974cbcba005256a19991203583a94b4f9a21a9\n> Author: Robert Haas <rhaas@postgresql.org>\n> Date: Mon Jan 17 13:32:44 2022 -0500\n> \n> pg_upgrade: Preserve relfilenodes and tablespace OIDs.\n\nOops. Robert?\n\nThis reproduces as well when using pg_upgrade with the same version as\norigin and target, meaning that this extra step in the TAP test is\nable to reproduce the issue (extra VACUUM FULL before chdir'ing):\n--- a/src/bin/pg_upgrade/t/002_pg_upgrade.pl\n+++ b/src/bin/pg_upgrade/t/002_pg_upgrade.pl\n@@ -208,6 +208,8 @@ if (defined($ENV{oldinstall}))\n }\n }\n\n+$oldnode->safe_psql(\"regression\", \"VACUUM FULL pg_largeobject;\");\n+\n# In a VPATH build, we'll be started in the source directory, but we want\n--\nMichael", "msg_date": "Sat, 2 Jul 2022 12:17:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 1, 2022 at 7:14 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I noticed this during beta1, but dismissed the issue when it wasn't easily\n> reproducible. Now, I saw the same problem while upgrading from beta1 to beta2,\n> so couldn't dismiss it. It turns out that LOs are lost if VACUUM FULL was run.\n\nYikes. That's really bad, and I have no idea what might be causing it,\neither. 
I'll plan to investigate this on Tuesday unless someone gets\nto it before then.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 2 Jul 2022 08:34:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Sat, Jul 02, 2022 at 08:34:04AM -0400, Robert Haas wrote:\n> On Fri, Jul 1, 2022 at 7:14 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I noticed this during beta1, but dismissed the issue when it wasn't easily\n> > reproducible. Now, I saw the same problem while upgrading from beta1 to beta2,\n> > so couldn't dismiss it. It turns out that LOs are lost if VACUUM FULL was run.\n> \n> Yikes. That's really bad, and I have no idea what might be causing it,\n> either. I'll plan to investigate this on Tuesday unless someone gets\n> to it before then.\n\nI suppose it's like Bruce said, here.\n\nhttps://www.postgresql.org/message-id/20210601140949.GC22012%40momjian.us\n\n|One tricky case is pg_largeobject, which is copied from the old to new\n|cluster since it has user data. To preserve that relfilenode, you would\n|need to have pg_upgrade perform cluster surgery in each database to\n|renumber its relfilenode to match since it is created by initdb. 
I\n|can't think of a case where pg_upgrade already does something like that.\n\nRather than setting the filenode of the next object as for user tables,\npg-upgrade needs to UPDATE the relfilenode.\n\nThis patch \"works for me\" but feel free to improve on it.", "msg_date": "Sat, 2 Jul 2022 10:49:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Sat, Jul 02, 2022 at 08:34:04AM -0400, Robert Haas wrote:\n> On Fri, Jul 1, 2022 at 7:14 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I noticed this during beta1, but dismissed the issue when it wasn't easily\n> > reproducible. Now, I saw the same problem while upgrading from beta1 to beta2,\n> > so couldn't dismiss it. It turns out that LOs are lost if VACUUM FULL was run.\n> \n> Yikes. That's really bad, and I have no idea what might be causing it,\n> either. I'll plan to investigate this on Tuesday unless someone gets\n> to it before then.\n\nAs far as I can see the data is still there, it's just that the new cluster\nkeeps its default relfilenode instead of preserving the old cluster's value:\n\nregression=# table pg_largeobject;\n loid | pageno | data\n------+--------+------\n(0 rows)\n\nregression=# select oid, relfilenode from pg_class where relname = 'pg_largeobject';\n oid | relfilenode\n------+-------------\n 2613 | 2613\n(1 row)\n\n-- using the value from the old cluster\nregression=# update pg_class set relfilenode = 39909 where oid = 2613;\nUPDATE 1\n\nregression=# table pg_largeobject;\n loid | pageno |\n-------+--------+-----------------\n 33211 | 0 | \\x0a4920776[...]\n 34356 | 0 | \\xdeadbeef\n(2 rows)\n\n\n", "msg_date": "Sat, 2 Jul 2022 23:52:08 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "I was able to reproduce the issue. 
Also, the issue does not occur with code\nbefore to preserve relfilenode commit.\nI tested your patch and it fixes the problem.\nI am currently analyzing a few things related to the issue. I will come\nback once my analysis is completed.\n\nOn Sat, Jul 2, 2022 at 9:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sat, Jul 02, 2022 at 08:34:04AM -0400, Robert Haas wrote:\n> > On Fri, Jul 1, 2022 at 7:14 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > > I noticed this during beta1, but dismissed the issue when it wasn't\n> easily\n> > > reproducible. Now, I saw the same problem while upgrading from beta1\n> to beta2,\n> > > so couldn't dismiss it. It turns out that LOs are lost if VACUUM FULL\n> was run.\n> >\n> > Yikes. That's really bad, and I have no idea what might be causing it,\n> > either. I'll plan to investigate this on Tuesday unless someone gets\n> > to it before then.\n>\n> I suppose it's like Bruce said, here.\n>\n> https://www.postgresql.org/message-id/20210601140949.GC22012%40momjian.us\n>\n> |One tricky case is pg_largeobject, which is copied from the old to new\n> |cluster since it has user data. To preserve that relfilenode, you would\n> |need to have pg_upgrade perform cluster surgery in each database to\n> |renumber its relfilenode to match since it is created by initdb. I\n> |can't think of a case where pg_upgrade already does something like that.\n>\n> Rather than setting the filenode of the next object as for user tables,\n> pg-upgrade needs to UPDATE the relfilenode.\n>\n> This patch \"works for me\" but feel free to improve on it.\n>\n", "msg_date": "Tue, 5 Jul 2022 17:03:24 +0530", "msg_from": "Shruthi Gowda <gowdashru@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Sat, Jul 2, 2022 at 11:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I suppose it's like Bruce said, here.\n>\n> https://www.postgresql.org/message-id/20210601140949.GC22012%40momjian.us\n\nWell, I feel dumb. I remember reading that email back when Bruce sent\nit, but it seems that it slipped out of my head between then and when\nI committed. 
I think your patch is fine, except that I think maybe we\nshould adjust this dump comment:\n\n-- For binary upgrade, set pg_largeobject relfrozenxid, relminmxid and\nrelfilenode\n\nPerhaps:\n\n-- For binary upgrade, preserve values for pg_largeobject and its index\n\nListing the exact properties preserved seems less important to me than\nmentioning that the second UPDATE statement is for its index --\nbecause if you look at the SQL that's generated, you can see what's\nbeing preserved, but you don't automatically know why there are two\nUPDATE statements or what the rows with those OIDs are.\n\nI had a moment of panic this morning where I thought maybe the whole\npatch needed to be reverted. I was worried that we might need to\npreserve the OID of every system table and index. Otherwise, what\nhappens if the OID counter in the old cluster wraps around and some\nuser-created object gets an OID that the system tables are using in\nthe new cluster? However, I think this can't happen, because when the\nOID counter wraps around, it wraps around to 16384, and the\nrelfilenode values for newly created system tables and indexes are all\nless than 16384. So I believe we only need to fix pg_largeobject and\nits index, and I think your patch does that.\n\nAnyone else see it differently?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Jul 2022 12:43:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Jul 05, 2022 at 12:43:54PM -0400, Robert Haas wrote:\n> On Sat, Jul 2, 2022 at 11:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I suppose it's like Bruce said, here.\n> >\n> > https://www.postgresql.org/message-id/20210601140949.GC22012%40momjian.us\n> \n> Well, I feel dumb. I remember reading that email back when Bruce sent\n> it, but it seems that it slipped out of my head between then and when\n> I committed. 
I think your patch is fine, except that I think maybe we\n\nMy patch also leaves a 0 byte file around from initdb, which is harmless, but\ndirty.\n\nI've seen before where a bunch of 0 byte files are abandoned in an\notherwise-empty tablespace, with no associated relation, and I have to \"rm\"\nthem to be able to drop the tablespace. Maybe that's a known issue, maybe it's\ndue to crashes or other edge case, maybe it's of no consequence, and maybe it's\nalready been fixed or being fixed already. But it'd be nice to avoid another\nway to have a 0 byte files - especially ones named with system OIDs.\n\n> Listing the exact properties preserved seems less important to me than\n> mentioning that the second UPDATE statement is for its index --\n\n+1\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 Jul 2022 11:56:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Jul 5, 2022 at 12:56 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> My patch also leaves a 0 byte file around from initdb, which is harmless, but\n> dirty.\n>\n> I've seen before where a bunch of 0 byte files are abandoned in an\n> otherwise-empty tablespace, with no associated relation, and I have to \"rm\"\n> them to be able to drop the tablespace. Maybe that's a known issue, maybe it's\n> due to crashes or other edge case, maybe it's of no consequence, and maybe it's\n> already been fixed or being fixed already. 
But it'd be nice to avoid another\n> way to have a 0 byte files - especially ones named with system OIDs.\n\nDo you want to add something to the patch to have pg_upgrade remove\nthe stray file?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Jul 2022 14:40:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Jul 05, 2022 at 02:40:21PM -0400, Robert Haas wrote:\n> On Tue, Jul 5, 2022 at 12:56 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > My patch also leaves a 0 byte file around from initdb, which is harmless, but\n> > dirty.\n> >\n> > I've seen before where a bunch of 0 byte files are abandoned in an\n> > otherwise-empty tablespace, with no associated relation, and I have to \"rm\"\n> > them to be able to drop the tablespace. Maybe that's a known issue, maybe it's\n> > due to crashes or other edge case, maybe it's of no consequence, and maybe it's\n> > already been fixed or being fixed already. 
But it'd be nice to avoid another\n> > way to have a 0 byte files - especially ones named with system OIDs.\n> \n> Do you want to add something to the patch to have pg_upgrade remove\n> the stray file?\n\nI'm looking into it, but it'd help to hear suggestions about where to put it.\nMy current ideas aren't very good.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 6 Jul 2022 06:56:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Wed, Jul 6, 2022 at 7:56 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I'm looking into it, but it'd help to hear suggestions about where to put it.\n> My current ideas aren't very good.\n\nIn main() there is a comment that begins \"Most failures happen in\ncreate_new_objects(), which has just completed at this point.\" I am\nthinking you might want to insert a new function call just before that\ncomment, like remove_orphaned_files() or tidy_up_new_cluster().\n\nAnother option could be to do something at the beginning of\ntransfer_all_new_tablespaces().\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Jul 2022 08:25:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Wed, Jul 06, 2022 at 08:25:04AM -0400, Robert Haas wrote:\n> On Wed, Jul 6, 2022 at 7:56 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I'm looking into it, but it'd help to hear suggestions about where to put it.\n> > My current ideas aren't very good.\n> \n> In main() there is a comment that begins \"Most failures happen in\n> create_new_objects(), which has just completed at this point.\" I am\n> thinking you might want to insert a new function call just before that\n> comment, like remove_orphaned_files() or tidy_up_new_cluster().\n> \n> Another option could be to do something at the beginning of\n> 
transfer_all_new_tablespaces().\n\nThat seems like the better option, since it has access to the custer's\nfilenodes.\n\nI checked upgrades from 9.2, upgrades with/out vacuum full, and upgrades with a\nDB tablespace.\n\nMaybe it's a good idea to check that the file is empty before unlinking...\n\n-- \nJustin", "msg_date": "Thu, 7 Jul 2022 12:10:19 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Jul 7, 2022 at 1:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Maybe it's a good idea to check that the file is empty before unlinking...\n\nIf we want to verify that there are no large objects in the cluster,\nwe could do that in check_new_cluster_is_empty(). However, even if\nthere aren't, the length of the file could still be more than 0, if\nthere were some large objects previously and then they were removed.\nSo it's not entirely obvious to me that we should refuse to remove a\nnon-empty file.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Jul 2022 13:38:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Jul 7, 2022 at 01:38:44PM -0400, Robert Haas wrote:\n> On Thu, Jul 7, 2022 at 1:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Maybe it's a good idea to check that the file is empty before unlinking...\n> \n> If we want to verify that there are no large objects in the cluster,\n> we could do that in check_new_cluster_is_empty(). 
However, even if\n> there aren't, the length of the file could still be more than 0, if\n> there were some large objects previously and then they were removed.\n> So it's not entirely obvious to me that we should refuse to remove a\n> non-empty file.\n\nUh, that initdb-created pg_largeobject file should not have any data in\nit ever, as far as I know at that point in pg_upgrade. How would values\nhave gotten in there? Via pg_dump?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Thu, 7 Jul 2022 14:24:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Jul 7, 2022 at 2:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, Jul 7, 2022 at 01:38:44PM -0400, Robert Haas wrote:\n> > On Thu, Jul 7, 2022 at 1:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Maybe it's a good idea to check that the file is empty before unlinking...\n> >\n> > If we want to verify that there are no large objects in the cluster,\n> > we could do that in check_new_cluster_is_empty(). However, even if\n> > there aren't, the length of the file could still be more than 0, if\n> > there were some large objects previously and then they were removed.\n> > So it's not entirely obvious to me that we should refuse to remove a\n> > non-empty file.\n>\n> Uh, that initdb-created pg_largeobject file should not have any data in\n> it ever, as far as I know at that point in pg_upgrade. How would values\n> have gotten in there? 
Via pg_dump?\n\nI was thinking if the user had done it manually before running pg_upgrade.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Jul 2022 14:38:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Jul 07, 2022 at 02:38:44PM -0400, Robert Haas wrote:\n> On Thu, Jul 7, 2022 at 2:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > On Thu, Jul 7, 2022 at 01:38:44PM -0400, Robert Haas wrote:\n> > > On Thu, Jul 7, 2022 at 1:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > Maybe it's a good idea to check that the file is empty before unlinking...\n> > >\n> > > If we want to verify that there are no large objects in the cluster,\n> > > we could do that in check_new_cluster_is_empty(). However, even if\n> > > there aren't, the length of the file could still be more than 0, if\n> > > there were some large objects previously and then they were removed.\n> > > So it's not entirely obvious to me that we should refuse to remove a\n> > > non-empty file.\n> >\n> > Uh, that initdb-created pg_largeobject file should not have any data in\n> > it ever, as far as I know at that point in pg_upgrade. How would values\n> > have gotten in there? Via pg_dump?\n> \n> I was thinking if the user had done it manually before running pg_upgrade.\n\nWe're referring to the new cluster which should have been initdb'd more or less\nimmediately before running pg_upgrade [0].\n\nIt'd be weird to me if someone were to initdb a new cluster, then create some\nlarge objects, and then maybe delete them, and then run pg_upgrade. Why\nwouldn't they do their work on the old cluster before upgrading, or on the new\ncluster afterwards ?\n\nDoes anybody actually do anything significant between initdb and pg_upgrade ?\nIs that considered to be supported ? 
It seems like pg_upgrade could itself run\ninitdb, with the correct options for locale, checksum, block size, etc\n(although it probably has to support the existing way to handle \"compatible\nencodings\").\n\nActually, I think check_new_cluster_is_empty() ought to prohibit doing work\nbetween initdb and pg_upgrade by checking that no objects have *ever* been\ncreated in the new cluster, by checking that NextOid == 16384. But I have a\nseparate thread about \"pg-upgrade allows itself to be re-run\", and this has\nmore to do with that than about whether to check that the file is empty before\nremoving it.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 7 Jul 2022 13:44:10 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Jul 07, 2022 at 02:38:44PM -0400, Robert Haas wrote:\n>> On Thu, Jul 7, 2022 at 2:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>> Uh, that initdb-created pg_largeobject file should not have any data in\n>>> it ever, as far as I know at that point in pg_upgrade. How would values\n>>> have gotten in there? Via pg_dump?\n\n>> I was thinking if the user had done it manually before running pg_upgrade.\n\n> We're referring to the new cluster which should have been initdb'd more or less\n> immediately before running pg_upgrade [0].\n\n> It'd be weird to me if someone were to initdb a new cluster, then create some\n> large objects, and then maybe delete them, and then run pg_upgrade.\n\nAFAIK you're voiding the warranty if you make any changes at all in the\ndestination cluster before pg_upgrade'ing. 
As an example, if you created\na table there you'd be risking an OID and/or relfilenode collision with\nsomething due to be imported from the source cluster.\n\n> Actually, I think check_new_cluster_is_empty() ought to prohibit doing work\n> between initdb and pg_upgrade by checking that no objects have *ever* been\n> created in the new cluster, by checking that NextOid == 16384.\n\nIt would be good to have some such check; I'm not sure that that one in\nparticular is the best option.\n\n> But I have a\n> separate thread about \"pg-upgrade allows itself to be re-run\", and this has\n> more to do with that than about whether to check that the file is empty before\n> removing it.\n\nYeah, that's another foot-gun in the same area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Jul 2022 15:05:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "+ /* Keep track of whether a filenode matches the OID */\n+ if (maps[mapnum].relfilenumber == LargeObjectRelationId)\n+ *has_lotable = true;\n+ if (maps[mapnum].relfilenumber == LargeObjectLOidPNIndexId)\n+ *has_loindex = true;\nOn Thu, Jul 7, 2022 at 2:44 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> We're referring to the new cluster which should have been initdb'd more or less\n> immediately before running pg_upgrade [0].\n>\n> It'd be weird to me if someone were to initdb a new cluster, then create some\n> large objects, and then maybe delete them, and then run pg_upgrade. Why\n> wouldn't they do their work on the old cluster before upgrading, or on the new\n> cluster afterwards ?\n>\n> Does anybody actually do anything significant between initdb and pg_upgrade ?\n> Is that considered to be supported ? 
It seems like pg_upgrade could itself run\n> initdb, with the correct options for locale, checksum, block size, etc\n> (although it probably has to support the existing way to handle \"compatible\n> encodings\").\n>\n> Actually, I think check_new_cluster_is_empty() ought to prohibit doing work\n> between initdb and pg_upgrade by checking that no objects have *ever* been\n> created in the new cluster, by checking that NextOid == 16384. But I have a\n> separate thread about \"pg-upgrade allows itself to be re-run\", and this has\n> more to do with that than about whether to check that the file is empty before\n> removing it.\n\nI think you're getting at a really good point here which is also my\npoint: we assume that nothing significant has happened between when\nthe cluster was created and when pg_upgrade is run, but we don't check\nit. Either we shouldn't assume it, or we should check it.\n\nSo, is such activity ever legitimate? I think there are people doing\nit. The motivation is that maybe you have a dump from the old database\nthat doesn't quite restore on the new version, but by doing something\nto the new cluster, you can make it restore. For instance, maybe there\nare some functions that used to be part of core and are now only\navailable in an extension. That's going to make pg_upgrade's\ndump-and-restore workflow fail, but if you install that extension onto\nthe new cluster, perhaps you can work around the problem. It doesn't\nhave to be an extension, even. Maybe some function in core just got an\nextra argument, and you're using it, so the calls to that function\ncause dump-and-restore to fail. You might try overloading it in the\nnew database with the old calling sequence to get things working.\n\nNow, are these kinds of things considered to be supported? Well, I\ndon't know that we've made any statement about that one way or the\nother. Perhaps they are not. But I can see why people want to use\nworkarounds like this. 
The alternative is having to dump-and-restore\ninstead of an in-place upgrade, and that's painfully slow by\ncomparison. pg_upgrade itself doesn't give you any tools to deal with\nthis kind of situation, but the process is just loose enough to allow\npeople to insert their own workarounds, so they do. I'm sure I'd do\nthe same, in their shoes.\n\nMy view on this is that, while we probably don't want to make such\nthings officially supported, I don't think we should ban it outright,\neither. We probably can't support an upgrade after the next cluster\nhas been subjected to arbitrary amounts of tinkering, but we're making\na mistake if we write code that has fragile assumptions for no really\ngood reason. I think we can do better than this excerpt from your\npatch, for example:\n\n+ /* Keep track of whether a filenode matches the OID */\n+ if (maps[mapnum].relfilenumber == LargeObjectRelationId)\n+ *has_lotable = true;\n+ if (maps[mapnum].relfilenumber == LargeObjectLOidPNIndexId)\n+ *has_loindex = true;\n\nI spent a while struggling to understand this because it seems to me\nthat every database has an LO table and an LO index, so what's up with\nthese names? I think what these names are really tracking is whether\nthe relfilenumber of pg_largeobject and its index in the old database\nhad their default values. But this makes the assumption that the LO\ntable and LO index in the new database have never been subjected to\nVACUUM FULL or CLUSTER and, while there's no real reason to do that, I\ncan't quite see what the point of such an unnecessary and fragile\nassumption might be. 
If Dilip's patch to make relfilenodes 56 bits\ngets committed, this is going to break anyway, because with that\npatch, the relfilenode for a table or index doesn't start out being\nequal to its OID.\n\nPerhaps a better solution to this particular problem is to remove the\nbacking files for the large object table and index *before* restoring\nthe dump, deciding what files to remove by asking the running server\nfor the file path. It might seem funny to allow for dangling pg_class\nentries, but we're going to create that situation for all other user\nrels anyway, and pg_upgrade treats pg_largeobject as a user rel.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Jul 2022 15:11:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Jul 5, 2022 at 12:43:54PM -0400, Robert Haas wrote:\n> On Sat, Jul 2, 2022 at 11:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I suppose it's like Bruce said, here.\n> >\n> > https://www.postgresql.org/message-id/20210601140949.GC22012%40momjian.us\n> \n> Well, I feel dumb. I remember reading that email back when Bruce sent\n> it, but it seems that it slipped out of my head between then and when\n> I committed. I think your patch is fine, except that I think maybe we\n\nIt happens to us all.\n\n> I had a moment of panic this morning where I thought maybe the whole\n\nYes, I have had those panics too.\n\n> patch needed to be reverted. I was worried that we might need to\n> preserve the OID of every system table and index. Otherwise, what\n> happens if the OID counter in the old cluster wraps around and some\n> user-created object gets an OID that the system tables are using in\n> the new cluster? 
However, I think this can't happen, because when the\n> OID counter wraps around, it wraps around to 16384, and the\n> relfilenode values for newly created system tables and indexes are all\n> less than 16384. So I believe we only need to fix pg_largeobject and\n> its index, and I think your patch does that.\n\nSo, let me explain how I look at this. There are two number-spaces, oid\nand relfilenode. In each number-space, there are system-assigned ones\nless than 16384, and higher ones for post-initdb use.\n\nWhat we did in pre-PG15 was to preserve only oids, and have the\nrelfilenode match the oid, and we have discussed the negatives of this.\n\nFor PG 15+, we preserve relfilenodes too. These number assignment cases\nonly work if we handle _all_ numbering, except for non-pg_largeobject\nsystem tables.\n\nIn pre-PG15, pg_largeobject was easily handled because initdb already\nassigned the oid and relfilenode to be the same for pg_largeobject, so a\nsimple copy worked fine. pg_largeobject is an anomaly in PG 15 because\nit is assigned a relfilenode in the system number space by initdb, but\nthen it needs to be potentially renamed into the relfilenode user number\nspace. This is the basis for my email as already posted:\n\n\thttps://www.postgresql.org/message-id/20210601140949.GC22012%40momjian.us\n\nYou are right to be concerned since you are spanning number spaces, but\nI think you are fine because the relfilenode in the user-space cannot\nhave been used since it already was being used in each database. It is\ntrue we never had a per-database rename like this before.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Thu, 7 Jul 2022 16:15:55 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Jul 7, 2022 at 4:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n> You are right to be concerned since you are spanning number spaces, but\n> I think you are fine because the relfilenode in the user-space cannot\n> have been used since it already was being used in each database. It is\n> true we never had a per-database rename like this before.\n\nThanks for checking over the reasoning, and the kind words in general.\nI just committed Justin's fix for the bug, without fixing the fact\nthat the new cluster's original pg_largeobject files will be left\norphaned afterward. That's a relatively minor problem by comparison,\nand it seemed best to me not to wait too long to get the main issue\naddressed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Jul 2022 10:44:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Jul 07, 2022 at 03:11:38PM -0400, Robert Haas wrote:\n> point: we assume that nothing significant has happened between when\n> the cluster was created and when pg_upgrade is run, but we don't check\n> it. Either we shouldn't assume it, or we should check it.\n> \n> So, is such activity ever legitimate? I think there are people doing\n> it. The motivation is that maybe you have a dump from the old database\n> that doesn't quite restore on the new version, but by doing something\n> to the new cluster, you can make it restore. For instance, maybe there\n> are some functions that used to be part of core and are now only\n> available in an extension. 
That's going to make pg_upgrade's\n> dump-and-restore workflow fail, but if you install that extension onto\n> the new cluster, perhaps you can work around the problem. It doesn't\n> have to be an extension, even. Maybe some function in core just got an\n> extra argument, and you're using it, so the calls to that function\n> cause dump-and-restore to fail. You might try overloading it in the\n> new database with the old calling sequence to get things working.\n\nI don't think that's even possible.\n\npg_upgrade drops template1 and postgres before upgrading:\n\n * template1 database will already exist in the target installation,\n * so tell pg_restore to drop and recreate it; otherwise we would fail\n * to propagate its database-level properties.\n\n * postgres database will already exist in the target installation, so\n * tell pg_restore to drop and recreate it; otherwise we would fail to\n * propagate its database-level properties.\n\nFor any other DBs, you'd hit an error if, after initdb'ing, you started the new\ncluster, connected to it, created a DB (?!) and then tried to upgrade:\n\n\tpg_restore: error: could not execute query: ERROR: database \"pryzbyj\" already exists\n\nSo if people start, connect, and then futz with a cluster before upgrading it,\nit must be for global stuff (roles, tablespaces), and not per-DB stuff.\nAlso, pg_upgrade refuses to run if additional roles are defined...\nSo I'm not seeing what someone could do on the new cluster.\n\nThat supports the idea that it'd be okay to refuse to upgrade anything other\nthan a pristine cluster.\n\n> Now, are these kinds of things considered to be supported? Well, I\n> don't know that we've made any statement about that one way or the\n> other. Perhaps they are not. But I can see why people want to use\n> workarounds like this. 
The alternative is having to dump-and-restore\n> instead of an in-place upgrade, and that's painfully slow by\n> comparison.\n\nThe alternative in cases that I know about is to fix the old DB to allow it to\nbe upgraded. check.c has a list of the things that aren't upgradable, and the\nfixes are some things like ALTER TABLE DROP OIDs. We just added another one to\nhandle v14 aggregates (09878cdd4).\n\n> My view on this is that, while we probably don't want to make such\n> things officially supported, I don't think we should ban it outright,\n> either. We probably can't support an upgrade after the next cluster\n> has been subjected to arbitrary amounts of tinkering, but we're making\n> a mistake if we write code that has fragile assumptions for no really\n> good reason. I think we can do better than this excerpt from your\n> patch, for example:\n> \n> + /* Keep track of whether a filenode matches the OID */\n> + if (maps[mapnum].relfilenumber == LargeObjectRelationId)\n> + *has_lotable = true;\n> + if (maps[mapnum].relfilenumber == LargeObjectLOidPNIndexId)\n> + *has_loindex = true;\n> \n> I spent a while struggling to understand this because it seems to me\n> that every database has an LO table and an LO index, so what's up with\n> these names? I think what these names are really tracking is whether\n> the relfilenumber of pg_largeobject and its index in the old database\n> had their default values. \n\nYes, has_lotable means \"has a LO table whose filenode matches the OID\".\nI will solicit suggestions for a better name.\n\n> But this makes the assumption that the LO\n> table and LO index in the new database have never been subjected to\n> VACUUM FULL or CLUSTER and, while there's no real reason to do that, I\n> can't quite see what the point of such an unnecessary and fragile\n> assumption might be.\n\nThe idea of running initdb, starting the cluster, and connecting to it to run\nVACUUM FULL scares me. 
Now that I think about it, it might be almost\ninconsequential, since the initial DBs are dropped, and the upgrade will fail\nif any non-template DB exists. But .. maybe something exciting happens if you\nvacuum full a shared catalog... Yup.\n\nWith my patch:\n\n./tmp_install/usr/local/pgsql/bin/initdb --no-sync -D pg15b1.dat -k\n./tmp_install/usr/local/pgsql/bin/postgres -D pg15b1.dat -c logging_collector=no -p 5678 -k /tmp&\npostgres=# \\lo_import /etc/shells \npostgres=# VACUUM FULL pg_largeobject;\n./tmp_install/usr/local/pgsql/bin/initdb --no-sync -D pg15b2.dat -k\n./tmp_install/usr/local/pgsql/bin/postgres -D pg15b2.dat -c logging_collector=no -p 5678 -k /tmp&\npostgres=# VACUUM FULL pg_database;\ntmp_install/usr/local/pgsql/bin/pg_upgrade -D pg15b2.dat -b tmp_install/usr/local/pgsql/bin -d pg15b1.dat\n\npostgres=# SELECT COUNT(1), pg_relation_filenode(oid), array_agg(relname) FROM pg_class WHERE pg_relation_filenode(oid) IS NOT NULL GROUP BY 2 HAVING COUNT(1)>1 ORDER BY 1 DESC ;\n count | pg_relation_filenode | array_agg\n-------+----------------------+----------------------------------------------------\n 2 | 16388 | {pg_toast_1262_index,pg_largeobject_loid_pn_index}\n\nI don't have a deep understanding why the DB hasn't imploded at this point,\nmaybe related to the filenode map file, but it seems very close to being\ncatastrophic.\n\nIt seems like pg_upgrade should at least check that the new cluster has no\nobjects with either OID or relfilenodes in the user range..\nYou could blame my patch, since the issue is limited to pg_largeobject.\n\n> Perhaps a better solution to this particular problem is to remove the\n> backing files for the large object table and index *before* restoring\n> the dump, deciding what files to remove by asking the running server\n> for the file path. 
It might seem funny to allow for dangling pg_class\n> entries, but we're going to create that situation for all other user\n> rels anyway, and pg_upgrade treats pg_largeobject as a user rel.\n\nI'll think about it more later.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 8 Jul 2022 10:53:36 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 8, 2022 at 11:53 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> pg_upgrade drops template1 and postgres before upgrading:\n\nHmm, but I bet you could fiddle with template0. Indeed what's the\ndifference between a user fiddling with template0 and me committing a\npatch that bumps catversion? If the latter doesn't prevent pg_upgrade\nfrom working when the release comes out, why should the former?\n\n> I don't have a deep understanding why the DB hasn't imploded at this point,\n> maybe related to the filenode map file, but it seems very close to being\n> catastrophic.\n\nYeah, good example.\n\n> It seems like pg_upgrade should at least check that the new cluster has no\n> objects with either OID or relfilenodes in the user range..\n\nWell... I think it's not quite that simple. There's an argument for\nthat rule, to be sure, but in some sense it's far too strict. We only\npreserve the OIDs of tablespaces, types, enums, roles, and now\nrelations. So if you create any other type of object in the new\ncluster, like say a function, it's totally fine. You could still fail\nif the old cluster happens to contain a function with the same\nsignature, but that's kind of a different issue. An OID collision for\nany of the many object types for which OIDs are not preserved is no\nproblem at all.\n\nBut even if we restrict ourselves to talking about an object type for\nwhich OIDs are preserved, it's still not quite that simple. 
For\nexample, if I create a relation in the new cluster, its OID or\nrelfilenode might be the same as a relation that exists in the old\ncluster. In such a case, a failure is inevitable. We're definitely in\nbig trouble, and the question is only whether pg_upgrade will notice.\nBut it's also possible that, either by planning or by pure dumb luck,\nneither the relation nor the OID that I've created in the new cluster\nis in use in the old cluster. In such a case, the upgrade can succeed\nwithout breaking anything, or at least nothing other than our sense of\norder in the universe.\n\nWithout a doubt, there are holes in pg_upgrade's error checking that\nneed to be plugged, but I think there is room to debate exactly what\nsize plug we want to use. I can't really say that it's definitely\nstupid to use a plug that's definitely big enough to catch all the\nscenarios that might break stuff, but I think my own preference would\nbe to try to craft it so that it isn't too much larger than necessary.\nThat's partly because I do think there are some scenarios in which\nmodifying the new cluster might be the easiest way of working around\nsome problem, but also because, as a matter of principle, I like the\nidea of making rules that correspond to the real dangers. If we write\na rule that says essentially \"it's no good if there are two relations\nsharing a relfilenode,\" nobody with any understanding of how the\nsystem works can argue that bypassing it is a sensible thing to do,\nand probably nobody will even try, because it's so obviously bonkers\nto do so. It's a lot less obviously bonkers to try to bypass the\nbroader prohibition which you suggest should never be bypassed, so\nsomeone may do it, and get themselves in trouble.\n\nNow I'm not saying such a person will get any sympathy from this list.\nIf for example you #if out the sanity check and hose yourself, people\nhere, including me, are going to suggest that you've hosed\nyourself and it's not our problem. 
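The "two relations sharing a relfilenode" rule above is easy to state mechanically. As a rough, hypothetical sketch (plain Python over made-up pg_class rows, not anything pg_upgrade actually runs; the real check would be a query like Justin's GROUP BY ... HAVING COUNT(1)>1 earlier in the thread):

```python
from collections import defaultdict

def shared_relfilenodes(pg_class_rows):
    """Return relfilenodes claimed by more than one relation, mapped to the
    offending relation names; a non-empty result is the 'obviously bonkers'
    state described above."""
    by_node = defaultdict(list)
    for relname, relfilenode in pg_class_rows:
        if relfilenode is not None:  # rels without storage have no filenode
            by_node[relfilenode].append(relname)
    return {node: names for node, names in by_node.items() if len(names) > 1}

# The state Justin's query reported after the VACUUM FULL experiment:
rows = [
    ("pg_toast_1262_index", 16388),
    ("pg_largeobject_loid_pn_index", 16388),
    ("pg_largeobject", 16389),
]
print(shared_relfilenodes(rows))
# → {16388: ['pg_toast_1262_index', 'pg_largeobject_loid_pn_index']}
```

A check of this shape flags only states that are guaranteed broken, which is the narrow plug being argued for here.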
But ... the world is full of\nwarnings about problems that aren't really that serious, and sometimes\nthose have the effect of discouraging people from taking warnings\nabout very serious problems as seriously as they should. I know that I\nno longer panic when the national weather service texts me to say that\nthere's a tornado or a flash flood in my area. They've just done that\ntoo many times when there was no real issue with which I needed to be\nconcerned. If I get caught out by a tornado at some point, they're\nprobably going to say \"well that's why you should always take our\nwarnings seriously,\" but I'm going to say \"well that's why you\nshouldn't send spurious warnings.\"\n\n> > Perhaps a better solution to this particular problem is to remove the\n> > backing files for the large object table and index *before* restoring\n> > the dump, deciding what files to remove by asking the running server\n> > for the file path. It might seem funny to allow for dangling pg_class\n> > entries, but we're going to create that situation for all other user\n> > rels anyway, and pg_upgrade treats pg_largeobject as a user rel.\n>\n> I'll think about it more later.\n\nSounds good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Jul 2022 13:12:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 08, 2022 at 10:44:07AM -0400, Robert Haas wrote:\n> Thanks for checking over the reasoning, and the kind words in general.\n\nThanks for fixing the main issue.\n\n> I just committed Justin's fix for the bug, without fixing the fact\n> that the new cluster's original pg_largeobject files will be left\n> orphaned afterward. That's a relatively minor problem by comparison,\n> and it seemed best to me not to wait too long to get the main issue\n> addressed.\n\nHmm. 
That would mean that the more LOs a cluster has, the more bloat\nthere will be in the new cluster once the upgrade is done. That could\nbe quite a few gigs worth of data laying around depending on the data\ninserted in the source cluster, and we don't have a way to know which\nfiles to remove post-upgrade, do we?\n--\nMichael", "msg_date": "Mon, 11 Jul 2022 10:31:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Sun, Jul 10, 2022 at 9:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Hmm. That would mean that the more LOs a cluster has, the more bloat\n> there will be in the new cluster once the upgrade is done. That could\n> be quite a few gigs worth of data laying around depending on the data\n> inserted in the source cluster, and we don't have a way to know which\n> files to remove post-upgrade, do we?\n\nThe files that are being leaked here are the files backing the\npg_largeobject table and the corresponding index as they existed in\nthe new cluster just prior to the upgrade. Hopefully, the table is a\nzero-length file and the index is just one block, because you're\nsupposed to use a newly-initdb'd cluster as the target for a\npg_upgrade operation. Now, you don't actually have to do that: as\nwe've been discussing, there aren't as many sanity checks in this code\nas there probably should be. But it would still be surprising to\ninitdb a new cluster, load gigabytes of large objects into it, and\nthen use it as the target cluster for a pg_upgrade.\n\nAs for whether it's possible to know which files to remove\npost-upgrade, that's the same problem as trying to figure out whether\nthere are orphaned files in any other PostgreSQL cluster. 
We don't\nhave a tool for it, but if you're sure that the system is more or less\nquiescent - no uncommitted DDL, in particular - you can compare\npg_class.relfilenode to the actual filesystem contents and figure out\nwhat extra stuff is present on the filesystem level.\n\nI am not saying we shouldn't try to fix this up more thoroughly, just\nthat I think you are overestimating the consequences.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Jul 2022 09:16:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Mon, Jul 11, 2022 at 9:16 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I am not saying we shouldn't try to fix this up more thoroughly, just\n> that I think you are overestimating the consequences.\n\nI spent a bunch of time looking at this today and I have more sympathy\nfor Justin's previous proposal now. I found it somewhat hacky that he\nwas relying on the hard-coded value of LargeObjectRelationId and\nLargeObjectLOidPNIndexId, but I discovered that it's harder to do\nbetter than I had assumed. Suppose we don't want to compare against a\nhard-coded constant but against the value that is actually present\nbefore the dump overwrites the pg_class row's relfilenode. Well, we\ncan't get that value from the database in question before restoring\nthe dump, because restoring the dump either creates or recreates the\ndatabase in all cases. The CREATE DATABASE command that will be part\nof the dump always specifies TEMPLATE template0, so if we want\nsomething other than a hard-coded constant, we need the\npg_class.relfilenode values from template0 for pg_largeobject and\npg_largeobject_loid_pn_index. But we can't connect to that database to\nquery those values, because it has datallowconn = false. Oops.\n\nI have a few more ideas to try here. 
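The comparison described above, pg_class.relfilenode versus what is actually on disk, can be sketched in a few lines. This is a simplified, hypothetical illustration only: real relation storage also involves tablespaces and a number of special files that a production tool would have to treat carefully:

```python
import os
import re

def orphaned_files(db_dir, known_relfilenodes):
    """List files in a database directory whose base relfilenode does not
    appear in pg_class.  Handles fork suffixes (_fsm, _vm, _init) and
    segment suffixes (.1, .2, ...); skips non-numeric names such as
    pg_filenode.map."""
    pat = re.compile(r"^(\d+)(?:_(?:fsm|vm|init))?(?:\.\d+)?$")
    orphans = []
    for name in os.listdir(db_dir):
        m = pat.match(name)
        if m and int(m.group(1)) not in known_relfilenodes:
            orphans.append(name)
    return sorted(orphans)
```

In the leaked-pg_largeobject case, such a scan would report the files carrying the relfilenodes that the new cluster's pg_largeobject and its index were given at initdb time.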
It occurs to me that we could fix\nthis more cleanly if we could get the dump itself to set the\nrelfilenode for pg_largeobject to the desired value. Right now, it's\njust overwriting the relfilenode stored in the catalog without\nactually doing anything that would cause a change on disk. But if we\ncould make it change the relfilenode in a more principled way that\nwould actually cause an on-disk change, then the orphaned-file problem\nwould be fixed, because we'd always be installing the new file over\ntop of the old file. I'm going to investigate how hard it would be to\nmake that work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Jul 2022 16:51:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Jul 12, 2022 at 04:51:44PM -0400, Robert Haas wrote:\n> I spent a bunch of time looking at this today and I have more sympathy\n> for Justin's previous proposal now. I found it somewhat hacky that he\n> was relying on the hard-coded value of LargeObjectRelationId and\n> LargeObjectLOidPNIndexId, but I discovered that it's harder to do\n> better than I had assumed. Suppose we don't want to compare against a\n> hard-coded constant but against the value that is actually present\n> before the dump overwrites the pg_class row's relfilenode. Well, we\n> can't get that value from the database in question before restoring\n> the dump, because restoring either the dump creates or recreates the\n> database in all cases. The CREATE DATABASE command that will be part\n> of the dump always specifies TEMPLATE template0, so if we want\n> something other than a hard-coded constant, we need the\n> pg_class.relfilenode values from template0 for pg_largeobject and\n> pg_largeobject_loid_pn_index. But we can't connect to that database to\n> query those values, because it has datallowconn = false. 
Oops.\n> \n> I have a few more ideas to try here. It occurs to me that we could fix\n> this more cleanly if we could get the dump itself to set the\n> relfilenode for pg_largeobject to the desired value. Right now, it's\n> just overwriting the relfilenode stored in the catalog without\n> actually doing anything that would cause a change on disk. But if we\n> could make it change the relfilenode in a more principled way that\n> would actually cause an on-disk change, then the orphaned-file problem\n> would be fixed, because we'd always be installing the new file over\n> top of the old file. I'm going to investigate how hard it would be to\n> make that work.\n\nThanks for all the details here. This originally sounded like the new\ncluster was keeping around some orphaned relation files with the old\nLOs still stored in it. But as that's just the freshly initdb'd\nrelfilenodes of pg_largeobject, that does not strike me as something\nabsolutely critical to fix for v15 as orphaned relfilenodes are an\nexisting problem. If we finish with a solution rather simple in\ndesign, I'd be fine to stick a fix in REL_15_STABLE, but playing with\nthis stable branch more than necessary may be risky after beta2. At\nthe end, I would be fine to drop the open item now that the main issue\nhas been fixed.\n--\nMichael", "msg_date": "Wed, 13 Jul 2022 09:03:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Jul 12, 2022 at 4:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I have a few more ideas to try here. It occurs to me that we could fix\n> this more cleanly if we could get the dump itself to set the\n> relfilenode for pg_largeobject to the desired value. Right now, it's\n> just overwriting the relfilenode stored in the catalog without\n> actually doing anything that would cause a change on disk. 
But if we\n> could make it change the relfilenode in a more principled way that\n> would actually cause an on-disk change, then the orphaned-file problem\n> would be fixed, because we'd always be installing the new file over\n> top of the old file. I'm going to investigate how hard it would be to\n> make that work.\n\nWell, it took a while to figure out how to make that work, but I\nbelieve I've got it now. Attached please find a couple of patches that\nshould get the job done. They might need a bit of polish, but I think\nthe basic concepts are sound.\n\nMy first thought was to have the dump issue VACUUM FULL pg_largeobject\nafter first calling binary_upgrade_set_next_heap_relfilenode() and\nbinary_upgrade_set_next_index_relfilenode(), and have the VACUUM FULL\nuse the values configured by those calls for the new heap and index\nOID. I got this working in standalone testing, only to find that this\ndoesn't work inside pg_upgrade. The complaint is \"ERROR: VACUUM\ncannot run inside a transaction block\", and I think that happens\nbecause pg_restore sends the entire TOC entry for a single object to\nthe server as a single query, and here it contains multiple SQL\ncommands. That multi-command string ends up being treated like an\nimplicit transaction block.\n\nSo my second thought was to have the dump still call\nbinary_upgrade_set_next_heap_relfilenode() and\nbinary_upgrade_set_next_index_relfilenode(), but then afterwards call\nTRUNCATE rather than VACUUM FULL. I found that a simple change to\nRelationSetNewRelfilenumber() did the trick: I could then easily\nchange the heap and index relfilenodes for pg_largeobject to any new\nvalues I liked. However, I realized that this approach had a problem:\nwhat if the pg_largeobject relation had never been rewritten in the\nold cluster? 
Then the heap and index relfilenodes would be unchanged.\nIt's also possible that someone might have run REINDEX in the old\ncluster, in which case it might happen that the heap relfilenode was\nunchanged, but the index relfilenode had changed. I spent some time\nfumbling around with trying to get the non-transactional truncate path\nto cover these cases, but the fact that we might need to change the\nrelfilenode for the index but not the heap makes this approach seem\npretty awful.\n\nBut it occurred to me that in the case of a pg_upgrade, we don't\nreally need to keep the old storage around until commit time. We can\nunlink it first, before creating the new storage, and then if the old\nand new storage happen to be the same, it still works. I can think of\ntwo possible objections to this way forward. First, it means that the\noperation isn't properly transactional. However, if the upgrade fails\nat this stage, the new cluster is going to have to be nuked and\nrecreated anyway, so the fact that things might be in an unclean state\nafter an ERROR is irrelevant. Second, one might wonder whether such a\nfix is really sufficient. For example, what happens if the relfilenode\nallocated to pg_largeobject in the old cluster is assigned to its index\nin the new cluster, and vice versa? To make that work, we would need\nto remove all storage for all relfilenodes first, and then recreate\nthem all afterward. However, I don't think we need to make that work.\nIf an object in the old cluster has a relfilenode < 16384, then that\nshould mean it's never been rewritten and therefore its relfilenode in\nthe new cluster should be the same. The only way this wouldn't be true\nis if we shuffled around the initial relfilenode assignments in a new\nversion of PG so that the same values were used but now for different\nobjects, which would be a really dumb idea. 
And on the other hand, if\nthe object in the old cluster has a relfilenode > 16384, then that\nrelfilenode value should be unused in the new cluster. If not, the\nuser has tinkered with the new cluster more than they ought.\n\nSo I tried implementing this but I didn't get it quite right the first\ntime. It's not enough to call smgrdounlinkall() instead of\nRelationDropStorage(), because just as RelationDropStorage() does not\nactually drop the storage but only schedules it to be dropped later,\nso also smgrdounlinkall() does not in fact unlink all, but only some.\nIt leaves the first segment of the main relation fork around to guard\nagainst the hazards discussed in the header comments for mdunlink().\nBut those hazards don't really matter here either, because, again, any\nfailure will necessarily require that the entire new cluster be\ndestroyed, and also because there shouldn't be any concurrent activity\nin the new cluster while pg_upgrade is running. So I adjusted\nsmgrdounlinkall() to actually remove everything when IsBinaryUpgrade =\ntrue. And then it all seems to work: pg_upgrade of a cluster that has\nhad a rewrite of pg_largeobject works, and pg_upgrade of a cluster\nthat has not had such a rewrite works too. Wa-hoo.\n\nAs to whether this is a good fix, I think someone could certainly\nargue otherwise. This is all a bit grotty. However, I don't find it\nall that bad. As long as we're moving files from between one PG\ncluster and another using an external tool rather than logic inside\nthe server itself, I think we're bound to have some hacks someplace to\nmake it all work. To me, extending them to a few more places to avoid\nleaving files behind on disk seems like a good trade-off. 
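The key property argued for here, that dropping the old storage before creating the new storage works even when the two names coincide, can be shown with a miniature file-level analogue (hypothetical Python standing in for the smgr-level code, with relfilenodes as plain file names):

```python
import os

def set_new_storage(db_dir, old_relfilenode, new_relfilenode):
    """Miniature analogue of the non-transactional path described above:
    unlink the old storage immediately (not at commit), then create the
    new file.  Correct whether or not old == new."""
    old_path = os.path.join(db_dir, str(old_relfilenode))
    new_path = os.path.join(db_dir, str(new_relfilenode))
    if os.path.exists(old_path):
        os.unlink(old_path)      # immediate unlink; tolerable only because
                                 # a failed upgrade nukes the cluster anyway
    open(new_path, "w").close()  # create empty new storage
    return new_path
```

If the unlink were instead deferred to commit time, the way ordinary relation drops are, the old == new case would delete the freshly created file, which is exactly why the usual transactional path cannot be reused as-is.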
Your mileage\nmay vary.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 18 Jul 2022 14:57:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Hi,\n\nOn 2022-07-18 14:57:40 -0400, Robert Haas wrote:\n> As to whether this is a good fix, I think someone could certainly\n> argue otherwise. This is all a bit grotty. However, I don't find it\n> all that bad. As long as we're moving files from between one PG\n> cluster and another using an external tool rather than logic inside\n> the server itself, I think we're bound to have some hacks someplace to\n> make it all work. To me, extending them to a few more places to avoid\n> leaving files behind on disk seems like a good trade-off. Your mileage\n> may vary.\n\nHow about adding a new binary_upgrade_* helper function for this purpose\ninstead, instead of tying it into truncate?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 18 Jul 2022 13:06:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Mon, Jul 18, 2022 at 02:57:40PM -0400, Robert Haas wrote:\n> So I tried implementing this but I didn't get it quite right the first\n> time. 
It's not enough to call smgrdounlinkall() instead of\n> RelationDropStorage(), because just as RelationDropStorage() does not\n> actually drop the storage but only schedules it to be dropped later,\n> so also smgrdounlinkall() does not in fact unlink all, but only some.\n> It leaves the first segment of the main relation fork around to guard\n> against the hazards discussed in the header comments for mdunlink().\n> But those hazards don't really matter here either, because, again, any\n> failure will necessarily require that the entire new cluster be\n> destroyed, and also because there shouldn't be any concurrent activity\n> in the new cluster while pg_upgrade is running. So I adjusted\n> smgrdounlinkall() to actually remove everything when IsBinaryUpgrade =\n> true. And then it all seems to work: pg_upgrade of a cluster that has\n> had a rewrite of pg_largeobject works, and pg_upgrade of a cluster\n> that has not had such a rewrite works too. Wa-hoo.\n\nUsing the IsBinaryUpgrade flag makes sense to me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Mon, 18 Jul 2022 16:09:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Mon, Jul 18, 2022 at 4:06 PM Andres Freund <andres@anarazel.de> wrote:\n> How about adding a new binary_upgrade_* helper function for this purpose\n> instead, instead of tying it into truncate?\n\nI considered that briefly, but it would need to do a lot of things\nthat TRUNCATE already knows how to do, so it does not seem like a good\nidea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jul 2022 16:28:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Mon, Jul 18, 2022 at 2:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Well, it took a while to figure out how to make that work, but I\n> believe I've got it now. Attached please find a couple of patches that\n> should get the job done. They might need a bit of polish, but I think\n> the basic concepts are sound.\n\nSo, would people like these patches (1) committed to master only, (2)\ncommitted to master and back-patched into v15, or (3) not committed at\nall? Michael argued upthread that it was too risky to be tinkering\nwith things at this stage in the release cycle and, certainly, the\nmore time goes by, the more true that gets. But I'm not convinced that\nthese patches involve an inordinate degree of risk, and using beta as\na time to fix things that turned out to be buggy is part of the point\nof the whole thing. On the other hand, the underlying issue isn't that\nserious either, and nobody seems to have reviewed the patches in\ndetail, either. 
I don't mind committing them on my own recognizance if\nthat's what people would prefer; I can take responsibility for fixing\nanything that is further broken, as I suppose would be expected even\nif someone else did review. But, I don't want to do something that\nother people feel is the wrong thing to have done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jul 2022 15:45:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Jul 26, 2022 at 03:45:11PM -0400, Robert Haas wrote:\n> On Mon, Jul 18, 2022 at 2:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Well, it took a while to figure out how to make that work, but I\n> > believe I've got it now. Attached please find a couple of patches that\n> > should get the job done. They might need a bit of polish, but I think\n> > the basic concepts are sound.\n> \n> So, would people like these patches (1) committed to master only, (2)\n> committed to master and back-patched into v15, or (3) not committed at\n> all? Michael argued upthread that it was too risky to be tinkering\n> with things at this stage in the release cycle and, certainly, the\n> more time goes by, the more true that gets. But I'm not convinced that\n> these patches involve an inordinate degree of risk, and using beta as\n> a time to fix things that turned out to be buggy is part of the point\n> of the whole thing. On the other hand, the underlying issue isn't that\n> serious either, and nobody seems to have reviewed the patches in\n> detail, either. I don't mind committing them on my own recognizance if\n> that's what people would prefer; I can take responsibility for fixing\n> anything that is further broken, as I suppose would be expected even\n> if someone else did review. 
But, I don't want to do something that\n> other people feel is the wrong thing to have done.\n\nThis behavior is new in PG 15, and I would be worried to have one new\nbehavior in PG 15 and another one in PG 16. Therefore, I would like to\nsee it in PG 15 and master. I also think not doing anything and leaving\nthese zero-length files around would also be risky.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Tue, 26 Jul 2022 21:09:22 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Jul 26, 2022 at 9:09 PM Bruce Momjian <bruce@momjian.us> wrote:\n> This behavior is new in PG 15, and I would be worried to have one new\n> behavior in PG 15 and another one in PG 16. Therefore, I would like to\n> see it in PG 15 and master.\n\nThat's also my preference, so committed and back-patched to v15.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Jul 2022 16:16:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> That's also my preference, so committed and back-patched to v15.\n\ncrake has been failing its cross-version-upgrade tests [1] since\nthis went in:\n\nlog files for step xversion-upgrade-REL9_4_STABLE-HEAD:\n==~_~===-=-===~_~== /home/andrew/bf/root/upgrade.crake/HEAD/REL9_4_STABLE-amcheck-1.log ==~_~===-=-===~_~==\nheap table \"regression.pg_catalog.pg_largeobject\", block 0, offset 7:\n xmin 7707 precedes relation freeze threshold 0:14779\nheap table \"regression.pg_catalog.pg_largeobject\", block 201, offset 5:\n xmin 8633 precedes relation freeze threshold 0:14779\n\nI'm not very sure what to make of that, but it's failed 
identically\nfour times in four attempts.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2022-07-28%2020%3A33%3A20\n\n\n", "msg_date": "Fri, 29 Jul 2022 13:49:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 29, 2022 at 1:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> crake has been failing its cross-version-upgrade tests [1] since\n> this went in:\n>\n> log files for step xversion-upgrade-REL9_4_STABLE-HEAD:\n> ==~_~===-=-===~_~== /home/andrew/bf/root/upgrade.crake/HEAD/REL9_4_STABLE-amcheck-1.log ==~_~===-=-===~_~==\n> heap table \"regression.pg_catalog.pg_largeobject\", block 0, offset 7:\n> xmin 7707 precedes relation freeze threshold 0:14779\n> heap table \"regression.pg_catalog.pg_largeobject\", block 201, offset 5:\n> xmin 8633 precedes relation freeze threshold 0:14779\n>\n> I'm not very sure what to make of that, but it's failed identically\n> four times in four attempts.\n\nThat's complaining about two tuples in the pg_largeobject table with\nxmin values that precede relfrozenxid -- which suggests that even\nafter 80d6907219, relfrozenxid isn't being correctly preserved in this\ntest case, since the last run still failed the same way.\n\nBut what exactly is this test case testing? I've previously complained\nabout buildfarm outputs not being labelled as well as they need to\nbe in order to be easily understood by, well, me anyway. It'd sure\nhelp if the commands that led up to this problem were included in the\noutput. 
I downloaded latest-client.tgz from the build farm server and\nam looking at TestUpgradeXversion.pm, but there's no mention of\n-amcheck-1.log in there, just -analyse.log, -copy.log, and following.\nSo I suppose this is running some different code or special\nconfiguration...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 14:35:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 29, 2022 at 2:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> But what exactly is this test case testing? I've previously complained\n> about buildfarm outputs not being as labelled as well as they need to\n> be in order to be easily understood by, well, me anyway. It'd sure\n> help if the commands that led up to this problem were included in the\n> output. I downloaded latest-client.tgz from the build farm server and\n> am looking at TestUpgradeXversion.pm, but there's no mention of\n> -amcheck-1.log in there, just -analyse.log, -copy.log, and following.\n> So I suppose this is running some different code or special\n> configuration...\n\nI was able to reproduce the problem by running 'make installcheck'\nagainst a 9.4 instance and then doing a pg_upgrade to 16devel (which\ntook many tries because it told me about many different kinds of\nthings that it didn't like one at a time; I just dropped objects from\nthe regression DB until it worked). 
The dump output looks like this:\n\n-- For binary upgrade, set pg_largeobject relfrozenxid and relminmxid\nUPDATE pg_catalog.pg_class\nSET relfrozenxid = '0', relminmxid = '0'\nWHERE oid = 2683;\nUPDATE pg_catalog.pg_class\nSET relfrozenxid = '990', relminmxid = '1'\nWHERE oid = 2613;\n\n-- For binary upgrade, preserve pg_largeobject and index relfilenodes\nSELECT pg_catalog.binary_upgrade_set_next_index_relfilenode('12364'::pg_catalog.oid);\nSELECT pg_catalog.binary_upgrade_set_next_heap_relfilenode('12362'::pg_catalog.oid);\nTRUNCATE pg_catalog.pg_largeobject;\n\nHowever, the catalogs show the relfilenode being correct, and the\nrelfrozenxid set to a larger value. I suspect the problem here is that\nthis needs to be done in the other order, with the TRUNCATE first and\nthe update to the pg_class columns afterward.\n\nI think I better look into improving the TAP tests for this, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 15:10:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 29, 2022 at 3:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> However, the catalogs show the relfilenode being correct, and the\n> relfrozenxid set to a larger value. I suspect the problem here is that\n> this needs to be done in the other order, with the TRUNCATE first and\n> the update to the pg_class columns afterward.\n\nThat fix appears to be correct. 
Patch attached.\n\n> I think I better look into improving the TAP tests for this, too.\n\nTAP test enhancement also included in the attached patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 29 Jul 2022 15:51:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jul 29, 2022 at 3:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> However, the catalogs show the relfilenode being correct, and the\n>> relfrozenxid set to a larger value. I suspect the problem here is that\n>> this needs to be done in the other order, with the TRUNCATE first and\n>> the update to the pg_class columns afterward.\n\n> That fix appears to be correct. Patch attached.\n\nLooks plausible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jul 2022 16:00:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 29, 2022 at 4:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Looks plausible.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 17:13:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "\nOn 2022-07-29 Fr 14:35, Robert Haas wrote:\n> On Fri, Jul 29, 2022 at 1:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> crake has been failing its cross-version-upgrade tests [1] since\n>> this went in:\n>>\n>> log files for step xversion-upgrade-REL9_4_STABLE-HEAD:\n>> ==~_~===-=-===~_~== /home/andrew/bf/root/upgrade.crake/HEAD/REL9_4_STABLE-amcheck-1.log ==~_~===-=-===~_~==\n>> heap table \"regression.pg_catalog.pg_largeobject\", block 0, offset 7:\n>> xmin 7707 precedes relation freeze threshold 0:14779\n>> heap table 
\"regression.pg_catalog.pg_largeobject\", block 201, offset 5:\n>> xmin 8633 precedes relation freeze threshold 0:14779\n>>\n>> I'm not very sure what to make of that, but it's failed identically\n>> four times in four attempts.\n> That's complaining about two tuples in the pg_largeobject table with\n> xmin values that precedes relfrozenxid -- which suggests that even\n> after 80d6907219, relfrozenxid isn't being correctly preserved in this\n> test case, since the last run still failed the same way.\n>\n> But what exactly is this test case testing? I've previously complained\n> about buildfarm outputs not being as labelled as well as they need to\n> be in order to be easily understood by, well, me anyway. It'd sure\n> help if the commands that led up to this problem were included in the\n> output. I downloaded latest-client.tgz from the build farm server and\n> am looking at TestUpgradeXversion.pm, but there's no mention of\n> -amcheck-1.log in there, just -analyse.log, -copy.log, and following.\n> So I suppose this is running some different code or special\n> configuration...\n\n\n\nNot really, but it is running git bleeding edge. 
This code comes from\n<https://github.com/PGBuildFarm/client-code/commit/191df23bd25eb5546b0989d71ae92747151f9f39>\nat lines 704-705\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 29 Jul 2022 17:26:25 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 29, 2022 at 5:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Jul 29, 2022 at 4:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Looks plausible.\n>\n> Committed.\n\nwrasse just failed the new test:\n\n[00:09:44.167](0.001s) not ok 16 - old and new horizons match after pg_upgrade\n[00:09:44.167](0.001s)\n[00:09:44.167](0.000s) # Failed test 'old and new horizons match\nafter pg_upgrade'\n# at t/002_pg_upgrade.pl line 345.\n[00:09:44.168](0.000s) # got: '1'\n# expected: '0'\n=== diff of /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_D3cJ/horizon1.txt\nand /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_D3cJ/horizon2.txt\n=== stdout ===\n1c1\n< pg_backend_pid|21767\n---\n> pg_backend_pid|22045=== stderr ===\n=== EOF ===\n\nI'm slightly befuddled as to how we're ending up with a table named\npg_backend_pid. That said, perhaps this is just a case of needing to\nprevent autovacuum from running on the new cluster before we've had a\nchance to record the horizons? 
But I'm not very confident in that\nexplanation at this point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 18:28:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> wrasse just failed the new test:\n\n> [00:09:44.167](0.001s) not ok 16 - old and new horizons match after pg_upgrade\n> [00:09:44.167](0.001s)\n> [00:09:44.167](0.000s) # Failed test 'old and new horizons match\n> after pg_upgrade'\n> # at t/002_pg_upgrade.pl line 345.\n> [00:09:44.168](0.000s) # got: '1'\n> # expected: '0'\n> === diff of /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_D3cJ/horizon1.txt\n> and /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_D3cJ/horizon2.txt\n> === stdout ===\n> 1c1\n> < pg_backend_pid|21767\n> ---\n> > pg_backend_pid|22045=== stderr ===\n> === EOF ===\n\n> I'm slightly befuddled as to how we're ending up with a table named\n> pg_backend_pid.\n\nThat's not the only thing weird about this printout: there should be\nthree columns not two in that query's output, and what happened to\nthe trailing newline? I don't think we're looking at desired\noutput at all.\n\nI am suspicious that the problem stems from the nonstandard\nway you've invoked psql to collect the horizon data. At the very\nleast this code is duplicating bits of Cluster::psql that it'd be\nbetter not to; and I wonder if the issue is that it's not duplicating\nenough. The lack of -X and the lack of use of installed_command()\nare red flags. 
Do you really need to do it like this?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jul 2022 19:16:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 29, 2022 at 7:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That's not the only thing weird about this printout: there should be\n> three columns not two in that query's output, and what happened to\n> the trailing newline? I don't think we're looking at desired\n> output at all.\n\nWell, that's an awfully good point.\n\n> I am suspicious that the problem stems from the nonstandard\n> way you've invoked psql to collect the horizon data. At the very\n> least this code is duplicating bits of Cluster::psql that it'd be\n> better not to; and I wonder if the issue is that it's not duplicating\n> enough. The lack of -X and the lack of use of installed_command()\n> are red flags. Do you really need to do it like this?\n\nWell, I just copied the pg_dump block which occurs directly beforehand\nand modified it. I think that must take care of setting the path\nproperly, else we'd have things blowing up all over the place. But the\nlack of -X could be an issue.\n\nThe missing newline thing happens for me locally too, if I revert the\nbug fix portion of that commit, but I do seem to get the right columns\nin the output. 
It looks like this:\n\n19:24:16.057](0.000s) not ok 16 - old and new horizons match after pg_upgrade\n[19:24:16.058](0.000s)\n[19:24:16.058](0.000s) # Failed test 'old and new horizons match\nafter pg_upgrade'\n# at t/002_pg_upgrade.pl line 345.\n[19:24:16.058](0.000s) # got: '1'\n# expected: '0'\n=== diff of /Users/rhaas/pgsql/src/bin/pg_upgrade/tmp_check/tmp_test_K8Fs/horizon1.txt\nand /Users/rhaas/pgsql/src/bin/pg_upgrade/tmp_check/tmp_test_K8Fs/horizon2.txt\n=== stdout ===\n1c1\n< pg_largeobject|718|1\n---\n> pg_largeobject|17518|3=== stderr ===\n=== EOF ===\n[19:24:16.066](0.008s) 1..16\n\nI can't account for the absence of a newline there, because hexdump\nsays that both horizon1.txt and horizon2.txt end with one, and if I\nrun diff on those two files and pipe the output into hexdump, it sees\na newline at the end of that output too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 19:36:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jul 29, 2022 at 7:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I am suspicious that the problem stems from the nonstandard\n>> way you've invoked psql to collect the horizon data.\n\n> Well, I just copied the pg_dump block which occurs directly beforehand\n> and modified it. I think that must take care of setting the path\n> properly, else we'd have things blowing up all over the place. But the\n> lack of -X could be an issue.\n\nHmm. 
Now that I look, I do see two pre-existing \"naked\" invocations\nof psql in 002_pg_upgrade.pl, ie\n\n\t$oldnode->command_ok([ 'psql', '-X', '-f', $olddumpfile, 'postgres' ],\n\t\t'loaded old dump file');\n\n\t$oldnode->command_ok(\n\t\t[\n\t\t\t'psql', '-X',\n\t\t\t'-f', \"$srcdir/src/bin/pg_upgrade/upgrade_adapt.sql\",\n\t\t\t'regression'\n\t\t],\n\t\t'ran adapt script');\n\nThose suggest that maybe all you need is -X. However, I don't think\neither of those calls is reached by the majority of buildfarm animals,\nonly ones that are doing cross-version-upgrade tests. So there\ncould be more secret sauce needed to get this to pass everywhere.\n\nPersonally I'd try to replace the two horizon-collection steps with\n$newnode->psql calls, using extra_params to inject the '-o' and target\nfilename command line words. But if you want to try adding -X as\na quicker answer, maybe that will be enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jul 2022 20:02:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 29, 2022 at 8:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Personally I'd try to replace the two horizon-collection steps with\n> $newnode->psql calls, using extra_params to inject the '-o' and target\n> filename command line words. But if you want to try adding -X as\n> a quicker answer, maybe that will be enough.\n\nHere's a patch that uses a variant of that approach: it just runs\nsafe_psql straight up and gets the output, then writes it out to temp\nfiles if the output doesn't match and we need to run diff. Let me know\nwhat you think of this.\n\nWhile working on this, I noticed a few other problems. One is that the\nquery doesn't have an ORDER BY clause, which it really should, or the\noutput won't be stable. 
And the other is that I think we should be\ntesting against the regression database, not the postgres database,\nbecause it's got a bunch of user tables in it, not just\npg_largeobject.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 29 Jul 2022 20:08:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Here's a patch that uses a variant of that approach: it just runs\n> safe_psql straight up and gets the output, then writes it out to temp\n> files if the output doesn't match and we need to run diff. Let me know\n> what you think of this.\n\nThat looks good to me, although obviously I don't know for sure\nif it will make wrasse happy.\n\n> While working on this, I noticed a few other problems. One is that the\n> query doesn't have an ORDER BY clause, which it really should, or the\n> output won't be stable. And the other is that I think we should be\n> testing against the regression database, not the postgres database,\n> because it's got a bunch of user tables in it, not just\n> pg_largeobject.\n\nBoth of those sound like \"d'oh\" observations to me. 
+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jul 2022 20:22:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Fri, Jul 29, 2022 at 07:16:34PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > wrasse just failed the new test:\n> \n> > [00:09:44.167](0.001s) not ok 16 - old and new horizons match after pg_upgrade\n> > [00:09:44.167](0.001s)\n> > [00:09:44.167](0.000s) # Failed test 'old and new horizons match\n> > after pg_upgrade'\n> > # at t/002_pg_upgrade.pl line 345.\n> > [00:09:44.168](0.000s) # got: '1'\n> > # expected: '0'\n> > === diff of /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_D3cJ/horizon1.txt\n> > and /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_D3cJ/horizon2.txt\n> > === stdout ===\n> > 1c1\n> > < pg_backend_pid|21767\n> > ---\n> > > pg_backend_pid|22045=== stderr ===\n> > === EOF ===\n> \n> > I'm slightly befuddled as to how we're ending up with a table named\n> > pg_backend_pid.\n\n> The lack of -X and the lack of use of installed_command()\n> are red flags.\n\nThe pg_backend_pid is from \"SELECT pg_catalog.pg_backend_pid();\" in ~/.psqlrc,\nso the lack of -X caused that. The latest commit fixes things on a normal\nGNU/Linux box, so I bet it will fix wrasse. (thorntail managed not to fail\nthat way. For unrelated reasons, I override thorntail's $HOME to a\nmostly-empty directory.)\n\n\n", "msg_date": "Fri, 29 Jul 2022 22:44:01 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> The pg_backend_pid is from \"SELECT pg_catalog.pg_backend_pid();\" in ~/.psqlrc,\n> so the lack of -X caused that. 
The latest commit fixes things on a normal\n> GNU/Linux box, so I bet it will fix wrasse.\n\nYup, looks like we're all good now. Thanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 30 Jul 2022 10:40:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On 7/30/22 10:40 AM, Tom Lane wrote:\r\n> Noah Misch <noah@leadboat.com> writes:\r\n>> The pg_backend_pid is from \"SELECT pg_catalog.pg_backend_pid();\" in ~/.psqlrc,\r\n>> so the lack of -X caused that. The latest commit fixes things on a normal\r\n>> GNU/Linux box, so I bet it will fix wrasse.\r\n> \r\n> Yup, looks like we're all good now. Thanks!\r\n\r\nGiven this appears to be resolved, I have removed this from \"Open \r\nItems\". Thanks!\r\n\r\nJonathan", "msg_date": "Mon, 1 Aug 2022 17:17:53 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Given this appears to be resolved, I have removed this from \"Open \n> Items\". Thanks!\n\nSadly, we're still not out of the woods. 
I see three buildfarm\nfailures in this test since Robert resolved the \"-X\" problem [1][2][3]:\n\ngrassquit reported\n\n[19:34:15.249](0.001s) not ok 14 - old and new horizons match after pg_upgrade\n[19:34:15.249](0.001s) \n[19:34:15.249](0.000s) # Failed test 'old and new horizons match after pg_upgrade'\n# at t/002_pg_upgrade.pl line 336.\n=== diff of /mnt/resource/bf/build/grassquit/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_z3zV/horizon1.txt and /mnt/resource/bf/build/grassquit/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_z3zV/horizon2.txt\n=== stdout ===\n785c785\n< spgist_point_tbl|7213|1\n---\n> spgist_point_tbl|7356|3\n787c787\n< spgist_text_tbl|7327|1\n---\n> spgist_text_tbl|8311|3=== stderr ===\n=== EOF ===\n\nwrasse reported\n\n[06:36:35.834](0.001s) not ok 14 - old and new horizons match after pg_upgrade\n[06:36:35.835](0.001s) \n[06:36:35.835](0.000s) # Failed test 'old and new horizons match after pg_upgrade'\n# at t/002_pg_upgrade.pl line 336.\n=== diff of /export/home/nm/farm/studio64v12_6/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_mLle/horizon1.txt and /export/home/nm/farm/studio64v12_6/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_mLle/horizon2.txt\n=== stdout ===\n138c138\n< delete_test_table|7171|1\n---\n> delete_test_table|7171|3=== stderr ===\nWarning: missing newline at end of file /export/home/nm/farm/studio64v12_6/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_mLle/horizon1.txt\nWarning: missing newline at end of file /export/home/nm/farm/studio64v12_6/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_mLle/horizon2.txt=== EOF ===\n\nconchuela doesn't seem to have preserved the detailed log, but it\nfailed at the same place:\n\n# Failed test 'old and new horizons match after pg_upgrade'\n# at t/002_pg_upgrade.pl line 336.\n\nNot sure what to make of this, except that maybe the test is telling\nus about an actual bug of exactly the 
kind it's designed to expose.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2022-08-01%2019%3A25%3A43\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-08-02%2004%3A18%3A18\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-08-02%2014%3A56%3A49\n\n\n", "msg_date": "Tue, 02 Aug 2022 13:12:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On 8/2/22 1:12 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> Given this appears to be resolved, I have removed this from \"Open\r\n>> Items\". Thanks!\r\n> \r\n> Sadly, we're still not out of the woods. I see three buildfarm\r\n> failures in this test since Robert resolved the \"-X\" problem [1][2][3]:\r\n> \r\n> Not sure what to make of this, except that maybe the test is telling\r\n> us about an actual bug of exactly the kind it's designed to expose.\r\n\r\nLooking at the test code, is there anything that could have changed the \r\nrelfrozenxid or relminmxid independently of the test on these systems?\r\n\r\nThat said, I do think we should reopen the item to figure out what's \r\ngoing on. Doing so now.\r\n\r\nJonathan", "msg_date": "Tue, 2 Aug 2022 15:19:39 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Aug 2, 2022 at 1:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Not sure what to make of this, except that maybe the test is telling\n> us about an actual bug of exactly the kind it's designed to expose.\n\nThat could be, but what would the bug be exactly? It's hard to think\nof a more direct way of setting relminmxid and relfrozenxid than\nupdating pg_class. 
It doesn't seem realistic to suppose that we have a\nbug where setting a column in a system table to an integer value\nsometimes sets it to a slightly larger integer instead. If the values\non the new cluster seemed like they had never been set, or if it\nseemed like they had been set to completely random values, then I'd\nsuspect a bug in the mechanism, but here it seems more believable to\nme to think that we're actually setting the correct values and then\nsomething - maybe autovacuum - bumps them again before we have a\nchance to look at them.\n\nI'm not quite sure how to rule that theory in or out, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Aug 2022 15:23:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On 8/2/22 3:23 PM, Robert Haas wrote:\r\n> On Tue, Aug 2, 2022 at 1:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>> Not sure what to make of this, except that maybe the test is telling\r\n>> us about an actual bug of exactly the kind it's designed to expose.\r\n> \r\n> That could be, but what would the bug be exactly? It's hard to think\r\n> of a more direct way of setting relminmxid and relfrozenxid than\r\n> updating pg_class. It doesn't seem realistic to suppose that we have a\r\n> bug where setting a column in a system table to an integer value\r\n> sometimes sets it to a slightly larger integer instead. 
If the values\r\n> on the new cluster seemed like they had never been set, or if it\r\n> seemed like they had been set to completely random values, then I'd\r\n> suspect a bug in the mechanism, but here it seems more believable to\r\n> me to think that we're actually setting the correct values and then\r\n> something - maybe autovacuum - bumps them again before we have a\r\n> chance to look at them.\r\n\r\nFWIW (and I have not looked deeply at the code), I was thinking it could \r\nbe something along those lines, given 1. the randomness of the \r\nunderlying systems of the impacted farm animals and 2. it was only the \r\nthree mentioned.\r\n\r\n> I'm not quite sure how to rule that theory in or out, though.\r\n\r\nWithout overcomplicating this, are we able to check to see if autovacuum \r\nran during the course of the test?\r\n\r\nJonathan", "msg_date": "Tue, 2 Aug 2022 15:27:49 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 8/2/22 1:12 PM, Tom Lane wrote:\n>> Sadly, we're still not out of the woods. I see three buildfarm\n>> failures in this test since Robert resolved the \"-X\" problem [1][2][3]:\n\n> Looking at the test code, is there anything that could have changed the \n> relfrozenxid or relminxid independently of the test on these systems?\n\nHmmm ... now that you mention it, I see nothing in 002_pg_upgrade.pl\nthat attempts to turn off autovacuum on either the source server or\nthe destination. So one plausible theory is that autovac moved the\nnumbers since we checked.\n\nIf that is the explanation, then it leaves us with few good options.\nI am not in favor of disabling autovacuum in the test: ordinary\nusers are not going to do that while pg_upgrade'ing, so it'd make\nthe test less representative of real-world usage, which seems like\na bad idea. 
We could either drop this particular check again, or\nweaken it to allow new relfrozenxid >= old relfrozenxid, likewise\nrelminmxid.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Aug 2022 15:32:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 8/2/22 3:23 PM, Robert Haas wrote:\n>> I'm not quite sure how to rule that theory in or out, though.\n\n> Without overcomplicating this, are we able to check to see if autovacuum \n> ran during the course of the test?\n\nLooks like we're all thinking along the same lines.\n\ngrassquit shows this at the end of the old server's log,\nimmediately after the query to retrieve the old horizons:\n\n2022-08-01 19:33:41.608 UTC [1487114][postmaster][:0] LOG: received fast shutdown request\n2022-08-01 19:33:41.611 UTC [1487114][postmaster][:0] LOG: aborting any active transactions\n2022-08-01 19:33:41.643 UTC [1487114][postmaster][:0] LOG: background worker \"logical replication launcher\" (PID 1487132) exited with exit code 1\n2022-08-01 19:33:41.643 UTC [1493875][autovacuum worker][5/6398:0] FATAL: terminating autovacuum process due to administrator command\n2022-08-01 19:33:41.932 UTC [1487121][checkpointer][:0] LOG: checkpoint complete: wrote 1568 buffers (9.6%); 0 WAL file(s) added, 0 removed, 33 recycled; write=31.470 s, sync=0.156 s, total=31.711 s; sync files=893, longest=0.002 s, average=0.001 s; distance=33792 kB, estimate=34986 kB\n2022-08-01 19:33:41.933 UTC [1487121][checkpointer][:0] LOG: shutting down\n\nand wrasse shows this:\n\n2022-08-02 06:35:01.974 CEST [5606:6] LOG: received fast shutdown request\n2022-08-02 06:35:01.974 CEST [5606:7] LOG: aborting any active transactions\n2022-08-02 06:35:01.975 CEST [6758:1] FATAL: terminating autovacuum process due to administrator command\n2022-08-02 06:35:01.975 CEST [6758:2] CONTEXT: while vacuuming 
index \"spgist_point_idx\" of relation \"public.spgist_point_tbl\"\n2022-08-02 06:35:01.981 CEST [5606:8] LOG: background worker \"logical replication launcher\" (PID 5612) exited with exit code 1\n2022-08-02 06:35:01.995 CEST [5607:42] LOG: shutting down\n\nWhile not smoking guns, these definitely prove that autovac was active.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Aug 2022 15:39:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On 8/2/22 3:39 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> On 8/2/22 3:23 PM, Robert Haas wrote:\r\n>>> I'm not quite sure how to rule that theory in or out, though.\r\n> \r\n>> Without overcomplicating this, are we able to check to see if autovacuum\r\n>> ran during the course of the test?\r\n> \r\n> Looks like we're all thinking along the same lines.\r\n> \r\n> While not smoking guns, these definitely prove that autovac was active.\r\n\r\n > If that is the explanation, then it leaves us with few good options.\r\n > I am not in favor of disabling autovacuum in the test: ordinary\r\n > users are not going to do that while pg_upgrade'ing, so it'd make\r\n > the test less representative of real-world usage, which seems like\r\n > a bad idea. We could either drop this particular check again, or\r\n > weaken it to allow new relfrozenxid >= old relfrozenxid, likewise\r\n > relminxid.\r\n\r\nThe test does look helpful and it would catch regressions. Loosely \r\nquoting Robert on a different point upthread, we don't want to turn off \r\nthe alarm just because it's spuriously going off.\r\n\r\nI think the weakened check is OK (and possibly mimics the real-world \r\nwhere autovacuum runs), unless you see a major drawback to it?\r\n\r\nJonathan", "msg_date": "Tue, 2 Aug 2022 15:44:40 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 8/2/22 3:39 PM, Tom Lane wrote:\n>>> I am not in favor of disabling autovacuum in the test: ordinary\n>>> users are not going to do that while pg_upgrade'ing, so it'd make\n>>> the test less representative of real-world usage, which seems like\n>>> a bad idea. We could either drop this particular check again, or\n>>> weaken it to allow new relfrozenxid >= old relfrozenxid, likewise\n>>> relminxid.\n\n> The test does look helpful and it would catch regressions. Loosely \n> quoting Robert on a different point upthread, we don't want to turn off \n> the alarm just because it's spuriously going off.\n> I think the weakened check is OK (and possibly mimics the real-world \n> where autovacuum runs), unless you see a major drawback to it?\n\nI also think that \">=\" is a sufficient requirement. It'd be a\nbit painful to test if we had to cope with potential XID wraparound,\nbut we know that these installations haven't been around nearly\nlong enough for that, so a plain \">=\" test ought to be good enough.\n(Replacing the simple \"eq\" code with something that can handle that\ndoesn't look like much fun, though.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Aug 2022 15:51:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On 8/2/22 3:51 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> On 8/2/22 3:39 PM, Tom Lane wrote:\r\n>>>> I am not in favor of disabling autovacuum in the test: ordinary\r\n>>>> users are not going to do that while pg_upgrade'ing, so it'd make\r\n>>>> the test less representative of real-world usage, which seems like\r\n>>>> a bad idea. 
We could either drop this particular check again, or\r\n>>>> weaken it to allow new relfrozenxid >= old relfrozenxid, likewise\r\n>>>> relminxid.\r\n> \r\n>> The test does look helpful and it would catch regressions. Loosely\r\n>> quoting Robert on a different point upthread, we don't want to turn off\r\n>> the alarm just because it's spuriously going off.\r\n>> I think the weakened check is OK (and possibly mimics the real-world\r\n>> where autovacuum runs), unless you see a major drawback to it?\r\n> \r\n> I also think that \">=\" is a sufficient requirement. It'd be a\r\n> bit painful to test if we had to cope with potential XID wraparound,\r\n> but we know that these installations haven't been around nearly\r\n> long enough for that, so a plain \">=\" test ought to be good enough.\r\n> (Replacing the simple \"eq\" code with something that can handle that\r\n> doesn't look like much fun, though.)\r\n\r\n...if these systems are hitting XID wraparound, we have another issue to \r\nworry about.\r\n\r\nI started modifying the test to support this behavior, but thought that \r\nbecause 1. we want to ensure the OID is still equal and 2. in the \r\nexamples you showed, both relfrozenxid or relminxid could increment, we \r\nmay want to have the individual checks on each column.\r\n\r\nI may be able to conjure something up that does the above, but it's been \r\na minute since I wrote anything in Perl.\r\n\r\nJonathan", "msg_date": "Tue, 2 Aug 2022 16:20:25 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On 8/2/22 4:20 PM, Jonathan S. Katz wrote:\r\n> On 8/2/22 3:51 PM, Tom Lane wrote:\r\n>> \"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\r\n>>> On 8/2/22 3:39 PM, Tom Lane wrote:\r\n>>>>> I am not in favor of disabling autovacuum in the test: ordinary\r\n>>>>> users are not going to do that while pg_upgrade'ing, so it'd make\r\n>>>>> the test less representative of real-world usage, which seems like\r\n>>>>> a bad idea.  We could either drop this particular check again, or\r\n>>>>> weaken it to allow new relfrozenxid >= old relfrozenxid, likewise\r\n>>>>> relminxid.\r\n>>\r\n>>> The test does look helpful and it would catch regressions. Loosely\r\n>>> quoting Robert on a different point upthread, we don't want to turn off\r\n>>> the alarm just because it's spuriously going off.\r\n>>> I think the weakened check is OK (and possibly mimics the real-world\r\n>>> where autovacuum runs), unless you see a major drawback to it?\r\n>>\r\n>> I also think that \">=\" is a sufficient requirement.  It'd be a\r\n>> bit painful to test if we had to cope with potential XID wraparound,\r\n>> but we know that these installations haven't been around nearly\r\n>> long enough for that, so a plain \">=\" test ought to be good enough.\r\n>> (Replacing the simple \"eq\" code with something that can handle that\r\n>> doesn't look like much fun, though.)\r\n> \r\n> ...if these systems are hitting XID wraparound, we have another issue to \r\n> worry about.\r\n> \r\n> I started modifying the test to support this behavior, but thought that \r\n> because 1. we want to ensure the OID is still equal and 2. in the \r\n> examples you showed, both relfrozenxid or relminxid could increment, we \r\n> may want to have the individual checks on each column.\r\n> \r\n> I may be able to conjure something up that does the above, but it's been \r\n> a minute since I wrote anything in Perl.\r\n\r\nPlease see attached patch that does the above. 
Tests pass on my local \r\nenvironment (though I did not trigger autovacuum).\r\n\r\nJonathan", "msg_date": "Tue, 2 Aug 2022 16:44:52 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Aug 2, 2022 at 3:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The test does look helpful and it would catch regressions. Loosely\n> > quoting Robert on a different point upthread, we don't want to turn off\n> > the alarm just because it's spuriously going off.\n> > I think the weakened check is OK (and possibly mimics the real-world\n> > where autovacuum runs), unless you see a major drawback to it?\n>\n> I also think that \">=\" is a sufficient requirement. It'd be a\n> bit painful to test if we had to cope with potential XID wraparound,\n> but we know that these installations haven't been around nearly\n> long enough for that, so a plain \">=\" test ought to be good enough.\n> (Replacing the simple \"eq\" code with something that can handle that\n> doesn't look like much fun, though.)\n\nI don't really like this approach. Imagine that the code got broken in\nsuch a way that relfrozenxid and relminmxid were set to a value chosen\nat random - say, the contents of 4 bytes of unallocated memory that\ncontained random garbage. Well, right now, the chances that this would\ncause a test failure are nearly 100%. With this change, they'd be\nnearly 0%.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Aug 2022 09:59:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Aug 2, 2022 at 3:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I also think that \">=\" is a sufficient requirement.\n\n> I don't really like this approach. 
Imagine that the code got broken in\n> such a way that relfrozenxid and relminmxid were set to a value chosen\n> at random - say, the contents of 4 bytes of unallocated memory that\n> contained random garbage. Well, right now, the chances that this would\n> cause a test failure are nearly 100%. With this change, they'd be\n> nearly 0%.\n\nIf you have a different solution that you can implement by, say,\ntomorrow, then go for it. But I want to see some fix in there\nwithin about 24 hours, because 15beta3 wraps on Monday and we\nwill need at least a few days to see if the buildfarm is actually\nstable with whatever solution is applied.\n\nA possible compromise is to allow new values that are between\nold value and old-value-plus-a-few-dozen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Aug 2022 10:13:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "\n> On Aug 3, 2022, at 10:14 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>>> On Tue, Aug 2, 2022 at 3:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I also think that \">=\" is a sufficient requirement.\n> \n>> I don't really like this approach. Imagine that the code got broken in\n>> such a way that relfrozenxid and relminmxid were set to a value chosen\n>> at random - say, the contents of 4 bytes of unallocated memory that\n>> contained random garbage. Well, right now, the chances that this would\n>> cause a test failure are nearly 100%. With this change, they'd be\n>> nearly 0%.\n> \n> If you have a different solution that you can implement by, say,\n> tomorrow, then go for it. 
But I want to see some fix in there\n> within about 24 hours, because 15beta3 wraps on Monday and we\n> will need at least a few days to see if the buildfarm is actually\n> stable with whatever solution is applied.\n\nYeah, I would argue that the current proposal\nguards against the false positives as they currently stand.\n\nI do think Robert raises a fair point, but I wonder\nif another test would catch that? I don’t want to\nsay “this would never happen” because, well,\nit could happen. But AIUI this would probably\nmanifest itself in other places too?\n\n> A possible compromise is to allow new values that are between\n> old value and old-value-plus-a-few-dozen.\n\nWell, that’s kind of deterministic :-) I’m OK\nwith that tweak, where “OK” means not thrilled,\nbut I don’t see a better way to get more granular\ndetails (at least through my phone searches).\n\nI can probably have a tweak for this in a couple\nof hours if and when I’m on plane wifi.\n\nJonathan \n\n\n", "msg_date": "Wed, 3 Aug 2022 10:53:06 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Wed, Aug 3, 2022 at 10:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If you have a different solution that you can implement by, say,\n> tomorrow, then go for it. 
But I want to see some fix in there\n> within about 24 hours, because 15beta3 wraps on Monday and we\n> will need at least a few days to see if the buildfarm is actually\n> stable with whatever solution is applied.\n\nI doubt that I can come up with something that quickly, so I guess we\nneed some stopgap for now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Aug 2022 12:13:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Aug 2, 2022 at 12:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmmm ... now that you mention it, I see nothing in 002_pg_upgrade.pl\n> that attempts to turn off autovacuum on either the source server or\n> the destination. So one plausible theory is that autovac moved the\n> numbers since we checked.\n\nIt's very easy to believe that my work in commit 0b018fab could make\nthat happen, which is only a few months old. It's now completely\nroutine for non-aggressive autovacuums to advance relfrozenxid by at\nleast a small amount.\n\nFor example, when autovacuum runs against either the tellers table or\nthe branches table during a pgbench run, it will now advance\nrelfrozenxid, every single time. And to a very recent value. This will\nhappen in spite of the fact that no freezing ever takes place -- it's\njust a consequence of the oldest extant XID consistently being quite\nyoung each time, due to workload characteristics.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 3 Aug 2022 11:52:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Wed, Aug 3, 2022 at 6:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't really like this approach. 
Imagine that the code got broken in\n> such a way that relfrozenxid and relminmxid were set to a value chosen\n> at random - say, the contents of 4 bytes of unallocated memory that\n> contained random garbage. Well, right now, the chances that this would\n> cause a test failure are nearly 100%. With this change, they'd be\n> nearly 0%.\n\nIf that kind of speculative bug existed, and somehow triggered before\nthe concurrent autovacuum ran (which seems very likely to be the\nsource of the test flappiness), then it would still be caught, most\nlikely. VACUUM itself has the following defenses:\n\n* The defensive \"can't happen\" errors added to\nheap_prepare_freeze_tuple() and related freezing routines by commit\n699bf7d0 in 2017, as hardening following the \"freeze the dead\" bug.\nThat'll catch XIDs that are before the relfrozenxid at the start of\nthe VACUUM (ditto for MXIDs/relminmxid).\n\n* The assertion added in my recent commit 0b018fab, which verifies\nthat we're about to set relfrozenxid to something sane.\n\n* VACUUM now warns when it sees a *previous* relfrozenxid that's\napparently \"in the future\", following recent commit e83ebfe6. This\nproblem scenario is associated with several historic bugs in\npg_upgrade, where for one reason or another it failed to carry forward\ncorrect relfrozenxid and/or relminmxid values for a table (see the\ncommit message for references to those old pg_upgrade bugs).\n\nIt might make sense to run a manual VACUUM right at the end of the\ntest, so that you reliably get this kind of coverage, even without\nautovacuum.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 3 Aug 2022 12:41:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Hi,\n\nOn 2022-08-03 09:59:40 -0400, Robert Haas wrote:\n> On Tue, Aug 2, 2022 at 3:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > The test does look helpful and it would catch regressions. 
Loosely\n> > > quoting Robert on a different point upthread, we don't want to turn off\n> > > the alarm just because it's spuriously going off.\n> > > I think the weakened check is OK (and possibly mimics the real-world\n> > > where autovacuum runs), unless you see a major drawback to it?\n> >\n> > I also think that \">=\" is a sufficient requirement. It'd be a\n> > bit painful to test if we had to cope with potential XID wraparound,\n> > but we know that these installations haven't been around nearly\n> > long enough for that, so a plain \">=\" test ought to be good enough.\n> > (Replacing the simple \"eq\" code with something that can handle that\n> > doesn't look like much fun, though.)\n> \n> I don't really like this approach. Imagine that the code got broken in\n> such a way that relfrozenxid and relminmxid were set to a value chosen\n> at random - say, the contents of 4 bytes of unallocated memory that\n> contained random garbage. Well, right now, the chances that this would\n> cause a test failure are nearly 100%. With this change, they'd be\n> nearly 0%.\n\nCan't that pretty easily be addressed by subsequently querying txid_current(),\nand checking that the value isn't newer than that?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Aug 2022 13:20:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Wed, Aug 3, 2022 at 1:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't really like this approach. Imagine that the code got broken in\n> > such a way that relfrozenxid and relminmxid were set to a value chosen\n> > at random - say, the contents of 4 bytes of unallocated memory that\n> > contained random garbage. Well, right now, the chances that this would\n> > cause a test failure are nearly 100%. 
With this change, they'd be\n> > nearly 0%.\n>\n> Can't that pretty easily be addressed by subsequently querying txid_current(),\n> and checking that the value isn't newer than that?\n\nIt couldn't hurt to do that as well, in passing (at the same time as\ntesting that newrelfrozenxid >= oldrelfrozenxid directly). But\ndeliberately running VACUUM afterwards seems like a good idea. We\nreally ought to expect VACUUM to catch cases where\nrelfrozenxid/relminmxid is faulty, at least in cases where it can be\nproven wrong by noticing some kind of inconsistency.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 3 Aug 2022 13:29:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> It couldn't hurt to do that as well, in passing (at the same time as\n> testing that newrelfrozenxid >= oldrelfrozenxid directly). But\n> deliberately running VACUUM afterwards seems like a good idea. We\n> really ought to expect VACUUM to catch cases where\n> relfrozenxid/relminmxid is faulty, at least in cases where it can be\n> proven wrong by noticing some kind of inconsistency.\n\nThat doesn't seem like it'd be all that thorough: we expect VACUUM\nto skip pages whenever possible. I'm also a bit concerned about\nthe expense, though admittedly this test is ridiculously expensive\nalready.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Aug 2022 16:34:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Wed, Aug 3, 2022 at 4:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't really like this approach. Imagine that the code got broken in\n> > such a way that relfrozenxid and relminmxid were set to a value chosen\n> > at random - say, the contents of 4 bytes of unallocated memory that\n> > contained random garbage. 
Well, right now, the chances that this would\n> > cause a test failure are nearly 100%. With this change, they'd be\n> > nearly 0%.\n>\n> Can't that pretty easily be addressed by subsequently querying txid_current(),\n> and checking that the value isn't newer than that?\n\nHmm, maybe. The old cluster shouldn't have wrapped around ever, since\nwe just created it. So the value in the new cluster should be >= that\nvalue and <= the result of txid_curent() ignoring wraparound.\n\nOr we could disable autovacuum on the new cluster, which I think is a\nbetter solution. I like it when things match exactly; it makes me feel\nthat the universe is well-ordered.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Aug 2022 16:41:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Wed, Aug 3, 2022 at 1:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That doesn't seem like it'd be all that thorough: we expect VACUUM\n> to skip pages whenever possible. I'm also a bit concerned about\n> the expense, though admittedly this test is ridiculously expensive\n> already.\n\nI bet the SKIP_PAGES_THRESHOLD stuff will be enough to make VACUUM\nvisit every heap page in practice for a test case like this. That is\nall it takes to be able to safely advance relfrozenxid to whatever the\noldest extant XID happened to be. However, I'm no fan of the\nSKIP_PAGES_THRESHOLD behavior, and already have plans to get rid of it\n-- so I wouldn't rely on that continuing to be true forever.\n\nIt's probably not really necessary to have that kind of coverage in\nthis particular test case. VACUUM will complain about weird\nrelfrozenxid values in a large variety of contexts, even without\nassertions enabled. 
Mostly I was just saying: if we really do need\ntest coverage of relfrozenxid in this context, then VACUUM is probably\nthe way to go.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 3 Aug 2022 13:42:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Or we could disable autovacuum on the new cluster, which I think is a\n> better solution. I like it when things match exactly; it makes me feel\n> that the universe is well-ordered.\n\nAgain, this seems to me to be breaking the test's real-world applicability\nfor a (false?) sense of stability.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Aug 2022 16:46:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On 2022-08-03 16:46:57 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Or we could disable autovacuum on the new cluster, which I think is a\n> > better solution. I like it when things match exactly; it makes me feel\n> > that the universe is well-ordered.\n> \n> Again, this seems to me to be breaking the test's real-world applicability\n> for a (false?) sense of stability.\n\nYea, that doesn't seem like an improvement. I e.g. found the issues around\nrelfilenode reuse in 15 due to autovacuum running in the pg_upgrade target\ncluster. And I recall other bugs in the area...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Aug 2022 14:01:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Wed, Aug 3, 2022 at 1:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Again, this seems to me to be breaking the test's real-world applicability\n> for a (false?) 
sense of stability.\n\nI agree.\n\nA lot of the VACUUM test flappiness issues we've had to deal with in\nthe past now seem like problems with VACUUM itself, the test's design,\nor both. For example, why should we get a totally different\npg_class.reltuples because we couldn't get a cleanup lock on some\npage? Why not just make sure to give the same answer either way,\nwhich happens to be the most useful behavior to the user? That way\nthe test isn't just targeting implementation details.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 3 Aug 2022 14:08:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On 8/3/22 2:08 PM, Peter Geoghegan wrote:\n> On Wed, Aug 3, 2022 at 1:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Again, this seems to me to be breaking the test's real-world applicability\n>> for a (false?) sense of stability.\n> \n> I agree.\n> \n> A lot of the VACUUM test flappiness issues we've had to deal with in\n> the past now seem like problems with VACUUM itself, the test's design,\n> or both. For example, why should we get a totally different\n> pg_class.reltuples because we couldn't get a cleanup lock on some\n> page? Why not just make sure to give the same answer either way,\n> which happens to be the most useful behavior to the user? That way\n> the test isn't just targeting implementation details.\n\nAfter catching up (and reviewing approaches that could work while on \npoor wifi), it does make me wonder if we can have a useful test ready \nbefore beta 3.\n\nI did rule out wanting to do the \"xid + $X\" check after reviewing some \nof the output. I think that both $X could end up varying, and it really \nfeels like a bandaid.\n\nAndres suggested upthread using \"txid_current()\" -- for the comparison, \nthat's one thing I looked at. 
Would any of the XID info from \n\"pg_control_checkpoint()\" also serve for this test?\n\nIf yes to the above, I should be able to modify this fairly quickly.\n\nJonathan\n\n\n", "msg_date": "Wed, 3 Aug 2022 15:55:05 -0700", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> I did rule out wanting to do the \"xid + $X\" check after reviewing some \n> of the output. I think that both $X could end up varying, and it really \n> feels like a bandaid.\n\nIt is that. I wouldn't feel comfortable with $X less than 100 or so,\nwhich is probably sloppy enough to draw Robert's ire. Still, realizing\nthat what we want right now is a band-aid for 15beta3, I don't think\nit's an unreasonable short-term option.\n\n> Andres suggested upthread using \"txid_current()\" -- for the comparison, \n> that's one thing I looked at. Would any of the XID info from \n> \"pg_control_checkpoint()\" also serve for this test?\n\nI like the idea of txid_current(), but we have no comparable\nfunction for mxid do we? While you could get both numbers from\npg_control_checkpoint(), I doubt that's sufficiently up-to-date.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Aug 2022 19:19:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On 8/3/22 4:19 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> I did rule out wanting to do the \"xid + $X\" check after reviewing some\n>> of the output. I think that both $X could end up varying, and it really\n>> feels like a bandaid.\n> \n> It is that. I wouldn't feel comfortable with $X less than 100 or so,\n> which is probably sloppy enough to draw Robert's ire. 
Still, realizing\n> that what we want right now is a band-aid for 15beta3, I don't think\n> it's an unreasonable short-term option.\n\nAttached is the \"band-aid / sloppy\" version of the patch. Given from the \ntest examples I kept seeing deltas over 100 for relfrozenxid, I chose \n1000. The mxid delta was less, but I kept it at 1000 for consistency \n(and because I hope this test is short lived in this state), but can be \ntalked into otherwise.\n\n>> Andres suggested upthread using \"txid_current()\" -- for the comparison,\n>> that's one thing I looked at. Would any of the XID info from\n>> \"pg_control_checkpoint()\" also serve for this test?\n> \n> I like the idea of txid_current(), but we have no comparable\n> function for mxid do we? While you could get both numbers from\n> pg_control_checkpoint(), I doubt that's sufficiently up-to-date.\n\n...unless we force a checkpoint in the test?\n\nJonathan", "msg_date": "Thu, 4 Aug 2022 07:02:28 -0700", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Wed, Aug 3, 2022 at 7:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > I did rule out wanting to do the \"xid + $X\" check after reviewing some\n> > of the output. I think that both $X could end up varying, and it really\n> > feels like a bandaid.\n>\n> It is that. I wouldn't feel comfortable with $X less than 100 or so,\n> which is probably sloppy enough to draw Robert's ire. 
Still, realizing\n> that what we want right now is a band-aid for 15beta3, I don't think\n> it's an unreasonable short-term option.\n\n100 << 2^32, so it's not terrible, but I'm honestly coming around to\nthe view that we ought to just nuke this test case.\n\n From my point of view, the assertion that disabling autovacuum during\nthis test case would make the test case useless seems to be incorrect.\nThe original purpose of the test was to make sure that the pre-upgrade\nschema matched the post-upgrade schema. If having autovacuum running\nor not running affects that, we have a serious problem, but this test\ncase isn't especially likely to find it, because whether autovacuum\nruns or not during the brief window where the test is running is\ntotally unpredictable. Furthermore, if we do have such a problem, it\nwould probably indicate that vacuum is using the wrong horizons to\nprune or test the visibility of the tuples. To find that out, we might\nwant to compare values upon which the behavior of vacuum might depend,\nlike relfrozenxid. But to do that, we have to disable autovacuum, so\nthat the value can't change under us. From my point of view, that's\nmaking test coverage better, not worse, because any bugs in this area\nthat can be found without explicit testing of relevant horizons are\ndependent on low-probability race conditions materializing in the\nbuildfarm. If we disable autovacuum and then compare relfrozenxid and\nwhatever else we care about explicitly, we can find bugs in that\ncategory reliably.\n\nHowever, if people don't accept that argument, then this new test case\nis kind of silly. It's not the worst idea in the world to use a\nthreshold of 100 XIDs or something, but without disabling autovacuum,\nwe're basically comparing two things that can't be expected to be\nequal, so we test and see if they're approximately equal and then call\nthat good enough. 
I don't know that I believe we'll ever find a bug\nthat way, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Aug 2022 10:08:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 10:02 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Attached is the \"band-aid / sloppy\" version of the patch. Given from the\n> test examples I kept seeing deltas over 100 for relfrozenxid, I chose\n> 1000. The mxid delta was less, but I kept it at 1000 for consistency\n> (and because I hope this test is short lived in this state), but can be\n> talked into otherwise.\n\nISTM that you'd need to loop over the rows and do this for each row.\nOtherwise I think you're just comparing results for the first relation\nand ignoring all the rest.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 10:09:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 8/3/22 4:19 PM, Tom Lane wrote:\n>> I like the idea of txid_current(), but we have no comparable\n>> function for mxid do we? While you could get both numbers from\n>> pg_control_checkpoint(), I doubt that's sufficiently up-to-date.\n\n> ...unless we force a checkpoint in the test?\n\nHmm ... maybe if you take a snapshot and hold that open while\nforcing the checkpoint and doing the subsequent checks. That\nseems messy though. 
Also, while that should serve to hold back\nglobal xmin, I'm not at all clear on whether that has a similar\neffect on minmxid.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 10:16:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> 100 << 2^32, so it's not terrible, but I'm honestly coming around to\n> the view that we ought to just nuke this test case.\n\nI'd hesitated to suggest that, but I think that's a fine solution.\nEspecially since we can always put it back in later if we think\nof a more robust way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 10:26:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 10:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > 100 << 2^32, so it's not terrible, but I'm honestly coming around to\n> > the view that we ought to just nuke this test case.\n>\n> I'd hesitated to suggest that, but I think that's a fine solution.\n> Especially since we can always put it back in later if we think\n> of a more robust way.\n\nIMHO it's 100% clear how to make it robust. If you want to check that\ntwo values are the same, you can't let one of them be overwritten by\nan unrelated event in the middle of the check. There are many specific\nthings we could do here, a few of which I proposed in my previous\nemail, but they all boil down to \"don't let autovacuum screw up the\nresults\".\n\nBut if you don't want to do that, and you also don't want to have\nrandom failures, the only alternatives are weakening the check and\nremoving the test. 
It's kind of hard to say which is better, but I'm\ninclined to think that if we just weaken the test we're going to think\nwe've got coverage for this kind of problem when we really don't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 12:43:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Hi,\n\nOn 2022-08-04 12:43:49 -0400, Robert Haas wrote:\n> On Thu, Aug 4, 2022 at 10:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > 100 << 2^32, so it's not terrible, but I'm honestly coming around to\n> > > the view that we ought to just nuke this test case.\n> >\n> > I'd hesitated to suggest that, but I think that's a fine solution.\n> > Especially since we can always put it back in later if we think\n> > of a more robust way.\n> \n> IMHO it's 100% clear how to make it robust. If you want to check that\n> two values are the same, you can't let one of them be overwritten by\n> an unrelated event in the middle of the check. There are many specific\n> things we could do here, a few of which I proposed in my previous\n> email, but they all boil down to \"don't let autovacuum screw up the\n> results\".\n>\n> But if you don't want to do that, and you also don't want to have\n> random failures, the only alternatives are weakening the check and\n> removing the test. It's kind of hard to say which is better, but I'm\n> inclined to think that if we just weaken the test we're going to think\n> we've got coverage for this kind of problem when we really don't.\n\nWhy you think it's better to not have the test than to have a very limited\namount of fuzziness (by using the next xid as an upper limit). 
What's the bug\nthat will reliably pass the nextxid fuzzy comparison, but not an exact\ncomparison?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Aug 2022 09:59:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> IMHO it's 100% clear how to make it robust. If you want to check that\n> two values are the same, you can't let one of them be overwritten by\n> an unrelated event in the middle of the check. There are many specific\n> things we could do here, a few of which I proposed in my previous\n> email, but they all boil down to \"don't let autovacuum screw up the\n> results\".\n\nIt doesn't really matter how robust a test case is, if it isn't testing\nthe thing you need to have tested. So I remain unwilling to disable\nautovac in a way that won't match real-world usage.\n\nNote that the patch you proposed at [1] will not fix anything.\nIt turns off autovac in the new node, but the buildfarm failures\nwe've seen appear to be due to autovac running on the old node.\n(I believe that autovac in the new node is *also* a hazard, but\nit seems to be a lot less of one, presumably because of timing\nconsiderations.) To make it work, we'd have to shut off autovac\nin the old node before starting pg_upgrade, and that would make it\nunacceptably (IMHO) different from what real users will do.\n\nConceivably, we could move all of this processing into pg_upgrade\nitself --- autovac disable/re-enable and capturing of the horizon\ndata --- and that would address my complaint. 
I don't really want\nto go there though, especially when in the final analysis IT IS\nNOT A BUG if a rel's horizons advance a bit during pg_upgrade.\nIt's only a bug if they become inconsistent with the rel's data,\nwhich is not what this test is testing for.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoZkBcMi%2BNikxfc54dgkWj41Q%3DZ4nuyHpheTcxA-qfS5Qg%40mail.gmail.com\n\n\n", "msg_date": "Thu, 04 Aug 2022 13:49:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 9:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But if you don't want to do that, and you also don't want to have\n> random failures, the only alternatives are weakening the check and\n> removing the test. It's kind of hard to say which is better, but I'm\n> inclined to think that if we just weaken the test we're going to think\n> we've got coverage for this kind of problem when we really don't.\n\nPerhaps amcheck's verify_heapam() function can be used here. What\ncould be better than exhaustively verifying that the relfrozenxid (and\nrelminmxid) invariants hold for every single tuple in the table? Those\nare the exact conditions that we care about, as far as\nrelfrozenxid/relminmxid goes.\n\nMy sense is that that has a much better chance of detecting a real bug\nat some point. This approach is arguably an example of property-based\ntesting.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 4 Aug 2022 11:04:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Perhaps amcheck's verify_heapam() function can be used here. 
What\n> could be better than exhaustively verifying that the relfrozenxid (and\n> relminmxid) invariants hold for every single tuple in the table?\n\nHow much will that add to the test's runtime? I could get behind this\nidea if it's not exorbitantly expensive.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 14:07:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> How much will that add to the test's runtime? I could get behind this\n> idea if it's not exorbitantly expensive.\n\nI'm not sure offhand, but I suspect it wouldn't be too bad.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 4 Aug 2022 11:08:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 1:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Note that the patch you proposed at [1] will not fix anything.\n> It turns off autovac in the new node, but the buildfarm failures\n> we've seen appear to be due to autovac running on the old node.\n> (I believe that autovac in the new node is *also* a hazard, but\n> it seems to be a lot less of one, presumably because of timing\n> considerations.) To make it work, we'd have to shut off autovac\n> in the old node before starting pg_upgrade,\n\nYeah, that's a fair point.\n\n> and that would make it\n> unacceptably (IMHO) different from what real users will do.\n\nI don't agree with that, but as you say, it is a matter of opinion. 
In\nany case, what exactly do you want to do now?\n\nJonathan Katz has proposed a patch to do the fuzzy comparison which I\nbelieve to be incorrect because I think it compares, at most, the\nhorizons for one table in the database.\n\nI could go work on a better version of that, or he could, or you\ncould, but it seems like we're running out of time awfully quick here,\ngiven that you wanted to have this resolved today and it's almost the\nend of today.\n\nI think the most practical alternative is to put this file back to the\nway it was before I started tinkering with it, and revisit this issue\nafter the release. If you want to do something else, that's fine, but\nI'm not going to be available to work on this issue over the weekend,\nso if you want to do something else, you or someone else is going to\nhave to take responsibility for whatever further stabilization that\nother approach may require between now and the release.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 15:02:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think the most practical alternative is to put this file back to the\n> way it was before I started tinkering with it, and revisit this issue\n> after the release.\n\nYeah, that seems like the right thing. 
We are running too low on time\nto have any confidence that a modified version of the test will be\nreliable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 15:10:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 12:59 PM Andres Freund <andres@anarazel.de> wrote:\n> Why you think it's better to not have the test than to have a very limited\n> amount of fuzziness (by using the next xid as an upper limit). What's the bug\n> that will reliably pass the nextxid fuzzy comparison, but not an exact\n> comparison?\n\nI don't know. I mean, I guess one possibility is that the nextXid\nvalue could be wrong too, because I doubt we have any separate test\nthat would catch that. But more generally, I don't have a lot of\nconfidence in fuzzy tests. It's too easy for things to look like\nthey're working when they really aren't.\n\nLet's say that the value in the old cluster was 100 and the nextXid in\nthe new cluster is 1000. Well, it's not like every value between 100\nand 1000 is OK. The overwhelming majority of those values could never\nbe produced, neither from the old cluster nor from any subsequent\nvacuum. Given that the old cluster is suffering no new write\ntransactions, there's probably exactly two values that are legal: one\nbeing the value from the old cluster, which we know, and the other\nbeing whatever a vacuum of that table would produce, which we don't\nknow, although we do know that it's somewhere in that range. Let's\nflip the question on its head: why should some hypothetical future bug\nthat we have in this area produce a value outside that range?\n\nIf it's a failure to set the value at all, or if it generates a value\nat random, we'd likely still catch it. And those are pretty likely, so\nmaybe the value of such a test is not zero. 
On the other hand, subtle\nbreakage might be more likely to survive developer testing than\ncomplete breakage.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 15:15:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 12:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Given that the old cluster is suffering no new write\n> transactions, there's probably exactly two values that are legal: one\n> being the value from the old cluster, which we know, and the other\n> being whatever a vacuum of that table would produce, which we don't\n> know, although we do know that it's somewhere in that range.\n\nWhat about autoanalyze?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 4 Aug 2022 12:22:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 3:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Aug 4, 2022 at 12:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Given that the old cluster is suffering no new write\n> > transactions, there's probably exactly two values that are legal: one\n> > being the value from the old cluster, which we know, and the other\n> > being whatever a vacuum of that table would produce, which we don't\n> > know, although we do know that it's somewhere in that range.\n>\n> What about autoanalyze?\n\nWhat about it?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 15:31:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I think the most practical alternative is to 
put this file back to the\n> > way it was before I started tinkering with it, and revisit this issue\n> > after the release.\n>\n> Yeah, that seems like the right thing. We are running too low on time\n> to have any confidence that a modified version of the test will be\n> reliable.\n\nDone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 15:32:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Thu, Aug 4, 2022 at 12:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > What about autoanalyze?\n>\n> What about it?\n\nIt has a tendency to consume an XID, here or there, quite\nunpredictably. I've noticed that this often involves an analyze of\npg_statistic. Have you accounted for that?\n\nYou said upthread that you don't like \"fuzzy\" tests, because it's too\neasy for things to look like they're working when they really aren't.\nI suppose that there may be some truth to that, but ISTM that there is\nalso a lot to be said for a test that can catch failures that weren't\nspecifically anticipated. Users won't be running pg_upgrade with\nautovacuum disabled. And so ISTM that just testing that relfrozenxid\nhas been carried forward is more precise about one particular detail\n(more precise than alternative approaches to testing), but less\nprecise about the thing that we actually care about.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 4 Aug 2022 12:52:34 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Tue, Aug 2, 2022 at 03:32:05PM -0400, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > On 8/2/22 1:12 PM, Tom Lane wrote:\n> >> Sadly, we're still not out of the woods. 
I see three buildfarm\n> >> failures in this test since Robert resolved the \"-X\" problem [1][2][3]:\n> \n> > Looking at the test code, is there anything that could have changed the \n> > relfrozenxid or relminxid independently of the test on these systems?\n> \n> Hmmm ... now that you mention it, I see nothing in 002_pg_upgrade.pl\n> that attempts to turn off autovacuum on either the source server or\n> the destination. So one plausible theory is that autovac moved the\n> numbers since we checked.\n\nUh, pg_upgrade assumes autovacuum is not running, and tries to enforce\nthis:\n\n start_postmaster()\n ...\n /*\n * Use -b to disable autovacuum.\n *\n * Turn off durability requirements to improve object creation speed, and\n * we only modify the new cluster, so only use it there. If there is a\n * crash, the new cluster has to be recreated anyway. fsync=off is a big\n * win on ext4.\n *\n * Force vacuum_defer_cleanup_age to 0 on the new cluster, so that\n * vacuumdb --freeze actually freezes the tuples.\n */\n snprintf(cmd, sizeof(cmd),\n \"\\\"%s/pg_ctl\\\" -w -l \\\"%s/%s\\\" -D \\\"%s\\\" -o \\\"-p %d -b%s %s%s\\\" start\",\n cluster->bindir,\n log_opts.logdir,\n SERVER_LOG_FILE, cluster->pgconfig, cluster->port,\n (cluster == &new_cluster) ?\n \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0\" : \"\",\n cluster->pgopts ? cluster->pgopts : \"\", socket_string);\n\nPerhaps the test script should do something similar, or this method\ndoesn't work anymore.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Mon, 8 Aug 2022 20:59:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n>> Hmmm ... 
now that you mention it, I see nothing in 002_pg_upgrade.pl\n>> that attempts to turn off autovacuum on either the source server or\n>> the destination. So one plausible theory is that autovac moved the\n>> numbers since we checked.\n\n> Uh, pg_upgrade assumes autovacuum is not running, and tries to enforce\n> this:\n\nThe problems come from autovac running before or after pg_upgrade.\n\n> Perhaps the test script should do something similar,\n\nI'm not on board with that, for the reasons I gave upthread.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Aug 2022 21:51:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "On Mon, Aug 8, 2022 at 09:51:46PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> >> Hmmm ... now that you mention it, I see nothing in 002_pg_upgrade.pl\n> >> that attempts to turn off autovacuum on either the source server or\n> >> the destination. So one plausible theory is that autovac moved the\n> >> numbers since we checked.\n> \n> > Uh, pg_upgrade assumes autovacuum is not running, and tries to enforce\n> > this:\n> \n> The problems come from autovac running before or after pg_upgrade.\n> \n> > Perhaps the test script should do something similar,\n> \n> I'm not on board with that, for the reasons I gave upthread.\n\nUh, I assume it is this paragraph:\n\n> If that is the explanation, then it leaves us with few good options.\n> I am not in favor of disabling autovacuum in the test: ordinary\n> users are not going to do that while pg_upgrade'ing, so it'd make\n> the test less representative of real-world usage, which seems like\n> a bad idea. We could either drop this particular check again, or\n> weaken it to allow new relfrozenxid >= old relfrozenxid, likewise\n> relminxid.\n\nI thought the test was setting up a configuration that would never be\nused by normal servers. 
Is that false?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Mon, 8 Aug 2022 22:53:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I thought the test was setting up a configuration that would never be\n> used by normal servers. Is that false?\n\n*If* we made it disable autovac before starting pg_upgrade,\nthen that would be a process not used by normal users.\nI don't care whether pg_upgrade disables autovac during its\nrun; that's not what's at issue here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Aug 2022 23:07:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15b2: large objects lost on upgrade" } ]
[ { "msg_contents": "Hi,\n\n(sorry for sending this twice to you Noah, forgot -hackers the first time\nround)\n\nWe've had a bunch of changes to manually deal with our alignment code not\nunderstanding AIX alignment.\n\ncommit f3b421da5f4addc95812b9db05a24972b8fd9739\nAuthor: Peter Eisentraut <peter_e@gmx.net>\nDate: 2016-12-21 12:00:00 -0500\n\n Reorder pg_sequence columns to avoid alignment issue\n\ncommit 79b716cfb7a1be2a61ebb4418099db1258f35e30\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2022-04-07 09:39:25 +0530\n\n Reorder subskiplsn in pg_subscription to avoid alignment issues.\n\n\nA good explanation of the problem is in https://postgr.es/m/20220402081346.GD3719101%40rfd.leadboat.com\n\n\nIt strikes me as a remarkably bad idea to manually try to maintain the correct\nalignment. Even with the tests added it's still quite manual and requires\ncontorted struct layouts (see e.g. [1]).\n\nI think we should either teach our system the correct alignment rules or we\nshould drop AIX support.\n\n\nIf we decide we want to continue supporting AIX we should bite the bullet and\nadd a 64bit-int TYPALIGN_*. It might be worth translating that to bytes when\nbuilding tupledescs, so we don't need more branches (reducing them compared to\ntoday).\n\n\nPersonally I think we should just drop AIX. The amount of effort to keep it\nworking is substantial due to being quite different from other unices ([2]), the OS is\nvery outdated, the whole ecosystem is barely on life support ([3]). And all of that\nfor very little real world use.\n\nAfaics we don't have access to an up2date AIX system. Some of us have access to\n7.2 via the gcc compile farm, but not 7.3. 
Most other niche-y operating\nsystems we can start in a VM, but I've yet to see a legal and affordable way\nto do that with AIX.\n\n\nI think Noah has done quite a heroic effort at keeping the AIX animals in a\nkind-of-healthy state, but without more widespread access and more widespread\nusage it seems like a doomed effort.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/CAFiTN-uiAngcW50Trwa94F1EWY2BxEx%2BB38QSyX3DtV3dzEGhA%40mail.gmail.com\n\n[2] linking etc is handled entirely different, so there's a fair bit of\n dedicated AIX code around the buildsystem - a lot of it vestigial stuff,\n see references to aix3.2.5 etc.\n\n[3] 7.2 was released in 2015-10-05, 7.3 in 2021-12-10, the set of changes is\n pretty darn small for that timeframe\n https://www.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS221-328&appname=USN\n\n Bull / Atos stopped their AIX work in 2022-03-01 - unfortunately they\n didn't even keep the announcement of that online.\n https://www.linkedin.com/pulse/said-say-bull-closing-down-aix-open-source-platform-michaelis\n https://github.com/power-devops/bullfreeware\n\n\n", "msg_date": "Sat, 2 Jul 2022 11:33:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "AIX support - alignment issues" }, { "msg_contents": "On Sat, Jul 2, 2022 at 11:34 AM Andres Freund <andres@anarazel.de> wrote:\n> Personally I think we should just drop AIX. The amount of effort to keep it\n> working is substantial due to being quite different from other unices ([2]), the is\n> very outdated, the whole ecosystem is barely on lifesupport ([3]). And all of that\n> for very little real world use.\n\nI tend to agree about dropping AIX. But I wonder if there is an\nargument against that proposal that doesn't rely on AIX being relevant\nto at least one user. Has supporting AIX ever led to the discovery of\na bug that didn't just affect AIX? 
In other words, are AIX systems\npeculiar in some particular way that clearly makes them more likely to\nflush out a certain class of bugs? What is the best argument *against*\ndesupporting AIX that you know of?\n\nDesupporting AIX doesn't mean that any AIX users will be left in the\nlurch immediately. Obviously these users will be able to use a\nsupported version of Postgres for several more years.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 2 Jul 2022 11:54:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I tend to agree about dropping AIX. But I wonder if there is an\n> argument against that proposal that doesn't rely on AIX being relevant\n> to at least one user. Has supporting AIX ever led to the discovery of\n> a bug that didn't just affect AIX?\n\nSearching the commit log quickly finds\n\n591e088dd\n\n datetime.c's parsing logic has assumed that strtod() will accept\n a string that looks like \".\", which it does in glibc, but not on\n some less-common platforms such as AIX.\n\nglibc's behavior is clearly not meeting the letter of the POSIX spec here.\n\na745b9365\n \n I'm not sure how we've managed not to notice this problem, but it\n seems to explain slow execution of the 017_shm.pl test script on AIX\n since commit 4fdbf9af5, which added a speculative \"pg_ctl stop\" with\n the idea of making real sure that the postmaster isn't there. In the\n test steps that kill-9 and then restart the postmaster, it's possible\n to get past the initial signal attempt before kill() stops working\n for the doomed postmaster. If that happens, pg_ctl waited till\n PGCTLTIMEOUT before giving up ... 
and the buildfarm's AIX members\n have that set very high.\n\nAdmittedly, this one is more about \"slow\" than about \"AIX\".\n\n57b5a9646\n \n Most versions of tar are willing to overlook the missing terminator, but\n the AIX buildfarm animals were not. Fix by inventing a new kind of\n bbstreamer that just blindly adds a terminator, and using it whenever we\n don't parse the tar archive.\n\nAnother place where we failed to conform to relevant standards. \n\nb9b610577\n\n Fix ancient violation of zlib's API spec.\n \nAnd another.\n\nNow, it's certainly possible that AIX is the only surviving platform\nthat hasn't adopted bug-compatible-with-glibc interpretations of\nPOSIX. But I think the standard is the standard, and we ought to\nstay within it. So I find value in these fixes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Jul 2022 15:22:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\nOn 2022-07-02 11:54:16 -0700, Peter Geoghegan wrote:\n> I tend to agree about dropping AIX. But I wonder if there is an\n> argument against that proposal that doesn't rely on AIX being relevant\n> to at least one user. Has supporting AIX ever led to the discovery of\n> a bug that didn't just affect AIX?\n\nYes, it clearly has. But I tend to think that that's far outweighed by the\ncomplications triggered by AIX support. It'd be a different story if AIX\nhadn't a very peculiar linking model and was more widely accessible.\n\n\n> What is the best argument *against* desupporting AIX that you know of?\n\nHm.\n\n- a distinct set of system libraries that can help find portability issues\n\n- With IBM's compiler it adds a, not otherwise used, compiler that PG builds\n with. So the warnings could theoretically help find issues that we wouldn't\n otherwise see - but I don't think that's been particularly useful (nor\n monitored). 
And the compiler is buggy enough to add a fair bit of work over the\n years.\n\n\n> Desupporting AIX doesn't mean that any AIX users will be left in the\n> lurch immediately. Obviously these users will be able to use a\n> supported version of Postgres for several more years.\n\nRight.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 2 Jul 2022 12:42:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Sat, Jul 2, 2022 at 12:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Now, it's certainly possible that AIX is the only surviving platform\n> that hasn't adopted bug-compatible-with-glibc interpretations of\n> POSIX. But I think the standard is the standard, and we ought to\n> stay within it. So I find value in these fixes.\n\nI imagine that there is strong evolutionary pressure pushing minority\nplatforms in the direction of bug-compatible-with-glibc. There is\ndefinitely a similar trend around things like endianness and alignment\npickiness. But it wasn't always so.\n\nIt seems fair to wonder if AIX bucks the glibc-compatible trend\nbecause it is already on the verge of extinction. If it wasn't just\nabout dead already then somebody would have gone to the trouble of\nmaking it bug-compatible-with-glibc by now. (To be clear, I'm not\narguing that this is a good thing.)\n\nMaybe it is still worth hanging on to AIX support for the time being,\nbut it would be nice to have some idea of where we *will* finally draw\nthe line. If the complaints from Andres aren't a good enough reason\nnow, then what other hypothetical reasons might be good enough in the\nfuture? 
It seems fairly likely that Postgres desupporting AIX will\nhappen (say) at some time in the next decade, no matter what happens\ntoday.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 2 Jul 2022 13:12:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Maybe it is still worth hanging on to AIX support for the time being,\n> but it would be nice to have some idea of where we *will* finally draw\n> the line. If the complaints from Andres aren't a good enough reason\n> now, then what other hypothetical reasons might be good enough in the\n> future? It seems fairly likely that Postgres desupporting AIX will\n> happen (say) at some time in the next decade, no matter what happens\n> today.\n\nAgreed. But I think that this sort of thing is better driven by\n\"when there's no longer anyone willing to do the legwork\" than\nby project policy. IOW, we'll stop when Noah gets tired of doing\nit (and no one steps up to take his place).\n\nIn the case at hand, given that the test added by 79b716cfb/c1da0acbb\ncorrectly detects troublesome catalog layouts (and no I've not studied\nit myself), I don't see that we have to do more right now.\n\nI am a little concerned though that we don't have access to the latest\nversion of AIX --- that seems like a non-maintainable situation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Jul 2022 16:34:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\nOn 2022-07-02 16:34:35 -0400, Tom Lane wrote:\n> Agreed. But I think that this sort of thing is better driven by\n> \"when there's no longer anyone willing to do the legwork\" than\n> by project policy. 
IOW, we'll stop when Noah gets tired of doing\n> it (and no one steps up to take his place).\n\nI do think we should take the impact it has on everyone into account, not just\nNoah's willingness. If it's just \"does somebody still kind of maintain it\"\nthen we'll bear the distributed cost of complications for irrelevant platforms\nway longer than worthwhile.\n\n\n> In the case at hand, given that the test added by 79b716cfb/c1da0acbb\n> correctly detects troublesome catalog layouts (and no I've not studied\n> it myself), I don't see that we have to do more right now.\n\nWhat made me look at this issue right now is that the alignment issue led the\n56bit relfilenode patch to move the relfilenode field to the start of pg_class\n(ahead of the oid), because a 64bit value cannot be after a NameData. Now, I\nthink we can do a bit better by moving a few more fields around. But the\nrestriction still seems quite onerous.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 2 Jul 2022 13:51:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> What made me look at this issue right now is that the alignment issue led the\n> 56bit relfilenode patch to move the relfilenode field to the start of pg_class\n> (ahead of the oid),\n\nAgreed, up with that we should not put. However ...\n\n> because a 64bit value cannot be after a NameData.\n\n... this coding rule strikes me as utterly ridiculous. Why can't we\ninstead insist that NAMEDATALEN must be a multiple of 8? 
Anyone who\ntries to make it different from that is likely to be wasting padding\nspace even on platforms where there's not a deeper problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Jul 2022 17:31:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Sun, Jul 3, 2022 at 8:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I am a little concerned though that we don't have access to the latest\n> version of AIX --- that seems like a non-maintainable situation.\n\nThe release history doesn't look toooo bad on that front: the live\nversions are 7.1 (2010-2023), 7.2 (2015-TBA) and 7.3 (2021-TBA). 7.3\nonly came out half a year ago, slightly after Windows 11, which we\naren't testing yet either. Those GCC AIX systems seem to be provided\nby IBM and the Open Source Lab at Oregon State University which has a\nPOWER lab providing ongoing CI services etc to various OSS projects,\nso I would assume that upgrades (and retirement of the\nabout-to-be-desupported 7.1 system) will come along eventually.\n\nI don't have a dog in this race, but AIX is clearly not in the same\ncategory as HP-UX (and maybe Solaris is somewhere in between). 
AIX\nruns on hardware you can buy today that got a major refresh last year\n(Power 10), while HP-UX runs only on discontinued CPUs, so while it's\na no-brainer to drop HP-UX support, it's a trickier question for AIX.\nI guess the way open source is supposed to work is that someone with a\nreal interest in PostgreSQL on AIX helps maintain it, not only keeping\nit building and passing tests, but making it work really well (cf huge\npages, scalable event handling, probably more things that would be\nobvious to an AIX expert...), and representing ongoing demand and\ninterests from the AIX user community...\n\n\n", "msg_date": "Mon, 4 Jul 2022 10:33:37 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Jul 3, 2022 at 8:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I am a little concerned though that we don't have access to the latest\n>> version of AIX --- that seems like a non-maintainable situation.\n\n> The release history doesn't look toooo bad on that front: the live\n> versions are 7.1 (2010-2023), 7.2 (2015-TBA) and 7.3 (2021-TBA). 7.3\n> only came out half a year ago, slightly after Windows 11, which we\n> aren't testing yet either. Those GCC AIX systems seem to be provided\n> by IBM and the Open Source Lab at Oregon State University which has a\n> POWER lab providing ongoing CI services etc to various OSS projects,\n> so I would assume that upgrades (and retirement of the\n> about-to-be-desupported 7.1 system) will come along eventually.\n\nOK, we can wait awhile to see what happens on that.\n\n> I don't have a dog in this race, but AIX is clearly not in the same\n> category as HP-UX (and maybe Solaris is somewhere in between). 
AIX\n> runs on hardware you can buy today that got a major refresh last year\n> (Power 10), while HP-UX runs only on discontinued CPUs, so while it's\n> a no-brainer to drop HP-UX support, it's a trickier question for AIX.\n\nYeah. FTR, I'm out of the HP-UX game: due to a hardware failure,\nI can no longer boot that installation. I would have preferred to\nkeep pademelon, with its pre-C99 compiler, going until v11 is EOL,\nbut that ain't happening. I see that EDB are still running a couple\nof HP-UX/IA64 animals, but I wonder if they're prepared to do anything\nto support those animals --- like, say, fix platform-specific bugs.\nRobert has definitely indicated displeasure with doing so, but\nI don't know if he makes the decisions on that.\n\nI would not stand in the way of dropping HP-UX and IA64 support as of\nv16. (I do still feel that HPPA is of interest, to keep us honest\nabout spinlock support --- but that dual-stack arrangement that IA64\nuses is surely not part of anyone's future.)\n\nI have no opinion either way about Solaris.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Jul 2022 20:08:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\nOn 2022-07-04 10:33:37 +1200, Thomas Munro wrote:\n> I don't have a dog in this race, but AIX is clearly not in the same\n> category as HP-UX (and maybe Solaris is somewhere in between).\n\nThe reason to consider whether it's worth supporting AIX is that its library\nmodel is very different from other unix-like platforms (much closer to windows\nthough). 
We also have dedicated compiler support for it, which I guess could\nseparately be dropped.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Jul 2022 17:35:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\nOn 2022-07-03 20:08:19 -0400, Tom Lane wrote:\n> I would have preferred to keep pademelon, with its pre-C99 compiler, going\n> until v11 is EOL, but that ain't happening.\n\nI'm not too worried about that - clang with\n -std=c89 -Wc99-extensions -Werror=c99-extensions\nas it's running on mylodon for the older branches seems to do a decent\njob. And is obviously much faster :)\n\n\n> I would not stand in the way of dropping HP-UX and IA64 support as of\n> v16.\n\nCool.\n\n\n> I do still feel that HPPA is of interest, to keep us honest\n> about spinlock support\n\nI.e. forgetting to initialize them? Or the weird alignment stuff it has?\n\nI'd started to work a patch to detect missing initialization for both\nspinlocks and lwlocks, I think that'd be good to have for more common cases.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Jul 2022 17:43:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-07-03 20:08:19 -0400, Tom Lane wrote:\n>> I do still feel that HPPA is of interest, to keep us honest\n>> about spinlock support\n\n> I.e. forgetting to initialize them? Or the weird alignment stuff it has?\n\nThe nonzero initialization mainly, and to a lesser extent the weird\nsize of a lock. 
I think the fact that the active word is only part\nof the lock struct is pretty well encapsulated.\n\n> I'd started to work a patch to detect missing initialization for both\n> spinlocks and lwlocks, I think that'd be good to have for more common cases.\n\nNo objection to having more than one check for this ;-)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Jul 2022 20:47:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Mon, Jul 4, 2022 at 12:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I would not stand in the way of dropping HP-UX and IA64 support as of\n> v16. (I do still feel that HPPA is of interest, to keep us honest\n> about spinlock support --- but that dual-stack arrangement that IA64\n> uses is surely not part of anyone's future.)\n\nI tried to find everything relating to HP-UX, aCC, ia64 and hppa. Or\ndo you still want to keep the hppa bits for NetBSD (I wasn't sure if\nyour threat to set up a NetBSD/hppa system was affected by the\nhardware failure you mentioned)? Or just leave it in there in\norphaned hall-of-fame state, like m68k, m88k, Vax?", "msg_date": "Tue, 5 Jul 2022 16:38:04 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Jul 4, 2022 at 12:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I would not stand in the way of dropping HP-UX and IA64 support as of\n>> v16. (I do still feel that HPPA is of interest, to keep us honest\n>> about spinlock support --- but that dual-stack arrangement that IA64\n>> uses is surely not part of anyone's future.)\n\n> I tried to find everything relating to HP-UX, aCC, ia64 and hppa. 
Or\n> do you still want to keep the hppa bits for NetBSD (I wasn't sure if\n> your threat to set up a NetBSD/hppa system was affected by the\n> hardware failure you mentioned)?\n\nNo, the hardware failure is that the machine's SCSI controller seems\nto be fried, thus internal drives no longer accessible. I have a\nworking NetBSD-current installation on an external USB drive, and plan\nto commission it as a buildfarm animal once NetBSD 10 is officially\nbranched. It'll be a frankencritter of the first order, because\nUSB didn't exist when the machine was built, but hey...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Jul 2022 00:53:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\nOn 2022-07-02 11:33:54 -0700, Andres Freund wrote:\n> If we decide we want to continue supporting AIX we should bite the bullet and\n> add a 64bit-int TYPALIGN_*. It might be worth to translate that to bytes when\n> building tupledescs, so we don't need more branches (reducing them compared to\n> today).\n\nI just thought an easier way - why don't we introduce a 'catalog_double'\nthat's defined to be pg_attribute_aligned(whatever-we-need) on AIX? Then we\ncan get rid of the manually enforced alignedness and we don't need to contort\ncatalog order.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Jul 2022 22:31:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I just thought an easier way - why don't we introduce a 'catalog_double'\n> that's defined to be pg_attribute_aligned(whatever-we-need) on AIX? 
Then we\n> can get rid of the manually enforced alignedness and we don't need to contort\n> catalog order.\n\nHm, do all the AIX compilers we care about have support for that?\nIf so, it seems like a great idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Jul 2022 01:36:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On 05.07.22 07:31, Andres Freund wrote:\n> On 2022-07-02 11:33:54 -0700, Andres Freund wrote:\n>> If we decide we want to continue supporting AIX we should bite the bullet and\n>> add a 64bit-int TYPALIGN_*. It might be worth to translate that to bytes when\n>> building tupledescs, so we don't need more branches (reducing them compared to\n>> today).\n> \n> I just thought an easier way - why don't we introduce a 'catalog_double'\n> that's defined to be pg_attribute_aligned(whatever-we-need) on AIX? Then we\n> can get rid of the manually enforced alignedness and we don't need to contort\n> catalog order.\n\nIsn't the problem that on AIX, double and int64 have different alignment \nrequirements, and we just check the one for double and apply it to \nint64? That ought to be fixable by two separate alignment checks in \nconfigure and a new alignment letter for pg_type.\n\n\n", "msg_date": "Tue, 5 Jul 2022 08:13:21 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\nOn 2022-07-05 01:36:24 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I just thought an easier way - why don't we introduce a 'catalog_double'\n> > that's defined to be pg_attribute_aligned(whatever-we-need) on AIX? 
Then we\n> > can get rid of the manually enforced alignedness and we don't need to contort\n> > catalog order.\n> \n> Hm, do all the AIX compilers we care about have support for that?\n> If so, it seems like a great idea.\n\nAfaics we support xlc and gcc on AIX, and we enable the attribute for both\nalready. So, I think they do.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Jul 2022 23:30:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\nOn 2022-07-05 08:13:21 +0200, Peter Eisentraut wrote:\n> On 05.07.22 07:31, Andres Freund wrote:\n> > On 2022-07-02 11:33:54 -0700, Andres Freund wrote:\n> > > If we decide we want to continue supporting AIX we should bite the bullet and\n> > > add a 64bit-int TYPALIGN_*. It might be worth to translate that to bytes when\n> > > building tupledescs, so we don't need more branches (reducing them compared to\n> > > today).\n> > \n> > I just thought an easier way - why don't we introduce a 'catalog_double'\n> > that's defined to be pg_attribute_aligned(whatever-we-need) on AIX? Then we\n> > can get rid of the manually enforced alignedness and we don't need to contort\n> > catalog order.\n> \n> Isn't the problem that on AIX, double and int64 have different alignment\n> requirements, and we just check the one for double and apply it to int64?\n> That ought to be fixable by two separate alignment checks in configure and a\n> new alignment letter for pg_type.\n\nExcept that that's quite a bit of work to get right, particularly without\nregressing the performance on all platforms. 
The attalign switches during\ntuple deforming are already quite hot.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Jul 2022 23:31:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Tue, Jul 5, 2022 at 4:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Mon, Jul 4, 2022 at 12:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I would not stand in the way of dropping HP-UX and IA64 support as of\n> >> v16. (I do still feel that HPPA is of interest, to keep us honest\n> >> about spinlock support --- but that dual-stack arrangement that IA64\n> >> uses is surely not part of anyone's future.)\n>\n> > I tried to find everything relating to HP-UX, aCC, ia64 and hppa. Or\n> > do you still want to keep the hppa bits for NetBSD (I wasn't sure if\n> > your threat to set up a NetBSD/hppa system was affected by the\n> > hardware failure you mentioned)?\n>\n> No, the hardware failure is that the machine's SCSI controller seems\n> to be fried, thus internal drives no longer accessible. I have a\n> working NetBSD-current installation on an external USB drive, and plan\n> to commission it as a buildfarm animal once NetBSD 10 is officially\n> branched. It'll be a frankencritter of the first order, because\n> USB didn't exist when the machine was built, but hey...\n\nOK, here's a new attempt, this time leaving the hppa bits in. The\nmain tricksy bit is where s_lock.h is simplified a bit by moving the\nfully inline GCC-only hppa support up a bit (it was handled a bit\nweirdly with some #undef jiggery-pokery before to share stuff between\naCC and GCC), making the diff a little hard to follow. Does this make\nsense? It might also be possible to drop one of __hppa and __hppa__\nwhere they are both tested (not clear to me if that is an aCC/GCC\nthing). 
I have no idea if this'll actually work (or ever worked) on\nNetBSD/hppa... if it comes to it I could try to boot it under\nqemu-system-hppa if that's what it takes, but it may be easy for you\nto test...", "msg_date": "Wed, 6 Jul 2022 14:21:50 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\nOn 2022-07-06 14:21:50 +1200, Thomas Munro wrote:\n> --- a/src/backend/port/hpux/tas.c.template\n> +++ /dev/null\n> @@ -1,40 +0,0 @@\n> -/*\n> - * tas() for HPPA.\n> - *\n> - * To generate tas.s using this template:\n> - *\t1. cc +O2 -S -c tas.c\n> - *\t2. edit tas.s:\n> - *\t\t- replace the LDW with LDCWX\n> - *\t3. install as src/backend/port/tas/hpux_hppa.s.\n> - *\n> - * For details about the LDCWX instruction, see the \"Precision\n> - * Architecture and Instruction Reference Manual\" (09740-90014 of June\n> - * 1987), p. 5-38.\n> - */\n> -\n> -int\n> -tas(lock)\n> - int *lock;\t/* LDCWX is a word instruction */\n> -{\n> - /*\n> - * LDCWX requires that we align the \"semaphore\" to a 16-byte\n> - * boundary. The actual datum is a single word (4 bytes).\n> - */\n> - lock = ((uintptr_t) lock + 15) & ~15;\n> -\n> - /*\n> - * The LDCWX instruction atomically clears the target word and\n> - * returns the previous value. Hence, if the instruction returns\n> - * 0, someone else has already acquired the lock before we tested\n> - * it (i.e., we have failed).\n\n> - *\n> - * Notice that this means that we actually clear the word to set\n> - * the lock and set the word to clear the lock. This is the\n> - * opposite behavior from the SPARC LDSTUB instruction. For some\n> - * reason everything that H-P does is rather baroque...\n> - */\n> - if (*lock) {\t/* this generates the LDW */\n> -\treturn(0);\t/* success */\n> - }\n> - return(1); \t/* failure */\n> -}\n\nAre these comments retained elsewhere? 
It's confusing enough that I think we\nshould make sure they're somewhere until we remove hppa support...\n\n\n> -#if defined(__ia64__) || defined(__ia64)\n> -/*\n> - * Intel Itanium, gcc or Intel's compiler.\n\nHm. Personally I'd do HPUX removal separately from IA64 removal.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Jul 2022 20:26:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> OK, here's a new attempt, this time leaving the hppa bits in. The\n> main tricksy bit is where s_lock.h is simplified a bit by moving the\n> fully inline GCC-only hppa support up a bit (it was handled a bit\n> weirdly with some #undef jiggery-pokery before to share stuff between\n> aCC and GCC), making the diff a little hard to follow. Does this make\n> sense? It might also be possible to drop one of __hppa and __hppa__\n> where they are both tested (not clear to me if that is an aCC/GCC\n> thing). I have no idea if this'll actually work (or ever worked) on\n> NetBSD/hppa... if it comes to it I could try to boot it under\n> qemu-system-hppa if that's what it takes, but it may be easy for you\n> to test...\n\nOur HEAD does work on that NetBSD installation. I can try this\npatch, but it'll take an hour or two to get results ... stay tuned.\n\nI'm not sure about the __hppa vs __hppa__ thing. If we're assuming\nthat NetBSD is the only remaining hppa platform of interest, then\nclearly only one of those is needed, but I don't know which one\nshould be preferred. It appears that both are defined on NetBSD.\n\n(FWIW, I know that OpenBSD works on this machine too, or did the\nlast time I tried it. 
But it probably has the same opinions\nas NetBSD about predefined macros.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Jul 2022 23:47:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Wed, Jul 6, 2022 at 3:26 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-06 14:21:50 +1200, Thomas Munro wrote:\n> > - * Notice that this means that we actually clear the word to set\n> > - * the lock and set the word to clear the lock. This is the\n> > - * opposite behavior from the SPARC LDSTUB instruction. For some\n> > - * reason everything that H-P does is rather baroque...\n\n> Are these comments retained elsewhere? It's confusing enough that I think we\n> should make sure they're somewhere until we remove hppa support...\n\nOK, I moved them into s_lock.h where the remaining asm lives.\n\n> > -#if defined(__ia64__) || defined(__ia64)\n> > -/*\n> > - * Intel Itanium, gcc or Intel's compiler.\n>\n> Hm. 
Personally I'd do HPUX removal separately from IA64 removal.\n\nOK, split.", "msg_date": "Wed, 6 Jul 2022 16:52:56 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\n0001 looks good to me.\n\nThere's a leftover itanium reference in a comment in\nsrc/include/port/atomics/generic-msvc.h\n\nThere's also a bunch of #ifdef __ia64__ in src/backend/utils/misc/guc-file.c,\ncontrib/seg/segscan.c and contrib/cube/cubescan.c\n\nOtherwise lgtm as well.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Jul 2022 22:28:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> There's also a bunch of #ifdef __ia64__ in src/backend/utils/misc/guc-file.c,\n> contrib/seg/segscan.c and contrib/cube/cubescan.c\n\nAnd all our other flex output files --- AFAICS that's part of flex's\nrecipe and not under our control.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Jul 2022 01:33:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On 2022-07-06 01:33:58 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > There's also a bunch of #ifdef __ia64__ in src/backend/utils/misc/guc-file.c,\n> > contrib/seg/segscan.c and contrib/cube/cubescan.c\n> \n> And all our other flex output files --- AFAICS that's part of flex's\n> recipe and not under our control.\n\nClearly I need to stop reviewing things for the rest of the day :)\n\n\n", "msg_date": "Tue, 5 Jul 2022 22:43:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On 06.07.22 04:21, Thomas Munro wrote:\n> \t/*\n> \t * Do not try to collapse these into one 
\"w+\" mode file. Doesn't work on\n> -\t * some platforms (eg, HPUX 10.20).\n> +\t * some platforms.\n> \t */\n> \ttermin = fopen(\"/dev/tty\", \"r\");\n> \ttermout = fopen(\"/dev/tty\", \"w\");\n\nI don't know how /dev/tty behaves in detail under stdio. I think \nremoving this part of the comment might leave the impression that \nattempting to use \"w+\" will never work, whereas the existing comment \nappears to indicate that it was only very old platforms that had the \nissue. If we don't have an immediate answer to that, I'd leave the \ncomment as is.\n\n\n\n", "msg_date": "Wed, 6 Jul 2022 15:02:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 06.07.22 04:21, Thomas Munro wrote:\n>> /*\n>> * Do not try to collapse these into one \"w+\" mode file. Doesn't work on\n>> -\t * some platforms (eg, HPUX 10.20).\n>> +\t * some platforms.\n>> */\n>> termin = fopen(\"/dev/tty\", \"r\");\n>> termout = fopen(\"/dev/tty\", \"w\");\n\n> I don't know how /dev/tty behaves in detail under stdio. I think \n> removing this part of the comment might leave the impression that \n> attempting to use \"w+\" will never work, whereas the existing comment \n> appears to indicate that it was only very old platforms that had the \n> issue. If we don't have an immediate answer to that, I'd leave the \n> comment as is.\n\nYeah, I was kind of wondering whether we should give w+ a try now.\nIIRC, the code was like that at one point, but we had to change it\n(ie the comment comes from bitter experience). 
On the other hand,\nit's probably not worth the trouble and risk to change it again.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Jul 2022 10:01:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "I wrote:\n> Our HEAD does work on that NetBSD installation. I can try this\n> patch, but it'll take an hour or two to get results ... stay tuned.\n\nIndeed, I still get a clean build and \"make check\" passes with\nthis patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Jul 2022 10:20:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Sat, Jul 2, 2022 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> It strikes me as a remarkably bad idea to manually try to maintain the correct\n> alignment. Even with the tests added it's still quite manual and requires\n> contorted struct layouts (see e.g. [1]).\n>\n> I think we should either teach our system the correct alignment rules or we\n> should drop AIX support.\n\nI raised this same issue at\nhttp://postgr.es/m/CA+TgmoaK377MXCWJqEXM3VvKDDC-frNUMKb=7u07TJa59wTAeQ@mail.gmail.com\nand discussion ensued from there. I agree that manually maintaining\nalignment, even with a regression test to help, is a really bad plan.\n\nThe rule about columns of type \"name\" can be relaxed easily enough,\njust by insisting that NAMEDATALEN must be a multiple of 8. As Tom\nalso said on this thread, adding such a constraint seems to have no\nreal downside. 
double-align.\n\n From a theoretical point of view, I think what we're doing now is\npretty unprincipled. I've always found it a bit surprising that we get\naway with just assuming that a bunch of various different primitive\ndata types are all going to have the same alignment requirement. The\npurist in me feels that it would be better to have separate typalign\nvalues for things that aren't guaranteed to behave the same. However,\nthere's a practical difficulty with that approach: if the only\noperating system where this issue occurs in practice is AIX, I feel\nit's going to be pretty hard for us to keep the code that caters to\nthis unusual situation working properly. And I'd rather have no code\nfor it at all than have code which doesn't really work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Jul 2022 11:55:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Hi,\n\nOn 2022-07-06 11:55:57 -0400, Robert Haas wrote:\n> On Sat, Jul 2, 2022 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > I strikes me as a remarkably bad idea to manually try to maintain the correct\n> > alignment. Even with the tests added it's still quite manual and requires\n> > contorted struct layouts (see e.g. [1]).\n> >\n> > I think we should either teach our system the correct alignment rules or we\n> > should drop AIX support.\n>\n> I raised this same issue at\n> http://postgr.es/m/CA+TgmoaK377MXCWJqEXM3VvKDDC-frNUMKb=7u07TJa59wTAeQ@mail.gmail.com\n> and discussion ensued from there. I agree that manually maintaining\n> alignment, even with a regression test to help, is a really bad plan.\n>\n> The rule about columns of type \"name\" can be relaxed easily enough,\n> just by insisting that NAMEDATALEN must be a multiple of 8. As Tom\n> also said on this thread, adding such a constraint seems to have no\n> real downside. 
But the problem has a second aspect not related to\n> NameData, which is that int64 and double have different alignment\n> requirements on that platform. To get out from under that part of it,\n> it seems we either need to de-support AIX and any other platforms that\n> have such a discrepancy, or else have separate typalign values for\n> int64-align vs. double-align.\n\nI think my proposal of introducing a version of double that is marked to be 8\nbyte aligned should do the trick as well, and doesn't have the problem of\nchanging the meaning of 'double' references in external headers. In fact, we\nalready have float8 as a type, so we could just add it there.\n\nWe don't currently have a float8 in the catalogs afaics, but I think it'd be\nbetter to not rely on that.\n\nIt's not pretty, but still seems a lot better than doing this stuff manually.\n\n\n> From a theoretical point of view, I think what we're doing now is\n> pretty unprincipled. I've always found it a bit surprising that we get\n> away with just assuming that a bunch of various different primitive\n> data types are all going to have the same alignment requirement. The\n> purist in me feels that it would be better to have separate typalign\n> values for things that aren't guaranteed to behave the same. However,\n> there's a practical difficulty with that approach: if the only\n> operating system where this issue occurs in practice is AIX, I feel\n> it's going to be pretty hard for us to keep the code that caters to\n> this unusual situation working properly. And I'd rather have no code\n> for it at all than have code which doesn't really work.\n\nThe problem with having a lot more alignment values is that it adds a bunch of\noverhead to very performance critical paths. 
We don't want to add more\nbranches to att_align_nominal() if we can avoid it.\n\nI guess we can try to introduce TYPALIGN_INT64 and then hide the relevant\nbranch with an ifdef for the common case of TYPALIGN_INT64 == TYPALIGN_DOUBLE.\n\n\nI'm fairly certain that we're going to add a lot more 64bit ints to catalogs\nin the next few years, so this will become a bigger issue over time...\n\n\nOutside of the catalogs I still think that we should work towards not aligning\nbyval values (and instead memcpy-ing the values to deal with alignment\nsensitive platforms), so we don't waste so much space. And for catalogs we've\nbeen talking about giving up the struct mapping as well, in the thread about\nvariable length names. In which case the cost of handling more\nalignment values wouldn't be incurred as frequently.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 6 Jul 2022 09:27:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Wed, Jul 6, 2022 at 12:27 PM Andres Freund <andres@anarazel.de> wrote:\n> I think my proposal of introducing a version of double that is marked to be 8\n> byte aligned should do the trick as well, and doesn't have the problem of\n> changing the meaning of 'double' references in external headers. In fact, we\n> already have float8 as a type, so we could just add it there.\n\nYeah, but how easy will it be to know whether we've used that in\nevery relevant place?\n\nCould we insist on 8-byte alignment even on 32-bit platforms? I think\nwe have a few of those in the buildfarm, so maybe that would help us\nspot problems. Although I'm not sure how, exactly.\n\n> The problem with having a lot more alignment values is that it adds a bunch of\n> overhead to very performance critical paths. 
We don't want to add more\n> branches to att_align_nominal() if we can avoid it.\n\nFair.\n\n> I'm fairly certain that we're going to add a lot more 64bit ints to catalogs\n> in the next few years, so this will become a bigger issue over time...\n\nAbsolutely.\n\n> Outside of the catalogs I still think that we should work towards not aligning\n> byval values (and instead memcpy-ing the values to deal with alignment\n> sensitive platforms), so we don't waste so much space. And for catalogs we've\n> been talking about giving up the struct mapping as well, in the thread about\n> variable length names. In which case the cost of handling more\n> alignment values wouldn't be incurred as frequently.\n\n+1. Aligning stuff on disk appears to have few redeeming properties\nfor the amount of pain it causes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Jul 2022 13:17:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Thu, Jul 7, 2022 at 1:02 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 06.07.22 04:21, Thomas Munro wrote:\n> > /*\n> > * Do not try to collapse these into one \"w+\" mode file. Doesn't work on\n> > - * some platforms (eg, HPUX 10.20).\n> > + * some platforms.\n> > */\n> > termin = fopen(\"/dev/tty\", \"r\");\n> > termout = fopen(\"/dev/tty\", \"w\");\n>\n> I don't know how /dev/tty behaves in detail under stdio. I think\n> removing this part of the comment might leave the impression that\n> attempting to use \"w+\" will never work, whereas the existing comment\n> appears to indicate that it was only very old platforms that had the\n> issue. If we don't have an immediate answer to that, I'd leave the\n> comment as is.\n\nThanks. 
I put that bit back, removed the stray mention of \"itanium\"\nin Windows-specific stuff that Andres mentioned, and pushed these\npatches.\n\nWhile adjusting the docs, I noticed a few little inconsistencies here\nand there for other ISAs.\n\n* The documented list of ISAs should by now mention RISC-V. I'm sure\nit needs some fine tuning but it's working fine and tested by the\nbuild farm.\n* The documented list mentions some in different endiannesses and word\nsizes explicitly but not others; I think it'd be tidier to list the\nmain architecture names and then tack on a \"big and little endian, 32\nand 64 bit\" sentence.\n* Under \"code exists, not tested\" we mentioned M68K, M32R, VAX, but\nM88K and SuperH are also in that category and have been added/tweaked\nin the past decade with reports that imply that they were working on\nretro-gear. AFAIK only SuperH-family stuff is still produced. I\ndon't know much about that and I'm not planning to change anything,\nexcept one special mention...\n* Since Greg Stark's magnificent Vax talk[1], we became even more\ndependent on IEEE 754 via the Ryu algorithm. AFAICT, unless someone\nproduces a software IEEE math implementation for GCC/VAX... 
if I had\nto pick one to bump off that list, that's the easiest to argue because\nit definitely doesn't work.\n* When we removed Alpha we left a couple of traces.\n\nWhat do you think about the attached?\n\n[1] https://archive.fosdem.org/2016/schedule/event/postgresql_on_vax/", "msg_date": "Fri, 8 Jul 2022 14:35:29 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> * The documented list mentions some in different endiannesses and word\n> sizes explicitly but not others; I think it'd be tidier to list the\n> main architecture names and then tack on a \"big and little endian, 32\n> and 64 bit\" sentence.\n\nAs phrased, this seems to be saying that we can do both\nendiannesses on any of the supported arches, which is a little\nweird considering that most of them are single-endianness. It's\nnot a big deal, but maybe a tad more word-smithing there would\nhelp?\n\n> * Since Greg Stark's magnificent Vax talk[1], we became even more\n> dependent on IEEE 754 via the Ryu algorithm. AFAICT, unless someone\n> produces a software IEEE math implementation for GCC/VAX... 
Also, that crypt-blowfish.c hunk offers an answer to\nyour question about whether to worry about \"__hppa\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Jul 2022 00:24:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Thu, 7 Jul 2022 at 22:36, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> * Since Greg Stark's magnificent Vax talk[1], we became even more\n> dependent on IEEE 754 via the Ryu algorithm. AFAICT, unless someone\n> produces a software IEEE math implementation for GCC/VAX... if I had\n> a pick one to bump off that list, that's the easiest to argue because\n> it definitely doesn't work.\n\nYeah that's definitely true. I think you could possibly build with a\nsoftware fp implementation but then you would have to recompile libc\nand any other libraries as well.\n\nIf it was worth spending a lot of effort we could perhaps separate the\nFloat4/Float8 data type from the core C code floating point and\ncompile with just the former using soft floats but use native floats\nfor core code. That's probably way more effort than it's worth for VAX\nbut it would conceivably be worthwhile if it helped for running on\nsome embedded platform but I don't think so since they would\npresumably be using soft floats everywhere anyways.\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 8 Jul 2022 00:26:15 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Tue, Jul 5, 2022 at 1:32 AM Andres Freund <andres@anarazel.de> wrote:\n> I just thought an easier way - why don't we introduce a 'catalog_double'\n> that's defined to be pg_attribute_aligned(whatever-we-need) on AIX? Then we\n> can get rid of the manually enforced alignedness and we don't need to contort\n> catalog order.\n\nI investigated this a little bit today. 
It seems that\natt_align_nominal() thinks that typalign=='d' means ALIGNOF_DOUBLE,\nwhich on AIX is 4. So I think what we would need to do first is\nredefine typalign=='d' to mean alignment to MAXIMUM_ALIGNOF. If we\ndon't do that, then there's no automatic way to get uint64 fields to\nbe placed on 8-byte boundaries, which they require. Such a change would\nhave no effect on many systems, but if, as on AIX, double requires less\nalignment than either \"long\" or \"long long int\", it will break on-disk\ncompatibility and in particular pg_upgrade compatibility.\n\nIf we did that, then we could pursue your proposal above. Rather than\ncreating an altogether new typedef, we could just apply\npg_attribute_aligned(MAXIMUM_ALIGNOF) to the existing typedef for\nfloat8, which is documented as being the name that should be used in\nthe catalogs, and is. Since pg_attribute_aligned() is not supported on\nall platforms, we elsewhere apply it conditionally, so we would\npresumably do the same thing here. That would mean that it might fail\nto apply on some platform somewhere, but we could compensate for that\nby adding a static assertion checking that if we do struct\nfloat8_alignment_test { char pad; float8 x; } then\nalignof(float8_alignment_test, x) == MAXIMUM_ALIGNOF. That way, if\npg_attribute_aligned() isn't supported but the platform doesn't have\nthis issue in the first place, all is well. If pg_attribute_aligned()\nisn't supported and the platform does have this issue, compilation\nwill fail.\n\nIn theory, we could have the same issue with int64 on some other\nplatform. On this hypothetical system, ALIGNOF_LONG_LONG_INT <\nALIGNOF_DOUBLE. The compiler would then align int64 catalog columns on,\nsay, 4-byte boundaries, but our tuple deforming code would think that\nthey were aligned to 8-byte boundaries. We could fix that by forcing\nthe int64 type to have maximum alignment as well or introducing a new\ntypedef that does. 
However, such a fix could probably be postponed\nuntil such time as a system of this kind turns up. It might never\nhappen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Jul 2022 17:11:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Fri, Jul 8, 2022 at 4:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > * The documented list mentions some in different endiannesses and word\n> > sizes explicitly but not others; I think it'd be tidier to list the\n> > main architecture names and then tack on a \"big and little endian, 32\n> > and 64 bit\" sentence.\n>\n> As phrased, this seems to be saying that we can do both\n> endiannesses on any of the supported arches, which is a little\n> weird considering that most of them are single-endianness. It's\n> not a big deal, but maybe a tad more word-smithing there would\n> help?\n\nOK, I word-smothe thusly:\n\n+ and PA-RISC, including\n+ big-endian, little-endian, 32-bit, and 64-bit variants where applicable.\n\nI also realised that we should list a couple more OSes (we know they\nwork, they are automatically tested). Then I wondered why we bother\nto state a Windows version here. For consistency, we could list the\nminimum Linux kernel, and so on for every other OS, but that's silly\nfor such brief and general documentation. So I propose that we just\nsay \"current versions of ...\" and remove the bit about Windows 10.", "msg_date": "Mon, 11 Jul 2022 11:19:18 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> OK, I word-smothe thusly:\n\n> + and PA-RISC, including\n> + big-endian, little-endian, 32-bit, and 64-bit variants where applicable.\n\nWFM. 
I also wonder if in\n\n+ <productname>PostgreSQL</productname> can be expected to work on current\n+ versions of these operating systems: Linux (all recent distributions), Windows,\n+ FreeBSD, OpenBSD, NetBSD, DragonFlyBSD, macOS, AIX, Solaris, and illumos.\n\nwe could drop \"(all recent distributions)\", figuring that \"current\nversions\" covers that already. Other than that niggle, this\nlooks good to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 10 Jul 2022 19:38:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Mon, Jul 11, 2022 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> WFM. I also wonder if in\n>\n> + <productname>PostgreSQL</productname> can be expected to work on current\n> + versions of these operating systems: Linux (all recent distributions), Windows,\n> + FreeBSD, OpenBSD, NetBSD, DragonFlyBSD, macOS, AIX, Solaris, and illumos.\n>\n> we could drop \"(all recent distributions)\", figuring that \"current\n> versions\" covers that already. Other than that niggle, this\n> looks good to me.\n\nYeah. I wasn't too sure if that was mostly about \"recent\" or mostly\nabout \"all distributions\" but it wasn't doing much. Thanks, pushed.\n\n\n", "msg_date": "Mon, 11 Jul 2022 11:56:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Yeah. I wasn't too sure if that was mostly about \"recent\" or mostly\n> about \"all distributions\" but it wasn't doing much. Thanks, pushed.\n\nWhile we're here ...\n\n+ Code support exists for M68K, M88K, M32R, and SuperH, but these\n architectures are not known to have been tested recently.\n\nI confess great fondness for M68K, having spent a goodly chunk of\nthe eighties hacking M68K assembly code. 
However, of these four\narchitectures, I fear only SuperH has anything resembling a\ndetectable pulse. According to Wikipedia:\n\n* Motorola ended development of M68K in 1994. The last processors\nhad clock rates around 75MHz (and this was a CISC architecture,\nso instruction rates were a good bit less). Considering how\ndepressingly slow my late-90s 360MHz HPPA box is, it's impossible\nto believe that anyone wants to run PG on M68K today.\n\n* M88K was introduced in 1988 and discontinued in 1991. Max clock\nrate was apparently somewhere under 100MHz, and in any case it's\nhard to believe that any remain alive in the wild.\n\n* M32R ... hard to tell for sure, because Wikipedia's only concrete\ninfo is a link to a 404 page at renesas.com. But they do say that\nthe Linux kernel dropped support for it some years ago.\n\nSuperH might be twitching a bit less feebly than these three,\nbut it seems to be a legacy architecture as well. Not much\nhas happened there since the early 2000's AFAICS.\n\nI think it'd be pretty reasonable to disclaim support for\nany architecture that doesn't have a representative in our\nbuildfarm, which would lead to dropping all four of these.\nIf you don't like it, step up and run a buildfarm animal.\n\n(The same policy could be applied to operating systems,\nbut it looks like we're good on that side.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 02:49:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Mon, Jul 11, 2022 at 6:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> SuperH might be twitching a bit less feebly than these three,\n> but it seems to be a legacy architecture as well. 
Not much\n> has happened there since the early 2000's AFAICS.\n\nIt looks like there's an sh3el package for PostgreSQL on NetBSD here,\nso whoever maintains that might be in touch:\n\nhttps://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/databases/postgresql14-server/index.html\n\n> I think it'd be pretty reasonable to disclaim support for\n> any architecture that doesn't have a representative in our\n> buildfarm, which would lead to dropping all four of these.\n> If you don't like it, step up and run a buildfarm animal.\n\n+1\n\nIt's funny to think that you probably could run modern PostgreSQL on\nthe Sun 3 boxes the project started on in 1986 (based on clues from\nthe papers in our history section) if you put NetBSD on them, but\nyou'd probably need to cross compile due to lack of RAM. The grammar\nin particular.\n\n\n", "msg_date": "Mon, 11 Jul 2022 19:50:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Mon, Jul 11, 2022 at 2:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> While we're here ...\n>\n> + Code support exists for M68K, M88K, M32R, and SuperH, but these\n> architectures are not known to have been tested recently.\n>\n> I think it'd be pretty reasonable to disclaim support for\n> any architecture that doesn't have a representative in our\n> buildfarm, which would lead to dropping all four of these.\n> If you don't like it, step up and run a buildfarm animal.\n\n+1. Keeping stuff like this in the documentation doesn't make those\nplatforms supported. What it does do is make it look like we're bad at\nupdating our documentation.\n\nI strongly suspect that anyone who tried to use a modern PostgreSQL on\nany of these platforms would find it quite an adventure, which is\nfine, because if you're trying to use any of those platforms in 2022,\nyou are probably the sort of person who enjoys an adventure. 
But it\ncan't really be useful to list them in the documentation, and it's\nunlikely that any of them \"just work\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Jul 2022 12:34:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 11, 2022 at 2:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think it'd be pretty reasonable to disclaim support for\n>> any architecture that doesn't have a representative in our\n>> buildfarm, which would lead to dropping all four of these.\n>> If you don't like it, step up and run a buildfarm animal.\n\n> I strongly suspect that anyone who tried to use a modern PostgreSQL on\n> any of these platforms would find it quite an adventure, which is\n> fine, because if you're trying to use any of those platforms in 2022,\n> you are probably the sort of person who enjoys an adventure. But it\n> can't really be useful to list them in the documentation, and it's\n> unlikely that any of them \"just work\".\n\nIt's possible that they \"just work\", but we have no way of knowing that,\nor knowing if we break them in future. Thus the importance of having\na buildfarm animal to tell us that.\n\nMore generally, I think the value of carrying support for niche\narchitectures is that it helps keep us from falling into the\nsoftware-monoculture trap, from which we'd be unable to escape when\nthe hardware landscape inevitably changes. However, it only helps\nif somebody is testing such arches on a regular basis. The fact that\nthere's some #ifdef'd code somewhere for M88K proves diddly-squat\nabout whether we could actually run on M88K today. 
The situation\nfor niche operating systems is precisely analogous.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 12:58:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Jul 11, 2022 at 6:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> SuperH might be twitching a bit less feebly than these three,\n>> but it seems to be a legacy architecture as well. Not much\n>> has happened there since the early 2000's AFAICS.\n\n> It looks like there's an sh3el package for PostgreSQL on NetBSD here,\n> so whoever maintains that might be in touch:\n> https://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/databases/postgresql14-server/index.html\n\nHm. For a moment there I was feeling bad about recommending cutting\noff a platform somebody still pays attention to ... but looking at\nthe relevant NetBSD mailing list archives makes it look like that\nport is pretty darn moribund.\n\n> It's funny to think that you probably could run modern PostgreSQL on\n> the Sun 3 boxes the project started on in 1986 (based on clues from\n> the papers in our history section) if you put NetBSD on them, but\n> you'd probably need to cross compile due to lack of RAM.\n\nYeah. I'm wondering if that sh3el package was cross-compiled,\nand if so whether it was just part of a mass package build rather\nthan something somebody was specifically interested in. 
You'd\nhave to be a glutton for pain to want to do actual work with PG\non the kind of SH3 hardware that seems to be available.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 15:24:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Tue, Jul 12, 2022 at 7:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > It's funny to think that you probably could run modern PostgreSQL on\n> > the Sun 3 boxes the project started on in 1986 (based on clues from\n> > the papers in our history section) if you put NetBSD on them, but\n> > you'd probably need to cross compile due to lack of RAM.\n>\n> Yeah. I'm wondering if that sh3el package was cross-compiled,\n> and if so whether it was just part of a mass package build rather\n> than something somebody was specifically interested in. You'd\n> have to be a glutton for pain to want to do actual work with PG\n> on the kind of SH3 hardware that seems to be available.\n\n/me pictures Stark wheeling a real Sun 3 into a conference room\n\nYeah, we can always consider putting SuperH back if someone showed up\nto maintain/test it. That seems unlikely, but apparently there's an\nopen source silicon project based on this ISA, so maybe a fast one\nisn't impossible...\n\nHere's a patch to remove all of these.\n\nI didn't originally suggest that because of some kind of (mostly\nvicarious) nostalgia. I wonder if we should allow ourselves a\nparagraph where we remember these systems. I personally think it's\none of the amazing things about this project. 
Here's what I came up\nwith, but I'm sure there are more.", "msg_date": "Tue, 12 Jul 2022 10:13:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Here's a patch to remove all of these.\n\nLooks sane by eyeball --- I didn't grep for other references, though.\n\n> I didn't originally suggest that because of some kind of (mostly\n> vicarious) nostalgia. I wonder if we should allow ourselves a\n> paragraph where we remember these systems. I personally think it's\n> one of the amazing things about this project. Here's what I came up\n> with, but I'm sure there are more.\n\nPlayStation 2 [1]? Although I suppose that falls under MIPS,\nwhich probably means we could still run on it, if you can find one.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/05e101c1834a%24e398b920%24f90e10ac%40toronto.redhat.com\n\n\n", "msg_date": "Mon, 11 Jul 2022 18:29:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" }, { "msg_contents": "On Tue, Jul 12, 2022 at 10:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Here's a patch to remove all of these.\n>\n> Looks sane by eyeball --- I didn't grep for other references, though.\n\nThanks, pushed.\n\n> > I didn't originally suggest that because of some kind of (mostly\n> > vicarious) nostalgia. I wonder if we should allow ourselves a\n> > paragraph where we remember these systems. I personally think it's\n> > one of the amazing things about this project. Here's what I came up\n> > with, but I'm sure there are more.\n>\n> PlayStation 2 [1]? Although I suppose that falls under MIPS,\n> which probably means we could still run on it, if you can find one.\n\nYeah. 
PS had MIPS, then PowerPC (Cell), and currently AMD\n(interestingly they also run a modified FreeBSD kernel, but you can't\nreally get at it...). Sega Dreamcast had SH4.\n\nI added one more: Tru64 (but I didn't bother to list Digital UNIX or\nOSF/1, not sure if software historians consider those different OSes\nor just rebrands...). Patches to improve this little paragraph\nwelcome. Pushed.\n\n\n", "msg_date": "Tue, 12 Jul 2022 11:20:01 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIX support - alignment issues" } ]
[ { "msg_contents": "Buildfarm member thorntail has yet to pass the pg_upgrade test\nin the REL_15_STABLE branch. It looks like the problem reduces to\nan overlength pathname:\n\n2022-07-04 00:27:03.404 MSK [2212393:2] LOG: Unix-domain socket path \"/home/nm/farm/sparc64_deb10_gcc_64_ubsan/REL_15_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/.s.PGSQL.49714\" is too long (maximum 107 bytes)\n\nThat path name is 3 bytes over the platform limit. Evidently,\n\"REL_15_STABLE\" is just enough longer than \"HEAD\" to make this fail,\nwhereas we didn't see the problem as long as the test case only\nran in HEAD.\n\nMembers butterflyfish, massasauga, and myna likewise have yet to pass\nthis test in REL_15_STABLE, though they're perfectly happy in HEAD.\nThey are returning cut-down logs that don't allow diagnosing for\ncertain, but a reasonable bet is that it's the same kind of problem.\n\nI think that the conversion of pg_upgrade's test script to TAP\nform missed a bet. IIRC, we have mechanism somewhere to ensure\nthat test socket path names are created under /tmp, or someplace else\nthat's not subject to possibly-long paths of installation directories.\nThat's evidently not being used here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Jul 2022 19:22:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Too-long socket paths are breaking several buildfarm members" }, { "msg_contents": "On Sun, Jul 03, 2022 at 07:22:11PM -0400, Tom Lane wrote:\n> That path name is 3 bytes over the platform limit. Evidently,\n> \"REL_15_STABLE\" is just enough longer than \"HEAD\" to make this fail,\n> whereas we didn't see the problem as long as the test case only\n> ran in HEAD.\n\nThat tells enough about UNIXSOCK_PATH_BUFLEN. 
It looks like test.sh\nhas been using for ages /tmp/pg_upgrade_check* as socket directory to\ncounter this issue.\n\n> Members butterflyfish, massasauga, and myna likewise have yet to pass\n> this test in REL_15_STABLE, though they're perfectly happy in HEAD.\n> They are returning cut-down logs that don't allow diagnosing for\n> certain, but a reasonable bet is that it's the same kind of problem.\n\nHmm. That's possible.\n\n> I think that the conversion of pg_upgrade's test script to TAP\n> form missed a bet. IIRC, we have mechanism somewhere to ensure\n> that test socket path names are created under /tmp, or someplace else\n> that's not subject to possibly-long paths of installation directories.\n> That's evidently not being used here.\n\nThere is PostgreSQL::Test::Utils::tempdir_short for that, which is\nwhat all the nodes created in Cluster.pm use for\nunix_socket_directories. One way to address the issue would be to\npass that to pg_upgrade with --socketdir, as of the attached.\n--\nMichael", "msg_date": "Mon, 4 Jul 2022 10:34:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Too-long socket paths are breaking several buildfarm members" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> There is PostgreSQL::Test::Utils::tempdir_short for that, which is\n> what all the nodes created in Cluster.pm use for\n> unix_socket_directories. One way to address the issue would be to\n> pass that to pg_upgrade with --socketdir, as of the attached.\n\nYeah, I just came to the same conclusion and pushed an equivalent\npatch. 
Sorry for the duplicated effort.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Jul 2022 21:40:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Too-long socket paths are breaking several buildfarm members" }, { "msg_contents": "On Sun, Jul 03, 2022 at 09:40:23PM -0400, Tom Lane wrote:\n> Yeah, I just came to the same conclusion and pushed an equivalent\n> patch. Sorry for the duplicated effort.\n\nNo problem. Thanks for the quick fix.\n--\nMichael", "msg_date": "Mon, 4 Jul 2022 12:01:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Too-long socket paths are breaking several buildfarm members" } ]
[ { "msg_contents": "Change timeline field of IDENTIFY_SYSTEM to int8\n\nIt was int4, but in the other replication commands, timelines are\nreturned as int8.\n\nReviewed-by: Nathan Bossart <nathandbossart@gmail.com>\nDiscussion: https://www.postgresql.org/message-id/flat/7e4fdbdc-699c-4cd0-115d-fb78a957fc22@enterprisedb.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/ec40f3422412cfdc140b5d3f67db7fd2dac0f1e2\n\nModified Files\n--------------\ndoc/src/sgml/protocol.sgml | 2 +-\nsrc/backend/replication/walsender.c | 2 +-\n2 files changed, 2 insertions(+), 2 deletions(-)", "msg_date": "Mon, 04 Jul 2022 05:38:57 +0000", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "pgsql: Change timeline field of IDENTIFY_SYSTEM to int8" }, { "msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> Change timeline field of IDENTIFY_SYSTEM to int8\n\nSurely this patch is far from complete?\n\nTo start with, just a few lines down in IdentifySystem() the column\nis filled using Int32GetDatum not Int64GetDatum. I will get some\npopcorn and await the opinions of the 32-bit buildfarm animals.\n\nBut what about whatever code is reading the output? And what if\nthat code isn't v16? I can't believe that we can make a wire\nprotocol change as summarily as this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Jul 2022 01:55:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Change timeline field of IDENTIFY_SYSTEM to int8" }, { "msg_contents": "I wrote:\n> To start with, just a few lines down in IdentifySystem() the column\n> is filled using Int32GetDatum not Int64GetDatum. 
I will get some\n> popcorn and await the opinions of the 32-bit buildfarm animals.\n\nDidn't need to wait long:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2022-07-04%2005%3A39%3A50\n\nThe other questions still stand.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Jul 2022 01:57:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Change timeline field of IDENTIFY_SYSTEM to int8" }, { "msg_contents": "\nOn 04.07.22 07:55, Tom Lane wrote:\n> Peter Eisentraut <peter@eisentraut.org> writes:\n>> Change timeline field of IDENTIFY_SYSTEM to int8\n> \n> Surely this patch is far from complete?\n> \n> To start with, just a few lines down in IdentifySystem() the column\n> is filled using Int32GetDatum not Int64GetDatum. I will get some\n> popcorn and await the opinions of the 32-bit buildfarm animals.\n> \n> But what about whatever code is reading the output? And what if\n> that code isn't v16? I can't believe that we can make a wire\n> protocol change as summarily as this.\n\nI think a client will either just read the string value and convert it \nto some numeric type without checking what type was actually sent, or if \nthe client API is type-aware and automatically converts to a native type \nof some sort, then it will probably already support 64-bit ints. Do you \nsee some problem scenario?\n\nI'm seeing a bigger problem now, which is that our client code doesn't \nparse bigger-than-int32 timeline IDs correctly.\n\nlibpqwalreceiver uses pg_strtoint32(), which will error on overflow.\n\npg_basebackup uses atoi(), so it will just truncate the value, except \nfor READ_REPLICATION_SLOT, where it uses atol(), so it will do the wrong \nthing on Windows only.\n\nThere is clearly very little use for such near-overflow timeline IDs in \npractice. 
But it still seems pretty inconsistent.\n\n\n", "msg_date": "Mon, 4 Jul 2022 15:55:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Change timeline field of IDENTIFY_SYSTEM to int8" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 04.07.22 07:55, Tom Lane wrote:\n>> But what about whatever code is reading the output? And what if\n>> that code isn't v16? I can't believe that we can make a wire\n>> protocol change as summarily as this.\n\n> I think a client will either just read the string value and convert it \n> to some numeric type without checking what type was actually sent, or if \n> the client API is type-aware and automatically converts to a native type \n> of some sort, then it will probably already support 64-bit ints. Do you \n> see some problem scenario?\n\nIf the result of IDENTIFY_SYSTEM is always sent in text format, then\nI agree that this isn't very problematic. If there are any clients\nthat fetch it in binary mode, though, this is absolutely a wire\nprotocol break for them ... and no, I don't believe an unsupported\nclaim that they'd adapt automatically.\n\n> I'm seeing a bigger problem now, which is that our client code doesn't \n> parse bigger-than-int32 timeline IDs correctly.\n\nYup.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Jul 2022 13:32:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Change timeline field of IDENTIFY_SYSTEM to int8" }, { "msg_contents": "On 04.07.22 19:32, Tom Lane wrote:\n> If the result of IDENTIFY_SYSTEM is always sent in text format, then\n> I agree that this isn't very problematic. 
If there are any clients\n> that fetch it in binary mode, though, this is absolutely a wire\n> protocol break for them\n\nThe result rows of the replication commands are always sent in text format.\n\n\n", "msg_date": "Mon, 4 Jul 2022 21:40:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Change timeline field of IDENTIFY_SYSTEM to int8" }, { "msg_contents": "On Mon, Jul 04, 2022 at 01:55:13AM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter@eisentraut.org> writes:\n> > Change timeline field of IDENTIFY_SYSTEM to int8\n> \n> Surely this patch is far from complete?\n\nYeah..\n\n> But what about whatever code is reading the output? And what if\n> that code isn't v16? I can't believe that we can make a wire\n> protocol change as summarily as this.\n\nAssuming that one reaches a timeline of 2 billion, this change would\nmake the TLI consumption of the client safe to signedness. But why is\nit safe to do a protocol change when running IDENTIFY_SYSTEM? We've\nbeen very strict to maintain compatibility for any protocol change,\nhence why should the replication protocol be treated differently?\n--\nMichael", "msg_date": "Tue, 5 Jul 2022 09:49:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Change timeline field of IDENTIFY_SYSTEM to int8" } ]
[ { "msg_contents": "Hi.\n\nBy convention, the tab-complete logical replication subscription\nparameters are listed in the COMPLETE_WITH lists in alphabetical\norder, but when the \"disable_on_error\" parameter was added this was\nnot done.\n\nThis patch just tidies that up; there is no functional change.\n\nPSA\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 4 Jul 2022 17:37:09 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re-order \"disable_on_error\" in tab-complete COMPLETE_WITH" }, { "msg_contents": "On Mon, Jul 4, 2022 at 1:07 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> By convention, the tab-complete logical replication subscription\n> parameters are listed in the COMPLETE_WITH lists in alphabetical\n> order, but when the \"disable_on_error\" parameter was added this was\n> not done.\n>\n\nYeah, it seems we have overlooked this point. I think we can do this\njust for HEAD but as the feature is introduced in PG-15 so there is no\nharm in pushing it to PG-15 as well especially because it is a\nstraightforward change. What do you or others think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 4 Jul 2022 14:07:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re-order \"disable_on_error\" in tab-complete COMPLETE_WITH" }, { "msg_contents": "On Mon, Jul 4, 2022, at 5:37 AM, Amit Kapila wrote:\n> Yeah, it seems we have overlooked this point. I think we can do this\n> just for HEAD but as the feature is introduced in PG-15 so there is no\n> harm in pushing it to PG-15 as well especially because it is a\n> straightforward change. What do you or others think?\nNo objection. It is a good thing for future backpatches.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Mon, 04 Jul 2022 09:28:58 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Re-order \"disable_on_error\" in tab-complete COMPLETE_WITH" }, { "msg_contents": "On Mon, Jul 4, 2022 at 10:29 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Jul 4, 2022, at 5:37 AM, Amit Kapila wrote:\n>\n> Yeah, it seems we have overlooked this point. I think we can do this\n> just for HEAD but as the feature is introduced in PG-15 so there is no\n> harm in pushing it to PG-15 as well especially because it is a\n> straightforward change. What do you or others think?\n>\n> No objection. It is a good thing for future backpatches.\n>\n\nSince there is no function change or bugfix here I thought it was only\napplicable for HEAD. This change is almost in the same category as a\ncode comment typo patch - do those normally get backpatched? - maybe\nfollow the same convention here. OTOH, if you think it may be helpful\nfor future backpatches then I am also fine if you wanted to push it to\nPG15.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 5 Jul 2022 08:33:05 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Re-order \"disable_on_error\" in tab-complete COMPLETE_WITH" }, { "msg_contents": "On Tue, Jul 5, 2022 at 4:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jul 4, 2022 at 10:29 PM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Mon, Jul 4, 2022, at 5:37 AM, Amit Kapila wrote:\n> >\n> > Yeah, it seems we have overlooked this point. 
I think we can do this\n> > just for HEAD but as the feature is introduced in PG-15 so there is no\n> > harm in pushing it to PG-15 as well especially because it is a\n> > straightforward change. What do you or others think?\n> >\n> > No objection. It is a good thing for future backpatches.\n> >\n>\n> Since there is no function change or bugfix here I thought it was only\n> applicable for HEAD. This change is almost in the same category as a\n> code comment typo patch - do those normally get backpatched? - maybe\n> follow the same convention here. OTOH, if you think it may be helpful\n> for future backpatches then I am also fine if you wanted to push it to\n> PG15.\n>\n\nIt can help if there is any bug-fix in the same code path or if some\nother code adjustment in the same area is required in the back branch.\nI feel the chances of both are less but I just wanted to keep the code\nconsistent for such a possibility. Anyway, I'll wait for a day or so\nand see if anyone has objections to it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 5 Jul 2022 09:34:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re-order \"disable_on_error\" in tab-complete COMPLETE_WITH" }, { "msg_contents": "On Tuesday, July 5, 2022 1:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Jul 5, 2022 at 4:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> >\r\n> > On Mon, Jul 4, 2022 at 10:29 PM Euler Taveira <euler@eulerto.com> wrote:\r\n> > >\r\n> > > On Mon, Jul 4, 2022, at 5:37 AM, Amit Kapila wrote:\r\n> > >\r\n> > > Yeah, it seems we have overlooked this point. I think we can do this\r\n> > > just for HEAD but as the feature is introduced in PG-15 so there is\r\n> > > no harm in pushing it to PG-15 as well especially because it is a\r\n> > > straightforward change. What do you or others think?\r\n> > >\r\n> > > No objection. 
It is a good thing for future backpatches.\r\n> > >\r\n> >\r\n> > Since there is no function change or bugfix here I thought it was only\r\n> > applicable for HEAD. This change is almost in the same category as a\r\n> > code comment typo patch - do those normally get backpatched? - maybe\r\n> > follow the same convention here. OTOH, if you think it may be helpful\r\n> > for future backpatches then I am also fine if you wanted to push it to\r\n> > PG15.\r\n> >\r\n> \r\n> It can help if there is any bug-fix in the same code path or if some other code\r\n> adjustment in the same area is required in the back branch.\r\n> I feel the chances of both are less but I just wanted to keep the code consistent\r\n> for such a possibility. Anyway, I'll wait for a day or so and see if anyone has\r\n> objections to it.\r\nThank you all for catching and discussing this fix.\r\n\r\nI also agree with pushing it to PG-15 for comfortability of future backpatches.\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 6 Jul 2022 02:48:07 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Re-order \"disable_on_error\" in tab-complete COMPLETE_WITH" }, { "msg_contents": "On Wed, Jul 06, 2022 at 02:48:07AM +0000, osumi.takamichi@fujitsu.com wrote:\n> On Tuesday, July 5, 2022 1:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> It can help if there is any bug-fix in the same code path or if some other code\n>> adjustment in the same area is required in the back branch.\n>> I feel the chances of both are less but I just wanted to keep the code consistent\n>> for such a possibility. 
Anyway, I'll wait for a day or so and see if anyone has\n>> objections to it.\n> \n> I also agree with pushing it to PG-15 for comfortability of future backpatches.\n\nYeah, backpatching that is just fine.\n--\nMichael", "msg_date": "Wed, 6 Jul 2022 12:42:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Re-order \"disable_on_error\" in tab-complete COMPLETE_WITH" }, { "msg_contents": "FYI, I confirmed the same patch applies and works OK for tags/REL_15_BETA2.\n\n------\n\n[postgres@CentOS7-x64 ~]$ psql --version\npsql (PostgreSQL) 15beta2\n[postgres@CentOS7-x64 ~]$ psql\npsql (15beta2)\nType \"help\" for help.\n\npostgres=# create subscription mysub connection 'blah' publication\nmypub with ( <press-tab>\nBINARY COPY_DATA DISABLE_ON_ERROR SLOT_NAME\n SYNCHRONOUS_COMMIT\nCONNECT CREATE_SLOT ENABLED STREAMING\n TWO_PHASE\npostgres=# create subscription mysub connection 'blah' publication mypub with (\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 6 Jul 2022 14:38:13 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Re-order \"disable_on_error\" in tab-complete COMPLETE_WITH" } ]
[ { "msg_contents": "Hi hackers,\n\nI think having number of index scans of the last vacuum in \npg_stat_all_tables can be helpful. This value shows how efficiently \nvacuums have performed and can be an indicator to increase \nmaintenance_work_mem.\n\nIt was proposed previously[1], but it was not accepted due to the \nlimitation of stats collector. Statistics are now stored in shared \nmemory, so we got more rooms to store statistics. I think this \nstatistics is still valuable for some people, so I am proposing this \nagain.\n\nBest wishes,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n[1] \nhttps://www.postgresql.org/message-id/20171010.192616.108347483.horiguchi.kyotaro%40lab.ntt.co.jp", "msg_date": "Mon, 04 Jul 2022 18:29:08 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "On 2022-Jul-04, Ken Kato wrote:\n\n> I think having number of index scans of the last vacuum in\n> pg_stat_all_tables can be helpful. This value shows how efficiently vacuums\n> have performed and can be an indicator to increase maintenance_work_mem.\n\nYeah, this would be a good metric to expose, since it directly tells how\nto set autovacuum_work_mem. I'm not sure that the shape you propose is\ncorrect, though, because each vacuum run would clobber whatever value\nwas there before. No other stats counter works that way; they are all\nadditive. But I'm not sure that adding the current number each time is\nsensible, either, because then the only thing you know is the average of\nthe last X runs, which doesn't tell you much.\n\nSaving some sort of history would be much more useful, but of course a\nlot more work.\n\n> It was proposed previously[1], but it was not accepted due to the limitation\n> of stats collector. Statistics are now stored in shared memory, so we got\n> more rooms to store statistics. 
I think this statistics is still valuable\n> for some people, so I am proposing this again.\n\n> [1] https://www.postgresql.org/message-id/20171010.192616.108347483.horiguchi.kyotaro%40lab.ntt.co.jp\n\nI read this thread, but what was proposed there is a bunch of metrics\nthat are not this one. The discussions there centered on how it\nwould be unacceptable to incur the space cost that would be taken by\nadding autovacuum-related metrics completely different from the one you\npropose. That debate is now over, so we're clear to proceed. But we\nneed to agree on what to add.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 8 Jul 2022 18:40:52 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "On Fri, Jul 8, 2022 at 10:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Saving some sort of history would be much more useful, but of course a\n> lot more work.\n\nI think that storing a certain amount of history would be very useful,\nfor lots of reasons. Not just for instrumentation purposes; I envisage\na design where VACUUM itself makes certain decisions based on the\nhistory of each VACUUM operation against the table. The direction that\nthings have taken suggests a certain amount about the direction that\nthings are going in, which we should try to influence.\n\nThe simplest and best example of how this could help is probably\nfreezing, and freeze debt. Currently, the visibility map interacts\nwith vacuum_freeze_min_age in a way that allows unfrozen all-visible\npages to accumulate. These pages won't be frozen until the next\naggressive VACUUM. But there is no fixed relationship between the\nnumber of XIDs consumed by the system (per unit of wallclock time) and\nthe number of unfrozen all-visible pages (over the same duration). 
So\nwe might end up having to freeze an absolutely enormous number of\npages in the eventual aggressive vacuum. We also might not -- it's\nreally hard to predict, for reasons that just don't make much sense.\n\nThere are a few things we could do here, but having a sense of history\nseems like the important part. If (say) the table exceeds a certain\nsize, and the number of all-visible pages grows and grows (without any\nfreezing taking place), then we should \"proactively\" freeze at least\nsome of the unfrozen all-visible pages in earlier VACUUM operations.\nIn other words, we should (at the very least) spread out the burden of\nfreezing those pages over time, while being careful to not pay too\nmuch more than we would with the old approach if and when the workload\ncharacteristics change again.\n\nMore generally, I think that we should blur the distinction between\naggressive and non-aggressive autovacuum. Sure, we'd still need VACUUM\nto \"behave aggressively\" in some sense, but that could all happen\ndynamically, without committing to a particular course of action until\nthe last moment -- being able to change our minds at the last minute\ncan be very valuable, even though we probably won't change our minds\ntoo often.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 8 Jul 2022 11:18:58 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "On 2022-07-09 03:18, Peter Geoghegan wrote:\n> On Fri, Jul 8, 2022 at 10:47 AM Alvaro Herrera \n> <alvherre@alvh.no-ip.org> wrote:\n>> Saving some sort of history would be much more useful, but of course a\n>> lot more work.\n\nThank you for the comments!\nYes, having some sort of history would be ideal in this case.\nHowever, I am not sure how to implement those features at this moment, \nso I will take some time to consider.\n\nAt the same time, I think having this metrics exposed in the \npg_stat_all_tables 
comes in handy when tuning the \nmaintenance_work_mem/autovacuum_work_mem even though it shows the value \nof only last vacuum/autovacuum.\n\nRegards,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 15 Jul 2022 17:49:44 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "On Fri, Jul 15, 2022 at 1:49 PM Ken Kato <katouknl@oss.nttdata.com> wrote:\n\n> On 2022-07-09 03:18, Peter Geoghegan wrote:\n> > On Fri, Jul 8, 2022 at 10:47 AM Alvaro Herrera\n> > <alvherre@alvh.no-ip.org> wrote:\n> >> Saving some sort of history would be much more useful, but of course a\n> >> lot more work.\n>\n> Thank you for the comments!\n> Yes, having some sort of history would be ideal in this case.\n> However, I am not sure how to implement those features at this moment,\n> so I will take some time to consider.\n>\n> At the same time, I think having this metrics exposed in the\n> pg_stat_all_tables comes in handy when tuning the\n> maintenance_work_mem/autovacuum_work_mem even though it shows the value\n> of only last vacuum/autovacuum.\n>\n> Regards,\n>\n> --\n> Ken Kato\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n>\n> Regression is failing on all platforms; please correct that and resubmit\nthe patch.\n\n[06:17:08.194] Failed test: 2\n[06:17:08.194] Non-zero exit status: 1\n[06:17:08.194] Files=33, Tests=411, 167 wallclock secs ( 0.20 usr 0.05 sys\n+ 37.96 cusr 21.61 csys = 59.82 CPU)\n[06:17:08.194] Result: FAIL\n[06:17:08.194] make[2]: *** [Makefile:23: check] Error 1\n[06:17:08.194] make[1]: *** [Makefile:52: check-recovery-recurse] Error 2\n[06:17:08.194] make: *** [GNUmakefile:71: check-world-src/test-recurse]\nError 2\n\n\n-- \nIbrar Ahmed\n\nOn Fri, Jul 15, 2022 at 1:49 PM Ken Kato 
<katouknl@oss.nttdata.com> wrote:On 2022-07-09 03:18, Peter Geoghegan wrote:\n> On Fri, Jul 8, 2022 at 10:47 AM Alvaro Herrera \n> <alvherre@alvh.no-ip.org> wrote:\n>> Saving some sort of history would be much more useful, but of course a\n>> lot more work.\n\nThank you for the comments!\nYes, having some sort of history would be ideal in this case.\nHowever, I am not sure how to implement those features at this moment, \nso I will take some time to consider.\n\nAt the same time, I think having this metrics exposed in the \npg_stat_all_tables comes in handy when tuning the \nmaintenance_work_mem/autovacuume_work_mem even though it shows the value \nof only last vacuum/autovacuum.\n\nRegards,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\nRegression is failing on all platforms; please correct that and resubmit the patch.[06:17:08.194] Failed test: 2[06:17:08.194] Non-zero exit status: 1[06:17:08.194] Files=33, Tests=411, 167 wallclock secs ( 0.20 usr 0.05 sys + 37.96 cusr 21.61 csys = 59.82 CPU)[06:17:08.194] Result: FAIL[06:17:08.194] make[2]: *** [Makefile:23: check] Error 1[06:17:08.194] make[1]: *** [Makefile:52: check-recovery-recurse] Error 2[06:17:08.194] make: *** [GNUmakefile:71: check-world-src/test-recurse] Error 2-- Ibrar Ahmed", "msg_date": "Wed, 7 Sep 2022 12:45:58 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "> Regression is failing on all platforms; please correct that and\n> resubmit the patch.\n\nHi,\n\nThank you for the review!\nI fixed it and resubmitting the patch.\n\nRegards,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 16 Sep 2022 13:23:06 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add 
last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "\n\nOn 2022/09/16 13:23, Ken Kato wrote:\n>> Regression is failing on all platforms; please correct that and\n>> resubmit the patch.\n> \n> Hi,\n> \n> Thank you for the review!\n> I fixed it and resubmitting the patch.\n\nCould you tell me why the number of index scans should be tracked for\neach table? Instead, isn't it enough to have one global counter, to\ncheck whether the current setting of maintenance_work_mem is sufficient\nor not? That is, I'm thinking of having something like pg_stat_vacuum view\nthat reports, for example, the number of vacuum runs, the total\nnumber of index scans, the maximum number of index scans by one\nvacuum run, the number of cancellations of vacuum because of\nlock conflicts, etc. If so, when these global counters are high or\nincreasing, we can think that it may be worth tuning maintenance_work_mem.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 16 Sep 2022 14:52:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "Hi,\n\nOn 2022-09-16 13:23:06 +0900, Ken Kato wrote:\n> Thank you for the review!\n> I fixed it and resubmitting the patch.\n\ncfbot flags that the docs aren't valid:\nhttps://cirrus-ci.com/task/5309377937670144?logs=docs_build#L295\n[15:05:39.683] monitoring.sgml:4574: parser error : Opening and ending tag mismatch: entry line 4567 and row\n[15:05:39.683] </row>\n[15:05:39.683] ^\n\n\nThe problem is that you're not closing the <entry>\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Sep 2022 08:28:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "On 2022-Sep-16, Fujii Masao wrote:\n\n> 
Could you tell me why the number of index scans should be tracked for\n> each table? Instead, isn't it enough to have one global counter, to\n> check whether the current setting of maintenance_work_mem is sufficient\n> or not? That is, I'm thinking to have something like pg_stat_vacuum view\n> that reports, for example, the number of vacuum runs, the total\n> number of index scans, the maximum number of index scans by one\n> vacuum run, the number of cancellation of vacuum because of\n> lock conflicts, etc. If so, when these global counters are high or\n> increasing, we can think that it may worth tuning maintenance_work_mem.\n\nI think that there are going to be cases where some tables in a database\ndefinitely require multiple index scans no matter what; but you\ndefinitely want to know how many occurred for others, not so highly\ntrafficked tables. So I *think* a single counter across the whole\ndatabase might not be sufficient.\n\nThe way I imagine using this (and I haven't operated databases in quite\na while so this may be all wet) is that I would have a report of which\ntables have the highest numbers of indexscans, then study the detailed\nvacuum reports for those tables as a way to change autovacuum_work_mem.\n\n\nOn the other hand, we have an absolute high cap of 1 GB for autovacuum's\nwork_mem, and many systems are already using that as the configured\nvalue. Maybe trying to fine-tune it is a waste of time. If a 1TB table\nsays that it had 4 index scans, what are you going to do about it? It's\na lost cause. It sounds like we need more code changes so that more\nmemory can be used; and also changes so that that memory is used more\nefficiently. 
We had a patch for this, I don't know if that was\ncommitted already.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n\n\n", "msg_date": "Tue, 27 Sep 2022 10:11:19 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "I applied this patch in my local environment and would like to reviewthe same before 14-October-2022 with some test data\r\n\r\npostgres=# \\d pg_stat_all_tables\r\n View \"pg_catalog.pg_stat_all_tables\"\r\n Column | Type | Collation | Nullable | Default\r\n-------------------------+--------------------------+-----------+----------+---------\r\n relid | oid | | |\r\n schemaname | name | | |\r\n relname | name | | |\r\n seq_scan | bigint | | |\r\n seq_tup_read | bigint | | |\r\n idx_scan | bigint | | |\r\n idx_tup_fetch | bigint | | |\r\n n_tup_ins | bigint | | |\r\n n_tup_upd | bigint | | |\r\n n_tup_del | bigint | | |\r\n n_tup_hot_upd | bigint | | |\r\n n_live_tup | bigint | | |\r\n n_dead_tup | bigint | | |\r\n n_mod_since_analyze | bigint | | |\r\n n_ins_since_vacuum | bigint | | |\r\n last_vacuum | timestamp with time zone | | |\r\n last_autovacuum | timestamp with time zone | | |\r\n last_analyze | timestamp with time zone | | |\r\n last_autoanalyze | timestamp with time zone | | |\r\n vacuum_count | bigint | | |\r\n autovacuum_count | bigint | | |\r\n analyze_count | bigint | | |\r\n autoanalyze_count | bigint | | |\r\n last_vacuum_index_scans | bigint | | |\r\n\r\npostgres=# select version();\r\n version\r\n----------------------------------------------------------------------------------------------------------\r\n PostgreSQL 15beta4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\r\n(1 row)", "msg_date": "Sun, 02 Oct 2022 06:56:04 +0000", "msg_from": "Kshetrapaldesi 
Tutika <kshetra1@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "> The problem is that you're not closing the <entry>\n\nThank you for the reviews and comments.\nI closed the <entry> so that the problem should be fixed now.\n\nRegards,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 04 Oct 2022 04:49:57 +0000", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" }, { "msg_contents": "On Tue, 4 Oct 2022 at 10:20, Ken Kato <katouknl@oss.nttdata.com> wrote:\n>\n> > The problem is that you're not closing the <entry>\n>\n> Thank you for the reviews and comments.\n> I closed the <entry> so that the problem should be fixed now.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\ne351f85418313e97c203c73181757a007dfda6d0 ===\n=== applying patch ./show_index_scans_in_pg_stat_all_tables_v3.patch\npatching file src/backend/utils/activity/pgstat_relation.c\nHunk #1 succeeded at 209 (offset 1 line).\nHunk #2 FAILED at 232.\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/backend/utils/activity/pgstat_relation.c.rej\npatching file src/include/pgstat.h\nHunk #1 FAILED at 366.\n1 out of 2 hunks FAILED -- saving rejects to file src/include/pgstat.h.rej\npatching file src/test/regress/expected/rules.out\nHunk #1 succeeded at 1800 (offset 8 lines).\nHunk #2 succeeded at 2145 (offset 11 lines).\nHunk #3 succeeded at 2193 (offset 14 lines).\n\n[1] - http://cfbot.cputube.org/patch_41_3756.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 3 Jan 2023 16:10:58 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add last_vacuum_index_scans in pg_stat_all_tables" } ]
[ { "msg_contents": "Hi team, \n\n\n      We are using a script to install Postgres from source, the script works fine in ubuntu and Mac(intel) but mostly fails(works sometimes) in Mac M1. \n\n\nConfigure and make world works fine. But fails during make install-world.\n\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C ./src/backend generated-headers\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C catalog distprep generated-header-symlinks\n\nmake[2]: Nothing to be done for `distprep'.\n\nmake[2]: Nothing to be done for `generated-header-symlinks'.\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C utils distprep generated-header-symlinks\n\nmake[2]: Nothing to be done for `distprep'.\n\nmake[2]: Nothing to be done for `generated-header-symlinks'.\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C doc install\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C src install\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C sgml install\n\n/bin/sh ../../../config/install-sh -c -d '/Users/sairam/work/postgresql-11.14/share/doc/'/html '/Users/sairam/work/postgresql-11.14/share/man'/man1 '/Users/sairam/work/postgresql-11.14/share/man'/man3 '/Users/sairam/work/postgresql-11.14/share/man'/man7\n\ncp -R `for f in ./html; do test -r $f && echo $f && break; done` '/Users/sairam/work/postgresql-11.14/share/doc/'\n\ncp -R `for f in ./man1; do test -r $f && echo $f && break; done` `for f in ./man3; do test -r $f && echo $f && break; done` `for f in ./man7; do test -r $f && echo $f && break; done` '/Users/sairam/work/postgresql-11.14/share/man'\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C src install\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C common install\n\n/bin/sh ../../config/install-sh -c -d '/Users/sairam/work/postgresql-11.14/lib'\n\n/usr/bin/install -c -m 644  libpgcommon.a 
'/Users/sairam/work/postgresql-11.14/lib/libpgcommon.a'\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C port install\n\n/bin/sh ../../config/install-sh -c -d '/Users/sairam/work/postgresql-11.14/lib'\n\n/usr/bin/install -c -m 644  libpgport.a '/Users/sairam/work/postgresql-11.14/lib/libpgport.a'\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C timezone install\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C ../../src/port all\n\nmake[3]: Nothing to be done for `all'.\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C ../../src/common all\n\nmake[3]: Nothing to be done for `all'.\n\n/bin/sh ../../config/install-sh -c -d '/Users/sairam/work/postgresql-11.14/share'\n\n./zic -d '/Users/sairam/work/postgresql-11.14/share/timezone' -p 'US/Eastern' -b fat  ./data/tzdata.zi\n\nmake[2]: *** [install] Killed: 9\n\nmake[1]: *** [install-timezone-recurse] Error 2\n\nmake: *** [install-world-src-recurse] Error 2\n\n2\n\n\n\n\n\n\nIt's not like it fails every time; sometimes the same script works just fine. Sometimes after a few retries it works.\n\n\n\nI have also noticed that, if it works without issue, binaries generated don't work immediately. i.e. when I tried to query pg_config, the command waits for some time and gets killed. After a few retries, it works there onwards.\n\n\n\nI'm wondering what could be the issue. I'm attaching the script to the same, kindly go through it and help me understand the issue.\n\n\n\nPostgres Version: 11.14\nMachine details: MacBook Pro (13-inch, M1, 2020), Version 12.4\n\n\nRegards\nG. 
Sai Ram", "msg_date": "Mon, 04 Jul 2022 22:44:51 +0530", "msg_from": "Gaddam Sai Ram <gaddamsairam.n@zohocorp.com>", "msg_from_op": true, "msg_subject": "make install-world fails sometimes in Mac M1" }, { "msg_contents": "Gaddam Sai Ram <gaddamsairam.n@zohocorp.com> writes:\n>       We are using a script to install Postgres from source, the script works fine in ubuntu and Mac(intel) but mostly fails(works sometimes) in Mac M1. \n\nWe have developers (including me) and buildfarm machines using M1 Macs,\nand nobody else is reporting any big problem with them, so I don't believe\nthat this is specifically due to that.\n\n> make[2]: *** [install] Killed: 9\n\nkill -9 is not something that would happen internally to the install\nprocess. My guess is that that is interference from some external agent.\nPerhaps you have some resource-consumption-limiting daemon installed on\nthat machine, and it's deciding that the command ran too long?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Jul 2022 13:45:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: make install-world fails sometimes in Mac M1" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote ---\n\n > We have developers (including me) and buildfarm machines using M1 Macs , \n\n > and nobody else is reporting any big problem with them, so I don't believe \n\n > that this is specifically due to that. \n\n\n\n      Even we don't have any problem when we run commands via terminal. Problem occurs only when we run as a part of script.\n\n      We have tried this script in 5 Mac m1 machines and it is the same issue everywhere.\n\n\n\n > kill -9 is not something that would happen internally to the install \n\n > process. My guess is that that is interference from some external agent. \n\n > Perhaps you have some resource-consumption-limiting daemon installed on \n\n > that machine, and it's deciding that the command ran too long? 
\n\n\n\n      I don't think of any other external agent other than the anti-virus software running at that moment. And I also don't think that it will cause this issue.\n\n\n\n      If possible, please do try this script(attached as screenshot as well as pastebin link)\n\n      https://pastebin.com/qwqYHcvA\n\n      \n\n      Steps:\n\n      chmod +x install_pg.sh\n\n      ./install_pg.sh -i <install_dir>\n\n\n\n         \n\n\nThank you,\n\nG. Sai Ram", "msg_date": "Mon, 11 Jul 2022 21:41:25 +0530", "msg_from": "Gaddam Sai Ram <gaddamsairam.n@zohocorp.com>", "msg_from_op": true, "msg_subject": "Re: make install-world fails sometimes in Mac M1" }, { "msg_contents": "On 2022-Jul-11, Gaddam Sai Ram wrote:\n\n>       Even we don't have any problem when we run commands via\n> terminal. Problem occurs only when we run as a part of script.\n\nIt must be a problem induced by the shell used to run the script, then.\nWhat is it? The script itself doesn't say.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"¿Cómo puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)\n\n\n", "msg_date": "Wed, 13 Jul 2022 16:56:14 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: make install-world fails sometimes in Mac M1" }, { "msg_contents": "> It must be a problem induced by the shell used to run the script, then. \n\n> What is it? The script itself doesn't say. \n\n\n\nTried with,\n1. Bash shell\n2. zsh shell\n3. Started terminal via rosetta(Again with both bash and zsh)\n\n\nSame issue in all 3 cases.\n\n\nRegards\nG. Sai Ram\n\n\n\n\n\n\n\n---- On Wed, 13 Jul 2022 20:26:14 +0530 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote ---\n\n\n\nOn 2022-Jul-11, Gaddam Sai Ram wrote: \n \n>       Even we don't have any problem when we run commands via \n> terminal. Problem occurs only when we run as a part of script. 
\n \nIt must be a problem induced by the shell used to run the script, then. \nWhat is it? The script itself doesn't say. \n \n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/ \n\"¿Cómo puedes confiar en algo que pagas y que no ves, \ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)\n> It must be a problem induced by the shell used to run the script, then. > What is it? The script itself doesn't say. Tried with,1. Bash shell2. zsh shell3. Started terminal via rosetta(Again with both bash and zsh)Same issue in all 3 cases.RegardsG. Sai Ram---- On Wed, 13 Jul 2022 20:26:14 +0530 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote ---On 2022-Jul-11, Gaddam Sai Ram wrote: >       Even we don't have any problem when we run commands via > terminal. Problem occurs only when we run as a part of script. It must be a problem induced by the shell used to run the script, then. What is it? The script itself doesn't say. -- Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/ \"¿Cómo puedes confiar en algo que pagas y que no ves, y no confiar en algo que te dan y te lo muestran?\" (Germán Poo)", "msg_date": "Mon, 18 Jul 2022 16:54:47 +0530", "msg_from": "Gaddam Sai Ram <gaddamsairam.n@zohocorp.com>", "msg_from_op": true, "msg_subject": "Re: make install-world fails sometimes in Mac M1" } ]
[ { "msg_contents": "In reviewing Peter's patch to auto-generate the backend/nodes\nsupport files, I compared what the patch's script produces to\nwhat is in the code now. I found several discrepancies in the\nrecently-added parse node types for JSON functions, and as far\nas I can see every one of those discrepancies is an error in\nthe existing code. Some of them are relatively harmless\n(e.g. COPY_LOCATION_FIELD isn't really different from\nCOPY_SCALAR_FIELD), but some of them definitely are live bugs.\nI propose the attached patch.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 04 Jul 2022 21:23:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Bugs in copyfuncs/equalfuncs support for JSON node types" }, { "msg_contents": "On Mon, Jul 4, 2022 at 6:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> In reviewing Peter's patch to auto-generate the backend/nodes\n> support files, I compared what the patch's script produces to\n> what is in the code now. I found several discrepancies in the\n> recently-added parse node types for JSON functions, and as far\n> as I can see every one of those discrepancies is an error in\n> the existing code. Some of them are relatively harmless\n> (e.g. COPY_LOCATION_FIELD isn't really different from\n> COPY_SCALAR_FIELD), but some of them definitely are live bugs.\n> I propose the attached patch.\n>\n> regards, tom lane\n>\n> Hi,\nPatch looks good to me.\n\nThanks\n\nOn Mon, Jul 4, 2022 at 6:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:In reviewing Peter's patch to auto-generate the backend/nodes\nsupport files, I compared what the patch's script produces to\nwhat is in the code now.  I found several discrepancies in the\nrecently-added parse node types for JSON functions, and as far\nas I can see every one of those discrepancies is an error in\nthe existing code.  Some of them are relatively harmless\n(e.g. 
COPY_LOCATION_FIELD isn't really different from\nCOPY_SCALAR_FIELD), but some of them definitely are live bugs.\nI propose the attached patch.\n\n                        regards, tom lane\nHi,Patch looks good to me.Thanks", "msg_date": "Mon, 4 Jul 2022 18:48:48 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Bugs in copyfuncs/equalfuncs support for JSON node types" }, { "msg_contents": "On Mon, Jul 04, 2022 at 09:23:08PM -0400, Tom Lane wrote:\n> In reviewing Peter's patch to auto-generate the backend/nodes\n> support files, I compared what the patch's script produces to\n> what is in the code now. I found several discrepancies in the\n> recently-added parse node types for JSON functions, and as far\n> as I can see every one of those discrepancies is an error in\n> the existing code. Some of them are relatively harmless\n> (e.g. COPY_LOCATION_FIELD isn't really different from\n> COPY_SCALAR_FIELD), but some of them definitely are live bugs.\n> I propose the attached patch.\n\nDo the missing fields indicate a deficiency in test coverage ?\n_copyJsonTablePlan.pathname and _equalJsonTable.plan.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 4 Jul 2022 21:18:57 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Bugs in copyfuncs/equalfuncs support for JSON node types" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Do the missing fields indicate a deficiency in test coverage ?\n> _copyJsonTablePlan.pathname and _equalJsonTable.plan.\n\nYeah, I'd say so, but I think constructing a test case to prove\nit's broken might be more trouble than it's worth --- particularly\nseeing that we're about to automate this stuff. 
Because of that,\nI wouldn't even be really concerned about these bugs in HEAD; but\nthis needs to be back-patched into v15.\n\nThe existing COPY_PARSE_PLAN_TREES logic purports to test this\narea, but it fails to notice these bugs for a few reasons:\n\n* JsonTable.lateral: COPY_PARSE_PLAN_TREES itself failed to detect\nthis problem because of matching omissions in _copyJsonTable and\n_equalJsonTable. But the lack of any follow-on failure implies\nthat we don't have any test cases where the lateral flag is significant.\nMaybe that means the field is useless? This one would be worth a closer\nlook, perhaps.\n\n* JsonTableColumn.format: this scalar-instead-of-deep-copy bug\nwould only be detectable if you were able to clobber the original\nparse tree after copying. I have no ideas about an easy way to\ndo that. It'd surely bite somebody in the field someday, but\nmaking a reproducible test is way harder.\n\n* JsonTable.plan: to detect the missed comparison, you'd have to\nbuild a test case where comparing two such trees actually made\na visible difference; which would require a fair amount of thought\nI fear. IIUC this node type will only appear down inside jointrees,\nwhich we don't usually do comparisons on.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Jul 2022 22:40:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bugs in copyfuncs/equalfuncs support for JSON node types" } ]
[ { "msg_contents": "Hi all,\n\nWhile looking at the buildfarm logs, I have noticed the following\nthing:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=morepork&dt=2022-07-05%2002%3A45%3A32&stg=recovery-check\n\npostgres:/usr/local/lib/libldap_r.so.13.2:\n/usr/local/lib/libldap.so.13.2 : WARNING:\nsymbol(ldap_int_global_options) size mismatch, relink your program\n\nThat seems to be pretty ancient, not related to aff45c8 as logs from\ntwo months ago also show this warning.\n\nThanks,\n--\nMichael", "msg_date": "Tue, 5 Jul 2022 12:47:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Linking issue with libldap on morepork (OpenBSD 6.9)" } ]
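One way to see where a mixed `libldap`/`libldap_r` linkage comes from is to inspect the binary's dynamic dependencies. A hedged sketch — the binary path is a placeholder (point it at the installed `postgres` binary), and `ldd` output differs between OpenBSD and Linux:

```shell
# Sketch: list which ldap libraries a binary pulls in.  The OpenBSD
# warning above is typically emitted when both libldap and libldap_r
# end up in one process with mismatched symbol sizes.
bin="${1:-$(command -v ls)}"    # placeholder; use your postgres path
ldd "$bin" 2>/dev/null | grep -i ldap || echo "no ldap libraries linked"
```

Seeing both libraries listed for one binary would confirm the relink warning's cause rather than anything introduced by a recent commit.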
[ { "msg_contents": "I found that provolatile attribute of to_timestamp in pg_proc is\nwrong:\n\ntest=# select provolatile, proargtypes from pg_proc where proname = 'to_timestamp' and proargtypes[0] = 701;\n provolatile | proargtypes \n-------------+-------------\n i | 701\n(1 row)\n\n'i' (immutable) is clearly wrong since the function's return value can\nbe changed depending on the time zone settings.\n\nActually the manual says functions depending on time zone settings\nshould be labeled STABLE.\n\nhttps://www.postgresql.org/docs/14/xfunc-volatility.html\n\n\"A common error is to label a function IMMUTABLE when its results\ndepend on a configuration parameter. For example, a function that\nmanipulates timestamps might well have results that depend on the\nTimeZone setting. For safety, such functions should be labeled STABLE\ninstead.\"\n\nIt's interesting that the two-argument form of to_timestamp has the correct\nattribute value ('s': stable) for provolatile in pg_proc.\n\nDo we want to fix this for PG16? I think it's too late for 15.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 05 Jul 2022 17:29:57 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Wrong provolatile value for to_timestamp (1 argument)" }, { "msg_contents": "On Tue, 2022-07-05 at 17:29 +0900, Tatsuo Ishii wrote:\n> I found that provolatile attribute of to_timestamp in pg_proc is\n> wrong:\n> \n> test=# select provolatile, proargtypes from pg_proc where proname = 'to_timestamp' and proargtypes[0] = 701;\n>  provolatile | proargtypes \n> -------------+-------------\n>  i           | 701\n> (1 row)\n> \n> 'i' (immutable) is clearly wrong s\n\nAre you sure? I'd say that \"to_timestamp(double precision)\" always\nproduces the same timestamp for the same argument. 
What changes with\nthe setting of \"timezone\" is how that timestamp is converted to a\nstring, but that's a different affair.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 05 Jul 2022 10:44:44 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Wrong provolatile value for to_timestamp (1 argument)" }, { "msg_contents": "> Are you sure? I'd say that \"to_timestamp(double precision)\" always\n> produces the same timestamp for the same argument. What changes with\n> the setting of \"timezone\" is how that timestamp is converted to a\n> string, but that's a different affair.\n\nOf course the internal representation of timestamp with time zone data\ntype is not affected by the time zone setting. But why other form of\nto_timestamp is labeled as stable? If your theory is correct, then\nother form of to_timestamp shouldn't be labeled immutable as well?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 05 Jul 2022 19:37:02 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Wrong provolatile value for to_timestamp (1 argument)" }, { "msg_contents": "On Tue, 2022-07-05 at 19:37 +0900, Tatsuo Ishii wrote:\n> > Are you sure?  I'd say that \"to_timestamp(double precision)\" always\n> > produces the same timestamp for the same argument.  What changes with\n> > the setting of \"timezone\" is how that timestamp is converted to a\n> > string, but that's a different affair.\n> \n> Of course the internal representation of timestamp with time zone data\n> type is not affected by the time zone setting. But why other form of\n> to_timestamp is labeled as stable? 
If your theory is correct, then\n> other form of to_timestamp shouldn't be labeled immutable as well?\n\nThe result of the two-argument form of \"to_timestamp\" can depend on\nthe setting of \"lc_time\":\n\ntest=> SET lc_time = 'en_US.utf8';\nSET\ntest=> SELECT to_timestamp('2022-July-05', 'YYYY-TMMonth-DD');\n to_timestamp \n════════════════════════\n 2022-07-05 00:00:00+02\n(1 row)\n\ntest=> SET lc_time = 'de_DE.utf8';\nSET\ntest=> SELECT to_timestamp('2022-July-05', 'YYYY-TMMonth-DD');\nERROR: invalid value \"July-05\" for \"Month\"\nDETAIL: The given value did not match any of the allowed values for this field.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 05 Jul 2022 16:24:16 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Wrong provolatile value for to_timestamp (1 argument)" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Tue, 2022-07-05 at 19:37 +0900, Tatsuo Ishii wrote:\n>> Of course the internal representation of timestamp with time zone data\n>> type is not affected by the time zone setting. But why other form of\n>> to_timestamp is labeled as stable? 
If your theory is correct, then\n>> other form of to_timestamp shouldn't be labeled immutable as well?\n\n> The result of the two-argument form of \"to_timestamp\" can depend on\n> the setting of \"lc_time\":\n\nIt also depends on the session's timezone setting, in a way that\nthe single-argument form does not.\n\nregression=# show timezone;\n TimeZone \n------------------\n America/New_York\n(1 row)\n\nregression=# select to_timestamp(0);\n to_timestamp \n------------------------\n 1969-12-31 19:00:00-05\n(1 row)\n\nregression=# select to_timestamp('1970-01-01', 'YYYY-MM-DD');\n to_timestamp \n------------------------\n 1970-01-01 00:00:00-05\n(1 row)\n\nregression=# set timezone = 'utc';\nSET\nregression=# select to_timestamp(0);\n to_timestamp \n------------------------\n 1970-01-01 00:00:00+00\n(1 row)\n\nregression=# select to_timestamp('1970-01-01', 'YYYY-MM-DD');\n to_timestamp \n------------------------\n 1970-01-01 00:00:00+00\n(1 row)\n\nThe two results of to_timestamp(0) represent the same UTC\ninstant, but the other two are different instants.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Jul 2022 10:33:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Wrong provolatile value for to_timestamp (1 argument)" } ]
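The catalog state and the behaviour Tom and Laurenz demonstrate can be re-checked in any session with a sketch like the following (psql; the output comments reflect the results reported in the thread and may vary by server version):

```sql
-- List every to_timestamp overload with its volatility label.
SELECT oid::regprocedure AS signature, provolatile
FROM pg_proc
WHERE proname = 'to_timestamp';
-- to_timestamp(double precision) -> 'i' (immutable)
-- to_timestamp(text, text)       -> 's' (stable)

-- The one-argument form maps its input to one fixed instant; only the
-- *display* of that instant changes with the timezone setting:
SET timezone = 'UTC';
SELECT to_timestamp(0) = TIMESTAMPTZ '1970-01-01 00:00:00+00';  -- true
SET timezone = 'America/New_York';
SELECT to_timestamp(0) = TIMESTAMPTZ '1970-01-01 00:00:00+00';  -- still true

-- The two-argument form yields a *different instant* per timezone
-- (and can also depend on lc_time), hence its STABLE label:
SELECT to_timestamp('1970-01-01', 'YYYY-MM-DD');  -- local midnight, varies
```

This is the crux of Tom's reply: the two `to_timestamp(0)` results are the same UTC instant, so the one-argument form can legitimately stay immutable, while the two-argument form cannot.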
[ { "msg_contents": "Hello hackers,\n \nIt seems useful to have [OR REPLACE] option in CREATE OPERATOR statement, as in CREATE FUNCTION. This option may be good for writing extension update scripts, to avoid errors with re-creating the same operator.\n \nBecause of cached query plans, only RESTRICT and JOIN options can be changed for existing operator, as in ALTER OPERATOR statement.\n(discussed here:  https://www.postgresql.org/message-id/flat/3348985.V7xMLFDaJO%40dinodell )\n \nThe attached patch will be proposed for September CF.\n \nBest regards,\n--\nSvetlana Derevyanko\nPostgres Professional:  http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 05 Jul 2022 11:40:23 +0300", "msg_from": "Svetlana Derevyanko <s.derevyanko@postgrespro.ru>", "msg_from_op": true, "msg_subject": "[PATCH] Optional OR REPLACE in CREATE OPERATOR statement" }, { "msg_contents": "Svetlana Derevyanko <s.derevyanko@postgrespro.ru> writes:\n> It seems useful to have [OR REPLACE] option in CREATE OPERATOR statement, as in CREATE FUNCTION. This option may be good for writing extension update scripts, to avoid errors with re-creating the same operator.\n\nNo, that's not acceptable. 
CREATE OR REPLACE should always produce\nexactly the same final state of the object, but in this case we cannot\nchange the underlying function if the operator already exists.\n\n(At least, not without writing a bunch of infrastructure to update\nexisting views/rules that might use the operator; which among other\nthings would create a lot of deadlock risks.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Jul 2022 11:29:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optional OR REPLACE in CREATE OPERATOR statement" }, { "msg_contents": ">Вторник, 5 июля 2022, 18:29 +03:00 от Tom Lane <tgl@sss.pgh.pa.us>:\n> \n>Svetlana Derevyanko < s.derevyanko@postgrespro.ru > writes:\n>> It seems useful to have [OR REPLACE] option in CREATE OPERATOR statement, as in CREATE FUNCTION. This option may be good for writing extension update scripts, to avoid errors with re-creating the same operator.\n>No, that's not acceptable. CREATE OR REPLACE should always produce\n>exactly the same final state of the object, but in this case we cannot\n>change the underlying function if the operator already exists.\n>\n>(At least, not without writing a bunch of infrastructure to update\n>existing views/rules that might use the operator; which among other\n>things would create a lot of deadlock risks.)\n>\n>regards, tom lane\nHello,\n \n> CREATE OR REPLACE should always produce exactly the same final state of the object,\n> but in this case we cannot change the underlying function if the operator already exists.\n   \nDo you mean that for existing operator CREATE OR REPLACE should be the same as DROP OPERATOR and CREATE OPERATOR,  with relevant re-creation of existing view/rules/..., using this operator? 
If yes, what exactly is wrong with  changing only RESTRICT and JOIN parameters (or is the problem in possible user`s confusion with attempts to change something more?). If no, could you, please, clarify what \"final state\" here means?\n \nAlso, if OR REPLACE is unacceptable, then what do you think about IF NOT EXISTS option?\n \nThanks,\n \n--\nSvetlana Derevyanko\nPostgres Professional:  http://www.postgrespro.com\nThe Russian Postgres Company\n Вторник, 5 июля 2022, 18:29 +03:00 от Tom Lane <tgl@sss.pgh.pa.us>: =?UTF-8?B?U3ZldGxhbmEgRGVyZXZ5YW5rbw==?= <s.derevyanko@postgrespro.ru> writes:> It seems useful to have [OR REPLACE] option in CREATE OPERATOR statement, as in CREATE FUNCTION. This option may be good for writing extension update scripts, to avoid errors with re-creating the same operator.No, that's not acceptable. CREATE OR REPLACE should always produceexactly the same final state of the object, but in this case we cannotchange the underlying function if the operator already exists.(At least, not without writing a bunch of infrastructure to updateexisting views/rules that might use the operator; which among otherthings would create a lot of deadlock risks.)regards, tom laneHello, > CREATE OR REPLACE should always produce exactly the same final state of the object,> but in this case we cannot change the underlying function if the operator already exists.   Do you mean that for existing operator CREATE OR REPLACE should be the same as DROP OPERATOR and CREATE OPERATOR,  with relevant re-creation of existing view/rules/..., using this operator? If yes, what exactly is wrong with  changing only RESTRICT and JOIN parameters (or is the problem in possible user`s confusion with attempts to change something more?). If no, could you, please, clarify what \"final state\" here means? Also, if OR REPLACE is unacceptable, then what do you think about IF NOT EXISTS option? 
Thanks, --Svetlana DerevyankoPostgres Professional: http://www.postgrespro.comThe Russian Postgres Company", "msg_date": "Wed, 06 Jul 2022 15:00:55 +0300", "msg_from": "=?UTF-8?B?U3ZldGxhbmEgRGVyZXZ5YW5rbw==?= <s.derevyanko@postgrespro.ru>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmVbMl06IFtQQVRDSF0gT3B0aW9uYWwgT1IgUkVQTEFDRSBpbiBDUkVBVEUg?=\n =?UTF-8?B?T1BFUkFUT1Igc3RhdGVtZW50?=" }, { "msg_contents": "Hi,\n\nSvetlana, yes, Tom means that CREATE OR REPLACE should always produce\nthe same result no matter which branch actually worked - CREATE or REPLACE.\nREPLACE case must produce exactly the same result as you've mentioned -\nDROP and CREATE.\n\nAs for IF NOT EXISTS option I agree, it seems a reasonable addition to\nsimplify\nerror handling in scripts, go on.\n\n\nOn Wed, Jul 6, 2022 at 3:01 PM Svetlana Derevyanko <\ns.derevyanko@postgrespro.ru> wrote:\n\n>\n>\n> Вторник, 5 июля 2022, 18:29 +03:00 от Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> =?UTF-8?B?U3ZldGxhbmEgRGVyZXZ5YW5rbw==?= <s.derevyanko@postgrespro.ru\n> <http:///compose?To=s.derevyanko@postgrespro.ru>> writes:\n> > It seems useful to have [OR REPLACE] option in CREATE OPERATOR\n> statement, as in CREATE FUNCTION. This option may be good for\n> writing extension update scripts, to avoid errors with re-creating the same\n> operator.\n>\n> No, that's not acceptable. 
CREATE OR REPLACE should always produce\n> exactly the same final state of the object, but in this case we cannot\n> change the underlying function if the operator already exists.\n>\n> (At least, not without writing a bunch of infrastructure to update\n> existing views/rules that might use the operator; which among other\n> things would create a lot of deadlock risks.)\n>\n> regards, tom lane\n>\n> Hello,\n>\n> > CREATE OR REPLACE should always produce exactly the same final state of\n> the object,\n> > but in this case we cannot change the underlying function if the\n> operator already exists.\n>\n> Do you mean that for existing operator CREATE OR REPLACE should be the\n> same as DROP OPERATOR and CREATE OPERATOR, with relevant re-creation of\n> existing view/rules/..., using this operator? If yes, what exactly is wrong\n> with changing only RESTRICT and JOIN parameters (or is the problem in\n> possible user`s confusion with attempts to change something more?). If no,\n> could you, please, clarify what \"final state\" here means?\n>\n> Also, if OR REPLACE is unacceptable, then what do you think about IF NOT\n> EXISTS option?\n>\n> Thanks,\n>\n> --\n> Svetlana Derevyanko\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi,Svetlana, yes, Tom means that CREATE OR REPLACE should always producethe same result no matter which branch actually worked - CREATE or REPLACE.REPLACE case must produce exactly the same result as you've mentioned -DROP and CREATE.As for IF NOT EXISTS option I agree, it seems a reasonable addition to simplifyerror handling in scripts, go on.On Wed, Jul 6, 2022 at 3:01 PM Svetlana Derevyanko <s.derevyanko@postgrespro.ru> wrote:\n Вторник, 5 июля 2022, 18:29 +03:00 от Tom Lane <tgl@sss.pgh.pa.us>: =?UTF-8?B?U3ZldGxhbmEgRGVyZXZ5YW5rbw==?= <s.derevyanko@postgrespro.ru> writes:> It seems useful to have [OR REPLACE] option in 
CREATE OPERATOR statement, as in CREATE FUNCTION. This option may be good for writing extension update scripts, to avoid errors with re-creating the same operator.No, that's not acceptable. CREATE OR REPLACE should always produceexactly the same final state of the object, but in this case we cannotchange the underlying function if the operator already exists.(At least, not without writing a bunch of infrastructure to updateexisting views/rules that might use the operator; which among otherthings would create a lot of deadlock risks.)regards, tom laneHello, > CREATE OR REPLACE should always produce exactly the same final state of the object,> but in this case we cannot change the underlying function if the operator already exists.   Do you mean that for existing operator CREATE OR REPLACE should be the same as DROP OPERATOR and CREATE OPERATOR,  with relevant re-creation of existing view/rules/..., using this operator? If yes, what exactly is wrong with  changing only RESTRICT and JOIN parameters (or is the problem in possible user`s confusion with attempts to change something more?). If no, could you, please, clarify what \"final state\" here means? Also, if OR REPLACE is unacceptable, then what do you think about IF NOT EXISTS option? Thanks, --Svetlana DerevyankoPostgres Professional: http://www.postgrespro.comThe Russian Postgres Company\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Mon, 12 Dec 2022 22:14:08 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re[2]: [PATCH] Optional OR REPLACE in CREATE OPERATOR statement" }, { "msg_contents": "On Tue, 5 Jul 2022 at 11:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> No, that's not acceptable. 
CREATE OR REPLACE should always produce\n> exactly the same final state of the object, but in this case we cannot\n> change the underlying function if the operator already exists.\n\nIt sounds like this patch isn't the direction to go in. I don't know\nif IF NOT EXISTS is better but that design discussion should probably\nhappen after this commitfest.\n\nI'm sorry but I guess I'll mark this patch Rejected.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Tue, 21 Mar 2023 23:29:12 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Optional OR REPLACE in CREATE OPERATOR statement" } ]
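Until an IF NOT EXISTS variant exists, the script-level workaround implied by Svetlana's use case is an explicit probe of pg_operator before creating. A hedged sketch — the operator name, argument types, and underlying function below are illustrative only, not from the proposed patch:

```sql
-- What extension update scripts can do today without
-- CREATE OPERATOR ... IF NOT EXISTS: probe the catalog first.
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1
        FROM pg_operator o
        JOIN pg_namespace n ON n.oid = o.oprnamespace
        WHERE o.oprname = '==='
          AND n.nspname = 'public'
          AND o.oprleft  = 'integer'::regtype
          AND o.oprright = 'integer'::regtype
    ) THEN
        CREATE OPERATOR public.=== (
            LEFTARG = integer,
            RIGHTARG = integer,
            FUNCTION = int4eq
        );
    END IF;
END
$$;

-- RESTRICT/JOIN estimators, by contrast, can already be changed on an
-- existing operator -- the one mutation Tom's objection does not block:
ALTER OPERATOR public.=== (integer, integer)
    SET (RESTRICT = eqsel, JOIN = eqjoinsel);
```

The DO-block approach is idempotent but silently keeps whatever definition already exists, which is precisely the semantic difference from CREATE OR REPLACE that the thread turns on.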
[ { "msg_contents": "Hi hackers,\n\nI created a patch to reuse tablesync workers and their replication slots\nfor more tables that are not synced yet. So that overhead of creating and\ndropping workers/replication slots can be reduced.\n\nCurrent version of logical replication has two steps: tablesync and apply.\nIn tablesync step, apply worker creates a tablesync worker for each table\nand those tablesync workers are killed when they're done with their\nassociated table. (the number of tablesync workers running at the same time\nis limited by \"max_sync_workers_per_subscription\")\nEach tablesync worker also creates a replication slot on publisher during\nits lifetime and drops the slot before exiting.\n\nThe purpose of this patch is getting rid of the overhead of\ncreating/killing a new worker (and replication slot) for each table.\nIt aims to reuse tablesync workers and their replication slots so that\ntablesync workers can copy multiple tables from publisher to subscriber\nduring their lifetime.\n\nThe benefits of reusing tablesync workers can be significant if tables are\nempty or close to empty.\nIn an empty table case, spawning tablesync workers and handling replication\nslots are where the most time is spent since the actual copy phase takes\ntoo little time.\n\n\nThe changes in the behaviour of tablesync workers with this patch as\nfollows:\n1- After tablesync worker is done with syncing the current table, it takes\na lock and fetches tables in init state\n2- it looks for a table that is not already being synced by another worker\nfrom the tables with init state\n3- If it founds one, updates its state for the new table and loops back to\nbeginning to start syncing\n4- If no table found, it drops the replication slot and exits\n\n\nWith those changes, I did some benchmarking to see if it improves anything.\nThis results compares this patch with the latest version of master branch.\n\"max_sync_workers_per_subscription\" is set to 2 as default.\nGot some 
results simply averaging timings from 5 consecutive runs for each\nbranch.\n\nFirst, tested logical replication with empty tables.\n10 tables\n----------------\n- master: 286.964 ms\n- the patch: 116.852 ms\n\n100 tables\n----------------\n- master: 2785.328 ms\n- the patch: 706.817 ms\n\n10K tables\n----------------\n- master: 39612.349 ms\n- the patch: 12526.981 ms\n\n\nAlso tried replication tables with some data\n10 tables loaded with 10MB data\n----------------\n- master: 1517.714 ms\n- the patch: 1399.965 ms\n\n100 tables loaded with 10MB data\n----------------\n- master: 16327.229 ms\n- the patch: 11963.696 ms\n\n\nThen loaded more data\n10 tables loaded with 100MB data\n----------------\n- master: 13910.189 ms\n- the patch: 14770.982 ms\n\n100 tables loaded with 100MB data\n----------------\n- master: 146281.457 ms\n- the patch: 156957.512\n\n\nIf tables are mostly empty, the improvement can be significant - up to 3x\nfaster logical replication.\nWith some data loaded, it can still be faster to some extent.\nWhen the table size increases more, the advantage of reusing workers\nbecomes insignificant.\n\n\nI would appreciate your comments and suggestions.Thanks in advance for\nreviewing.\n\nBest,\nMelih", "msg_date": "Tue, 5 Jul 2022 16:50:20 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Jul 5, 2022 at 7:20 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> I created a patch to reuse tablesync workers and their replication slots for more tables that are not synced yet. So that overhead of creating and dropping workers/replication slots can be reduced.\n>\n> Current version of logical replication has two steps: tablesync and apply.\n> In tablesync step, apply worker creates a tablesync worker for each table and those tablesync workers are killed when they're done with their associated table. 
(the number of tablesync workers running at the same time is limited by \"max_sync_workers_per_subscription\")\n> Each tablesync worker also creates a replication slot on publisher during its lifetime and drops the slot before exiting.\n>\n> The purpose of this patch is getting rid of the overhead of creating/killing a new worker (and replication slot) for each table.\n> It aims to reuse tablesync workers and their replication slots so that tablesync workers can copy multiple tables from publisher to subscriber during their lifetime.\n>\n> The benefits of reusing tablesync workers can be significant if tables are empty or close to empty.\n> In an empty table case, spawning tablesync workers and handling replication slots are where the most time is spent since the actual copy phase takes too little time.\n>\n>\n> The changes in the behaviour of tablesync workers with this patch as follows:\n> 1- After tablesync worker is done with syncing the current table, it takes a lock and fetches tables in init state\n> 2- it looks for a table that is not already being synced by another worker from the tables with init state\n> 3- If it founds one, updates its state for the new table and loops back to beginning to start syncing\n> 4- If no table found, it drops the replication slot and exits\n>\n\nHow would you choose the slot name for the table sync, right now it\ncontains the relid of the table for which it needs to perform sync?\nSay, if we ignore to include the appropriate identifier in the slot\nname, we won't be able to resue/drop the slot after restart of table\nsync worker due to an error.\n\n>\n> With those changes, I did some benchmarking to see if it improves anything.\n> This results compares this patch with the latest version of master branch. 
\"max_sync_workers_per_subscription\" is set to 2 as default.\n> Got some results simply averaging timings from 5 consecutive runs for each branch.\n>\n> First, tested logical replication with empty tables.\n> 10 tables\n> ----------------\n> - master: 286.964 ms\n> - the patch: 116.852 ms\n>\n> 100 tables\n> ----------------\n> - master: 2785.328 ms\n> - the patch: 706.817 ms\n>\n> 10K tables\n> ----------------\n> - master: 39612.349 ms\n> - the patch: 12526.981 ms\n>\n>\n> Also tried replication tables with some data\n> 10 tables loaded with 10MB data\n> ----------------\n> - master: 1517.714 ms\n> - the patch: 1399.965 ms\n>\n> 100 tables loaded with 10MB data\n> ----------------\n> - master: 16327.229 ms\n> - the patch: 11963.696 ms\n>\n>\n> Then loaded more data\n> 10 tables loaded with 100MB data\n> ----------------\n> - master: 13910.189 ms\n> - the patch: 14770.982 ms\n>\n> 100 tables loaded with 100MB data\n> ----------------\n> - master: 146281.457 ms\n> - the patch: 156957.512\n>\n>\n> If tables are mostly empty, the improvement can be significant - up to 3x faster logical replication.\n> With some data loaded, it can still be faster to some extent.\n>\n\nThese results indicate that it is a good idea, especially for very small tables.\n\n> When the table size increases more, the advantage of reusing workers becomes insignificant.\n>\n\nIt seems from your results that performance degrades for large\nrelations. 
Did you try to investigate the reasons for the same?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 6 Jul 2022 09:06:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jul 6, 2022 at 9:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> How would you choose the slot name for the table sync, right now it\n> contains the relid of the table for which it needs to perform sync?\n> Say, if we ignore to include the appropriate identifier in the slot\n> name, we won't be able to resue/drop the slot after restart of table\n> sync worker due to an error.\n\nI had a quick look into the patch and it seems it is using the worker\narray index instead of relid while forming the slot name, and I think\nthat make sense, because now whichever worker is using that worker\nindex can reuse the slot created w.r.t that index.\n\n> >\n> > With those changes, I did some benchmarking to see if it improves anything.\n> > This results compares this patch with the latest version of master branch. 
\"max_sync_workers_per_subscription\" is set to 2 as default.\n> > Got some results simply averaging timings from 5 consecutive runs for each branch.\n> >\n> > First, tested logical replication with empty tables.\n> > 10 tables\n> > ----------------\n> > - master: 286.964 ms\n> > - the patch: 116.852 ms\n> >\n> > 100 tables\n> > ----------------\n> > - master: 2785.328 ms\n> > - the patch: 706.817 ms\n> >\n> > 10K tables\n> > ----------------\n> > - master: 39612.349 ms\n> > - the patch: 12526.981 ms\n> >\n> >\n> > Also tried replication tables with some data\n> > 10 tables loaded with 10MB data\n> > ----------------\n> > - master: 1517.714 ms\n> > - the patch: 1399.965 ms\n> >\n> > 100 tables loaded with 10MB data\n> > ----------------\n> > - master: 16327.229 ms\n> > - the patch: 11963.696 ms\n> >\n> >\n> > Then loaded more data\n> > 10 tables loaded with 100MB data\n> > ----------------\n> > - master: 13910.189 ms\n> > - the patch: 14770.982 ms\n> >\n> > 100 tables loaded with 100MB data\n> > ----------------\n> > - master: 146281.457 ms\n> > - the patch: 156957.512\n> >\n> >\n> > If tables are mostly empty, the improvement can be significant - up to 3x faster logical replication.\n> > With some data loaded, it can still be faster to some extent.\n> >\n>\n> These results indicate that it is a good idea, especially for very small tables.\n>\n> > When the table size increases more, the advantage of reusing workers becomes insignificant.\n> >\n>\n> It seems from your results that performance degrades for large\n> relations. 
Did you try to investigate the reasons for the same?\n\nYeah, that would be interesting to know that why there is a drop in some cases.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Jul 2022 13:47:29 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jul 6, 2022 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jul 6, 2022 at 9:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > How would you choose the slot name for the table sync, right now it\n> > contains the relid of the table for which it needs to perform sync?\n> > Say, if we ignore to include the appropriate identifier in the slot\n> > name, we won't be able to resue/drop the slot after restart of table\n> > sync worker due to an error.\n>\n> I had a quick look into the patch and it seems it is using the worker\n> array index instead of relid while forming the slot name, and I think\n> that make sense, because now whichever worker is using that worker\n> index can reuse the slot created w.r.t that index.\n>\n\nI think that won't work because each time on restart the slot won't be\nfixed. Now, it is possible that we may drop the wrong slot if that\nstate of copying rel is SUBREL_STATE_DATASYNC. Also, it is possible\nthat while creating a slot, we fail because the same name slot already\nexists due to some other worker which has created that slot has been\nrestarted. Also, what about origin_name, won't that have similar\nproblems? 
Also, if the state is already SUBREL_STATE_FINISHEDCOPY, if\nthe slot is not the same as we have used in the previous run of a\nparticular worker, it may start WAL streaming from a different point\nbased on the slot's confirmed_flush_location.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 6 Jul 2022 14:48:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jul 6, 2022 at 2:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 6, 2022 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jul 6, 2022 at 9:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > How would you choose the slot name for the table sync, right now it\n> > > contains the relid of the table for which it needs to perform sync?\n> > > Say, if we ignore to include the appropriate identifier in the slot\n> > > name, we won't be able to resue/drop the slot after restart of table\n> > > sync worker due to an error.\n> >\n> > I had a quick look into the patch and it seems it is using the worker\n> > array index instead of relid while forming the slot name, and I think\n> > that make sense, because now whichever worker is using that worker\n> > index can reuse the slot created w.r.t that index.\n> >\n>\n> I think that won't work because each time on restart the slot won't be\n> fixed. Now, it is possible that we may drop the wrong slot if that\n> state of copying rel is SUBREL_STATE_DATASYNC.\n\nSo it will drop the previous slot the worker at that index was using,\nso it is possible that on that slot some relation was at\nSUBREL_STATE_FINISHEDCOPY or so and we will drop that slot. 
Because\nnow relid and replication slot association is not 1-1 so it would be\nwrong to drop based on the relstate which is picked by this worker.\nIn short it makes sense what you have pointed out.\n\nAlso, it is possible\n> that while creating a slot, we fail because the same name slot already\n> exists due to some other worker which has created that slot has been\n> restarted. Also, what about origin_name, won't that have similar\n> problems? Also, if the state is already SUBREL_STATE_FINISHEDCOPY, if\n> the slot is not the same as we have used in the previous run of a\n> particular worker, it may start WAL streaming from a different point\n> based on the slot's confirmed_flush_location.\n\nYeah this is also true, when a tablesync worker has to do catch up\nafter completing the copy then it might stream from the wrong lsn.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Jul 2022 16:10:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Amit and Dilip,\n\nThanks for the replies.\n\n\n> > I had a quick look into the patch and it seems it is using the worker\n> > array index instead of relid while forming the slot name\n>\n\nYes, I changed the slot names so they include slot index instead of\nrelation id.\nThis was needed because I aimed to separate replication slots from\nrelations.\n\nI think that won't work because each time on restart the slot won't be\n> fixed. Now, it is possible that we may drop the wrong slot if that\n> state of copying rel is SUBREL_STATE_DATASYNC. Also, it is possible\n> that while creating a slot, we fail because the same name slot already\n> exists due to some other worker which has created that slot has been\n> restarted. Also, what about origin_name, won't that have similar\n> problems? 
Also, if the state is already SUBREL_STATE_FINISHEDCOPY, if\n> the slot is not the same as we have used in the previous run of a\n> particular worker, it may start WAL streaming from a different point\n> based on the slot's confirmed_flush_location.\n>\n\nYou're right Amit. In case of a failure, tablesync phase of a relation may\ncontinue with different worker and replication slot due to this change in\nnaming.\nSeems like the same replication slot should be used from start to end for a\nrelation during tablesync. However, creating/dropping replication slots can\nbe a major overhead in some cases.\nIt would be nice if these slots are somehow reused.\n\nTo overcome this issue, I've been thinking about making some changes in my\npatch.\nSo far, my proposal would be as follows:\n\nSlot naming can be like pg_<sub_id>_<worker_pid> instead of\npg_<sub_id>_<slot_index>. This way each worker can use the same replication\nslot during their lifetime.\nBut if a worker is restarted, then it will switch to a new replication slot\nsince its pid has changed.\n\npg_subscription_rel catalog can store replication slot name for each\nnon-ready relation. Then we can find the slot needed for that particular\nrelation to complete tablesync.\nIf a worker syncs a relation without any error, everything works well and\nthis new replication slot column from the catalog will not be needed.\nHowever if a worker is restarted due to a failure, the previous run of that\nworker left its slot behind since it did not exit properly.\nAnd the restarted worker (with a different pid) will see that the relation\nis actually in SUBREL_STATE_FINISHEDCOPY and want to proceed for the\ncatchup step.\nThen the worker can look for that particular relation's replication slot\nfrom pg_subscription_rel catalog (slot name should be there since relation\nstate is not ready). And tablesync can proceed with that slot.\n\nThere might be some cases where some replication slots are left behind. 
An\nexample to such cases would be when the slot is removed from\npg_subscription_rel catalog and detached from any relation, but that slot\nactually couldn't be dropped for some reason. For such cases, a slot\ncleanup logic is needed. This cleanup can also be done by tablesync workers.\nWhenever a tablesync worker is created, it can look for existing\nreplication slots that do not belong to any relation and any worker (slot\nname has pid for that), and drop those slots if it finds any.\n\nWhat do you think about this new way of handling slots? Do you see any\npoints of concern?\n\nI'm currently working on adding this change into the patch. And would\nappreciate any comment.\n\nThanks,\nMelih\n\nHi Amit and Dilip,Thanks for the replies.  \n> I had a quick look into the patch and it seems it is using the worker\n> array index instead of relid while forming the slot name Yes, I changed the slot names so they include slot index instead of relation id. This was needed because I aimed to separate replication slots from relations.\nI think that won't work because each time on restart the slot won't be\nfixed. Now, it is possible that we may drop the wrong slot if that\nstate of copying rel is SUBREL_STATE_DATASYNC. Also, it is possible\nthat while creating a slot, we fail because the same name slot already\nexists due to some other worker which has created that slot has been\nrestarted. Also, what about origin_name, won't that have similar\nproblems? Also, if the state is already SUBREL_STATE_FINISHEDCOPY, if\nthe slot is not the same as we have used in the previous run of a\nparticular worker, it may start WAL streaming from a different point\nbased on the slot's confirmed_flush_location.You're right Amit. In case of a failure, tablesync phase of a relation may continue with different worker and replication slot due to this change in naming.Seems like the same replication slot should be used from start to end for a relation during tablesync. 
However, creating/dropping replication slots can be a major overhead in some cases.It would be nice if these slots are somehow reused.To overcome this issue, I've been thinking about making some changes in my patch. So far, my proposal would be as follows:Slot naming can be like pg_<sub_id>_<worker_pid> instead of pg_<sub_id>_<slot_index>. This way each worker can use the same replication slot during their lifetime. But if a worker is restarted, then it will switch to a new replication slot since its pid has changed.pg_subscription_rel catalog can store replication slot name for each non-ready relation. Then we can find the slot needed for that particular relation to complete tablesync. If a worker syncs a relation without any error, everything works well and this new replication slot column from the catalog will not be needed. However if a worker is restarted due to a failure, the previous run of that worker left its slot behind since it did not exit properly.And the restarted worker (with a different pid) will see that the relation is actually in  SUBREL_STATE_FINISHEDCOPY and want to proceed for the catchup step. Then the worker can look for that particular relation's replication slot from pg_subscription_rel catalog (slot name should be there since relation state is not ready). And tablesync can proceed with that slot.There might be some cases where some replication slots are left behind. An example to such cases would be when the slot is removed from pg_subscription_rel catalog and detached from any relation, but tha slot actually couldn't be dropped for some reason. For such cases, a slot cleanup logic is needed. This cleanup can also be done by tablesync workers.Whenever a tablesync worker is created, it can look for existing replication slots that do not belong to any relation and any worker (slot name has pid for that), and drop those slots if it finds any.What do you think about this new way of handling slots? Do you see any points of concern? 
I'm currently working on adding this change into the patch. And would appreciate any comment.Thanks,Melih", "msg_date": "Fri, 8 Jul 2022 19:56:23 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": ">\n> It seems from your results that performance degrades for large\n> relations. Did you try to investigate the reasons for the same?\n>\n\nI have not tried to investigate the performance degradation for large\nrelations yet.\nOnce I'm done with changes for the slot usage, I'll look into this and come\nwith more findings.\n\nThanks,\nMelih\n\nIt seems from your results that performance degrades for large\nrelations. Did you try to investigate the reasons for the same?I have not tried to investigate the performance degradation for large relations yet. Once I'm done with changes for the slot usage, I'll look into this and come with more findings.Thanks,Melih", "msg_date": "Fri, 8 Jul 2022 19:59:43 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jul 8, 2022 at 10:26 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n>> I think that won't work because each time on restart the slot won't be\n>> fixed. Now, it is possible that we may drop the wrong slot if that\n>> state of copying rel is SUBREL_STATE_DATASYNC. Also, it is possible\n>> that while creating a slot, we fail because the same name slot already\n>> exists due to some other worker which has created that slot has been\n>> restarted. Also, what about origin_name, won't that have similar\n>> problems? 
Also, if the state is already SUBREL_STATE_FINISHEDCOPY, if\n>> the slot is not the same as we have used in the previous run of a\n>> particular worker, it may start WAL streaming from a different point\n>> based on the slot's confirmed_flush_location.\n>\n>\n> You're right Amit. In case of a failure, tablesync phase of a relation may continue with different worker and replication slot due to this change in naming.\n> Seems like the same replication slot should be used from start to end for a relation during tablesync. However, creating/dropping replication slots can be a major overhead in some cases.\n> It would be nice if these slots are somehow reused.\n>\n> To overcome this issue, I've been thinking about making some changes in my patch.\n> So far, my proposal would be as follows:\n>\n> Slot naming can be like pg_<sub_id>_<worker_pid> instead of pg_<sub_id>_<slot_index>. This way each worker can use the same replication slot during their lifetime.\n> But if a worker is restarted, then it will switch to a new replication slot since its pid has changed.\n>\n\nI think using worker_pid also has similar risks of mixing slots from\ndifferent workers because after restart same worker_pid could be\nassigned to a totally different worker. Can we think of using a unique\n64-bit number instead? This will be allocated when each workers\nstarted for the very first time and after that we can refer catalog to\nfind it as suggested in the idea below.\n\n> pg_subscription_rel catalog can store replication slot name for each non-ready relation. Then we can find the slot needed for that particular relation to complete tablesync.\n>\n\nYeah, this is worth investigating. 
However, instead of storing the\nslot_name, we can store just the unique number (as suggested above).\nWe should use the same for the origin name as well.\n\n> If a worker syncs a relation without any error, everything works well and this new replication slot column from the catalog will not be needed.\n> However if a worker is restarted due to a failure, the previous run of that worker left its slot behind since it did not exit properly.\n> And the restarted worker (with a different pid) will see that the relation is actually in SUBREL_STATE_FINISHEDCOPY and want to proceed for the catchup step.\n> Then the worker can look for that particular relation's replication slot from pg_subscription_rel catalog (slot name should be there since relation state is not ready). And tablesync can proceed with that slot.\n>\n> There might be some cases where some replication slots are left behind. An example to such cases would be when the slot is removed from pg_subscription_rel catalog and detached from any relation, but tha slot actually couldn't be dropped for some reason. For such cases, a slot cleanup logic is needed. This cleanup can also be done by tablesync workers.\n> Whenever a tablesync worker is created, it can look for existing replication slots that do not belong to any relation and any worker (slot name has pid for that), and drop those slots if it finds any.\n>\n\nThis sounds tricky. Why not first drop slot/origin and then detach it\nfrom pg_subscription_rel? On restarts, it is possible that we may\nerror out after dropping the slot or origin but before updating the\ncatalog entry but in such case we can ignore missing slot/origin and\ndetach them from pg_subscription_rel. Also, if we use the unique\nnumber as suggested above, I think even if we don't remove it after\nthe relation state is ready, it should be okay.\n\n> What do you think about this new way of handling slots? 
Do you see any points of concern?\n>\n> I'm currently working on adding this change into the patch. And would appreciate any comment.\n>\n\nThanks for making progress!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 12 Jul 2022 17:54:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Amit,\n\nI updated the patch in order to prevent the problems that might be caused\nby using different replication slots for syncing a table.\nAs suggested in previous emails, replication slot names are stored in the\ncatalog. So slot names can be reached later and it is ensured\nthat the same replication slot is used during the tablesync step of a table.\n\nWith the current version of the patch:\n-. \"srrelslotname\" column is introduced into pg_subscription_rel catalog. It\nstores the slot name for tablesync.\n\n-. Tablesync worker logic is now as follows:\n1. Tablesync worker is launched by the apply worker for a table.\n2. Worker generates a default replication slot name for itself. Slot name\nincludes subid and worker pid for tracking purposes.\n3. If the table has a slot name value in the catalog:\n\ni. If the table state is DATASYNC, drop the replication slot from the\ncatalog and proceed with tablesync using a new slot.\n\nii. If the table state is FINISHEDCOPY, use the replication slot from the\ncatalog, do not create a new slot.\n\n4. Before the worker moves to a new table, drop any replication slots that were\nretrieved from the catalog and used.\n5. In case of no table left to sync, drop the replication slot of that sync\nworker with worker pid if it exists. 
(It's possible that a sync worker does\nnot create a replication slot for itself but uses slots read from the\ncatalog in each iteration)\n\n\nI think using worker_pid also has similar risks of mixing slots from\n> different workers because after restart same worker_pid could be\n> assigned to a totally different worker. Can we think of using a unique\n> 64-bit number instead? This will be allocated when each workers\n> started for the very first time and after that we can refer catalog to\n> find it as suggested in the idea below.\n>\n\nI'm not sure how likely it is to have colliding pids for different tablesync\nworkers in the same subscription.\nThough, having the pid in the slot name makes it easier to track which slot belongs\nto which worker. That's why I kept using pids in slot names.\nBut I think it should be simple to switch to using a unique 64-bit number.\nSo I can remove pids from slot names, if you think that it would be\nbetter.\n\n\n> We should use the same for the origin name as well.\n>\n\nI did not really change anything related to origin names. Origin names are\nstill the same and include relation id. What do you think would be an issue\nwith origin names in this patch?\n\n\n> This sounds tricky. Why not first drop slot/origin and then detach it\n> from pg_subscription_rel? On restarts, it is possible that we may\n> error out after dropping the slot or origin but before updating the\n> catalog entry but in such case we can ignore missing slot/origin and\n> detach them from pg_subscription_rel. Also, if we use the unique\n> number as suggested above, I think even if we don't remove it after\n> the relation state is ready, it should be okay.\n>\n\nRight, I did not add an additional slot cleanup step. 
The patch now drops\nthe slot when we're done with it and then removes it from the catalog.\n\n\nThanks,\nMelih", "msg_date": "Wed, 27 Jul 2022 13:26:10 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jul 27, 2022 at 3:56 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Amit,\n>\n> I updated the patch in order to prevent the problems that might be caused by using different replication slots for syncing a table.\n> As suggested in previous emails, replication slot names are stored in the catalog. So slot names can be reached later and it is ensured\n> that the same replication slot is used during tablesync step of a table.\n>\n> With the current version of the patch:\n> -. \"srrelslotname\" column is introduced into pg_subscription_rel catalog. It stores the slot name for tablesync\n>\n> -. Tablesync worker logic is now as follows:\n> 1. Tablesync worker is launched by apply worker for a table.\n> 2. Worker generates a default replication slot name for itself. Slot name includes subid and worker pid for tracking purposes.\n> 3. If table has a slot name value in the catalog:\n>\n> i. If the table state is DATASYNC, drop the replication slot from the catalog and proceed tablesync with a new slot.\n>\n> ii. If the table state is FINISHEDCOPY, use the replication slot from the\n> catalog, do not create a new slot.\n>\n> 4. Before worker moves to new table, drop any replication slots that are retrieved from the catalog and used.\n>\n\nWhy after step 4, do you need to drop the replication slot? Won't just\nclearing the required info from the catalog be sufficient?\n\n> 5. In case of no table left to sync, drop the replication slot of that sync worker with worker pid if it exists. 
(It's possible that a sync worker do not create a replication slot for itself but uses slots read from the catalog in each iteration)\n>\n>\n>> I think using worker_pid also has similar risks of mixing slots from\n>> different workers because after restart same worker_pid could be\n>> assigned to a totally different worker. Can we think of using a unique\n>> 64-bit number instead? This will be allocated when each workers\n>> started for the very first time and after that we can refer catalog to\n>> find it as suggested in the idea below.\n>\n>\n> I'm not sure how likely to have colliding pid's for different tablesync workers in the same subscription.\n>\n\nHmm, I think even if there is an iota of a chance which I think is\nthere, we can't use worker_pid. Assume, that if the same worker_pid is\nassigned to another worker once the worker using it got an error out,\nthe new worker will fail as soon as it will try to create a\nreplication slot.\n\n> Though ,having pid in slot name makes it easier to track which slot belongs to which worker. That's why I kept using pid in slot names.\n> But I think it should be simple to switch to using a unique 64-bit number. So I can remove pid's from slot names, if you think that it would be better.\n>\n\nI feel it would be better or maybe we need to think of some other\nidentifier but one thing we need to think about before using a 64-bit\nunique identifier here is how will we retrieve its last used value\nafter restart of server. We may need to store it in a persistent way\nsomewhere.\n>>\n>> We should use the same for the origin name as well.\n>\n>\n> I did not really change anything related to origin names. Origin names are still the same and include relation id. What do you think would be an issue with origin names in this patch?\n>\n\nThe problems will be similar to the slot name. 
The origin is used to\ntrack the progress of replication, so, if we use the wrong origin name\nafter the restart, it can send the wrong start_streaming position to\nthe publisher.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 28 Jul 2022 19:31:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": ">\n> Why after step 4, do you need to drop the replication slot? Won't just\n> clearing the required info from the catalog be sufficient?\n>\n\nThe replication slots that we read from the catalog will not be used for\nanything else after we're done with syncing the table which the rep slot\nbelongs to.\nIt's removed from the catalog when the sync is completed and it basically\nbecomes a slot that is not linked to any table or worker. That's why I\nthink it should be dropped rather than left behind.\n\nNote that if a worker dies and its replication slot continues to exist,\nthat slot will only be used to complete the sync process of the one table\nthat the dead worker was syncing but couldn't finish.\nWhen that particular table is synced and becomes ready, the replication\nslot has no use anymore.\n\n\n> Hmm, I think even if there is an iota of a chance which I think is\n> there, we can't use worker_pid. Assume, that if the same worker_pid is\n> assigned to another worker once the worker using it got an error out,\n> the new worker will fail as soon as it will try to create a\n> replication slot.\n>\n\nRight. If something like that happens, worker will fail without doing\nanything. 
Then a new one will be launched and that one will continue to\ndo the work.\nThe worst case might be having conflicting pid over and over again while\nalso having replication slots whose name includes one of those pids still\nexist.\nIt seems unlikely but possible, yes.\n\n\n> I feel it would be better or maybe we need to think of some other\n> identifier but one thing we need to think about before using a 64-bit\n> unique identifier here is how will we retrieve its last used value\n> after restart of server. We may need to store it in a persistent way\n> somewhere.\n>\n\nWe might consider storing this info in a catalog again. Since this last\nused value will be different for each subscription, pg_subscription can be\na good place to keep that.\n\n\n> The problems will be similar to the slot name. The origin is used to\n> track the progress of replication, so, if we use the wrong origin name\n> after the restart, it can send the wrong start_streaming position to\n> the publisher.\n>\n\nI understand. But origin naming logic is still the same. Its format is like\npg_<subid>_<relid> .\nI did not need to change this since it seems to me origins should belong to\nonly one table. The patch does not reuse origins.\nSo I don't think this change introduces an issue with origin. What do you\nthink?\n\nThanks,\nMelih\n\nWhy after step 4, do you need to drop the replication slot? Won't just\nclearing the required info from the catalog be sufficient?The replication slots that we read from the catalog will not be used for anything else after we're done with syncing the table which the rep slot belongs to. It's removed from the catalog when the sync is completed and it basically becomes a slot that is not linked to any table or worker. 
That's why I think it should be dropped rather than left behind.Note that if a worker dies and its replication slot continues to exist, that slot will only be used to complete the sync process of the one table that the dead worker was syncing but couldn't finish.When that particular table is synced and becomes ready, the replication slot has no use anymore.      \nHmm, I think even if there is an iota of a chance which I think is\nthere, we can't use worker_pid. Assume, that if the same worker_pid is\nassigned to another worker once the worker using it got an error out,\nthe new worker will fail as soon as it will try to create a\nreplication slot.Right. If something like that happens, worker will fail without doing anything. Then a new one will be launched and that one will continue to do the work.The worst case might be having conflicting pid over and over again while also having replication slots whose name includes one of those pids still exist.It seems unlikely but possible, yes.   \nI feel it would be better or maybe we need to think of some other\nidentifier but one thing we need to think about before using a 64-bit\nunique identifier here is how will we retrieve its last used value\nafter restart of server. We may need to store it in a persistent way\nsomewhere.We might consider storing this info in a catalog again. Since this last used value will be different for each subscription, pg_subscription can be a good place to keep that.  \nThe problems will be similar to the slot name. The origin is used to\ntrack the progress of replication, so, if we use the wrong origin name\nafter the restart, it can send the wrong start_streaming position to\nthe publisher.I understand. But origin naming logic is still the same. Its format is like pg_<subid>_<relid> . I did not need to change this since it seems to me origins should belong to only one table. The patch does not reuse origins.So I don't think this change introduces an issue with origin. 
What do you think?Thanks,Melih", "msg_date": "Thu, 28 Jul 2022 19:02:43 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Jul 28, 2022 at 9:32 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>>\n>> Why after step 4, do you need to drop the replication slot? Won't just\n>> clearing the required info from the catalog be sufficient?\n>\n>\n> The replication slots that we read from the catalog will not be used for anything else after we're done with syncing the table which the rep slot belongs to.\n> It's removed from the catalog when the sync is completed and it basically becomes a slot that is not linked to any table or worker. That's why I think it should be dropped rather than left behind.\n>\n> Note that if a worker dies and its replication slot continues to exist, that slot will only be used to complete the sync process of the one table that the dead worker was syncing but couldn't finish.\n> When that particular table is synced and becomes ready, the replication slot has no use anymore.\n>\n\nWhy can't it be used to sync the other tables if any?\n\n>>\n>> Hmm, I think even if there is an iota of a chance which I think is\n>> there, we can't use worker_pid. Assume, that if the same worker_pid is\n>> assigned to another worker once the worker using it got an error out,\n>> the new worker will fail as soon as it will try to create a\n>> replication slot.\n>\n>\n> Right. If something like that happens, worker will fail without doing anything. 
Then a new one will be launched and that one will continue to do the work.\n> The worst case might be having conflicting pid over and over again while also having replication slots whose name includes one of those pids still exist.\n> It seems unlikely but possible, yes.\n>\n>>\n>> I feel it would be better or maybe we need to think of some other\n>> identifier but one thing we need to think about before using a 64-bit\n>> unique identifier here is how will we retrieve its last used value\n>> after restart of server. We may need to store it in a persistent way\n>> somewhere.\n>\n>\n> We might consider storing this info in a catalog again. Since this last used value will be different for each subscription, pg_subscription can be a good place to keep that.\n>\n\nThis sounds reasonable. Let's do this unless we get some better idea.\n\n>>\n>> The problems will be similar to the slot name. The origin is used to\n>> track the progress of replication, so, if we use the wrong origin name\n>> after the restart, it can send the wrong start_streaming position to\n>> the publisher.\n>\n>\n> I understand. But origin naming logic is still the same. Its format is like pg_<subid>_<relid> .\n> I did not need to change this since it seems to me origins should belong to only one table. The patch does not reuse origins.\n> So I don't think this change introduces an issue with origin. What do you think?\n>\n\nThere is no such restriction that origins should belong to only one\ntable. What makes you think like that?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 29 Jul 2022 15:47:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Amit,\n\n>> Why after step 4, do you need to drop the replication slot? 
Won't just\n> >> clearing the required info from the catalog be sufficient?\n> >\n> >\n> > The replication slots that we read from the catalog will not be used for\n> anything else after we're done with syncing the table which the rep slot\n> belongs to.\n> > It's removed from the catalog when the sync is completed and it\n> basically becomes a slot that is not linked to any table or worker. That's\n> why I think it should be dropped rather than left behind.\n> >\n> > Note that if a worker dies and its replication slot continues to exist,\n> that slot will only be used to complete the sync process of the one table\n> that the dead worker was syncing but couldn't finish.\n> > When that particular table is synced and becomes ready, the replication\n> slot has no use anymore.\n> >\n>\n> Why can't it be used to sync the other tables if any?\n>\n\nIt can be used. But I thought it would be better not to, for example in the\nfollowing case:\nLet's say a sync worker starts with a table in INIT state. The worker\ncreates a new replication slot to sync that table.\nWhen sync of the table is completed, it will move to the next one. This\ntime the new table may be in FINISHEDCOPY state, so the worker may need to\nuse the new table's existing replication slot.\nBefore the worker will move to the next table again, there will be two\nreplication slots used by the worker. We might want to keep one and drop\nthe other.\nAt this point, I thought it would be better to keep the replication slot\ncreated by this worker in the first place. I think it's easier to track\nslots this way since we know how to generate the rep slots name.\nOtherwise we would need to store the replication slot name somewhere too.\n\n\n\n> This sounds reasonable. 
Let's do this unless we get some better idea.\n>\n\nI updated the patch to use an unique id for replication slot names and\nstore the last used id in the catalog.\nCan you look into it again please?\n\n\nThere is no such restriction that origins should belong to only one\n> table. What makes you think like that?\n>\n\nI did not reuse origins since I didn't think it would significantly improve\nthe performance as reusing replication slots does.\nSo I just kept the origins as they were, even if it was possible to reuse\nthem. Does that make sense?\n\nBest,\nMelih", "msg_date": "Fri, 5 Aug 2022 16:55:09 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Aug 5, 2022 at 7:25 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>>\n>> Why can't it be used to sync the other tables if any?\n>\n>\n> It can be used. But I thought it would be better not to, for example in the following case:\n> Let's say a sync worker starts with a table in INIT state. The worker creates a new replication slot to sync that table.\n> When sync of the table is completed, it will move to the next one. This time the new table may be in FINISHEDCOPY state, so the worker may need to use the new table's existing replication slot.\n> Before the worker will move to the next table again, there will be two replication slots used by the worker. We might want to keep one and drop the other.\n> At this point, I thought it would be better to keep the replication slot created by this worker in the first place. I think it's easier to track slots this way since we know how to generate the rep slots name.\n> Otherwise we would need to store the replication slot name somewhere too.\n>\n\nI think there is some basic flaw in slot reuse design. 
Currently, we\ncopy the table by starting a repeatable read transaction (BEGIN READ\nONLY ISOLATION LEVEL REPEATABLE READ) and create a slot that\nestablishes a snapshot which is first used for copy and then LSN\nreturned by it is used in the catchup phase after the copy is done.\nThe patch won't establish such a snapshot before a table copy as it\nwon't create a slot each time. If this understanding is correct, I\nthink we need to use ExportSnapshot/ImportSnapshot functionality to\nachieve it or do something else to avoid the problem mentioned.\n\n>\n>>\n>> This sounds reasonable. Let's do this unless we get some better idea.\n>\n>\n>> There is no such restriction that origins should belong to only one\n>> table. What makes you think like that?\n>\n>\n> I did not reuse origins since I didn't think it would significantly improve the performance as reusing replication slots does.\n> So I just kept the origins as they were, even if it was possible to reuse them. Does that make sense?\n>\n\nFor small tables, it could have a visible performance difference as it\ninvolves database write operations to each time create and drop the\norigin. But if we don't want to reuse then also you need to set its\norigin_lsn appropriately. Currently (without this patch), after\ncreating the slot, we directly use the origin_lsn returned by\ncreate_slot API whereas now it won't be the same case as the patch\ndoesn't create a slot every time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 6 Aug 2022 18:31:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Amit,\n\nAmit Kapila <amit.kapila16@gmail.com>, 6 Ağu 2022 Cmt, 16:01 tarihinde şunu\nyazdı:\n\n> I think there is some basic flaw in slot reuse design. 
Currently, we\n> copy the table by starting a repeatable read transaction (BEGIN READ\n> ONLY ISOLATION LEVEL REPEATABLE READ) and create a slot that\n> establishes a snapshot which is first used for copy and then LSN\n> returned by it is used in the catchup phase after the copy is done.\n> The patch won't establish such a snapshot before a table copy as it\n> won't create a slot each time. If this understanding is correct, I\n> think we need to use ExportSnapshot/ImportSnapshot functionality to\n> achieve it or do something else to avoid the problem mentioned.\n>\n\nI did not really think about the snapshot created by replication slot while\nmaking this change. Thanks for pointing it out.\nI've been thinking about how to fix this issue. There are some points I'm\nstill not sure about.\nIf the worker will not create a new replication slot, which snapshot should\nwe actually export and then import?\nAt the line where the worker was supposed to create replication slot but\nnow will reuse an existing slot instead, calling pg_export_snapshot() can\nexport the snapshot instead of CREATE_REPLICATION_SLOT.\nHowever, importing that snapshot into the current transaction may not make\nany difference since we exported that snapshot from the same transaction. I\nthink this wouldn't make sense.\nHow else an export/import snapshot logic can be placed in this change?\n\nLSN also should be set accurately. The current change does not handle LSN\nproperly.\nI see that CREATE_REPLICATION_SLOT returns consistent_point which indicates\nthe earliest location which streaming can start from. 
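For reference, the export/import mechanics being weighed here look roughly as follows at the replication-protocol level. This is a sketch only: the slot, snapshot, table, and publication names are invented, and the existing tablesync code consumes the slot's snapshot directly (SNAPSHOT 'use') rather than exporting it.

```sql
-- On a replication connection (opened with replication=database):
-- create the slot and export its snapshot rather than consuming it directly.
-- The reply carries slot_name, consistent_point, snapshot_name, output_plugin.
CREATE_REPLICATION_SLOT "example_sync_slot" LOGICAL pgoutput (SNAPSHOT 'export');

-- On an ordinary connection: import that snapshot, so the copy sees exactly
-- the database state as of consistent_point.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-00000002-1';   -- snapshot_name from above
COPY public.example_table TO STDOUT;
COMMIT;

-- Catch-up can then start from the matching consistent_point:
START_REPLICATION SLOT "example_sync_slot" LOGICAL 0/16B6158
    ("proto_version" '3', "publication_names" '"example_pub"');
```

Note that an exported snapshot is only importable while the exporting transaction is still open, which is one reason the hand-off becomes awkward once the slot was created by an earlier sync run.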
And this\nconsistent_point is used as origin_startpos.\nIf that's the case, would it make sense to use \"confirmed_flush_lsn\" of the\nreplication slot in case the slot is being reused?\nSince confirmed_flush_lsn can be considered as the safest, earliest\nlocation which streaming can start from, I think it would work.\n\nAnd at this point, with the correct LSN, I'm wondering whether this\nexport/import logic is really necessary if the worker does not create a\nreplication slot. What do you think?\n\n\nFor small tables, it could have a visible performance difference as it\n> involves database write operations to each time create and drop the\n> origin. But if we don't want to reuse then also you need to set its\n> origin_lsn appropriately. Currently (without this patch), after\n> creating the slot, we directly use the origin_lsn returned by\n> create_slot API whereas now it won't be the same case as the patch\n> doesn't create a slot every time.\n>\n\nCorrect. For this issue, please consider the LSN logic explained above.\n\n\nThanks,\nMelih
", "msg_date": "Mon, 15 Aug 2022 14:26:26 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Mon, Aug 15, 2022 at 4:56 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Amit,\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 6 Ağu 2022 Cmt, 16:01 tarihinde şunu yazdı:\n>>\n>> I think there is some basic flaw in slot reuse design. Currently, we\n>> copy the table by starting a repeatable read transaction (BEGIN READ\n>> ONLY ISOLATION LEVEL REPEATABLE READ) and create a slot that\n>> establishes a snapshot which is first used for copy and then LSN\n>> returned by it is used in the catchup phase after the copy is done.\n>> The patch won't establish such a snapshot before a table copy as it\n>> won't create a slot each time. If this understanding is correct, I\n>> think we need to use ExportSnapshot/ImportSnapshot functionality to\n>> achieve it or do something else to avoid the problem mentioned.\n>\n>\n> I did not really think about the snapshot created by replication slot while making this change. Thanks for pointing it out.\n> I've been thinking about how to fix this issue. 
There are some points I'm still not sure about.\n> If the worker will not create a new replication slot, which snapshot should we actually export and then import?\n>\n\nCan we (export/import) use the snapshot we used the first time when a\nslot is created for future transactions that copy other tables?\nBecause if we can do that then I think we can use the same LSN as\nreturned for the slot when it was created for all other table syncs.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Sep 2022 10:05:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "2022年8月5日(金) 22:55 Melih Mutlu <m.melihmutlu@gmail.com>:\n>\n> Hi Amit,\n>\n>> >> Why after step 4, do you need to drop the replication slot? Won't just\n>> >> clearing the required info from the catalog be sufficient?\n>> >\n>> >\n>> > The replication slots that we read from the catalog will not be used for anything else after we're done with syncing the table which the rep slot belongs to.\n>> > It's removed from the catalog when the sync is completed and it basically becomes a slot that is not linked to any table or worker. That's why I think it should be dropped rather than left behind.\n>> >\n>> > Note that if a worker dies and its replication slot continues to exist, that slot will only be used to complete the sync process of the one table that the dead worker was syncing but couldn't finish.\n>> > When that particular table is synced and becomes ready, the replication slot has no use anymore.\n>> >\n>>\n>> Why can't it be used to sync the other tables if any?\n>\n>\n> It can be used. But I thought it would be better not to, for example in the following case:\n> Let's say a sync worker starts with a table in INIT state. The worker creates a new replication slot to sync that table.\n> When sync of the table is completed, it will move to the next one. 
This time the new table may be in FINISHEDCOPY state, so the worker may need to use the new table's existing replication slot.\n> Before the worker will move to the next table again, there will be two replication slots used by the worker. We might want to keep one and drop the other.\n> At this point, I thought it would be better to keep the replication slot created by this worker in the first place. I think it's easier to track slots this way since we know how to generate the rep slots name.\n> Otherwise we would need to store the replication slot name somewhere too.\n>\n>\n>>\n>> This sounds reasonable. Let's do this unless we get some better idea.\n>\n>\n> I updated the patch to use an unique id for replication slot names and store the last used id in the catalog.\n> Can you look into it again please?\n>\n>\n>> There is no such restriction that origins should belong to only one\n>> table. What makes you think like that?\n>\n>\n> I did not reuse origins since I didn't think it would significantly improve the performance as reusing replication slots does.\n> So I just kept the origins as they were, even if it was possible to reuse them. Does that make sense?\n\nHi\n\ncfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch.\n\n[1] http://cfbot.cputube.org/patch_40_3784.log\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Fri, 4 Nov 2022 11:47:15 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi hackers,\n\nI've been working on/struggling with this patch for a while. But I haven't\nupdated this thread regularly.\nSo sharing what I did with this patch so far.\n\n> Amit Kapila <amit.kapila16@gmail.com>, 6 Ağu 2022 Cmt, 16:01 tarihinde\n> şunu yazdı:\n> >>\n> >> I think there is some basic flaw in slot reuse design. 
Currently, we\n> >> copy the table by starting a repeatable read transaction (BEGIN READ\n> >> ONLY ISOLATION LEVEL REPEATABLE READ) and create a slot that\n> >> establishes a snapshot which is first used for copy and then LSN\n> >> returned by it is used in the catchup phase after the copy is done.\n> >> The patch won't establish such a snapshot before a table copy as it\n> >> won't create a slot each time. If this understanding is correct, I\n> >> think we need to use ExportSnapshot/ImportSnapshot functionality to\n> >> achieve it or do something else to avoid the problem mentioned.\n>\n\nThis issue that Amit mentioned causes some problems such as duplicated rows\nin the subscriber.\nBasically, with this patch, tablesync worker creates a replication slot\nonly in its first run. To ensure table copy and sync are consistent with\neach other, the worker needs the correct snapshot and LSN which both are\nreturned by slot create operation.\nSince this patch does not create a rep. slot in each table copy and instead\nreuses the one created in the beginning, we do not get a new snapshot and\nLSN for each table anymore. Snapshot gets lost right after the transaction\nis committed, but the patch continues to use the same LSN for next tables\nwithout the proper snapshot.\nIn the end, for example, the worker might first copy some rows, then apply\nchanges from rep. slot and inserts those rows again for the tables in\nlater iterations.\n\nI discussed some possible ways to resolve this with Amit offline:\n1- Copy all tables in one transaction so that we wouldn't need to deal with\nsnapshots.\nNot easy to keep track of the progress. 
If the transaction fails, we would\nneed to start all over again.\n\n2- Don't lose the first snapshot (by keeping a transaction open with the\nsnapshot imported or some other way) and use the same snapshot and LSN for\nall tables.\nI'm not sure about the side effects of keeping a transaction open that long\nor using a snapshot that might be too old after some time.\nStill seems like it might work.\n\n3- For each table, get a new snapshot and LSN by using an existing\nreplication slot.\nEven though this approach wouldn't create a new replication slot, preparing\nthe slot for snapshot and then taking the snapshot may be costly.\nNeed some numbers here to see how much this approach would improve the\nperformance.\n\nI decided to go with approach 3 (creating a new snapshot with an existing\nreplication slot) for now since it would require less change in the\ntablesync worker logic than the other options would.\nTo achieve this, this patch introduces a new command for Streaming\nReplication Protocol.\nThe new REPLICATION_SLOT_SNAPSHOT command basically mimics how\nCREATE_REPLICATION_SLOT creates a snapshot, but without actually creating a\nnew replication slot.\nLater the tablesync worker calls this command if it decides not to create a\nnew rep. 
slot, uses the snapshot created and LSN returned by the command.\n\nAlso:\nAfter the changes discussed here [1], concurrent replication origin drops\nby apply worker and tablesync workers may hold each other on wait due to\nlocks taken by replorigin_drop_by_name.\nI see that this harms the performance of logical replication quite a bit in\nterms of speed.\nEven though reusing replication origins was discussed in this thread\nbefore, the patch didn't include any change to do so.\nThe updated version of the patch now also reuses replication origins.\nEven the change to reuse origins by itself seems to improve the\nperformance significantly.\n\nAttached two patches:\n0001: adds REPLICATION_SLOT_SNAPSHOT command for replication protocol.\n0002: Reuses workers/replication slots and origins for tablesync\n\nI would appreciate any feedback/review/thought on the approach and both\npatches.\nI will also share some numbers to compare performances of the patch and\nmaster branch soon.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20220714115155.GA5439%40depesz.com\n\nBest,\n--\nMelih Mutlu\nMicrosoft", "msg_date": "Mon, 5 Dec 2022 16:00:12 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Mon, Dec 5, 2022 at 6:30 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Attached two patches:\n> 0001: adds REPLICATION_SLOT_SNAPSHOT command for replication protocol.\n> 0002: Reuses workers/replication slots and origins for tablesync\n>\n> I would appreciate any feedback/review/thought on the approach and both patches.\n> I will also share some numbers to compare performances of the patch and master branch soon.\n>\n\nIt would be interesting to see the numbers differently for reuse of\nreplication slots and origins. 
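For context, repositioning an existing origin instead of dropping and recreating it relies on machinery PostgreSQL already exposes at the SQL level; the origin name and LSN below are purely illustrative.

```sql
-- Sketch: create the origin once, then advance it for each newly assigned
-- table instead of going through a drop/create cycle each time.
SELECT pg_replication_origin_create('pg_16394_2');               -- done once
SELECT pg_replication_origin_advance('pg_16394_2', '0/16B6158');
SELECT external_id, remote_lsn FROM pg_replication_origin_status;
```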
This will let us know how much each of\nthose optimizations helps with the reuse of workers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Dec 2022 18:55:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nAttached new versions of the patch with some changes/fixes.\n\nHere also some numbers to compare the performance of log. rep. with this\npatch against the current master branch.\n\nMy method of benchmarking is the same with what I did earlier in this\nthread. (on a different environment, so not compare the result from this\nemail with the ones from earlier emails)\n\n> With those changes, I did some benchmarking to see if it improves anything.\n> This results compares this patch with the latest version of master branch.\n> \"max_sync_workers_per_subscription\" is set to 2 as default.\n> Got some results simply averaging timings from 5 consecutive runs for each\n> branch.\n\n\nSince this patch is expected to improve log. rep. 
of empty/close-to-empty\ntables, started with measuring performance with empty tables.\n\n | 10 tables | 100 tables | 1000 tables\n------------------------------------------------------------------------------\nmaster | 283.430 ms | 22739.107 ms | 105226.177 ms\n------------------------------------------------------------------------------\n patch | 189.139 ms | 1554.802 ms | 23091.434 ms\n\nAfter the changes discussed here [1], concurrent replication origin drops\n> by apply worker and tablesync workers may hold each other on wait due to\n> locks taken by replorigin_drop_by_name.\n> I see that this harms the performance of logical replication quite a bit\n> in terms of speed.\n> [1]\n> https://www.postgresql.org/message-id/flat/20220714115155.GA5439%40depesz.com\n\n\nFirstly, as I mentioned, replication origin drops made things worse for the\nmaster branch.\nLocks start being a more serious issue when the number of tables increases.\nThe patch reuses the origin so does not need to drop them in each\niteration. That's why the difference between the master and the patch is\nmore significant now than it was when I first sent the patch.\n\nTo just show that the improvement is not only the result of reuse of\norigins, but also reuse of rep. slots and workers, I just reverted those\ncommits which causes the origin drop issue.\n\n | 10 tables | 100 tables | 1000 tables\n-----------------------------------------------------------------------------\nreverted | 270.012 ms | 2483.907 ms | 31660.758 ms\n-----------------------------------------------------------------------------\n patch | 189.139 ms | 1554.802 ms | 23091.434 ms\n\nWith this patch, logical replication is still faster, even if we wouldn't\nhave an issue with rep. 
origin drops.\n\nAlso here are some numbers with 10 tables loaded with some data :\n\n | 10 MB | 100 MB\n----------------------------------------------------------\nmaster | 2868.524 ms | 14281.711 ms\n----------------------------------------------------------\n patch | 1750.226 ms | 14592.800 ms\n\nThe difference between the master and the patch is getting close when the\nsize of tables increase, as expected.\n\n\nI would appreciate any feedback/thought on the approach/patch/numbers etc.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Thu, 15 Dec 2022 15:03:16 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Dec 15, 2022 at 5:33 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Also here are some numbers with 10 tables loaded with some data :\n>\n> | 10 MB | 100 MB\n> ----------------------------------------------------------\n> master | 2868.524 ms | 14281.711 ms\n> ----------------------------------------------------------\n> patch | 1750.226 ms | 14592.800 ms\n>\n> The difference between the master and the patch is getting close when the size of tables increase, as expected.\n>\n\nRight, but when the size is 100MB, it seems to be taking a bit more\ntime. Do we want to evaluate with different sizes to see how it looks?\nOther than that all the numbers are good.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Dec 2022 08:16:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Amit,\n\nAmit Kapila <amit.kapila16@gmail.com>, 16 Ara 2022 Cum, 05:46 tarihinde\nşunu yazdı:\n\n> Right, but when the size is 100MB, it seems to be taking a bit more\n> time. 
Do we want to evaluate with different sizes to see how it looks?\n> Other than that all the numbers are good.\n>\n\nI did a similar testing with again 100MB and also 1GB this time.\n\n | 100 MB | 1 GB\n----------------------------------------------------------\nmaster | 14761.425 ms | 160932.982 ms\n----------------------------------------------------------\n patch | 14398.408 ms | 160593.078 ms\n\nThis time, it seems like the patch seems slightly faster than the master.\nNot sure if we can say the patch slows things down (or speeds up) if the\nsize of tables increases.\nThe difference may be something arbitrary or caused by other factors. What\ndo you think?\n\nI also wondered what happens when \"max_sync_workers_per_subscription\" is\nset to 1.\nWhich means tablesync will be done sequentially in both cases but the patch\nwill use only one worker and one replication slot during the whole\ntablesync process.\nHere are the numbers for that case:\n\n | 100 MB | 1 GB\n----------------------------------------------------------\nmaster | 27751.463 ms | 312424.999 ms\n----------------------------------------------------------\n patch | 27342.760 ms | 310021.767 ms\n\nBest,\n-- \nMelih Mutlu\nMicrosoft
", "msg_date": "Tue, 20 Dec 2022 17:44:36 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Dec 20, 2022 at 8:14 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 16 Ara 2022 Cum, 05:46 tarihinde şunu yazdı:\n>>\n>> Right, but when the size is 100MB, it seems to be taking a bit more\n>> time. Do we want to evaluate with different sizes to see how it looks?\n>> Other than that all the numbers are good.\n>\n>\n> I did a similar testing with again 100MB and also 1GB this time.\n>\n> | 100 MB | 1 GB\n> ----------------------------------------------------------\n> master | 14761.425 ms | 160932.982 ms\n> ----------------------------------------------------------\n> patch | 14398.408 ms | 160593.078 ms\n>\n> This time, it seems like the patch seems slightly faster than the master.\n> Not sure if we can say the patch slows things down (or speeds up) if the size of tables increases.\n> The difference may be something arbitrary or caused by other factors. 
What do you think?\n>\n\nYes, I agree with you as I also can't see an obvious reason for any\nslowdown with this patch's idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Dec 2022 17:35:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi hackers,\n\nSending an updated version of this patch to get rid of compiler warnings.\n\nI would highly appreciate any feedback.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Tue, 3 Jan 2023 17:53:05 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi hackers,\n\nRebased the patch to resolve conflicts.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 11 Jan 2023 11:31:12 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jan 11, 2023 4:31 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> \r\n> Hi hackers,\r\n> \r\n> Rebased the patch to resolve conflicts.\r\n> \r\n\r\nThanks for your patch. Here are some comments.\r\n\r\n0001 patch\r\n============\r\n1. walsender.c\r\n+\t/* Create a tuple to send consisten WAL location */\r\n\r\n\"consisten\" should be \"consistent\" I think.\r\n\r\n2. 
logical.c\r\n+\tif (need_full_snapshot)\r\n+\t{\r\n+\t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\r\n+\r\n+\t\tSpinLockAcquire(&slot->mutex);\r\n+\t\tslot->effective_catalog_xmin = xmin_horizon;\r\n+\t\tslot->data.catalog_xmin = xmin_horizon;\r\n+\t\tslot->effective_xmin = xmin_horizon;\r\n+\t\tSpinLockRelease(&slot->mutex);\r\n+\r\n+\t\txmin_horizon = GetOldestSafeDecodingTransactionId(!need_full_snapshot);\r\n+\t\tReplicationSlotsComputeRequiredXmin(true);\r\n+\r\n+\t\tLWLockRelease(ProcArrayLock);\r\n+\t}\r\n\r\nIt seems that we should first get the safe decoding xid, then inform the slot\r\nmachinery about the new limit, right? Otherwise the limit will be\r\nInvalidTransactionId and that seems inconsistent with the comment.\r\n\r\n3. doc/src/sgml/protocol.sgml\r\n+ is used in the currenct transaction. This command is currently only supported\r\n+ for logical replication.\r\n+ slots.\r\n\r\nWe don't need to put \"slots\" in a new line.\r\n\r\n\r\n0002 patch\r\n============\r\n1.\r\nIn pg_subscription_rel.h, I think the type of \"srrelslotname\" can be changed to\r\nNameData, see \"subslotname\" in pg_subscription.h.\r\n\r\n2.\r\n+\t\t\t\t * Find the logical replication sync worker if exists store\r\n+\t\t\t\t * the slot number for dropping associated replication slots\r\n+\t\t\t\t * later.\r\n\r\nShould we add comma after \"if exists\"?\r\n\r\n3.\r\n+\tPG_FINALLY();\r\n+\t{\r\n+\t\tpfree(cmd.data);\r\n+\t}\r\n+\tPG_END_TRY();\r\n+\t\\\r\n+\t\treturn tablelist;\r\n+}\r\n\r\nDo we need the backslash?\r\n\r\n4.\r\n+\t/*\r\n+\t * Advance to the LSN got from walrcv_create_slot. This is WAL\r\n+\t * logged for the purpose of recovery. Locks are to prevent the\r\n+\t * replication origin from vanishing while advancing.\r\n\r\n\"walrcv_create_slot\" should be changed to\r\n\"walrcv_create_slot/walrcv_slot_snapshot\" I think.\r\n\r\n5.\r\n+\t\t\t/* Replication drop might still exist. 
Try to drop */\r\n+\t\t\treplorigin_drop_by_name(originname, true, false);\r\n\r\nShould \"Replication drop\" be \"Replication origin\"?\r\n\r\n6.\r\nI saw an assertion failure in the following case, could you please look into it?\r\nThe backtrace is attached.\r\n\r\n-- pub\r\nCREATE TABLE tbl1 (a int, b text);\r\nCREATE TABLE tbl2 (a int primary key, b text);\r\ncreate publication pub for table tbl1, tbl2;\r\ninsert into tbl1 values (1, 'a');\r\ninsert into tbl1 values (1, 'a');\r\n\r\n-- sub\r\nCREATE TABLE tbl1 (a int primary key, b text);\r\nCREATE TABLE tbl2 (a int primary key, b text);\r\ncreate subscription sub connection 'dbname=postgres port=5432' publication pub;\r\n\r\nSubscriber log:\r\n2023-01-17 14:47:10.054 CST [1980841] LOG: logical replication apply worker for subscription \"sub\" has started\r\n2023-01-17 14:47:10.060 CST [1980843] LOG: logical replication table synchronization worker for subscription \"sub\", table \"tbl1\" has started\r\n2023-01-17 14:47:10.070 CST [1980845] LOG: logical replication table synchronization worker for subscription \"sub\", table \"tbl2\" has started\r\n2023-01-17 14:47:10.073 CST [1980843] ERROR: duplicate key value violates unique constraint \"tbl1_pkey\"\r\n2023-01-17 14:47:10.073 CST [1980843] DETAIL: Key (a)=(1) already exists.\r\n2023-01-17 14:47:10.073 CST [1980843] CONTEXT: COPY tbl1, line 2\r\n2023-01-17 14:47:10.074 CST [1980821] LOG: background worker \"logical replication worker\" (PID 1980843) exited with exit code 1\r\n2023-01-17 14:47:10.083 CST [1980845] LOG: logical replication table synchronization worker for subscription \"sub\", table \"tbl2\" has finished\r\n2023-01-17 14:47:10.083 CST [1980845] LOG: logical replication table synchronization worker for subscription \"sub\" has moved to sync table \"tbl1\".\r\nTRAP: failed Assert(\"node != InvalidRepOriginId\"), File: \"origin.c\", Line: 892, PID: 1980845\r\n\r\nRegards,\r\nShi yu", "msg_date": "Tue, 17 Jan 2023 07:46:06 +0000", "msg_from": 
"\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jan 11, 2023 4:31 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> Rebased the patch to resolve conflicts.\r\n\r\nThanks for your patch set.\r\n\r\nHere are some comments:\r\n\r\nv3-0001* patch\r\n===============\r\n\r\n1. typedefs.list\r\nI think we also need to add \"walrcv_slot_snapshot_fn\" to this file.\r\n\r\nv7-0002* patch\r\n===============\r\n1. About function ReplicationOriginNameForLogicalRep()\r\nDo we need to modify the API of this function? I think the original API could\r\nalso meet the current needs. Since this is not a static function, I think it\r\nseems better to keep the original API if there is no reason. Please let me know\r\nif I'm missing something.\r\n\r\n-----\r\n\r\n2. Comment atop the function GetSubscriptionRelReplicationSlot\r\n+/*\r\n+ * Get replication slot name of subscription table.\r\n+ *\r\n+ * Returns null if the subscription table does not have a replication slot.\r\n+ */\r\n\r\nSince this function always returns NULL, I think it would be better to say the\r\nvalue in \"slotname\" here instead of the function's return value.\r\n\r\nIf you agree with this, please also kindly modify the comment atop the function\r\nGetSubscriptionRelOrigin.\r\n\r\n-----\r\n\r\n3. typo\r\n+\t\t\t * At this point, there shouldn't be any existing replication\r\n+\t\t\t * origin wit the same name.\r\n\r\nwit -> with\r\n\r\n-----\r\n\r\n4. In function CreateSubscription\r\n+\tvalues[Anum_pg_subscription_sublastusedid - 1] = Int64GetDatum(1);\r\n\r\nI think it might be better to initialize this field to NULL or 0 here.\r\nBecause in the patch, we always ignore the initialized value when launching\r\nthe sync worker in the function process_syncing_tables_for_apply. 
And I think\r\nwe could document in pg-doc that this value means that no tables have been\r\nsynced yet.\r\n\r\n-----\r\n\r\n5. New member \"created_slot\" in structure LogicalRepWorker\r\n+\t/*\r\n+\t * Indicates if the sync worker created a replication slot or it reuses an\r\n+\t * existing one created by another worker.\r\n+\t */\r\n+\tbool\t\tcreated_slot;\r\n\r\nI think the second half of the sentence looks inaccurate.\r\nBecause I think this flag could be false even when we reuse an existing slot\r\ncreated by another worker. Assuming the first run for the worker tries to sync\r\na table which is synced by another sync worker before, and the relstate is set\r\nto SUBREL_STATE_FINISHEDCOPY by another sync worker, I think this flag will not\r\nbe set to true. (see function LogicalRepSyncTableStart)\r\n\r\nSo, what if we simplify the description here and just say that this worker\r\nalready has its default slot?\r\n\r\nIf I'm not missing something and you agree with this, please also kindly modify\r\nthe relevant comment atop the if-statement (!MyLogicalRepWorker->created_slot)\r\nin the function LogicalRepSyncTableStart.\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Tue, 17 Jan 2023 11:15:38 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nThanks for your review.\nAttached updated versions of the patches.\n\nwangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com>, 17 Oca 2023 Sal, 14:15\ntarihinde şunu yazdı:\n\n> On Wed, Jan 11, 2023 4:31 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> v3-0001* patch\n> ===============\n>\n> 1. typedefs.list\n> I think we also need to add \"walrcv_slot_snapshot_fn\" to this file.\n>\n\nDone.\n\n\n> v7-0002* patch\n> ===============\n> 1. About function ReplicationOriginNameForLogicalRep()\n> Do we need to modify the API of this function? 
I think the original API\n> could\n> also meet the current needs. Since this is not a static function, I think\n> it\n> seems better to keep the original API if there is no reason. Please let me\n> know\n> if I'm missing something.\n>\n\nYou're right.\nI still need to modify ReplicationOriginNameForLogicalRep. Origin names are\nnot tied to relations anymore, so their name doesn't need to\ninclude relation id.\nBut I didn't really need to change the function signature. I reverted that\npart of the change in the updated version of the patch.\n\n\n> 2. Comment atop the function GetSubscriptionRelReplicationSlot\n>\n\nDone\n\n\n> 3. typo\n> + * At this point, there shouldn't be any existing\n> replication\n> + * origin wit the same name.\n>\n\nDone.\n\n\n> 4. In function CreateSubscription\n> + values[Anum_pg_subscription_sublastusedid - 1] = Int64GetDatum(1);\n>\n> I think it might be better to initialize this field to NULL or 0 here.\n> Because in the patch, we always ignore the initialized value when launching\n> the sync worker in the function process_syncing_tables_for_apply. And I\n> think\n> we could document in pg-doc that this value means that no tables have been\n> synced yet.\n>\n\nI changed it to start from 0 and added a line into the related doc to\nindicate that 0 means that no table has been synced yet.\n\n\n> 5. New member \"created_slot\" in structure LogicalRepWorker\n> + /*\n> + * Indicates if the sync worker created a replication slot or it\n> reuses an\n> + * existing one created by another worker.\n> + */\n> + bool created_slot;\n>\n> I think the second half of the sentence looks inaccurate.\n> Because I think this flag could be false even when we reuse an existing\n> slot\n> created by another worker. Assuming the first run for the worker tries to\n> sync\n> a table which is synced by another sync worker before, and the relstate is\n> set\n> to SUBREL_STATE_FINISHEDCOPY by another sync worker, I think this flag\n> will not\n> be set to true. 
(see function LogicalRepSyncTableStart)\n>\n> So, what if we simplify the description here and just say that this worker\n> already has its default slot?\n>\n> If I'm not missing something and you agree with this, please also kindly\n> modify\n> the relevant comment atop the if-statement\n> (!MyLogicalRepWorker->created_slot)\n> in the function LogicalRepSyncTableStart.\n>\n\nThis \"created_slot\" indicates whether the current worker has created a\nreplication slot for its own use. If so, created_slot will be true,\notherwise false.\nLet's say the tablesync worker has not created its own slot yet in its\nprevious runs or this is its first run. And the worker decides to reuse an\nexisting replication slot (which was created by another tablesync worker). Then\ncreated_slot is expected to be false, because this particular\ntablesync worker has not created its own slot yet in either of its runs.\n\nIn your example, the worker is in its first run and begins to sync a table\nwhose state is FINISHEDCOPY. If the table's state is FINISHEDCOPY then the\ntable should already have a replication slot created for its own sync\nprocess. The worker will want to reuse that existing replication slot for\nthis particular table and it will not create a new replication slot. So\ncreated_slot will be false, because the worker has not actually created any\nreplication slot yet.\n\nBasically, created_slot is set to true only if \"walrcv_create_slot\" is\ncalled by the tablesync worker any time during its lifetime. Otherwise,\nit's possible that the worker can use existing replication slots for each\ntable it syncs. (e.g. if all the tables that the worker has synced were in\nFINISHEDCOPY state, then the worker will not need to create a new slot).\n\nDoes it make sense now? 
Maybe I need to improve comments to make it clearer.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Mon, 23 Jan 2023 16:00:01 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Mon, Jan 23, 2023 at 6:30 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Thanks for your review.\n> Attached updated versions of the patches.\n>\n\nHello,\nI am still in the process of reviewing the patch, before that I tried\nto run below test:\n\n--publisher\ncreate table tab1(id int , name varchar);\ncreate table tab2(id int primary key , name varchar);\ncreate table tab3(id int primary key , name varchar);\nInsert into tab1 values(10, 'a');\nInsert into tab1 values(20, 'b');\nInsert into tab1 values(30, 'c');\n\nInsert into tab2 values(10, 'a');\nInsert into tab2 values(20, 'b');\nInsert into tab2 values(30, 'c');\n\nInsert into tab3 values(10, 'a');\nInsert into tab3 values(20, 'b');\nInsert into tab3 values(30, 'c');\n\ncreate publication mypub for table tab1, tab2, tab3;\n\n--subscriber\ncreate table tab1(id int , name varchar);\ncreate table tab2(id int primary key , name varchar);\ncreate table tab3(id int primary key , name varchar);\ncreate subscription mysub connection 'dbname=postgres host=localhost\nuser=shveta port=5432' publication mypub;\n\n--I see initial data copied, but new catalog columns srrelslotname\nand srreloriginname are not updated:\npostgres=# select sublastusedid from pg_subscription;\n sublastusedid\n---------------\n 2\n\npostgres=# select * from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn | srrelslotname | srreloriginname\n---------+---------+------------+-----------+---------------+-----------------\n 16409 | 16384 | r | 0/15219E0 | |\n 16409 | 16389 | r | 0/15219E0 | |\n 16409 | 16396 | r | 0/15219E0 | |\n\nWhen are these supposed to be updated? 
I thought the slotname created\nwill be updated here. Am I missing something here?\n\nAlso the v8 patch does not apply on HEAD, giving merge conflicts.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 25 Jan 2023 18:32:11 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Shveta,\n\nThanks for reviewing.\n\nshveta malik <shveta.malik@gmail.com>, 25 Oca 2023 Çar, 16:02 tarihinde\nşunu yazdı:\n\n> On Mon, Jan 23, 2023 at 6:30 PM Melih Mutlu <m.melihmutlu@gmail.com>\n> wrote:\n> --I see initial data copied, but new catalog columns srrelslotname\n> and srreloriginname are not updated:\n> postgres=# select sublastusedid from pg_subscription;\n> sublastusedid\n> ---------------\n> 2\n>\n> postgres=# select * from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn | srrelslotname |\n> srreloriginname\n>\n> ---------+---------+------------+-----------+---------------+-----------------\n> 16409 | 16384 | r | 0/15219E0 | |\n> 16409 | 16389 | r | 0/15219E0 | |\n> 16409 | 16396 | r | 0/15219E0 | |\n>\n> When are these supposed to be updated? I thought the slotname created\n> will be updated here. Am I missing something here?\n>\n\nIf a relation is currently being synced by a tablesync worker and uses a\nreplication slot/origin for that operation, then srrelslotname and\nsrreloriginname fields will have values.\nWhen a relation is done with its replication slot/origin, their info gets\nremoved from related catalog row, so that slot/origin can be reused for\nanother table or dropped if not needed anymore.\nIn your case, all relations are in READY state so it's expected that\nsrrelslotname and srreloriginname are empty. READY relations do not need a\nreplication slot/origin anymore.\n\nTables are probably synced so quickly that you're missing the moments when\na tablesync worker copies a relation and stores its rep. 
slot/origin in the\ncatalog.\nIf initial sync is long enough, then you should be able to see the columns\nget updated. I follow [1] to make it longer and test if the patch really\nupdates the catalog.\n\n\n\n> Also the v8 patch does not apply on HEAD, giving merge conflicts.\n>\n\nRebased and resolved conflicts. Please check the new version\n\n---\n[1]\npublisher:\nSELECT 'CREATE TABLE table_'||i||'(i int);' FROM generate_series(1, 100)\ng(i) \\gexec\nSELECT 'INSERT INTO table_'||i||' SELECT x FROM generate_series(1, 10000)\nx' FROM generate_series(1, 100) g(i) \\gexec\nCREATE PUBLICATION mypub FOR ALL TABLES;\n\nsubscriber:\nSELECT 'CREATE TABLE table_'||i||'(i int);' FROM generate_series(1, 100)\ng(i) \\gexec\nCREATE SUBSCRIPTION mysub CONNECTION 'dbname=postgres port=5432 '\nPUBLICATION mypub;\nselect * from pg_subscription_rel where srrelslotname <> ''; \\watch 0.5\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Thu, 26 Jan 2023 17:23:22 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Jan 26, 2023 at 7:53 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> If a relation is currently being synced by a tablesync worker and uses a replication slot/origin for that operation, then srrelslotname and srreloriginname fields will have values.\n> When a relation is done with its replication slot/origin, their info gets removed from related catalog row, so that slot/origin can be reused for another table or dropped if not needed anymore.\n> In your case, all relations are in READY state so it's expected that srrelslotname and srreloriginname are empty. READY relations do not need a replication slot/origin anymore.\n>\n> Tables are probably synced so quickly that you're missing the moments when a tablesync worker copies a relation and stores its rep. 
slot/origin in the catalog.\n> If initial sync is long enough, then you should be able to see the columns get updated. I follow [1] to make it longer and test if the patch really updates the catalog.\n>\n\nThank you for the details. It is clear now.\n>\n\n>\n> Rebased and resolved conflicts. Please check the new version\n>\nPlease find my suggestions on v9:\n\n1.\n--Can we please add a few more points to the Summary to make it clearer.\na) something telling that reusability of workers is for tables under\none subscription and not across multiple subscriptions.\nb) Since we are reusing both workers and slots, can we add:\n--when do we actually end up spawning a new worker\n--when do we actually end up creating a new slot in a worker rather\nthan using an existing one.\n--if we reuse existing slots, what happens to the snapshot?\n\n\n2.\n+ The last used ID for tablesync workers. This ID is used to\n+ create replication slots. The last used ID needs to be stored\n+ to make logical replication can safely proceed after any interruption.\n+ If sublastusedid is 0, then no table has been synced yet.\n\n--typo:\n to make logical replication can safely proceed ==> to make logical\nreplication safely proceed\n\n--Also, does this sound better:\nThe last used ID for tablesync workers. It acts as a unique\nidentifier for replication slots\nwhich are created by table-sync workers. The last used ID needs to be\npersisted...\n\n\n3.\nis_first_run;\nmove_to_next_rel;\n--Do you think one variable is enough here as we do not get any extra\ninfo by using 2 variables? Can we have 1 which is more generic like\n'ready_to_reuse'. Otherwise, please let me know if we must use 2.\n\n\n4.\n/* missin_ok = true, since the origin might be already dropped. */\ntypo: missing_ok\n\n\n5. 
GetReplicationSlotNamesBySubId:\nerrmsg(\"not tuple returned.\"));\n\nCan we have a better error msg:\n ereport(ERROR,\n errmsg(\"could not receive list of slots\nassociated with subscription %d, error: %s\", subid, res->err));\n\n6.\nstatic void\nclean_sync_worker(void)\n\n--can we please add introductory comment for this function.\n\n7.\n /*\n * Pick the table for the next run if there is not another worker\n * already picked that table.\n */\nPick the table for the next run if it is not already picked up by\nanother worker.\n\n8.\nprocess_syncing_tables_for_sync()\n\n/* Cleanup before next run or ending the worker. */\n--can we please improve this comment:\nif there is no more work left for this worker, stop the worker\ngracefully, else do clean-up and make it ready for the next\nrelation/run.\n\n9.\nLogicalRepSyncTableStart:\n * Read previous slot name from the catalog, if exists.\n */\n prev_slotname = (char *) palloc0(NAMEDATALEN);\nDo we need to free this at the end?\n\n\n10.\n if (strlen(prev_slotname) == 0)\n {\n elog(ERROR, \"Replication slot could not be\nfound for relation %u\",\n MyLogicalRepWorker->relid);\n }\nshall we mention subid also in error msg.\n\nI am reviewing further...\nthanks\nShveta\n\n\n", "msg_date": "Fri, 27 Jan 2023 15:41:56 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, 26 Jan 2023 at 19:53, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Shveta,\n>\n> Thanks for reviewing.\n>\n> shveta malik <shveta.malik@gmail.com>, 25 Oca 2023 Çar, 16:02 tarihinde şunu yazdı:\n>>\n>> On Mon, Jan 23, 2023 at 6:30 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>> --I see initial data copied, but new catalog columns srrelslotname\n>> and srreloriginname are not updated:\n>> postgres=# select sublastusedid from pg_subscription;\n>> sublastusedid\n>> ---------------\n>> 2\n>>\n>> 
postgres=# select * from pg_subscription_rel;\n>> srsubid | srrelid | srsubstate | srsublsn | srrelslotname | srreloriginname\n>> ---------+---------+------------+-----------+---------------+-----------------\n>> 16409 | 16384 | r | 0/15219E0 | |\n>> 16409 | 16389 | r | 0/15219E0 | |\n>> 16409 | 16396 | r | 0/15219E0 | |\n>>\n>> When are these supposed to be updated? I thought the slotname created\n>> will be updated here. Am I missing something here?\n>\n>\n> If a relation is currently being synced by a tablesync worker and uses a replication slot/origin for that operation, then srrelslotname and srreloriginname fields will have values.\n> When a relation is done with its replication slot/origin, their info gets removed from related catalog row, so that slot/origin can be reused for another table or dropped if not needed anymore.\n> In your case, all relations are in READY state so it's expected that srrelslotname and srreloriginname are empty. READY relations do not need a replication slot/origin anymore.\n>\n> Tables are probably synced so quickly that you're missing the moments when a tablesync worker copies a relation and stores its rep. slot/origin in the catalog.\n> If initial sync is long enough, then you should be able to see the columns get updated. I follow [1] to make it longer and test if the patch really updates the catalog.\n>\n>\n>>\n>> Also the v8 patch does not apply on HEAD, giving merge conflicts.\n>\n>\n> Rebased and resolved conflicts. 
Please check the new version\n\nCFBot shows some compilation errors as in [1], please post an updated\nversion for the same:\n[14:38:38.392] [827/1808] Compiling C object\nsrc/backend/postgres_lib.a.p/replication_logical_tablesync.c.o\n[14:38:38.392] ../src/backend/replication/logical/tablesync.c: In\nfunction ‘LogicalRepSyncTableStart’:\n[14:38:38.392] ../src/backend/replication/logical/tablesync.c:1629:3:\nwarning: implicit declaration of function ‘walrcv_slot_snapshot’\n[-Wimplicit-function-declaration]\n[14:38:38.392] 1629 | walrcv_slot_snapshot(LogRepWorkerWalRcvConn,\nslotname, &options, origin_startpos);\n[14:38:38.392] | ^~~~~~~~~~~~~~~~~~~~\n\n[14:38:45.125] FAILED: src/backend/postgres\n[14:38:45.125] cc @src/backend/postgres.rsp\n[14:38:45.125] /usr/bin/ld:\nsrc/backend/postgres_lib.a.p/replication_logical_tablesync.c.o: in\nfunction `LogicalRepSyncTableStart':\n[14:38:45.125] /tmp/cirrus-ci-build/build/../src/backend/replication/logical/tablesync.c:1629:\nundefined reference to `walrcv_slot_snapshot'\n[14:38:45.125] collect2: error: ld returned 1 exit status\n\n[1] - https://cirrus-ci.com/task/4897131543134208?logs=build#L1236\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 27 Jan 2023 19:53:03 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jan 27, 2023 at 3:41 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n\n>\n> I am reviewing further...\n> thanks\n> Shveta\n\nFew more comments:\n\nv4-0001:\n\n1)\nREPLICATION_SLOT_SNAPSHOT\n--Do we need 'CREATE' prefix with it i.e. CREATE_REPLICATION_SNAPSHOT\n(or some other brief one with CREATE?). 'REPLICATION_SLOT_SNAPSHOT'\ndoes not look like a command/action and thus is confusing.\n\n2)\nis used in the currenct transaction. 
This command is currently only supported\nfor logical replication.\nslots.\n--typo: currenct-->current\n--slots can be moved to previous line\n\n3)\n/*\n * Signal that we don't need the timeout mechanism. We're just creating\n * the replication slot and don't yet accept feedback messages or send\n * keepalives. As we possibly need to wait for further WAL the walsender\n * would otherwise possibly be killed too soon.\n */\nWe're just creating the replication slot --> We're just creating the\nreplication snapshot\n\n\n4)\nI see XactReadOnly check in CreateReplicationSlot, do we need the same\nin ReplicationSlotSnapshot() as well?\n\n\n===============\nv9-0002:\n\n5)\n /* We are safe to drop the replication trackin origin after this\n--typo: tracking\n\n6)\n slot->data.catalog_xmin = xmin_horizon;\n slot->effective_xmin = xmin_horizon;\n SpinLockRelease(&slot->mutex);\n xmin_horizon =\nGetOldestSafeDecodingTransactionId(!need_full_snapshot);\n ReplicationSlotsComputeRequiredXmin(true);\n\n--do we need to set xmin_horizon in slot after\n'GetOldestSafeDecodingTransactionId' call, otherwise it will be set to\nInvalidId in slot. Is that intentional? I see that we do set this\ncorrect xmin_horizon in builder->initial_xmin_horizon but the slot is\ncarrying Invalid one.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 31 Jan 2023 15:29:28 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Mon, Jan 23, 2023 21:00 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> Hi,\r\n>\r\n> Thanks for your review. \r\n> Attached updated versions of the patches.\r\n\r\nThanks for updating the patch set.\r\n\r\n> > 5. 
New member \"created_slot\" in structure LogicalRepWorker\r\n> > + /*\r\n> > + * Indicates if the sync worker created a replication slot or it reuses an\r\n> > + * existing one created by another worker.\r\n> > + */\r\n> > + bool created_slot;\r\n> >\r\n> > I think the second half of the sentence looks inaccurate.\r\n> > Because I think this flag could be false even when we reuse an existing slot\r\n> > created by another worker. Assuming the first run for the worker tries to sync\r\n> > a table which is synced by another sync worker before, and the relstate is set\r\n> > to SUBREL_STATE_FINISHEDCOPY by another sync worker, I think this flag will\r\n> not\r\n> > be set to true. (see function LogicalRepSyncTableStart)\r\n> >\r\n> > So, what if we simplify the description here and just say that this worker\r\n> > already has it's default slot?\r\n> >\r\n> > If I'm not missing something and you agree with this, please also kindly modify\r\n> > the relevant comment atop the if-statement (!MyLogicalRepWorker-\r\n> >created_slot)\r\n> > in the function LogicalRepSyncTableStart.\r\n> \r\n> This \"created_slot\" indicates whether the current worker has created a\r\n> replication slot for its own use. If so, created_slot will be true, otherwise false.\r\n> Let's say the tablesync worker has not created its own slot yet in its previous\r\n> runs or this is its first run. And the worker decides to reuse an existing\r\n> replication slot (which created by another tablesync worker). Then created_slot\r\n> is expected to be false. Because this particular tablesync worker has not created\r\n> its own slot yet in either of its runs.\r\n>\r\n> In your example, the worker is in its first run and begin to sync a table whose\r\n> state is FINISHEDCOPY. If the table's state is FINISHEDCOPY then the table\r\n> should already have a replication slot created for its own sync process. 
The\r\n> worker will want to reuse that existing replication slot for this particular table\r\n> and it will not create a new replication slot. So created_slot will be false, because\r\n> the worker has not actually created any replication slot yet.\r\n> \r\n> Basically, created_slot is set to true only if \"walrcv_create_slot\" is called by the\r\n> tablesync worker any time during its lifetime. Otherwise, it's possible that the\r\n> worker can use existing replication slots for each table it syncs. (e.g. if all the\r\n> tables that the worker has synced were in FINISHEDCOPY state, then the\r\n> worker will not need to create a new slot).\r\n> \r\n> Does it make sense now? Maybe I need to improve comments to make it\r\n> clearer.\r\n\r\nYes, I think it makes sense. Thanks for the detailed explanation.\r\nI think I misunderstood the second half of the comment. I previously thought it\r\nmeant that it was also true when reusing an existing slot.\r\n\r\nI found one typo in v9-0002, but it seems already mentioned by Shi in [1].#5\r\nbefore. Maybe you can have a look at that email for this and some other\r\ncomments.\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Tue, 31 Jan 2023 10:27:26 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tues, Jan 31, 2023 18:27 PM I wrote:\r\n> I found one typo in v9-0002, but it seems already mentioned by Shi in [1].#5\r\n> before. Maybe you can have a look at that email for this and some other\r\n> comments.\r\n\r\nSorry, I forgot to add the link to the email. 
Please refer to [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OSZPR01MB631013C833C98E826B3CFCB9FDC69%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Tue, 31 Jan 2023 10:40:03 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Jan 31, 2023 at 3:57 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 23, 2023 21:00 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> > Hi,\n> >\n> > Thanks for your review.\n> > Attached updated versions of the patches.\n>\n> Thanks for updating the patch set.\n>\n> > > 5. New member \"created_slot\" in structure LogicalRepWorker\n> > > + /*\n> > > + * Indicates if the sync worker created a replication slot or it reuses an\n> > > + * existing one created by another worker.\n> > > + */\n> > > + bool created_slot;\n> > >\n\n> Yes, I think it makes sense. Thanks for the detailed explanation.\n> I think I misunderstood the second half of the comment. I previously thought it\n> meant that it was also true when reusing an existing slot.\n>\n\nI agree with Wang-san that the comment is confusing, I too\nmisunderstood it initially during my first run of the code. 
Maybe it\ncan be improved.\n'Indicates if the sync worker created a replication slot for itself;\nset to false if sync worker reuses an existing one created by another\nworker'\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 31 Jan 2023 16:30:48 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nPlease see attached patches for the below changes.\n\nshveta malik <shveta.malik@gmail.com>, 27 Oca 2023 Cum, 13:12 tarihinde\nşunu yazdı:\n\n> On Thu, Jan 26, 2023 at 7:53 PM Melih Mutlu <m.melihmutlu@gmail.com>\n> wrote:\n> 1.\n> --Can we please add a few more points to the Summary to make it more clear.\n> a) something telling that reusability of workers is for tables under\n> one subscription and not across multiple subscriptions.\n> b) Since we are reusing both workers and slots, can we add:\n> --when do we actually end up spawning a new worker\n> --when do we actually end up creating a new slot in a worker rather\n> than using existing one.\n> --if we reuse existing slots, what happens to the snapshot?\n>\n\nI modified the commit message if that's what you mean by the Summary.\n\n\n> 2.\n> + The last used ID for tablesync workers. This ID is used to\n> + create replication slots. The last used ID needs to be stored\n> + to make logical replication can safely proceed after any\n> interruption.\n> + If sublastusedid is 0, then no table has been synced yet.\n>\n> --typo:\n> to make logical replication can safely proceed ==> to make logical\n> replication safely proceed\n>\n\nDone\n\n\n> 3.\n> is_first_run;\n> move_to_next_rel;\n> --Do you think one variable is enough here as we do not get any extra\n> info by using 2 variables? Can we have 1 which is more generic like\n> 'ready_to_reuse'. Otherwise, please let me know if we must use 2.\n>\n\nRight. 
Removed is_first_run and renamed move_to_next_rel as ready_to_reuse.\n\n\n> 4.\n> /* missin_ok = true, since the origin might be already dropped. */\n> typo: missing_ok\n>\n\nDone.\n\n\n> 5. GetReplicationSlotNamesBySubId:\n> errmsg(\"not tuple returned.\"));\n>\n> Can we have a better error msg:\n> ereport(ERROR,\n> errmsg(\"could not receive list of slots\n> associated with subscription %d, error: %s\", subid, res->err));\n>\n\nDone.\n\n\n> 6.\n> static void\n> clean_sync_worker(void)\n>\n> --can we please add introductory comment for this function.\n>\n\nDone.\n\n\n> 7.\n> /*\n> * Pick the table for the next run if there is not another worker\n> * already picked that table.\n> */\n> Pick the table for the next run if it is not already picked up by\n> another worker.\n>\n\nDone.\n\n\n> 8.\n> process_syncing_tables_for_sync()\n>\n> /* Cleanup before next run or ending the worker. */\n> --can we please improve this comment:\n> if there is no more work left for this worker, stop the worker\n> gracefully, else do clean-up and make it ready for the next\n> relation/run.\n>\n\nDone\n\n\n> 9.\n> LogicalRepSyncTableStart:\n> * Read previous slot name from the catalog, if exists.\n> */\n> prev_slotname = (char *) palloc0(NAMEDATALEN);\n> Do we need to free this at the end?\n>\n\nPfree'd prev_slotname after we're done with it.\n\n\n> 10.\n> if (strlen(prev_slotname) == 0)\n> {\n> elog(ERROR, \"Replication slot could not be\n> found for relation %u\",\n> MyLogicalRepWorker->relid);\n> }\n> shall we mention subid also in error msg.\n>\n\nDone.\n\nThanks for reviewing,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 1 Feb 2023 14:35:44 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nI mistakenly attached v9 in my previous email.\nPlease see attached v6 and v10 for the previous and below changes.\n\nshveta malik 
<shveta.malik@gmail.com>, 31 Oca 2023 Sal, 12:59 tarihinde\nşunu yazdı:\n\n> On Fri, Jan 27, 2023 at 3:41 PM shveta malik <shveta.malik@gmail.com>\n> wrote:\n> 1)\n> REPLICATION_SLOT_SNAPSHOT\n> --Do we need 'CREATE' prefix with it i.e. CREATE_REPLICATION_SNAPSHOT\n> (or some other brief one with CREATE?). 'REPLICATION_SLOT_SNAPSHOT'\n> does not look like a command/action and thus is confusing.\n>\n\nRenamed it as CREATE_REPLICATION_SNAPSHOT\n\n\n> 2)\n> is used in the currenct transaction. This command is currently only\n> supported\n> for logical replication.\n> slots.\n> --typo: currenct-->current\n> --slots can be moved to previous line\n>\n\nDone.\n\n\n> 3)\n> /*\n> * Signal that we don't need the timeout mechanism. We're just creating\n> * the replication slot and don't yet accept feedback messages or send\n> * keepalives. As we possibly need to wait for further WAL the walsender\n> * would otherwise possibly be killed too soon.\n> */\n> We're just creating the replication slot --> We're just creating the\n> replication snapshot\n>\n\nDone.\n\n\n> 4)\n> I see XactReadOnly check in CreateReplicationSlot, do we need the same\n> in ReplicationSlotSnapshot() as well?\n>\n\nAdded this check too.\n\n\n> ===============\n> v9-0002:\n>\n> 5)\n> /* We are safe to drop the replication trackin origin after this\n> --typo: tracking\n>\n\nDone.\n\n\n> 6)\n> slot->data.catalog_xmin = xmin_horizon;\n> slot->effective_xmin = xmin_horizon;\n> SpinLockRelease(&slot->mutex);\n> xmin_horizon =\n> GetOldestSafeDecodingTransactionId(!need_full_snapshot);\n> ReplicationSlotsComputeRequiredXmin(true);\n>\n> --do we need to set xmin_horizon in slot after\n> 'GetOldestSafeDecodingTransactionId' call, otherwise it will be set to\n> InvalidId in slot. Is that intentional? I see that we do set this\n> correct xmin_horizon in builder->initial_xmin_horizon but the slot is\n> carrying Invalid one.\n>\n\nI think you're right. 
Moved GetOldestSafeDecodingTransactionId call before\nxmin_horizon assignment.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 1 Feb 2023 14:44:22 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Feb 1, 2023 at 5:05 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Please see attached patches for the below changes.\n>\n\n> Thanks for reviewing,\n> --\n> Melih Mutlu\n> Microsoft\n\nHello Melih,\nThank you for making the changes.\n\nI have few more comments:\n1)\nsrc/backend/replication/logical/worker.c: (errmsg(\"logical replication\ntable synchronization worker for subscription \\\"%s\\\", table \\\"%s\\\" has\nstarted\",\nsrc/backend/replication/logical/worker.c: (errmsg(\"logical replication\ntable synchronization worker for subscription \\\"%s\\\" has moved to sync\ntable \\\"%s\\\".\",\nsrc/backend/replication/logical/tablesync.c: (errmsg(\"logical\nreplication table synchronization worker for subscription \\\"%s\\\",\ntable \\\"%s\\\" has finished\",\n\nIn above can we have rep_slot_id as well in trace message, else it is\nnot clear which worker moved to next relation. We may have:\nlogical replication table synchronization worker_%d for subscription\n\\\"%s\\\" has moved to sync table, rep_slot_id,....\n\nOverall we need to improve the tracing. I will give my suggestions on\nthis later (in detail).\n\n2) I found a crash in the previous patch (v9), but have not tested it\non the latest yet. Crash happens when all the replication slots are\nconsumed and we are trying to create new. I tweaked the settings like\nbelow so that it can be reproduced easily:\nmax_sync_workers_per_subscription=3\nmax_replication_slots = 2\nand then ran the test case shared by you. I think there is some memory\ncorruption happening. (I did test in debug mode, have not tried in\nrelease mode). 
I tried to put some traces to identify the root-cause.\nI observed that worker_1 keeps on moving from 1 table to another table\ncorrectly, but at some point, it gets corrupted i.e. origin-name\nobtained for it is wrong and it tries to advance that and since that\norigin does not exist, it asserts and then something else crashes.\n From log: (new trace lines added by me are prefixed by shveta, also\ntweaked code to have my comment 1 fixed to have clarity on worker-id).\n\nform below traces, it is clear that worker_1 was moving from one\nrelation to another, always getting correct origin 'pg_16688_1', but\nat the end it got 'pg_16688_49' which does not exist. Second part of\ntrace shows who updated 'pg_16688_49', it was done by worker_49 which\neven did not get chance to create this origin due to max_rep_slot\nreached.\n==============================\n..............\n2023-02-01 16:01:38.041 IST [9243] LOG: logical replication table\nsynchronization worker_1 for subscription \"mysub\", table \"table_93\"\nhas finished\n2023-02-01 16:01:38.047 IST [9243] LOG: logical replication table\nsynchronization worker_1 for subscription \"mysub\" has moved to sync\ntable \"table_98\".\n2023-02-01 16:01:38.055 IST [9243] LOG: shveta-\nLogicalRepSyncTableStart- worker_1 get-origin-id2 originid:2,\noriginname:pg_16688_1\n2023-02-01 16:01:38.055 IST [9243] LOG: shveta-\nLogicalRepSyncTableStart- Worker_1 reusing\nslot:pg_16688_sync_1_7195132648087016333, originid:2,\noriginname:pg_16688_1\n2023-02-01 16:01:38.094 IST [9243] LOG: shveta-\nLogicalRepSyncTableStart- worker_1 updated-origin2\noriginname:pg_16688_1\n2023-02-01 16:01:38.096 IST [9243] LOG: logical replication table\nsynchronization worker_1 for subscription \"mysub\", table \"table_98\"\nhas finished\n2023-02-01 16:01:38.096 IST [9243] LOG: logical replication table\nsynchronization worker_1 for subscription \"mysub\" has moved to sync\ntable \"table_60\".\n2023-02-01 16:01:38.099 IST [9243] LOG: 
shveta-\nLogicalRepSyncTableStart- worker_1 get-origin originid:0,\noriginname:pg_16688_49\n2023-02-01 16:01:38.099 IST [9243] LOG: could not drop replication\nslot \"pg_16688_sync_49_7195132648087016333\" on publisher: ERROR:\nreplication slot \"pg_16688_sync_49_7195132648087016333\" does not exist\n2023-02-01 16:01:38.103 IST [9243] LOG: shveta-\nLogicalRepSyncTableStart- Worker_1 reusing\nslot:pg_16688_sync_1_7195132648087016333, originid:0,\noriginname:pg_16688_49\nTRAP: failed Assert(\"node != InvalidRepOriginId\"), File: \"origin.c\",\nLine: 892, PID: 9243\npostgres: logical replication worker for subscription 16688 sync 16384\n(ExceptionalCondition+0xbb)[0x56019194d3b7]\npostgres: logical replication worker for subscription 16688 sync 16384\n(replorigin_advance+0x6d)[0x5601916b53d4]\npostgres: logical replication worker for subscription 16688 sync 16384\n(LogicalRepSyncTableStart+0xbb4)[0x5601916cb648]\npostgres: logical replication worker for subscription 16688 sync 16384\n(+0x5d25e2)[0x5601916d35e2]\npostgres: logical replication worker for subscription 16688 sync 16384\n(+0x5d282c)[0x5601916d382c]\npostgres: logical replication worker for subscription 16688 sync 16384\n(ApplyWorkerMain+0x17b)[0x5601916d4078]\npostgres: logical replication worker for subscription 16688 sync 16384\n(StartBackgroundWorker+0x248)[0x56019167f943]\npostgres: logical replication worker for subscription 16688 sync 16384\n(+0x589ad3)[0x56019168aad3]\npostgres: logical replication worker for subscription 16688 sync 16384\n(+0x589ee3)[0x56019168aee3]\npostgres: logical replication worker for subscription 16688 sync 16384\n(+0x588d8d)[0x560191689d8d]\npostgres: logical replication worker for subscription 16688 sync 16384\n(+0x584604)[0x560191685604]\npostgres: logical replication worker for subscription 16688 sync 16384\n(PostmasterMain+0x14f1)[0x560191684f1e]\npostgres: logical replication worker for subscription 16688 sync 
16384\n(+0x446e05)[0x560191547e05]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f048cc58d90]\n==============================\n\n'pg_16688_49' updated by worker_49:\n2023-02-01 16:01:37.083 IST [9348] LOG: shveta-\nLogicalRepSyncTableStart- worker_49 get-origin originid:0,\noriginname:pg_16688_49\n2023-02-01 16:01:37.083 IST [9348] LOG: shveta-\nLogicalRepSyncTableStart- worker_49 updated-origin\noriginname:pg_16688_49\n2023-02-01 16:01:37.083 IST [9348] LOG: shveta-\nLogicalRepSyncTableStart- worker_49 get-origin-id2 originid:0,\noriginname:pg_16688_49\n2023-02-01 16:01:37.083 IST [9348] ERROR: could not create\nreplication slot \"pg_16688_sync_49_7195132648087016333\": ERROR: all\nreplication slots are in use\n HINT: Free one or increase max_replication_slots.\n==============================\n\n\nRest of the workers keep on exiting and getting recreated since they\ncould not create slot. The last_used_id kept on increasing on every\nrestart of subscriber until we kill it. In my case it reached 2k+.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 1 Feb 2023 17:30:50 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nwangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com> wrote on Tue, 31 Jan 2023 at 13:40:\n\n> Sorry, I forgot to add the link to the email. Please refer to [1].\n>\n> [1] -\n> https://www.postgresql.org/message-id/OSZPR01MB631013C833C98E826B3CFCB9FDC69%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\n\nThanks for pointing out this review. I somehow skipped that, sorry.\n\nPlease see attached patches.\n\nshiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com> wrote on Tue, 17 Jan 2023 at 10:46:\n\n> On Wed, Jan 11, 2023 4:31 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> 0001 patch\n> ============\n> 1. 
walsender.c\n> + /* Create a tuple to send consisten WAL location */\n>\n> \"consisten\" should be \"consistent\" I think.\n>\n\nDone.\n\n\n> 2. logical.c\n> + if (need_full_snapshot)\n> + {\n> + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> +\n> + SpinLockAcquire(&slot->mutex);\n> + slot->effective_catalog_xmin = xmin_horizon;\n> + slot->data.catalog_xmin = xmin_horizon;\n> + slot->effective_xmin = xmin_horizon;\n> + SpinLockRelease(&slot->mutex);\n> +\n> + xmin_horizon =\n> GetOldestSafeDecodingTransactionId(!need_full_snapshot);\n> + ReplicationSlotsComputeRequiredXmin(true);\n> +\n> + LWLockRelease(ProcArrayLock);\n> + }\n>\n> It seems that we should first get the safe decoding xid, then inform the\n> slot\n> machinery about the new limit, right? Otherwise the limit will be\n> InvalidTransactionId and that seems inconsistent with the comment.\n>\n\nYou're right. Moved that call before assigning xmin_horizon.\n\n\n> 3. doc/src/sgml/protocol.sgml\n> + is used in the currenct transaction. This command is currently\n> only supported\n> + for logical replication.\n> + slots.\n>\n> We don't need to put \"slots\" in a new line.\n>\n\nDone.\n\n\n> 0002 patch\n> ============\n> 1.\n> In pg_subscription_rel.h, I think the type of \"srrelslotname\" can be\n> changed to\n> NameData, see \"subslotname\" in pg_subscription.h.\n>\n> 2.\n> + * Find the logical replication sync\n> worker if exists store\n> + * the slot number for dropping associated\n> replication slots\n> + * later.\n>\n> Should we add comma after \"if exists\"?\n>\n\nDone.\n\n3.\n> + PG_FINALLY();\n> + {\n> + pfree(cmd.data);\n> + }\n> + PG_END_TRY();\n> + \\\n> + return tablelist;\n> +}\n>\n> Do we need the backslash?\n>\n\nRemoved it.\n\n\n> 4.\n> + /*\n> + * Advance to the LSN got from walrcv_create_slot. This is WAL\n> + * logged for the purpose of recovery. 
Locks are to prevent the\n> + * replication origin from vanishing while advancing.\n>\n> \"walrcv_create_slot\" should be changed to\n> \"walrcv_create_slot/walrcv_slot_snapshot\" I think.\n\n\nRight, done.\n\n\n>\n>\n5.\n> + /* Replication drop might still exist. Try to drop\n> */\n> + replorigin_drop_by_name(originname, true, false);\n>\n> Should \"Replication drop\" be \"Replication origin\"?\n>\n\nDone.\n\n\n> 6.\n> I saw an assertion failure in the following case, could you please look\n> into it?\n> The backtrace is attached.\n>\n> -- pub\n> CREATE TABLE tbl1 (a int, b text);\n> CREATE TABLE tbl2 (a int primary key, b text);\n> create publication pub for table tbl1, tbl2;\n> insert into tbl1 values (1, 'a');\n> insert into tbl1 values (1, 'a');\n>\n> -- sub\n> CREATE TABLE tbl1 (a int primary key, b text);\n> CREATE TABLE tbl2 (a int primary key, b text);\n> create subscription sub connection 'dbname=postgres port=5432' publication\n> pub;\n>\n> Subscriber log:\n> 2023-01-17 14:47:10.054 CST [1980841] LOG: logical replication apply\n> worker for subscription \"sub\" has started\n> 2023-01-17 14:47:10.060 CST [1980843] LOG: logical replication table\n> synchronization worker for subscription \"sub\", table \"tbl1\" has started\n> 2023-01-17 14:47:10.070 CST [1980845] LOG: logical replication table\n> synchronization worker for subscription \"sub\", table \"tbl2\" has started\n> 2023-01-17 14:47:10.073 CST [1980843] ERROR: duplicate key value violates\n> unique constraint \"tbl1_pkey\"\n> 2023-01-17 14:47:10.073 CST [1980843] DETAIL: Key (a)=(1) already exists.\n> 2023-01-17 14:47:10.073 CST [1980843] CONTEXT: COPY tbl1, line 2\n> 2023-01-17 14:47:10.074 CST [1980821] LOG: background worker \"logical\n> replication worker\" (PID 1980843) exited with exit code 1\n> 2023-01-17 14:47:10.083 CST [1980845] LOG: logical replication table\n> synchronization worker for subscription \"sub\", table \"tbl2\" has finished\n> 2023-01-17 14:47:10.083 CST [1980845] LOG: 
logical replication table\n> synchronization worker for subscription \"sub\" has moved to sync table\n> \"tbl1\".\n> TRAP: failed Assert(\"node != InvalidRepOriginId\"), File: \"origin.c\", Line:\n> 892, PID: 1980845\n>\n\nI'm not able to reproduce this yet. Will look into it further.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 1 Feb 2023 15:07:25 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Shveta,\n\nshveta malik <shveta.malik@gmail.com> wrote on Wed, 1 Feb 2023 at 15:01:\n\n> On Wed, Feb 1, 2023 at 5:05 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> 2) I found a crash in the previous patch (v9), but have not tested it\n> on the latest yet. Crash happens when all the replication slots are\n> consumed and we are trying to create new. I tweaked the settings like\n> below so that it can be reproduced easily:\n> max_sync_workers_per_subscription=3\n> max_replication_slots = 2\n> and then ran the test case shared by you. I think there is some memory\n> corruption happening. (I did test in debug mode, have not tried in\n> release mode). I tried to put some traces to identify the root-cause.\n> I observed that worker_1 keeps on moving from 1 table to another table\n> correctly, but at some point, it gets corrupted i.e. origin-name\n> obtained for it is wrong and it tries to advance that and since that\n> origin does not exist, it asserts and then something else crashes.\n> From log: (new trace lines added by me are prefixed by shveta, also\n> tweaked code to have my comment 1 fixed to have clarity on worker-id).\n>\n> form below traces, it is clear that worker_1 was moving from one\n> relation to another, always getting correct origin 'pg_16688_1', but\n> at the end it got 'pg_16688_49' which does not exist. 
Second part of\n> trace shows who updated 'pg_16688_49', it was done by worker_49 which\n> even did not get chance to create this origin due to max_rep_slot\n> reached.\n>\n\nThanks for investigating this error. I think it's the same error as the one\nShi reported earlier. [1]\nI couldn't reproduce it yet but will apply your tweaks and try again.\nLooking into this.\n\n[1]\nhttps://www.postgresql.org/message-id/OSZPR01MB631013C833C98E826B3CFCB9FDC69%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 1 Feb 2023 15:12:19 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Feb 1, 2023 at 5:42 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n\n>\n>\n> Thanks for investigating this error. I think it's the same error as the one Shi reported earlier. [1]\n> I couldn't reproduce it yet but will apply your tweaks and try again.\n> Looking into this.\n>\n> [1] https://www.postgresql.org/message-id/OSZPR01MB631013C833C98E826B3CFCB9FDC69%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\nI tried Shi-san's testcase earlier but I too could not reproduce it,\nso I assumed that it is fixed in one of your patches already and thus\nthought that the current issue is a new one.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 1 Feb 2023 18:27:56 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Feb 1, 2023 at 5:42 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Shveta,\n>\n> shveta malik <shveta.malik@gmail.com>, 1 Şub 2023 Çar, 15:01 tarihinde şunu yazdı:\n>>\n>> On Wed, Feb 1, 2023 at 5:05 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>> 2) I found a crash in the previous patch (v9), but have not tested it\n>> on the latest yet. 
Crash happens when all the replication slots are\n>> consumed and we are trying to create new. I tweaked the settings like\n>> below so that it can be reproduced easily:\n>> max_sync_workers_per_subscription=3\n>> max_replication_slots = 2\n>> and then ran the test case shared by you. I think there is some memory\n>> corruption happening. (I did test in debug mode, have not tried in\n>> release mode). I tried to put some traces to identify the root-cause.\n>> I observed that worker_1 keeps on moving from 1 table to another table\n>> correctly, but at some point, it gets corrupted i.e. origin-name\n>> obtained for it is wrong and it tries to advance that and since that\n>> origin does not exist, it asserts and then something else crashes.\n>> From log: (new trace lines added by me are prefixed by shveta, also\n>> tweaked code to have my comment 1 fixed to have clarity on worker-id).\n>>\n>> form below traces, it is clear that worker_1 was moving from one\n>> relation to another, always getting correct origin 'pg_16688_1', but\n>> at the end it got 'pg_16688_49' which does not exist. Second part of\n>> trace shows who updated 'pg_16688_49', it was done by worker_49 which\n>> even did not get chance to create this origin due to max_rep_slot\n>> reached.\n>\n>\n> Thanks for investigating this error. I think it's the same error as the one Shi reported earlier. [1]\n> I couldn't reproduce it yet but will apply your tweaks and try again.\n> Looking into this.\n>\n> [1] https://www.postgresql.org/message-id/OSZPR01MB631013C833C98E826B3CFCB9FDC69%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n>\n\nHi Melih,\nI think I am able to identify the root cause. 
It is not memory\ncorruption, but the way origin-names are stored in system-catalog\nmapped to a particular relation-id before even those are created.\n\nAfter adding few more logs:\n\n[4227] LOG: shveta- LogicalRepSyncTableStart- worker_49 constructed\noriginname :pg_16684_49, relid:16540\n[4227] LOG: shveta- LogicalRepSyncTableStart- worker_49\nupdated-origin in system catalog:pg_16684_49,\nslot:pg_16684_sync_49_7195149685251088378, relid:16540\n[4227] LOG: shveta- LogicalRepSyncTableStart- worker_49\nget-origin-id2 originid:0, originname:pg_16684_49\n[4227] ERROR: could not create replication slot\n\"pg_16684_sync_49_7195149685251088378\": ERROR: all replication slots\nare in use\n HINT: Free one or increase max_replication_slots.\n\n\n[4428] LOG: shveta- LogicalRepSyncTableStart- worker_148 constructed\noriginname :pg_16684_49, relid:16540\n[4428] LOG: could not drop replication slot\n\"pg_16684_sync_49_7195149685251088378\" on publisher: ERROR:\nreplication slot \"pg_16684_sync_49_7195149 685251088378\" does not\nexist\n[4428] LOG: shveta- LogicalRepSyncTableStart- worker_148 drop-origin\noriginname:pg_16684_49\n[4428] LOG: shveta- LogicalRepSyncTableStart- worker_148\nupdated-origin:pg_16684_49,\nslot:pg_16684_sync_148_7195149685251088378, relid:16540\n\nSo from above, worker_49 came and picked up relid:16540, constructed\norigin-name and slot-name and updated in system-catalog and then it\ntried to actually create that slot and origin but since max-slot count\nwas reached, it failed and exited, but did not do cleanup from system\ncatalog for that relid.\n\nThen worker_148 came and also picked up table 16540 since it was not\ncompleted/started by previous worker, but this time it found that\norigin and slot entry present in system-catalog against this relid, so\nit picked the same names and started processing, but since those do\nnot exist, it asserted while advancing the origin.\n\nThe assert is only reproduced when an already running worker (say\nworker_1) 
who has 'created=true' set, gets to sync the relid for which\na previously failed worker has tried and updated origin-name w/o\ncreating it. In such a case worker_1 (with created=true) will try to\nreuse the origin and thus will try to advance it and will end up\nasserting. That is why you might not see the assertion always. The\ncondition 'created' is set to true for that worker and it goes to\nreuse the origin updated by the previous worker.\n\nSo to fix this, I think either we update origin and slot entries in\nthe system catalog after the creation has passed or we clean-up the\nsystem catalog in case of failure. What do you think?\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 2 Feb 2023 09:18:03 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Feb 2, 2023 at 9:18 AM shveta malik <shveta.malik@gmail.com> wrote:\n>\n>\n> Hi Melih,\n> I think I am able to identify the root cause. It is not memory\n> corruption, but the way origin-names are stored in system-catalog\n> mapped to a particular relation-id before even those are created.\n>\n\nApart from the problem mentioned in my earlier email, I think there is\none more issue here as seen by the same assert causing testcase. The\n'lastusedid' stored in system-catalog kept on increasing w/o even slot\nand origin getting created. 2 workers worked well with\nmax_replication_slots=2 and then since all slots were consumed 3rd one\ncould not create any slot and exited but it increased lastusedid. Then\nanother worker came, incremented lastusedId in system-catalog and\nfailed to create slot and exited and so on. This makes lastUsedId\nincremented continuously until you kill the subscriber or free any\nslot used previously. 
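As an aside, the leak can be sketched in isolation (a standalone C sketch with made-up names like `start_worker_increment_first`, not the patch's actual code): if the persistent counter is consumed before slot creation is known to succeed, every failed worker start burns an id, while incrementing only after a successful create keeps the counter bounded by the number of slots actually created:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_REPLICATION_SLOTS 2    /* mirrors the tweaked test setting */

static long last_used_id = 0;      /* stand-in for the persisted lastusedid */
static int  slots_in_use = 0;

/* Stand-in for slot creation: fails once all slots are taken. */
static bool
create_slot(void)
{
    if (slots_in_use >= MAX_REPLICATION_SLOTS)
        return false;
    slots_in_use++;
    return true;
}

/* Observed ordering: the id is persisted before we know creation works. */
static long
start_worker_increment_first(void)
{
    long id = ++last_used_id;      /* id burned even if the next step fails */

    if (!create_slot())
        return -1;                 /* worker exits; counter already advanced */
    return id;
}

/* Suggested ordering: persist the counter only after the slot exists. */
static long
start_worker_increment_after(void)
{
    long id = last_used_id + 1;

    if (!create_slot())
        return -1;                 /* nothing persisted, counter unchanged */
    last_used_id = id;
    return id;
}
```

With two slots available, two worker starts succeed and every later restart fails; under the first ordering last_used_id still advances on each failed restart, which matches the 2k+ growth seen above.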
If you keep subscriber running long enough, it\nwill make lastUsedId go beyond its limit.\nShouldn't lastUsedId be incremented only after making sure that worker\nhas created a slot and origin corresponding to that particular\nrep_slot_id (derived using lastUsedId). Please let me know if my\nunderstanding is not correct.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 2 Feb 2023 14:35:37 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Feb 1, 2023 at 5:37 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n> Please see attached patches.\n>\n> Thanks,\n> --\n> Melih Mutlu\n> Microsoft\n\nHi Melih,\n\nFew suggestions on v10-0002-Reuse patch\n\n1)\n for (int64 i = 1; i <= lastusedid; i++)\n {\n char originname_to_drop[NAMEDATALEN] = {0};\n snprintf(originname_to_drop,\nsizeof(originname_to_drop), \"pg_%u_%lld\", subid, (long long) i);\n .......\n }\n\n--Is it better to use the function\n'ReplicationOriginNameForLogicalRep' here instead of sprintf, just to\nbe consistent everywhere to construct origin-name?\n\n\n2)\npa_launch_parallel_worker:\nlaunched = logicalrep_worker_launch(MyLogicalRepWorker->dbid,\n MySubscription->oid,\n\nMySubscription->name,\n\nMyLogicalRepWorker->userid,\n InvalidOid,\n\ndsm_segment_handle(winfo->dsm_seg),\n 0);\n\n--Can we please define 'InvalidRepSlotId' macro and pass it here as\nthe last arg to make it more readable.\n#define InvalidRepSlotId 0\nSame in ApplyLauncherMain where we are passing 0 as last arg to\nlogicalrep_worker_launch.\n\n3)\nWe are safe to drop the replication trackin origin after this\n--typo: trackin -->tracking\n\n4)\nprocess_syncing_tables_for_sync:\nif (MyLogicalRepWorker->slot_name && strcmp(syncslotname,\nMyLogicalRepWorker->slot_name) != 0)\n{\n ..............\nReplicationOriginNameForLogicalRep(MyLogicalRepWorker->subid,\n\nMyLogicalRepWorker->relid,\n 
originname,\n\nsizeof(originname));\n\n/* Drop replication origin */\nreplorigin_drop_by_name(originname, true, false);\n}\n\n--Are we passing missing_ok as true (second arg) intentionally here in\nreplorigin_drop_by_name? Once we fix the issue reported in my earlier\nemail (ASSERT), do you think it makes sense to pass missing_ok as\nfalse here?\n\n5)\nprocess_syncing_tables_for_sync:\n foreach(lc, rstates)\n {\n\n rstate = (SubscriptionRelState *)\npalloc(sizeof(SubscriptionRelState));\n memcpy(rstate, lfirst(lc),\nsizeof(SubscriptionRelState));\n /*\n * Pick the table for the next run if it is\nnot already picked up\n * by another worker.\n */\n LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n if (rstate->state != SUBREL_STATE_SYNCDONE &&\n\n!logicalrep_worker_find(MySubscription->oid, rstate->relid, false))\n\n {\n .........\n }\n LWLockRelease(LogicalRepWorkerLock);\n }\n\n--Do we need to palloc for each relation separately? Shall we do it\nonce outside the loop and reuse it? Also pfree is not done for rstate\nhere.\n\n\n\n6)\nLogicalRepSyncTableStart:\n1385 slotname = (char *) palloc(NAMEDATALEN);\n1413 prev_slotname = (char *) palloc(NAMEDATALEN);\n1481 slotname = prev_slotname;\n1502 pfree(prev_slotname);\n1512 UpdateSubscriptionRel(MyLogicalRepWorker->subid,\n1513\nMyLogicalRepWorker->relid,\n1514\nMyLogicalRepWorker->relstate,\n1515\nMyLogicalRepWorker->relstate_lsn,\n1516 slotname,\n1517 originname);\n\nCan you please review the above flow (I have given line# along with),\nI think it could be problematic. 
We alloced prev_slotname, assigned it\nto slotname, freed prev_slotname and used slotname after freeing the\nprev_slotname.\nAlso slotname is allocated some memory too, that is not freed.\n\nReviewing further....\n\nJFYI, I get below while applying patch:\n\n========================\nshveta@shveta-vm:~/repos/postgres1/postgres$ git am\n~/Desktop/shared/reuse/v10-0002-Reuse-Logical-Replication-Background-worker.patch\nApplying: Reuse Logical Replication Background worker\n.git/rebase-apply/patch:142: trailing whitespace.\n values[Anum_pg_subscription_rel_srrelslotname - 1] =\n.git/rebase-apply/patch:692: indent with spaces.\n errmsg(\"could not receive list of slots associated\nwith the subscription %u, error: %s\",\n.git/rebase-apply/patch:1055: trailing whitespace.\n /*\n.git/rebase-apply/patch:1057: trailing whitespace.\n * relations.\n.git/rebase-apply/patch:1059: trailing whitespace.\n * and origin. Then stop the worker gracefully.\nwarning: 5 lines add whitespace errors.\n ========================\n\n\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 2 Feb 2023 17:01:09 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Feb 2, 2023 at 5:01 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> Reviewing further....\n>\n\nFew more comments for v10-0002 and v7-0001:\n\n1)\n+ * need_full_snapshot\n+ * if true, create a snapshot able to read all tables,\n+ * otherwise do not create any snapshot.\n+ *\nCreateDecodingContext(..,CreateDecodingContext,..)\n\n--Is the comment correct? Shall we have same comment here as that of\n'CreateDecodingContext'\n * need_full_snapshot -- if true, must obtain a snapshot able to read all\n * tables; if false, one that can read only catalogs is acceptable.\nThis function is not going to create a snapshot anyways. 
It is just a\npre-step and then the caller needs to call 'SnapBuild' functions to\nbuild a snapshot. Here need_full_snapshot decides whether we need all\ntables or only catalog tables changes only and thus the comment change\nis needed.\n\n==========\n\n2)\n\nCan we please add more logging:\n\n2a)\nwhen lastusedId is incremented and updated in pg_* table\nereport(DEBUG2,\n(errmsg(\"[subid:%d] Incremented lastusedid\nto:%ld\",MySubscription->oid, MySubscription->lastusedid)));\n\n\nComments for LogicalRepSyncTableStart():\n\n2b ) After every UpdateSubscriptionRel:\n\nereport(DEBUG2,\n(errmsg(\"[subid:%d] LogicalRepSyncWorker_%ld updated origin to %s and\nslot to %s for relid %d\",\nMyLogicalRepWorker->subid, MyLogicalRepWorker->rep_slot_id,\noriginname, slotname, MyLogicalRepWorker->relid)));\n\n\n2c )\nAfter walrcv_create_slot:\n\nereport(DEBUG2,\n(errmsg(\"[subid:%d] LogicalRepSyncWorker_%ld created slot %s\",\nMyLogicalRepWorker->subid, MyLogicalRepWorker->rep_slot_id, slotname)));\n\n\n2d)\nAfter replorigin_create:\n\nereport(DEBUG2,\n(errmsg(\"[subid:%d] LogicalRepSyncWorker_%ld created origin %s [id: %d]\",\nMyLogicalRepWorker->subid, MyLogicalRepWorker->rep_slot_id,\noriginname, originid)));\n\n\n2e)\nWhen it goes to reuse flow (i.e. before walrcv_slot_snapshot), if\nneeded we can dump newly obtained origin_startpos also:\n\nereport(DEBUG2,\n(errmsg(\"[subid:%d] LogicalRepSyncWorker_%ld reusing slot %s and origin %s\",\nMyLogicalRepWorker->subid, MyLogicalRepWorker->rep_slot_id, slotname,\noriginname)));\n\n\n2f)\nAlso in existing comment:\n\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" has moved to sync table \\\"%s\\\".\",\n+ MySubscription->name, get_rel_name(MyLogicalRepWorker->relid))));\n\nwe can add relid also along with relname. relid is the one stored in\npg_subscription_rel and thus it becomes easy to map. 
Also we can\nchange 'logical replication table synchronization worker' to\n'LogicalRepSyncWorker_%ld'.\nSame change needed in other similar log lines where we say that worker\nstarted and finished.\n\n\nPlease feel free to change the above log lines as you find\nappropriate. I have given just a sample sort of thing.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 3 Feb 2023 11:50:27 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Feb 3, 2023 at 11:50 AM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Thu, Feb 2, 2023 at 5:01 PM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n>\n> 2e)\n> When it goes to reuse flow (i.e. before walrcv_slot_snapshot), if\n> needed we can dump newly obtained origin_startpos also:\n>\n> ereport(DEBUG2,\n> (errmsg(\"[subid:%d] LogicalRepSyncWorker_%ld reusing slot %s and origin %s\",\n> MyLogicalRepWorker->subid, MyLogicalRepWorker->rep_slot_id, slotname,\n> originname)));\n>\n\nOne addition, I think it will be good to add relid as well in above so\nthat we can get info as in we are reusing old slot for which relid.\nOnce we have all the above in log-file, it makes it very easy to\ndiagnose reuse-sync worker related problems just by looking at the\nlogfile.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 3 Feb 2023 12:04:58 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Feb 2, 2023 at 5:01 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> Reviewing further....\n>\n\nHi Melih,\n\nint64 rep_slot_id;\nint64 lastusedid;\nint64 sublastusedid\n\n--Should all of the above be unsigned integers?\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 3 Feb 2023 15:49:23 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", 
"msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "\r\nOn Thu, Feb 2, 2023 11:48 AM shveta malik <shveta.malik@gmail.com> wrote:\r\n> \r\n> On Wed, Feb 1, 2023 at 5:42 PM Melih Mutlu <m.melihmutlu@gmail.com>\r\n> wrote:\r\n> >\r\n> > Hi Shveta,\r\n> >\r\n> > shveta malik <shveta.malik@gmail.com>, 1 Şub 2023 Çar, 15:01 tarihinde\r\n> şunu yazdı:\r\n> >>\r\n> >> On Wed, Feb 1, 2023 at 5:05 PM Melih Mutlu <m.melihmutlu@gmail.com>\r\n> wrote:\r\n> >> 2) I found a crash in the previous patch (v9), but have not tested it\r\n> >> on the latest yet. Crash happens when all the replication slots are\r\n> >> consumed and we are trying to create new. I tweaked the settings like\r\n> >> below so that it can be reproduced easily:\r\n> >> max_sync_workers_per_subscription=3\r\n> >> max_replication_slots = 2\r\n> >> and then ran the test case shared by you. I think there is some memory\r\n> >> corruption happening. (I did test in debug mode, have not tried in\r\n> >> release mode). I tried to put some traces to identify the root-cause.\r\n> >> I observed that worker_1 keeps on moving from 1 table to another table\r\n> >> correctly, but at some point, it gets corrupted i.e. origin-name\r\n> >> obtained for it is wrong and it tries to advance that and since that\r\n> >> origin does not exist, it asserts and then something else crashes.\r\n> >> From log: (new trace lines added by me are prefixed by shveta, also\r\n> >> tweaked code to have my comment 1 fixed to have clarity on worker-id).\r\n> >>\r\n> >> form below traces, it is clear that worker_1 was moving from one\r\n> >> relation to another, always getting correct origin 'pg_16688_1', but\r\n> >> at the end it got 'pg_16688_49' which does not exist. 
Second part of\r\n> >> trace shows who updated 'pg_16688_49', it was done by worker_49\r\n> which\r\n> >> even did not get chance to create this origin due to max_rep_slot\r\n> >> reached.\r\n> >\r\n> >\r\n> > Thanks for investigating this error. I think it's the same error as the one Shi\r\n> reported earlier. [1]\r\n> > I couldn't reproduce it yet but will apply your tweaks and try again.\r\n> > Looking into this.\r\n> >\r\n> > [1] https://www.postgresql.org/message-\r\n> id/OSZPR01MB631013C833C98E826B3CFCB9FDC69%40OSZPR01MB6310.jpn\r\n> prd01.prod.outlook.com\r\n> >\r\n> \r\n> Hi Melih,\r\n> I think I am able to identify the root cause. It is not memory\r\n> corruption, but the way origin-names are stored in system-catalog\r\n> mapped to a particular relation-id before even those are created.\r\n> \r\n> After adding few more logs:\r\n> \r\n> [4227] LOG: shveta- LogicalRepSyncTableStart- worker_49 constructed\r\n> originname :pg_16684_49, relid:16540\r\n> [4227] LOG: shveta- LogicalRepSyncTableStart- worker_49\r\n> updated-origin in system catalog:pg_16684_49,\r\n> slot:pg_16684_sync_49_7195149685251088378, relid:16540\r\n> [4227] LOG: shveta- LogicalRepSyncTableStart- worker_49\r\n> get-origin-id2 originid:0, originname:pg_16684_49\r\n> [4227] ERROR: could not create replication slot\r\n> \"pg_16684_sync_49_7195149685251088378\": ERROR: all replication slots\r\n> are in use\r\n> HINT: Free one or increase max_replication_slots.\r\n> \r\n> \r\n> [4428] LOG: shveta- LogicalRepSyncTableStart- worker_148 constructed\r\n> originname :pg_16684_49, relid:16540\r\n> [4428] LOG: could not drop replication slot\r\n> \"pg_16684_sync_49_7195149685251088378\" on publisher: ERROR:\r\n> replication slot \"pg_16684_sync_49_7195149 685251088378\" does not\r\n> exist\r\n> [4428] LOG: shveta- LogicalRepSyncTableStart- worker_148 drop-origin\r\n> originname:pg_16684_49\r\n> [4428] LOG: shveta- LogicalRepSyncTableStart- worker_148\r\n> updated-origin:pg_16684_49,\r\n> 
slot:pg_16684_sync_148_7195149685251088378, relid:16540\r\n> \r\n> So from above, worker_49 came and picked up relid:16540, constructed\r\n> origin-name and slot-name and updated in system-catalog and then it\r\n> tried to actually create that slot and origin but since max-slot count\r\n> was reached, it failed and exited, but did not do cleanup from system\r\n> catalog for that relid.\r\n> \r\n> Then worker_148 came and also picked up table 16540 since it was not\r\n> completed/started by previous worker, but this time it found that\r\n> origin and slot entry present in system-catalog against this relid, so\r\n> it picked the same names and started processing, but since those do\r\n> not exist, it asserted while advancing the origin.\r\n> \r\n> The assert is only reproduced when an already running worker (say\r\n> worker_1) who has 'created=true' set, gets to sync the relid for which\r\n> a previously failed worker has tried and updated origin-name w/o\r\n> creating it. In such a case worker_1 (with created=true) will try to\r\n> reuse the origin and thus will try to advance it and will end up\r\n> asserting. That is why you might not see the assertion always. The\r\n> condition 'created' is set to true for that worker and it goes to\r\n> reuse the origin updated by the previous worker.\r\n> \r\n> So to fix this, I think either we update origin and slot entries in\r\n> the system catalog after the creation has passed or we clean-up the\r\n> system catalog in case of failure. What do you think?\r\n> \r\n\r\nI think the first way seems better.\r\n\r\nI reproduced the problem I reported before with latest patch (v7-0001,\r\nv10-0002), and looked into this problem. It is caused by a similar reason. Here\r\nis some analysis for the problem I reported [1].#6.\r\n\r\nFirst, a tablesync worker (worker-1) started for \"tbl1\", its originname is\r\n\"pg_16398_1\". And it exited because of unique constraint. 
In\r\nLogicalRepSyncTableStart(), originname in pg_subscription_rel is updated when\r\nupdating table state to DATASYNC, and the origin is created when updating table\r\nstate to FINISHEDCOPY. So when it exited with state DATASYNC , the origin is not\r\ncreated but the originname has been updated in pg_subscription_rel.\r\n\r\nThen a tablesync worker (worker-2) started for \"tbl2\", its originname is\r\n\"pg_16398_2\". After tablesync of \"tbl2\" finished, this worker moved to sync\r\ntable \"tbl1\". In LogicalRepSyncTableStart(), it got the originname of \"tbl1\" -\r\n\"pg_16398_1\", by calling ReplicationOriginNameForLogicalRep(), and tried to drop\r\nthe origin (although it is not actually created before). After that, it called\r\nreplorigin_by_name to get the originid whose name is \"pg_16398_1\" and the result\r\nis InvalidOid. Origin won't be created in this case because the sync worker has\r\ncreated a replication slot (when it synced tbl2), so the originid was still\r\ninvalid and it caused an assertion failure when calling replorigin_advance().\r\n\r\nIt seems we don't need to drop previous origin in worker-2 because the previous\r\norigin was not created in worker-1. I think one way to fix it is to not update\r\noriginname of pg_subscription_rel when setting state to DATASYNC, and only do\r\nthat when setting state to FINISHEDCOPY. 
If so, the originname in\r\npg_subscription_rel will be set at the same time the origin is created.\r\n(Besides, the slotname seems need to be updated when setting state to DATASYNC,\r\nbecause the previous slot might have been created successfully and we need to get\r\nthe previous slotname and drop that.)\r\n\r\n[1] https://www.postgresql.org/message-id/OSZPR01MB631013C833C98E826B3CFCB9FDC69%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Tue, 7 Feb 2023 02:48:47 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Feb 1, 2023 20:07 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> Thanks for pointing out this review. I somehow skipped that, sorry.\r\n> \r\n> Please see attached patches.\r\n\r\nThanks for updating the patch set.\r\nHere are some comments.\r\n\r\n1. In the function ApplyWorkerMain.\r\n+\t\t\t/* This is main apply worker */\r\n+\t\t\trun_apply_worker(&options, myslotname, originname, sizeof(originname), &origin_startpos);\r\n\r\nI think we need to keep the worker name as \"leader apply worker\" in the comment\r\nlike the current HEAD.\r\n\r\n---\r\n\r\n2. In the function LogicalRepApplyLoop.\r\n+\t\t\t\t * can be reused, we need to take care of memory contexts here\r\n+\t\t\t\t * before moving to sync a table.\r\n+\t\t\t\t */\r\n+\t\t\t\tif (MyLogicalRepWorker->ready_to_reuse)\r\n+\t\t\t\t{\r\n+\t\t\t\t\tMemoryContextResetAndDeleteChildren(ApplyMessageContext);\r\n+\t\t\t\t\tMemoryContextSwitchTo(TopMemoryContext);\r\n+\t\t\t\t\treturn;\r\n+\t\t\t\t}\r\n\r\nI think in this case we also need to pop the error context stack before\r\nreturning. Otherwise, I think we might use the wrong callback\r\n(apply error_callback) after we return from this function.\r\n\r\n---\r\n\r\n3. 
About the function UpdateSubscriptionRelReplicationSlot.\r\nThis newly introduced function UpdateSubscriptionRelReplicationSlot does not\r\nseem to be invoked. Do we need this function?\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Tue, 7 Feb 2023 07:28:33 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Feb 7, 2023 at 8:18 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n>\n> On Thu, Feb 2, 2023 11:48 AM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n> >\n> > So to fix this, I think either we update origin and slot entries in\n> > the system catalog after the creation has passed or we clean-up the\n> > system catalog in case of failure. What do you think?\n> >\n>\n> I think the first way seems better.\n\nYes, I agree.\n\n>\n> I reproduced the problem I reported before with latest patch (v7-0001,\n> v10-0002), and looked into this problem. It is caused by a similar reason. Here\n> is some analysis for the problem I reported [1].#6.\n>\n> First, a tablesync worker (worker-1) started for \"tbl1\", its originname is\n> \"pg_16398_1\". And it exited because of unique constraint. In\n> LogicalRepSyncTableStart(), originname in pg_subscription_rel is updated when\n> updating table state to DATASYNC, and the origin is created when updating table\n> state to FINISHEDCOPY. So when it exited with state DATASYNC , the origin is not\n> created but the originname has been updated in pg_subscription_rel.\n>\n> Then a tablesync worker (worker-2) started for \"tbl2\", its originname is\n> \"pg_16398_2\". After tablesync of \"tbl2\" finished, this worker moved to sync\n> table \"tbl1\". In LogicalRepSyncTableStart(), it got the originname of \"tbl1\" -\n> \"pg_16398_1\", by calling ReplicationOriginNameForLogicalRep(), and tried to drop\n> the origin (although it is not actually created before). 
After that, it called\n> replorigin_by_name to get the originid whose name is \"pg_16398_1\" and the result\n> is InvalidOid. Origin won't be created in this case because the sync worker has\n> created a replication slot (when it synced tbl2), so the originid was still\n> invalid and it caused an assertion failure when calling replorigin_advance().\n>\n> It seems we don't need to drop previous origin in worker-2 because the previous\n> origin was not created in worker-1. I think one way to fix it is to not update\n> originname of pg_subscription_rel when setting state to DATASYNC, and only do\n> that when setting state to FINISHEDCOPY. If so, the originname in\n> pg_subscription_rel will be set at the same time the origin is created.\n\n+1. Update of system-catalog needs to be done carefully and only when\norigin is created.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 8 Feb 2023 19:18:51 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thur, Feb 7, 2023 15:29 PM I wrote:\r\n> On Wed, Feb 1, 2023 20:07 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> > Thanks for pointing out this review. I somehow skipped that, sorry.\r\n> >\r\n> > Please see attached patches.\r\n> \r\n> Thanks for updating the patch set.\r\n> Here are some comments.\r\n\r\nHi, here are some more comments on patch v7-0001*:\r\n\r\n1. The new comments atop the function CreateDecodingContext\r\n+ * need_full_snapshot\r\n+ * \t\tif true, create a snapshot able to read all tables,\r\n+ * \t\totherwise do not create any snapshot.\r\n\r\nI think if 'need_full_snapshot' is false, it means we will create a snapshot\r\nthat can read only catalogs. (see SnapBuild->building_full_snapshot)\r\n\r\n===\r\n\r\n2. 
These are two questions I'm not sure about.\r\n2a.\r\nBecause pg-doc has the following description in [1]: (option \"SNAPSHOT 'use'\")\r\n```\r\n'use' will use the snapshot for the current transaction executing the command.\r\nThis option must be used in a transaction, and CREATE_REPLICATION_SLOT must be\r\nthe first command run in that transaction.\r\n```\r\nSo I think in the function CreateDecodingContext, when \"need_full_snapshot\" is\r\ntrue, we seem to need the following check, just like in the function\r\nCreateInitDecodingContext:\r\n```\r\n\tif (IsTransactionState() &&\r\n\t\tGetTopTransactionIdIfAny() != InvalidTransactionId)\r\n\t\tereport(ERROR,\r\n\t\t\t\t(errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),\r\n\t\t\t\t errmsg(\"cannot create logical replication slot in transaction that has performed writes\")));\r\n```\r\n\r\n2b.\r\nIt seems that we also need to invoke the function\r\nCheckLogicalDecodingRequirements in the new function CreateReplicationSnapshot,\r\njust like the function CreateReplicationSlot and the function\r\nStartLogicalReplication.\r\n\r\nIs there any reason not to do these two checks? Please let me know if I missed\r\nsomething.\r\n\r\n===\r\n\r\n3. The invocation of startup_cb_wrapper in the function CreateDecodingContext.\r\nI think we should change the third input parameter to true when invoking the\r\nfunction startup_cb_wrapper for CREATE_REPLICATION_SNAPSHOT. BTW, after applying patch\r\nv10-0002*, these settings will be inconsistent when sync workers use\r\n\"CREATE_REPLICATION_SLOT\" and \"CREATE_REPLICATION_SNAPSHOT\" to take snapshots.\r\nThis input parameter (true) will let us disable streaming and two-phase\r\ntransactions in function pgoutput_startup. 
See the last paragraph of the commit\r\nmessage for 4648243 for more details.\r\n\r\n[1] - https://www.postgresql.org/docs/devel/protocol-replication.html\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Tue, 14 Feb 2023 03:36:52 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Shveta and Shi,\n\nThanks for your investigations.\n\nshveta malik <shveta.malik@gmail.com>, 8 Şub 2023 Çar, 16:49 tarihinde şunu\nyazdı:\n\n> On Tue, Feb 7, 2023 at 8:18 AM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> > I reproduced the problem I reported before with latest patch (v7-0001,\n> > v10-0002), and looked into this problem. It is caused by a similar\n> reason. Here\n> > is some analysis for the problem I reported [1].#6.\n> >\n> > First, a tablesync worker (worker-1) started for \"tbl1\", its originname\n> is\n> > \"pg_16398_1\". And it exited because of unique constraint. In\n> > LogicalRepSyncTableStart(), originname in pg_subscription_rel is updated\n> when\n> > updating table state to DATASYNC, and the origin is created when\n> updating table\n> > state to FINISHEDCOPY. So when it exited with state DATASYNC , the\n> origin is not\n> > created but the originname has been updated in pg_subscription_rel.\n> >\n> > Then a tablesync worker (worker-2) started for \"tbl2\", its originname is\n> > \"pg_16398_2\". After tablesync of \"tbl2\" finished, this worker moved to\n> sync\n> > table \"tbl1\". In LogicalRepSyncTableStart(), it got the originname of\n> \"tbl1\" -\n> > \"pg_16398_1\", by calling ReplicationOriginNameForLogicalRep(), and tried\n> to drop\n> > the origin (although it is not actually created before). After that, it\n> called\n> > replorigin_by_name to get the originid whose name is \"pg_16398_1\" and\n> the result\n> > is InvalidOid. 
Origin won't be created in this case because the sync\n> worker has\n> > created a replication slot (when it synced tbl2), so the originid was\n> still\n> > invalid and it caused an assertion failure when calling\n> replorigin_advance().\n> >\n> > It seems we don't need to drop previous origin in worker-2 because the\n> previous\n> > origin was not created in worker-1. I think one way to fix it is to not\n> update\n> > originname of pg_subscription_rel when setting state to DATASYNC, and\n> only do\n> > that when setting state to FINISHEDCOPY. If so, the originname in\n> > pg_subscription_rel will be set at the same time the origin is created.\n>\n> +1. Update of system-catalog needs to be done carefully and only when\n> origin is created.\n>\n\nI see that setting originname in the catalog before actually creating it\ncauses issues. My concern with setting originname when setting the state to\nFINISHEDCOPY is that if worker waits until FINISHEDCOPY to update\nslot/origin name but it fails somewhere before reaching FINISHEDCOPY and\nafter creating slot/origin, then this new created slot/origin will be left\nbehind. It wouldn't be possible to find and drop them since their names are\nnot stored in the catalog. Eventually, this might also cause hitting\nthe max_replication_slots limit in case of such failures between origin\ncreation and FINISHEDCOPY.\n\nOne fix I can think is to update the catalog right after creating a new\norigin. But this would also require commiting the current transaction to\nactually persist the originname. I guess this action of commiting the\ntransaction in the middle of initial sync could hurt the copy process.\n\nWhat do you think?\n\nAlso; working on an updated patch to address your other comments. 
Thanks\nagain.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft\n\nHi Shveta and Shi,Thanks for your investigations.shveta malik <shveta.malik@gmail.com>, 8 Şub 2023 Çar, 16:49 tarihinde şunu yazdı:On Tue, Feb 7, 2023 at 8:18 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n> I reproduced the problem I reported before with latest patch (v7-0001,\n> v10-0002), and looked into this problem. It is caused by a similar reason. Here\n> is some analysis for the problem I reported [1].#6.\n>\n> First, a tablesync worker (worker-1) started for \"tbl1\", its originname is\n> \"pg_16398_1\". And it exited because of unique constraint. In\n> LogicalRepSyncTableStart(), originname in pg_subscription_rel is updated when\n> updating table state to DATASYNC, and the origin is created when updating table\n> state to FINISHEDCOPY. So when it exited with state DATASYNC , the origin is not\n> created but the originname has been updated in pg_subscription_rel.\n>\n> Then a tablesync worker (worker-2) started for \"tbl2\", its originname is\n> \"pg_16398_2\". After tablesync of \"tbl2\" finished, this worker moved to sync\n> table \"tbl1\". In LogicalRepSyncTableStart(), it got the originname of \"tbl1\" -\n> \"pg_16398_1\", by calling ReplicationOriginNameForLogicalRep(), and tried to drop\n> the origin (although it is not actually created before). After that, it called\n> replorigin_by_name to get the originid whose name is \"pg_16398_1\" and the result\n> is InvalidOid. Origin won't be created in this case because the sync worker has\n> created a replication slot (when it synced tbl2), so the originid was still\n> invalid and it caused an assertion failure when calling replorigin_advance().\n>\n> It seems we don't need to drop previous origin in worker-2 because the previous\n> origin was not created in worker-1. I think one way to fix it is to not update\n> originname of pg_subscription_rel when setting state to DATASYNC, and only do\n> that when setting state to FINISHEDCOPY. 
If so, the originname in\n> pg_subscription_rel will be set at the same time the origin is created.\n\n+1. Update of system-catalog needs to be done carefully and only when\norigin is created.I see that setting originname in the catalog before actually creating it causes issues. My concern with setting originname when setting the state to FINISHEDCOPY is that if worker waits until FINISHEDCOPY to update slot/origin name but it fails somewhere before reaching FINISHEDCOPY and after creating slot/origin, then this new created slot/origin will be left behind. It wouldn't be possible to find and drop them since their names are not stored in the catalog. Eventually, this might also cause hitting the max_replication_slots limit in case of such failures between origin creation and FINISHEDCOPY.One fix I can think is to update the catalog right after creating a new origin. But this would also require commiting the current transaction to actually persist the originname. I guess this action of commiting the transaction in the middle of initial sync could hurt the copy process.What do you think? Also; working on an updated patch to address your other comments. Thanks again.Best,-- Melih MutluMicrosoft", "msg_date": "Thu, 16 Feb 2023 14:37:19 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nMelih Mutlu <m.melihmutlu@gmail.com>, 16 Şub 2023 Per, 14:37 tarihinde şunu\nyazdı:\n\n> I see that setting originname in the catalog before actually creating it\n> causes issues. My concern with setting originname when setting the state to\n> FINISHEDCOPY is that if worker waits until FINISHEDCOPY to update\n> slot/origin name but it fails somewhere before reaching FINISHEDCOPY and\n> after creating slot/origin, then this new created slot/origin will be left\n> behind. 
It wouldn't be possible to find and drop them since their names are\n> not stored in the catalog. Eventually, this might also cause hitting\n> the max_replication_slots limit in case of such failures between origin\n> creation and FINISHEDCOPY.\n>\n> One fix I can think is to update the catalog right after creating a new\n> origin. But this would also require commiting the current transaction to\n> actually persist the originname. I guess this action of commiting the\n> transaction in the middle of initial sync could hurt the copy process.\n>\n\nHere are more thoughts on this:\nI still believe that updating originname when setting state\nto FINISHEDCOPY is not a good idea since any failure\nbefore FINISHEDCOPY prevent us to store originname in the catalog. If an\norigin or slot is not in the catalog, it's not easily possible to find and\ndrop origins/slot that are left behind. And we definitely do not want to\nkeep unnecessary origins/slots since we would hit max_replication_slots\nlimit.\nIt's better to be safe and update origin/slot names when setting state\nto DATASYNC. At this point, the worker must be sure that it writes correct\norigin/slot names into the catalog.\nFollowing part actually cleans up the catalog if a table is left behind in\nDATASYNC state and its slot and origin cannot be used for sync.\n\nReplicationSlotDropAtPubNode(LogRepWorkerWalRcvConn, prev_slotname, true);\n>\n> StartTransactionCommand();\n> /* Replication origin might still exist. Try to drop */\n> replorigin_drop_by_name(originname, true, false);\n>\n> /*\n> * Remove replication slot and origin name from the relation's\n> * catalog record\n> */\n> UpdateSubscriptionRel(MyLogicalRepWorker->subid,\n> MyLogicalRepWorker->relid,\n> MyLogicalRepWorker->relstate,\n> MyLogicalRepWorker->relstate_lsn,\n> NULL,\n> NULL);\n>\n\nThe patch needs to refresh the origin name before it begins copying the\ntable. 
It will try to read from the catalog but won't find any slot/origin\nsince they are cleaned. Then, it will move on with the correct origin name\nwhich is the one created/will be created for the current sync worker.\n\nI tested refetching originname and seems like it fixes the errors you\nreported.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 22 Feb 2023 15:51:35 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Shveta,\n\nThanks for reviewing.\nPlease see attached patches.\n\nshveta malik <shveta.malik@gmail.com>, 2 Şub 2023 Per, 14:31 tarihinde şunu\nyazdı:\n\n> On Wed, Feb 1, 2023 at 5:37 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> for (int64 i = 1; i <= lastusedid; i++)\n> {\n> char originname_to_drop[NAMEDATALEN] = {0};\n> snprintf(originname_to_drop,\n> sizeof(originname_to_drop), \"pg_%u_%lld\", subid, (long long) i);\n> .......\n> }\n>\n> --Is it better to use the function\n> 'ReplicationOriginNameForLogicalRep' here instead of sprintf, just to\n> be consistent everywhere to construct 
origin-name?\n>\n\nReplicationOriginNameForLogicalRep creates a slot name with current\n\"lastusedid\" and doesn't accept that id as parameter. Here the patch needs\nto check all possible id's.\n\n\n> /* Drop replication origin */\n> replorigin_drop_by_name(originname, true, false);\n> }\n>\n> --Are we passing missing_ok as true (second arg) intentionally here in\n> replorigin_drop_by_name? Once we fix the issue reported in my earlier\n> email (ASSERT), do you think it makes sense to pass missing_ok as\n> false here?\n>\n\nYes, missing_ok is intentional. The user might be concurrently refreshing\nthe sub or the apply worker might already drop the origin at that point.\nSo, missing_ok is set to true.\nThis is also how origin drops before the worker exits are handled on HEAD\ntoo. I only followed the same approach.\n\n\n> --Do we need to palloc for each relation separately? Shall we do it\n> once outside the loop and reuse it? Also pfree is not done for rstate\n> here.\n>\n\nRemoved palloc from the loop. No need to pfree here since the memory\ncontext will be deleted with the next CommitTransactionCommand call.\n\n\n> Can you please review the above flow (I have given line# along with),\n> I think it could be problematic. We alloced prev_slotname, assigned it\n> to slotname, freed prev_slotname and used slotname after freeing the\n> prev_slotname.\n> Also slotname is allocated some memory too, that is not freed.\n>\n\nRight, I used memcpy instead of assigning prev_slotname to slotname.\nslotname is returned in the end and pfree'd later [1]\n\nI also addressed your other reviews that I didn't explicitly mention in\nthis email. I simply applied the changes you pointed out. Also added some\nmore logs as well. 
I hope it's more useful now.\n\n[1]\nhttps://github.com/postgres/postgres/blob/master/src/backend/replication/logical/worker.c#L4359\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 22 Feb 2023 15:56:00 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Wang,\n\nThanks for reviewing.\nPlease see updated patches. [1]\n\nwangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com>, 7 Şub 2023 Sal, 10:28\ntarihinde şunu yazdı:\n\n> 1. In the function ApplyWorkerMain.\n> I think we need to keep the worker name as \"leader apply worker\" in the\n> comment\n> like the current HEAD.\n>\n\nDone.\n\n\n> I think in this case we also need to pop the error context stack before\n> returning. Otherwise, I think we might use the wrong callback\n> (apply error_callback) after we return from this function.\n>\n\nDone.\n\n\n> 3. About the function UpdateSubscriptionRelReplicationSlot.\n> This newly introduced function UpdateSubscriptionRelReplicationSlot does\n> not\n> seem to be invoked. Do we need this function?\n\n\nRemoved.\n\nI think if 'need_full_snapshot' is false, it means we will create a snapshot\n> that can read only catalogs. 
(see SnapBuild->building_full_snapshot)\n\n\nFixed.\n\n```\n> 'use' will use the snapshot for the current transaction executing the\n> command.\n> This option must be used in a transaction, and CREATE_REPLICATION_SLOT\n> must be\n> the first command run in that transaction.\n> ```\n\nSo I think in the function CreateDecodingContext, when \"need_full_snapshot\"\n> is\n> true, we seem to need the following check, just like in the function\n> CreateInitDecodingContext:\n\n```\n> if (IsTransactionState() &&\n> GetTopTransactionIdIfAny() != InvalidTransactionId)\n> ereport(ERROR,\n> (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),\n> errmsg(\"cannot create logical replication\n> slot in transaction that has performed writes\")));\n> ```\n\n\nYou're right to \"use\" the snapshot, it must be the first command in the\ntransaction. And that check happens here [2]. CreateReplicationSnapshot has\nalso similar check.\nI think the check you're referring to is needed to actually create a\nreplication slot and it performs whether the snapshot will be \"used\" or\n\"exported\". This is not the case for CreateReplicationSnapshot.\n\nIt seems that we also need to invoke the function\n> CheckLogicalDecodingRequirements in the new function\n> CreateReplicationSnapshot,\n> just like the function CreateReplicationSlot and the function\n> StartLogicalReplication.\n\n\nAdded this check.\n\n3. The invocation of startup_cb_wrapper in the function\n> CreateDecodingContext.\n> I think we should change the third input parameter to true when invoke\n> function\n> startup_cb_wrapper for CREATE_REPLICATION_SNAPSHOT. BTW, after applying\n> patch\n> v10-0002*, these settings will be inconsistent when sync workers use\n> \"CREATE_REPLICATION_SLOT\" and \"CREATE_REPLICATION_SNAPSHOT\" to take\n> snapshots.\n> This input parameter (true) will let us disable streaming and two-phase\n> transactions in function pgoutput_startup. 
See the last paragraph of the\n> commit\n> message for 4648243 for more details.\n\n\nI'm not sure if \"is_init\" should be set to true. CreateDecodingContext only\ncreates a context for an already existing logical slot and does not\ninitialize new one.\nI think inconsistencies between \"CREATE_REPLICATION_SLOT\" and\n\"CREATE_REPLICATION_SNAPSHOT\" are expected since one creates a new\nreplication slot and the other does not.\nCreateDecodingContext is also used in other places as well. Not sure how\nthis change would affect those places. I'll look into this more. Please let\nme know if I'm missing something.\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAGPVpCQmEE8BygXr%3DHi2N2t2kOE%3DPJwofn9TX0J9J4crjoXarQ%40mail.gmail.com\n[2]\nhttps://github.com/postgres/postgres/blob/master/src/backend/replication/walsender.c#L1108\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 22 Feb 2023 16:04:00 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Feb 22, 2023 at 8:04 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Wang,\n>\n> Thanks for reviewing.\n> Please see updated patches. [1]\n\nThis is cool! Thanks for working on this.\nI had a chance to review your patchset and I had some thoughts and\nquestions.\n\nI notice that you've added a new user-facing option to make a snapshot.\nI think functionality to independently make a snapshot for use elsewhere\nhas been discussed in the past for the implementation of different\nfeatures (e.g. [1] pg_dump but they ended up using replication slots for\nthis I think?), but I'm not quite sure I understand all the implications\nfor providing a user-visible create snapshot command. Where can it be\nused? When can the snapshot be used? In your patch's case, you know that\nyou can use the snapshot you are creating, but I just wonder if any\nrestrictions or caveats need be taken for its general use.\n\nFor the worker reuse portion of the code, could it be a separate patch\nin the set? 
It could be independently committable and would be easier to\nreview (separate from repl slot reuse).\n\nGiven table sync worker reuse, I think it is worth considering a more\nexplicit structure for the table sync worker code now -- i.e. having a\nTableSyncWorkerMain() function. Though they still do the\nLogicalRepApplyLoop(), much of what else they do is different than the\napply leader.\n\nApply worker leader does:\n\nApplyWorkerMain()\n walrcv_startstreaming()\n LogicalRepApplyLoop()\n launch table sync workers\n walrcv_endstreaming()\n proc_exit()\n\nTable Sync workers master:\n\nApplyWorkerMain()\n start_table_sync()\n walrcv_create_slot()\n copy_table()\n walrcv_startstreaming()\n start_apply()\n LogicalRepApplyLoop()\n walrcv_endstreaming()\n proc_exit()\n\nNow table sync workers need to loop back and do start_table_sync() again\nfor their new table.\nYou have done this in ApplyWorkerMain(). But I think that this could be\na separate main function since their main loop is effectively totally\ndifferent now than an apply worker leader.\n\nSomething like:\n\nTableSyncWorkerMain()\n TableSyncWorkerLoop()\n start_table_sync()\n walrcv_startstreaming()\n LogicalRepApplyLoop()\n walrcv_endstreaming()\n wait_for_new_rel_assignment()\n proc_exit()\n\nYou mainly have this structure, but it is a bit hidden and some of the\nshared functions that previously may have made sense for table sync\nworker and apply workers to share don't really make sense to share\nanymore.\n\nThe main thing that table sync workers and apply workers share is the\nlogic in LogicalRepApplyLoop() (table sync workers use when they do\ncatchup), so perhaps we should make the other code separate?\n\nAlso on the topic of worker reuse, I was wondering if having workers\nfind their own next table assignment (as you have done in\nprocess_syncing_tables_for_sync()) makes sense.\n\nThe way the whole system would work now (with your patch applied), as I\nunderstand it, the apply leader would loop through 
the subscription rel\nstates and launch workers up to max_sync_workers_per_subscription for\nevery candidate table needing sync. The apply leader will continue to do\nthis, even though none of those workers would exit unless they die\nunexpectedly. So, once it reaches max_sync_workers_per_subscription, it\nwon't launch any more workers.\n\nWhen one of these sync workers is finished with a table (it is synced\nand caught up), it will search through the subscription rel states\nitself looking for a candidate table to work on.\n\nIt seems it would be common for workers to be looking through the\nsubscription rel states at the same time, and I don't really see how you\nprevent races in who is claiming a relation to work on. Though you take\na shared lock on the LogicalRepWorkerLock, what if in between\nlogicalrep_worker_find() and updating my MyLogicalRepWorker->relid,\nsomeone else also updates their relid to that relid. I don't think you\ncan update LogicalRepWorker->relid with only a shared lock.\n\nI wonder if it is not better to have the apply leader, in\nprocess_syncing_tables_for_apply(), first check for an existing worker\nfor the rel, then check for an available worker without an assignment,\nthen launch a worker?\n\nWorkers could then sleep after finishing their assignment and wait for\nthe leader to give them a new assignment.\n\nGiven an exclusive lock on LogicalRepWorkerLock, it may be okay for\nworkers to find their own table assignments from the subscriptionrel --\nand perhaps this will be much more efficient from a CPU perspective. It\nfeels just a bit weird to have the code doing that buried in\nprocess_syncing_tables_for_sync(). 
It seems like it should at least\nreturn out to a main table sync worker loop in which workers loop\nthrough finding a table and assigning it to themselves, syncing the\ntable, and catching the table up.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BU5nMLRjGtpskUkYSzZOEYZ_8OMc02k%2BO6FDi4una3mB4rS1w%40mail.gmail.com#45692f75a1e79d4ce2d4f6a0e3ccb853\n\n\n", "msg_date": "Sun, 26 Feb 2023 19:10:33 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Sun, 26 Feb 2023 at 19:11, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> This is cool! Thanks for working on this.\n> I had a chance to review your patchset and I had some thoughts and\n> questions.\n\nIt looks like this patch got a pretty solid review from Melanie\nPlageman in February just before the CF started. It was never set to\nWaiting on Author but I think that may be the right state for it.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Tue, 4 Apr 2023 10:51:46 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Melanie,\n\nThanks for reviewing.\n\nMelanie Plageman <melanieplageman@gmail.com>, on Mon, 27 Feb 2023 at 03:10,\nwrote:\n>\n> I notice that you've added a new user-facing option to make a snapshot.\n> I think functionality to independently make a snapshot for use elsewhere\n> has been discussed in the past for the implementation of different\n> features (e.g. [1] pg_dump but they ended up using replication slots for\n> this I think?), but I'm not quite sure I understand all the implications\n> for providing a user-visible create snapshot command. Where can it be\n> used? When can the snapshot be used? 
In your patch's case, you know that\n> you can use the snapshot you are creating, but I just wonder if any\n> restrictions or caveats need be taken for its general use.\n\n\nI can't think of a use-case, other than this patch, that needs this user-facing\ncommand. The main reason why I added this command as it is in the patch is\nbecause that's already how other required communication between publisher\nand subscriber is done for other operations in logical replication. Even\nthough it may sound similar to the case in the pg_dump discussion, I think the\nmain difference is that calling CREATE_REPLICATION_SNAPSHOT creates a\nsnapshot and imports it to wherever it's called (i.e. the same transaction\nwhich invoked CREATE_REPLICATION_SNAPSHOT), and it is not used anywhere else.\nBut I agree that this part of the patch needs more thought and review.\nHonestly, I'm also not sure if this is the ideal way to fix the \"snapshot\nissue\" introduced by reusing the same replication slot.\n\n>\n> For the worker reuse portion of the code, could it be a separate patch\n> in the set? It could be independently committable and would be easier to\n> review (separate from repl slot reuse).\n\nI did this; please see patch 0001.\n\n>\n> You mainly have this structure, but it is a bit hidden and some of the\n> shared functions that previously may have made sense for table sync\n> worker and apply workers to share don't really make sense to share\n> anymore.\n>\n> The main thing that table sync workers and apply workers share is the\n> logic in LogicalRepApplyLoop() (table sync workers use when they do\n> catchup), so perhaps we should make the other code separate?\n\nYou're right that apply and tablesync worker's paths are unnecessarily\nintertwined. 
With the reusing workers/replication slots logic, I guess it\nbecame worse.\nI tried to change the structure to something similar to what you explained.\nTablesync workers have different starting point now and it simply runs as\nfollows:\n\nTableSyncWorkerMain()\n loop:\n start_table_sync()\n walrcv_startstreaming()\n LogicalRepApplyLoop()\n check if there is a table with INIT state\n if there is such table: // reuse case\n clean_sync_worker()\n else: // exit case\n walrcv_endstreaming()\n ReplicationSlotDropAtPubNode()\n replorigin_drop_by_name\n break\n proc_exit()\n\n> It seems it would be common for workers to be looking through the\n> subscription rel states at the same time, and I don't really see how you\n> prevent races in who is claiming a relation to work on. Though you take\n> a shared lock on the LogicalRepWorkerLock, what if in between\n> logicalrep_worker_find() and updating my MyLogicalRepWorker->relid,\n> someone else also updates their relid to that relid. I don't think you\n> can update LogicalRepWorker->relid with only a shared lock.\n>\n>\n> I wonder if it is not better to have the apply leader, in\n> process_syncing_tables_for_apply(), first check for an existing worker\n> for the rel, then check for an available worker without an assignment,\n> then launch a worker?\n>\n> Workers could then sleep after finishing their assignment and wait for\n> the leader to give them a new assignment.\n\nI'm not sure if we should rely on a single apply worker for the assignment\nof several tablesync workers. I suspect that moving the assignment\nresponsibility to the apply worker may bring some overhead. But I agree\nthat shared lock on LogicalRepWorkerLock is not good. Changed it to\nexclusive lock.\n\n>\n> Given an exclusive lock on LogicalRepWorkerLock, it may be okay for\n> workers to find their own table assignments from the subscriptionrel --\n> and perhaps this will be much more efficient from a CPU perspective. 
It\n> feels just a bit weird to have the code doing that buried in\n> process_syncing_tables_for_sync(). It seems like it should at least\n> return out to a main table sync worker loop in which workers loop\n> through finding a table and assigning it to themselves, syncing the\n> table, and catching the table up.\n\nRight, it shouldn't be process_syncing_tables_for_sync()'s responsibility.\nI moved it into the TableSyncWorkerMain loop.\n\n\nAlso;\nI did some benchmarking like I did a couple of times previously [1].\nHere are the recent numbers:\n\nWith empty tables:\n+--------+------------+-------------+--------------+\n| | 10 tables | 100 tables | 1000 tables |\n+--------+------------+-------------+--------------+\n| master | 296.689 ms | 2579.154 ms | 41753.043 ms |\n+--------+------------+-------------+--------------+\n| patch | 210.580 ms | 1724.230 ms | 36247.061 ms |\n+--------+------------+-------------+--------------+\n\nWith 10 tables loaded with some data:\n+--------+------------+-------------+--------------+\n| | 1 MB | 10 MB | 100 MB |\n+--------+------------+-------------+--------------+\n| master | 568.072 ms | 2074.557 ms | 16995.399 ms |\n+--------+------------+-------------+--------------+\n| patch | 470.700 ms | 1923.386 ms | 16980.686 ms |\n+--------+------------+-------------+--------------+\n\nIt seems that even though master has improved since the last time I did a\nsimilar experiment, the patch still improves the time spent in table sync\nfor empty/small tables.\nAlso there is a decrease in the performance of the patch, compared with the\nprevious results [1]. Some portion of it might be caused by switching from\nshared locks to exclusive locks. 
I'll look into that a bit more though.\n\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAGPVpCQdZ_oj-QFcTOhTrUTs-NCKrrZ%3DZNCNPR1qe27rXV-iYw%40mail.gmail.com\n\nBest,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Mon, 8 May 2023 18:41:26 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi, and thanks for the patch! It is an interesting idea.\n\nI have not yet fully read this thread, so below are only my first\nimpressions after looking at patch 0001. Sorry if some of these were\nalready discussed earlier.\n\nTBH the patch \"reuse-workers\" logic seemed more complicated than I had\nimagined it might be.\n\n1.\nIIUC with patch 0001, each/every tablesync worker (a.k.a. TSW) when it\nfinishes dealing with one table then goes looking to find if there is\nsome relation that it can process next. So now every TSW has a loop\nwhere it will fight with every other available TSW over who will get\nto process the next relation.\n\nSomehow this seems all backwards to me. Isn't it strange for the TSW\nto be the one deciding what relation it would deal with next?\n\nIMO it seems more natural to simply return the finished TSW to some\nkind of \"pool\" of available workers and the main Apply process can\njust grab a TSW from that pool instead of launching a brand new one in\nthe existing function process_syncing_tables_for_apply(). 
Or, maybe\nthose \"available\" workers can be returned to a pool maintained within\nthe launcher.c code, which logicalrep_worker_launch() can draw from\ninstead of launching a whole new process?\n\n(I need to read the earlier posts to see if these options were already\ndiscussed and rejected)\n\n~~\n\n2.\nAFAIK the thing that identifies a tablesync worker is the fact that\nonly TSW will have a 'relid'.\n\nBut it feels very awkward to me to have a TSW marked as \"available\"\nand yet that LogicalRepWorker must still have some OLD relid field\nvalue lurking (otherwise it will forget that it is a \"tablesync\"\nworker!).\n\nIMO perhaps it is time now to introduce some enum 'type' to the\nLogicalRepWorker. Then an \"in_use\" type=TSW would have a valid 'relid'\nwhereas an \"available\" type=TSW would have relid == InvalidOid.\n\n~~\n\n3.\nMaybe I am mistaken, but it seems the benchmark results posted are\nonly using quite a small/default values for\n\"max_sync_workers_per_subscription\", so I wondered how those results\nare affected by increasing that GUC. I think having only very few\nworkers would cause more sequential processing, so conveniently the\neffect of the patch avoiding re-launch might be seen in the best\npossible light. OTOH, using more TSW in the first place might reduce\nthe overall tablesync time because the subscriber can do more work in\nparallel.\n\nSo I'm not quite sure what the goal is here. E.g. if the user doesn't\ncare much about how long tablesync phase takes then there is maybe no\nneed for this patch at all. OTOH, I thought if a user does care about\nthe subscription startup time, won't those users be opting for a much\nlarger \"max_sync_workers_per_subscription\" in the first place?\nTherefore shouldn't the benchmarking be using a larger number too?\n\n======\n\nHere are a few other random things noticed while looking at patch 0001:\n\n1. Commit message\n\n1a. typo /sequantially/sequentially/\n\n1b. 
Saying \"killed\" and \"killing\" seemed a bit extreme and implies\nsomebody else is killing the process. But I think mostly tablesync is\njust ending by a normal proc exit, so maybe reword this a bit.\n\n~~~\n\n2. It seemed odd that some -- clearly tablesync specific -- functions\nare in the worker.c instead of in tablesync.c.\n\n2a. e.g. clean_sync_worker\n\n2b. e.g. sync_worker_exit\n\n~~~\n\n3. process_syncing_tables_for_sync\n\n+ /*\n+ * Sync worker is cleaned at this point. It's ready to sync next table,\n+ * if needed.\n+ */\n+ SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n+ MyLogicalRepWorker->ready_to_reuse = true;\n SpinLockRelease(&MyLogicalRepWorker->relmutex);\n+ }\n+\n+ SpinLockRelease(&MyLogicalRepWorker->relmutex);\n\nIsn't there a double release of that mutex happening there?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 24 May 2023 12:59:18 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\n\nPeter Smith <smithpb2250@gmail.com>, on Wed, 24 May 2023 at 05:59,\nwrote:\n\n> Hi, and thanks for the patch! It is an interesting idea.\n>\n> I have not yet fully read this thread, so below are only my first\n> impressions after looking at patch 0001. Sorry if some of these were\n> already discussed earlier.\n>\n> TBH the patch \"reuse-workers\" logic seemed more complicated than I had\n> imagined it might be.\n>\n\nIf you mean patch 0001 by the patch \"reuse-workers\", most of the complexity\ncomes with some refactoring to split apply worker and tablesync worker\npaths. [1]\nIf you mean the whole patch set, then I believe it's because reusing\nreplication slots also requires having a proper snapshot each time the\nworker moves to a new table. [2]\n\n\n>\n> 1.\n> IIUC with patch 0001, each/every tablesync worker (a.k.a. 
TSW) when it\n> finishes dealing with one table then goes looking to find if there is\n> some relation that it can process next. So now every TSW has a loop\n> where it will fight with every other available TSW over who will get\n> to process the next relation.\n>\n> Somehow this seems all backwards to me. Isn't it strange for the TSW\n> to be the one deciding what relation it would deal with next?\n>\n> IMO it seems more natural to simply return the finished TSW to some\n> kind of \"pool\" of available workers and the main Apply process can\n> just grab a TSW from that pool instead of launching a brand new one in\n> the existing function process_syncing_tables_for_apply(). Or, maybe\n> those \"available\" workers can be returned to a pool maintained within\n> the launcher.c code, which logicalrep_worker_launch() can draw from\n> instead of launching a whole new process?\n>\n> (I need to read the earlier posts to see if these options were already\n> discussed and rejected)\n>\n\nI think ([3]) relying on a single apply worker for the assignment of\nseveral tablesync workers might bring some overhead, it's possible that\nsome tablesync workers wait in idle until the apply worker assigns them\nsomething. OTOH yes, the current approach makes tablesync workers race for\na new table to sync.\nTBF both ways might be worth discussing/investigating more, before deciding\nwhich way to go.\n\n\n> 2.\n> AFAIK the thing that identifies a tablesync worker is the fact that\n> only TSW will have a 'relid'.\n>\n> But it feels very awkward to me to have a TSW marked as \"available\"\n> and yet that LogicalRepWorker must still have some OLD relid field\n> value lurking (otherwise it will forget that it is a \"tablesync\"\n> worker!).\n>\n> IMO perhaps it is time now to introduce some enum 'type' to the\n> LogicalRepWorker. 
Then an \"in_use\" type=TSW would have a valid 'relid'\n> whereas an \"available\" type=TSW would have relid == InvalidOid.\n>\n\nHmm, relid will be immediately updated when the worker moves to a new\ntable. And the time between finishing sync of a table and finding a new\ntable to sync should be minimal. I'm not sure how having an old relid for\nsuch a small amount of time can do any harm.\n\n\n> 3.\n> Maybe I am mistaken, but it seems the benchmark results posted are\n> only using quite a small/default values for\n> \"max_sync_workers_per_subscription\", so I wondered how those results\n> are affected by increasing that GUC. I think having only very few\n> workers would cause more sequential processing, so conveniently the\n> effect of the patch avoiding re-launch might be seen in the best\n> possible light. OTOH, using more TSW in the first place might reduce\n> the overall tablesync time because the subscriber can do more work in\n> parallel.\n\n\n\nSo I'm not quite sure what the goal is here. E.g. if the user doesn't\n\ncare much about how long tablesync phase takes then there is maybe no\n> need for this patch at all. OTOH, I thought if a user does care about\n> the subscription startup time, won't those users be opting for a much\n> larger \"max_sync_workers_per_subscription\" in the first place?\n> Therefore shouldn't the benchmarking be using a larger number too?\n\n\nRegardless of how many tablesync workers there are, reusing workers will\nspeed things up if a worker has a chance to sync more than one table.\nIncreasing the number of tablesync workers, of course, improves the\ntablesync performance. 
But if it doesn't make 100% parallel ( meaning that\n# of sync workers != # of tables to sync), then reusing workers can bring\nan additional improvement.\n\nHere are some benchmarks similar to earlier, but with 100 tables and\ndifferent number of workers:\n\n+--------+-------------+-------------+-------------+------------+\n| | 2 workers | 4 workers | 6 workers | 8 workers |\n+--------+-------------+-------------+-------------+------------+\n| master | 2579.154 ms | 1383.153 ms | 1001.559 ms | 911.758 ms |\n+--------+-------------+-------------+-------------+------------+\n| patch | 1724.230 ms | 853.894 ms | 601.176 ms | 496.395 ms |\n+--------+-------------+-------------+-------------+------------+\n\nSo yes, increasing the number of workers makes it faster. But reusing\nworkers can still improve more.\n\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAAKRu_YKGyF%2BsvRQqe1th-mG9xLdzneWgh9H1z1DtypBkawkkw%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAGPVpCRWEVhXa7ovrhuSQofx4to7o22oU9iKtrOgAOtz_%3DY6vg%40mail.gmail.com\n[3]\nhttps://www.postgresql.org/message-id/CAGPVpCRzD-ZZEc9ienhyrVpCzd9AJ7fxE--OFFJBnBg3E0438w%40mail.gmail.com\n\n\nBest,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Thu, 25 May 2023 11:59:26 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, May 25, 2023 at 6:59 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n>\n> Peter Smith <smithpb2250@gmail.com>, 24 May 2023 Çar, 05:59 tarihinde şunu yazdı:\n>>\n>> Hi, and thanks for the patch! It is an interesting idea.\n>>\n>> I have not yet fully read this thread, so below are only my first\n>> impressions after looking at patch 0001. Sorry if some of these were\n>> already discussed earlier.\n>>\n>> TBH the patch \"reuse-workers\" logic seemed more complicated than I had\n>> imagined it might be.\n>\n>\n> If you mean patch 0001 by the patch \"reuse-workers\", most of the complexity comes with some refactoring to split apply worker and tablesync worker paths. 
[1]\n> If you mean the whole patch set, then I believe it's because reusing replication slots also requires having a proper snapshot each time the worker moves to a new table. [2]\n>\n\nYes, I was mostly referring to the same as point 1 below about patch\n0001. I guess I just found the concept of mixing A) launching TSW (via\napply worker) with B) reassigning TSW to another relation (by the TSW\nbattling with its peers) to be a bit difficult to understand. I\nthought most of the refactoring seemed to arise from choosing to do it\nthat way.\n\n>>\n>>\n>> 1.\n>> IIUC with patch 0001, each/every tablesync worker (a.k.a. TSW) when it\n>> finishes dealing with one table then goes looking to find if there is\n>> some relation that it can process next. So now every TSW has a loop\n>> where it will fight with every other available TSW over who will get\n>> to process the next relation.\n>>\n>> Somehow this seems all backwards to me. Isn't it strange for the TSW\n>> to be the one deciding what relation it would deal with next?\n>>\n>> IMO it seems more natural to simply return the finished TSW to some\n>> kind of \"pool\" of available workers and the main Apply process can\n>> just grab a TSW from that pool instead of launching a brand new one in\n>> the existing function process_syncing_tables_for_apply(). Or, maybe\n>> those \"available\" workers can be returned to a pool maintained within\n>> the launcher.c code, which logicalrep_worker_launch() can draw from\n>> instead of launching a whole new process?\n>>\n>> (I need to read the earlier posts to see if these options were already\n>> discussed and rejected)\n>\n>\n> I think ([3]) relying on a single apply worker for the assignment of several tablesync workers might bring some overhead, it's possible that some tablesync workers wait in idle until the apply worker assigns them something. 
OTOH yes, the current approach makes tablesync workers race for a new table to sync.\n\nYes, it might be slower than the 'patched' code because \"available\"\nworkers might be momentarily idle while they wait to be re-assigned to\nthe next relation. We would need to try it to find out.\n\n> TBF both ways might be worth discussing/investigating more, before deciding which way to go.\n\n+1. I think it would be nice to see POCs of both ways for benchmark\ncomparison because IMO performance is not the only consideration --\nunless there is an obvious winner, they need to be judged also by\nthe complexity of the logic, the amount of code that needed to be\nrefactored, etc.\n\n>\n>>\n>> 2.\n>> AFAIK the thing that identifies a tablesync worker is the fact that\n>> only TSW will have a 'relid'.\n>>\n>> But it feels very awkward to me to have a TSW marked as \"available\"\n>> and yet that LogicalRepWorker must still have some OLD relid field\n>> value lurking (otherwise it will forget that it is a \"tablesync\"\n>> worker!).\n>>\n>> IMO perhaps it is time now to introduce some enum 'type' to the\n>> LogicalRepWorker. Then an \"in_use\" type=TSW would have a valid 'relid'\n>> whereas an \"available\" type=TSW would have relid == InvalidOid.\n>\n>\n> Hmm, relid will be immediately updated when the worker moves to a new table. And the time between finishing sync of a table and finding a new table to sync should be minimal. I'm not sure how having an old relid for such a small amount of time can do any harm.\n\nThere is no \"harm\", but it just didn't feel right to make the\nLogicalRepWorker transition through some meaningless state\n(\"available\" for re-use but still assigned some relid), just because\nit was easy to do it that way. I think it is more natural for the\n'relid' to be valid only when it is valid for the worker and to be\nInvalidOid when it is not valid. 
--- Maybe this gripe would become\nmore apparent if the implementation used the \"free-list\" idea because\nthen you would have a lot of bogus relids assigned to the workers of\nthat list for longer than just a moment.\n\n>\n>>\n>> 3.\n>> Maybe I am mistaken, but it seems the benchmark results posted are\n>> only using quite a small/default values for\n>> \"max_sync_workers_per_subscription\", so I wondered how those results\n>> are affected by increasing that GUC. I think having only very few\n>> workers would cause more sequential processing, so conveniently the\n>> effect of the patch avoiding re-launch might be seen in the best\n>> possible light. OTOH, using more TSW in the first place might reduce\n>> the overall tablesync time because the subscriber can do more work in\n>> parallel.\n>>\n>>\n>>\n>> So I'm not quite sure what the goal is here. E.g. if the user doesn't\n>>\n>> care much about how long tablesync phase takes then there is maybe no\n>> need for this patch at all. OTOH, I thought if a user does care about\n>> the subscription startup time, won't those users be opting for a much\n>> larger \"max_sync_workers_per_subscription\" in the first place?\n>> Therefore shouldn't the benchmarking be using a larger number too?\n>\n>\n> Regardless of how many tablesync workers there are, reusing workers will speed things up if a worker has a chance to sync more than one table. Increasing the number of tablesync workers, of course, improves the tablesync performance. 
But if it doesn't make 100% parallel ( meaning that # of sync workers != # of tables to sync), then reusing workers can bring an additional improvement.\n>\n> Here are some benchmarks similar to earlier, but with 100 tables and different number of workers:\n>\n> +--------+-------------+-------------+-------------+------------+\n> | | 2 workers | 4 workers | 6 workers | 8 workers |\n> +--------+-------------+-------------+-------------+------------+\n> | master | 2579.154 ms | 1383.153 ms | 1001.559 ms | 911.758 ms |\n> +--------+-------------+-------------+-------------+------------+\n> | patch | 1724.230 ms | 853.894 ms | 601.176 ms | 496.395 ms |\n> +--------+-------------+-------------+-------------+------------+\n>\n> So yes, increasing the number of workers makes it faster. But reusing workers can still improve more.\n>\n\nThanks for the benchmark results! There is no denying they seem pretty\ngood numbers.\n\nBut it is difficult to get an overall picture of the behaviour. Mostly\nwhen benchmarks were posted you hold one variable fixed and show only\none other varying. It always leaves me wondering -- what about not\nempty tables, or what about different numbers of tables etc. Is it\npossible to make some script to gather a bigger set of results so we\ncan see everything at once? 
Perhaps then it will become clear there is\nsome \"sweet spot\" where the patch is really good but beyond that it\ndegrades (actually, who knows what it might show).\n\nFor example:\n\n=== empty tables\n\nworkers:2 workers:4 workers:8 workers:16\ntables:10 tables:10 tables:10 tables:10\ndata:0 data:0 data:0 data:0\nmaster/patch master/patch master/patch master/patch\n\nworkers:2 workers:4 workers:8 workers:16\ntables:100 tables:100 tables:100 tables:100\ndata:0 data:0 data:0 data:0\nmaster/patch master/patch master/patch master/patch\n\nworkers:2 workers:4 workers:8 workers:16\ntables:1000 tables:1000 tables:1000 tables:1000\ndata:0 data:0 data:0 data:0\nmaster/patch master/patch master/patch master/patch\n\n=== 1M data\n\nworkers:2 workers:4 workers:8 workers:16\ntables:10 tables:10 tables:10 tables:10\ndata:1M data:1M data:1M data:1M\nmaster/patch master/patch master/patch master/patch\n\nworkers:2 workers:4 workers:8 workers:16\ntables:100 tables:100 tables:100 tables:100\ndata:1M data:1M data:1M data:1M\nmaster/patch master/patch master/patch master/patch\n\nworkers:2 workers:4 workers:8 workers:16\ntables:1000 tables:1000 tables:1000 tables:1000\ndata:1M data:1M data:1M data:1M\nmaster/patch master/patch master/patch master/patch\n\n=== 10M data\n\nworkers:2 workers:4 workers:8 workers:16\ntables:10 tables:10 tables:10 tables:10\ndata:10M data:10M data:10M data:10M\nmaster/patch master/patch master/patch master/patch\n\nworkers:2 workers:4 workers:8 workers:16\ntables:100 tables:100 tables:100 tables:100\ndata:10M data:10M data:10M data:10M\nmaster/patch master/patch master/patch master/patch\n\nworkers:2 workers:4 workers:8 workers:16\ntables:1000 tables:1000 tables:1000 tables:1000\ndata:10M data:10M data:10M data:10M\nmaster/patch master/patch master/patch master/patch\n\n== 100M data\n\nworkers:2 workers:4 workers:8 workers:16\ntables:10 tables:10 tables:10 tables:10\ndata:100M data:100M data:100M data:100M\nmaster/patch master/patch master/patch 
master/patch\n\nworkers:2 workers:4 workers:8 workers:16\ntables:100 tables:100 tables:100 tables:100\ndata:100M data:100M data:100M data:100M\nmaster/patch master/patch master/patch master/patch\n\nworkers:2 workers:4 workers:8 workers:16\ntables:1000 tables:1000 tables:1000 tables:1000\ndata:100M data:100M data:100M data:100M\nmaster/patch master/patch master/patch master/patch\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n", "msg_date": "Fri, 26 May 2023 17:29:38 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nI rebased the patch and addressed the following reviews.\n\nPeter Smith <smithpb2250@gmail.com>, 24 May 2023 Çar, 05:59 tarihinde\nşunu yazdı:\n> Here are a few other random things noticed while looking at patch 0001:\n>\n> 1. Commit message\n>\n> 1a. typo /sequantially/sequentially/\n>\n> 1b. Saying \"killed\" and \"killing\" seemed a bit extreme and implies\n> somebody else is killing the process. But I think mostly tablesync is\n> just ending by a normal proc exit, so maybe reword this a bit.\n>\n\nFixed the type and reworded a bit.\n\n>\n> 2. It seemed odd that some -- clearly tablesync specific -- functions\n> are in the worker.c instead of in tablesync.c.\n>\n> 2a. e.g. clean_sync_worker\n>\n> 2b. e.g. sync_worker_exit\n>\n\nHonestly, the distinction between worker.c and tablesync.c is not that\nclear to me. Both seem like they're doing some work for tablesync and\napply.\nBut yes, you're right. Those functions fit better to tablesync.c. Moved them.\n\n>\n> 3. process_syncing_tables_for_sync\n>\n> + /*\n> + * Sync worker is cleaned at this point. 
It's ready to sync next table,\n> + * if needed.\n> + */\n> + SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n> + MyLogicalRepWorker->ready_to_reuse = true;\n> SpinLockRelease(&MyLogicalRepWorker->relmutex);\n> + }\n> +\n> + SpinLockRelease(&MyLogicalRepWorker->relmutex);\n>\n> Isn't there a double release of that mutex happening there?\n\nFixed.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Thu, 1 Jun 2023 13:54:02 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Peter,\n\nPeter Smith <smithpb2250@gmail.com>, 26 May 2023 Cum, 10:30 tarihinde\nşunu yazdı:\n>\n> On Thu, May 25, 2023 at 6:59 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> Yes, I was mostly referring to the same as point 1 below about patch\n> 0001. I guess I just found the concept of mixing A) launching TSW (via\n> apply worker) with B) reassigning TSW to another relation (by the TSW\n> battling with its peers) to be a bit difficult to understand. I\n> thought most of the refactoring seemed to arise from choosing to do it\n> that way.\n\nNo, the refactoring is not related to the way of assigning a new\ntable. In fact, the patch did not include such refactoring a couple\nversions earlier [1] and was still assigning tables the same way. It\nwas suggested here [2]. Then, I made the patch 0001 which includes\nsome refactoring and only reuses the worker and nothing else. Also I\nfind it more understandable this way, maybe it's a bit subjective.\n\nI feel that logical replication related files are getting more and\nmore complex and hard to understand with each change. IMHO, even\nwithout reusing anything, those need some refactoring anyway. But for\nthis patch, refactoring some places made it simpler to reuse workers\nand/or replication slots, regardless of how tables are assigned to\nTSW's.\n\n> +1. 
I think it would be nice to see POC of both ways for benchmark\n> comparison because IMO performance is not the only consideration --\n> unless there is an obvious winner, then they need to be judged also by\n> the complexity of the logic, the amount of code that needed to be\n> refactored, etc.\n\nWill try to do that. But, like I mentioned above, I don't think that\nsuch a change would reduce the complexity or number of lines changed.\n\n> But it is difficult to get an overall picture of the behaviour. Mostly\n> when benchmarks were posted you hold one variable fixed and show only\n> one other varying. It always leaves me wondering -- what about not\n> empty tables, or what about different numbers of tables etc. Is it\n> possible to make some script to gather a bigger set of results so we\n> can see everything at once? Perhaps then it will become clear there is\n> some \"sweet spot\" where the patch is really good but beyond that it\n> degrades (actually, who knows what it might show).\n\nI actually shared the benchmarks with different numbers of tables and\nsizes. But those were all with 2 workers. 
I guess you want a similar\nbenchmark with different numbers of workers.\nWill work on this and share soon.\n\n\n\n[1] https://www.postgresql.org/message-id/CAGPVpCQmEE8BygXr%3DHi2N2t2kOE%3DPJwofn9TX0J9J4crjoXarQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAAKRu_YKGyF%2BsvRQqe1th-mG9xLdzneWgh9H1z1DtypBkawkkw%40mail.gmail.com\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\n\n", "msg_date": "Thu, 1 Jun 2023 14:22:30 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Jun 1, 2023 at 7:22 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Peter,\n>\n> Peter Smith <smithpb2250@gmail.com>, 26 May 2023 Cum, 10:30 tarihinde\n> şunu yazdı:\n> >\n> > On Thu, May 25, 2023 at 6:59 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> > Yes, I was mostly referring to the same as point 1 below about patch\n> > 0001. I guess I just found the concept of mixing A) launching TSW (via\n> > apply worker) with B) reassigning TSW to another relation (by the TSW\n> > battling with its peers) to be a bit difficult to understand. I\n> > thought most of the refactoring seemed to arise from choosing to do it\n> > that way.\n>\n> No, the refactoring is not related to the way of assigning a new\n> table. In fact, the patch did not include such refactoring a couple\n> versions earlier [1] and was still assigning tables the same way. It\n> was suggested here [2]. Then, I made the patch 0001 which includes\n> some refactoring and only reuses the worker and nothing else. Also I\n> find it more understandable this way, maybe it's a bit subjective.\n>\n> I feel that logical replication related files are getting more and\n> more complex and hard to understand with each change. IMHO, even\n> without reusing anything, those need some refactoring anyway. 
But for\n> this patch, refactoring some places made it simpler to reuse workers\n> and/or replication slots, regardless of how tables are assigned to\n> TSW's.\n\nIf refactoring is wanted anyway (regardless of the chosen \"reuse\"\nlogic), then will it be better to split off a separate 0001 patch just\nto get that part out of the way first?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 1 Jun 2023 08:22:21 -0400", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Jun 1, 2023 6:54 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> \r\n> Hi,\r\n> \r\n> I rebased the patch and addressed the following reviews.\r\n> \r\n\r\nThanks for updating the patch. Here are some comments on 0001 patch.\r\n\r\n1.\r\n-\tereport(LOG,\r\n-\t\t\t(errmsg(\"logical replication table synchronization worker for subscription \\\"%s\\\", table \\\"%s\\\" has finished\",\r\n-\t\t\t\t\tMySubscription->name,\r\n-\t\t\t\t\tget_rel_name(MyLogicalRepWorker->relid))));\r\n\r\nCould we move this to somewhere else instead of removing it?\r\n\r\n2.\r\n+\tif (!OidIsValid(originid))\r\n+\t\toriginid = replorigin_create(originname);\r\n+\treplorigin_session_setup(originid, 0);\r\n+\treplorigin_session_origin = originid;\r\n+\t*origin_startpos = replorigin_session_get_progress(false);\r\n+\tCommitTransactionCommand();\r\n+\r\n+\t/* Is the use of a password mandatory? 
*/\r\n+\tmust_use_password = MySubscription->passwordrequired &&\r\n+\t\t!superuser_arg(MySubscription->owner);\r\n+\tLogRepWorkerWalRcvConn = walrcv_connect(MySubscription->conninfo, true,\r\n+\t\t\t\t\t\t\t\t\t\t\tmust_use_password,\r\n+\t\t\t\t\t\t\t\t\t\t\tMySubscription->name, &err);\r\n\r\nIt seems that there is a problem when refactoring.\r\nSee commit e7e7da2f8d5.\r\n\r\n3.\r\n+\t/* Set this to false for safety, in case we're already reusing the worker */\r\n+\tMyLogicalRepWorker->ready_to_reuse = false;\r\n+\r\n\r\nI am not sure do we need to lock when setting it.\r\n\r\n4.\r\n+\t/*\r\n+\t * Allocate the origin name in long-lived context for error context\r\n+\t * message.\r\n+\t */\r\n+\tStartTransactionCommand();\r\n+\tReplicationOriginNameForLogicalRep(MySubscription->oid,\r\n+\t\t\t\t\t\t\t\t\t MyLogicalRepWorker->relid,\r\n+\t\t\t\t\t\t\t\t\t originname,\r\n+\t\t\t\t\t\t\t\t\t originname_size);\r\n+\tCommitTransactionCommand();\r\n\r\nDo we need the call to StartTransactionCommand() and CommitTransactionCommand()\r\nhere? Besides, the comment here is the same as the comment atop\r\nset_apply_error_context_origin(), do we need it?\r\n\r\n5.\r\nI saw a segmentation fault when debugging.\r\n\r\nIt happened when calling sync_worker_exit() called (see the code below in\r\nLogicalRepSyncTableStart()). In the case that this is not the first table the\r\nworker synchronizes, clean_sync_worker() has been called before (in\r\nTablesyncWorkerMain()), and LogRepWorkerWalRcvConn has been set to NULL. 
Then, a\r\nsegmentation fault happened because LogRepWorkerWalRcvConn is a null pointer.\r\n\r\n\tswitch (relstate)\r\n\t{\r\n\t\tcase SUBREL_STATE_SYNCDONE:\r\n\t\tcase SUBREL_STATE_READY:\r\n\t\tcase SUBREL_STATE_UNKNOWN:\r\n\t\t\tsync_worker_exit();\t/* doesn't return */\r\n\t}\r\n\r\nHere is the backtrace.\r\n\r\n#0 0x00007fc8a8ce4c95 in libpqrcv_disconnect (conn=0x0) at libpqwalreceiver.c:757\r\n#1 0x000000000092b8c0 in clean_sync_worker () at tablesync.c:150\r\n#2 0x000000000092b8ed in sync_worker_exit () at tablesync.c:164\r\n#3 0x000000000092d8f6 in LogicalRepSyncTableStart (origin_startpos=0x7ffd50f30f08) at tablesync.c:1293\r\n#4 0x0000000000934f76 in start_table_sync (origin_startpos=0x7ffd50f30f08, myslotname=0x7ffd50f30e80) at worker.c:4457\r\n#5 0x000000000093513b in run_tablesync_worker (options=0x7ffd50f30ec0, slotname=0x0, originname=0x7ffd50f30f10 \"pg_16394_16395\",\r\n originname_size=64, origin_startpos=0x7ffd50f30f08) at worker.c:4532\r\n#6 0x0000000000935a3a in TablesyncWorkerMain (main_arg=1) at worker.c:4853\r\n#7 0x00000000008e97f6 in StartBackgroundWorker () at bgworker.c:864\r\n#8 0x00000000008f350b in do_start_bgworker (rw=0x12fc1a0) at postmaster.c:5762\r\n#9 0x00000000008f38b7 in maybe_start_bgworkers () at postmaster.c:5986\r\n#10 0x00000000008f2975 in process_pm_pmsignal () at postmaster.c:5149\r\n#11 0x00000000008ee98a in ServerLoop () at postmaster.c:1770\r\n#12 0x00000000008ee3bb in PostmasterMain (argc=3, argv=0x12c4af0) at postmaster.c:1463\r\n#13 0x00000000007b6d3a in main (argc=3, argv=0x12c4af0) at main.c:198\r\n\r\n\r\nThe steps to reproduce: \r\nWorker1, in TablesyncWorkerMain(), the relstate of new table to sync (obtained\r\nby GetSubscriptionRelations) is SUBREL_STATE_INIT, and in the foreach loop,\r\nbefore the following Check (it needs a breakpoint before locking),\r\n\r\n\t\t\tLWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);\r\n\t\t\tif (rstate->state != SUBREL_STATE_SYNCDONE 
&&\r\n\t\t\t\t!logicalrep_worker_find(MySubscription->oid, rstate->relid, false))\r\n\t\t\t{\r\n\t\t\t\t/* Update worker state for the next table */\r\n\t\t\t\tMyLogicalRepWorker->relid = rstate->relid;\r\n\t\t\t\tMyLogicalRepWorker->relstate = rstate->state;\r\n\t\t\t\tMyLogicalRepWorker->relstate_lsn = rstate->lsn;\r\n\t\t\t\tLWLockRelease(LogicalRepWorkerLock);\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\t\t\tLWLockRelease(LogicalRepWorkerLock);\r\n\r\nlet this table be synchronized by another table sync worker (Worker2), and\r\nWorker2 finished before logicalrep_worker_find() was called. Then Worker1\r\ntried to sync a table whose state is SUBREL_STATE_READY and the segmentation\r\nfault happened.\r\n\r\nRegards,\r\nShi Yu\r\n", "msg_date": "Mon, 5 Jun 2023 11:06:45 +0000", "msg_from": "\"Yu Shi (Fujitsu)\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Dear Melih,\r\n\r\nThank you for making the patch!\r\nI'm also interested in the patchset. Here are the comments for 0001.\r\n\r\nSome code does not suit our coding conventions, but the following comments do not\r\nmention such points because the patches seem to be at an early stage.\r\nMoreover, 0003 needs a rebase.\r\n\r\n01. general\r\n\r\nWhy do tablesync workers have to disconnect from the publisher for every iteration?\r\nI think the connection initiation overhead is not negligible in postgres's basic\r\narchitecture. I have not checked yet, but could we add a new replication message\r\nlike STOP_STREAMING or CLEANUP? Or is the engineering effort for it larger than the benefit?\r\n\r\n02. 
logicalrep_worker_launch()\r\n\r\n```\r\n- else\r\n+ else if (!OidIsValid(relid))\r\n snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"ApplyWorkerMain\");\r\n+ else\r\n+ snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"TablesyncWorkerMain\");\r\n```\r\n\r\nYou changed the entry point of tablesync workers, but bgw_type is still the same.\r\nHave you made any decision about it? \r\n\r\n03. process_syncing_tables_for_sync()\r\n\r\n```\r\n+ /*\r\n+ * Sync worker is cleaned at this point. It's ready to sync next table,\r\n+ * if needed.\r\n+ */\r\n+ SpinLockAcquire(&MyLogicalRepWorker->relmutex);\r\n+ MyLogicalRepWorker->ready_to_reuse = true;\r\n+ SpinLockRelease(&MyLogicalRepWorker->relmutex);\r\n```\r\n\r\nMaybe acquiring the lock for modifying ready_to_reuse is not needed because all\r\nthe sync workers check only their own attribute. Moreover, other processes do not read it.\r\n\r\n04. sync_worker_exit()\r\n\r\n```\r\n+/*\r\n+ * Exit routine for synchronization worker.\r\n+ */\r\n+void\r\n+pg_attribute_noreturn()\r\n+sync_worker_exit(void)\r\n```\r\n\r\nI think we do not have to rename the function from finish_sync_worker().\r\n\r\n05. LogicalRepApplyLoop()\r\n\r\n```\r\n+ if (MyLogicalRepWorker->ready_to_reuse)\r\n+ {\r\n+ endofstream = true;\r\n+ }\r\n```\r\n\r\nWe should add comments here to clarify the reason.\r\n\r\n06. stream_build_options()\r\n\r\nI think we can set the twophase attribute here.\r\n\r\n07. TablesyncWorkerMain()\r\n\r\n```\r\n+ ListCell *lc;\r\n```\r\n\r\nThis variable should be declared inside the loop.\r\n\r\n08. TablesyncWorkerMain()\r\n\r\n```\r\n+ /*\r\n+ * If a relation with INIT state is assigned, clean up the worker for\r\n+ * the next iteration.\r\n+ *\r\n+ * If there is no more work left for this worker, break the loop to\r\n+ * exit.\r\n+ */\r\n+ if ( MyLogicalRepWorker->relstate == SUBREL_STATE_INIT)\r\n+ clean_sync_worker();\r\n```\r\n\r\nThe sync worker sends a signal to its leader once per iteration, but that may be too\r\noften. 
Maybe it is added for changing the rstate to READY, however, it is OK to\r\nchange it when the next change has come because should_apply_changes_for_rel()\r\nreturns true even if rel->state == SUBREL_STATE_SYNCDONE. I think the notification\r\nshould be done only at the end of sync workers. What do you think? \r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 13 Jun 2023 10:05:56 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Here are some review comments for the patch v2-0001.\n\n======\nCommit message\n\n1. General\nBetter to use consistent terms in this message. Either \"relations\" or\n\"tables\" -- not a mixture of both.\n\n~~~\n\n2.\nBefore this commit, tablesync workers were capable of syncing only one\nrelation. For each table, a new sync worker was launched and the worker\nwould exit when the worker is done with the current table.\n\n~\n\nSUGGESTION (2nd sentence)\nFor each table, a new sync worker was launched and that worker would\nexit when done processing the table.\n\n~~~\n\n3.\nNow, tablesync workers are not only limited with one relation and can\nmove to another relation in the same subscription. This reduces the\noverhead of launching a new background worker and exiting from that\nworker for each relation.\n\n~\n\nSUGGESTION (1st sentence)\nNow, tablesync workers are not limited to processing only one\nrelation. When done, they can move to processing another relation in\nthe same subscription.\n\n~~~\n\n4.\nA new tablesync worker gets launched only if the number of tablesync\nworkers for the subscription does not exceed\nmax_sync_workers_per_subscription. 
If there is a table needs to be synced,\na tablesync worker picks that up and syncs it.The worker continues to\npicking new tables to sync until there is no table left for synchronization\nin the subscription.\n\n~\n\nThis seems to be missing the point that only \"available\" workers go\nlooking for more tables to process. Maybe reword something like below:\n\nSUGGESTION\nIf there is a table that needs to be synced, an \"available\" tablesync\nworker picks up that table and syncs it. Each tablesync worker\ncontinues to pick new tables to sync until there are no tables left\nrequiring synchronization. If there was no \"available\" worker to\nprocess the table, then a new tablesync worker will be launched,\nprovided the number of tablesync workers for the subscription does not\nexceed max_sync_workers_per_subscription.\n\n======\nsrc/backend/replication/logical/launcher.c\n\n5. logicalrep_worker_launch\n\n@@ -460,8 +461,10 @@ retry:\n\n if (is_parallel_apply_worker)\n snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"ParallelApplyWorkerMain\");\n- else\n+ else if (!OidIsValid(relid))\n snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"ApplyWorkerMain\");\n+ else\n+ snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"TablesyncWorkerMain\");\n\n if (OidIsValid(relid))\n snprintf(bgw.bgw_name, BGW_MAXLEN,\n\n~\n\n5a.\nI felt at least these conditions can be rearranged, so you can use\nOidIsValid(relid) instead of !OidIsValid(relid).\n\n~\n\n5b.\nProbably it can all be simplified, if you are happy to do it in one line:\n\nsnprintf(bgw.bgw_function_name, BGW_MAXLEN,\n OidIsValid(relid) ? \"TablesyncWorkerMain\" :\n is_parallel_apply_worker ? \"ParallelApplyWorkerMain\" :\n\"ApplyWorkerMain\");\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n6. finish_sync_worker\n\nThis function is removed/renamed but there are still comments in\nthis file referring to 'finish_sync_worker'.\n\n~~~\n\n7. clean_sync_worker\n\nI agree with the comment from Shi-san. 
There should still be logging\nsomewhere that says this tablesync worker has completed the processing\nof the current table.\n\n~~~\n\n8. sync_worker_exit\n\nThere is inconsistent function naming for clean_sync_worker versus\nsync_worker_exit.\n\nHow about: clean_sync_worker/exit_sync_worker?\nOr: sync_worker_clean/sync_worker_exit?\n\n~~~\n\n9. process_syncing_tables_for_sync\n\n@@ -378,7 +387,13 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)\n */\n replorigin_drop_by_name(originname, true, false);\n\n- finish_sync_worker();\n+ /*\n+ * Sync worker is cleaned at this point. It's ready to sync next table,\n+ * if needed.\n+ */\n+ SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n+ MyLogicalRepWorker->ready_to_reuse = true;\n+ SpinLockRelease(&MyLogicalRepWorker->relmutex);\n\n9a.\nI did not quite follow the logic. It says \"Sync worker is cleaned at\nthis point\", but who is doing that? -- more details are needed. But,\nwhy not just call clean_sync_worker() right here like it used to call\nfinish_sync_worker?\n\n~\n\n9b.\nShouldn't this \"ready_to_reuse\" flag be assigned within the\nclean_sync_worker() function, since that is the function making it\nclean for the next re-use? The function comment even says so: \"Prepares\nthe synchronization worker for reuse or exit.\"\n\n======\nsrc/backend/replication/logical/worker.c\n\n10. General -- run_tablesync_worker, TablesyncWorkerMain\n\nIMO these functions would more appropriately reside in\ntablesync.c instead of the (common) worker.c. Was there some reason\nwhy they cannot be put there?\n\n~~~\n\n11. LogicalRepApplyLoop\n\n+ /*\n+ * apply_dispatch() may have gone into apply_handle_commit()\n+ * which can go into process_syncing_tables_for_sync early.\n+ * Before we were able to reuse tablesync workers, that\n+ * process_syncing_tables_for_sync call would exit the worker\n+ * instead of preparing for reuse. 
Now that tablesync workers\n+ * can be reused and process_syncing_tables_for_sync is not\n+ * responsible for exiting. We need to take care of memory\n+ * contexts here before moving to sync the nex table or exit.\n+ */\n\n11a.\nIMO it does not seem good to explain the reason by describing how the\nlogic USED to work, with code that is removed (e.g. \"Before we\nwere...\"). It's better to describe why this is needed here based on\nall the CURRENT code logic.\n\n~\n\n11b.\n/nex table/next table/\n\n~\n\n12.\n+ if (MyLogicalRepWorker->ready_to_reuse)\n+ {\n+ endofstream = true;\n+ }\n\nUnnecessary parentheses.\n\n~\n\n13.\n+ /*\n+ * If it's still not ready to reuse, this is probably an apply worker.\n+ * End streaming before exiting.\n+ */\n+ if (!MyLogicalRepWorker->ready_to_reuse)\n+ {\n+ /* All done */\n+ walrcv_endstreaming(LogRepWorkerWalRcvConn, &tli);\n+ }\n\nHow can we not be 100% sure of the kind of worker we are dealing with?\nE.g. \"probably\" ??\n\nShould this code be using macros like am_tablesync_worker() to have\nsome certainty what it is dealing with here?\n\n~~~\n\n14. stream_build_options\n\n+ /* stream_build_options\n+ * Build logical replication streaming options.\n+ *\n+ * This function sets streaming options including replication slot name\n+ * and origin start position. Workers need these options for logical\nreplication.\n+ */\n+static void\n+stream_build_options(WalRcvStreamOptions *options, char *slotname,\nXLogRecPtr *origin_startpos)\n\nThe function name seem a bit strange -- it's not really \"building\"\nanything. How about something like SetStreamOptions, or\nset_stream_options.\n\n~~~\n\n15. 
run_tablesync_worker\n\n+static void\n+run_tablesync_worker(WalRcvStreamOptions *options,\n+ char *slotname,\n+ char *originname,\n+ int originname_size,\n+ XLogRecPtr *origin_startpos)\n+{\n+ /* Set this to false for safety, in case we're already reusing the worker */\n+ MyLogicalRepWorker->ready_to_reuse = false;\n\nMaybe reword the comment so it does not say set 'this' to false.\n\n~\n\n16.\n+ /* Start applying changes to catcup. */\n+ start_apply(*origin_startpos);\n\ntypo: catcup\n\n~~~\n\n17. run_apply_worker\n\n+static void\n+run_apply_worker(WalRcvStreamOptions *options,\n+ char *slotname,\n+ char *originname,\n+ int originname_size,\n+ XLogRecPtr *origin_startpos)\n+{\n+ /* This is the leader apply worker */\n+ RepOriginId originid;\n+ TimeLineID startpointTLI;\n+ char *err;\n+ bool must_use_password;\n\n\nThe comment above the variable declarations seems redundant/misplaced.\n\n~~\n\n18. InitializeLogRepWorker\n\n if (am_tablesync_worker())\n ereport(LOG,\n- (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", table \\\"%s\\\" has started\",\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", relation \\\"%s\\\" with relid %u has started\",\n MySubscription->name,\n- get_rel_name(MyLogicalRepWorker->relid))));\n+ get_rel_name(MyLogicalRepWorker->relid),\n+ MyLogicalRepWorker->relid)));\n else\n\n\nI felt this code could be using get_worker_name() function like the\n\"else\" does instead of the hardwired: \"logical replication table\nsynchronization worker\" string\n\n~~~\n\n19. TablesyncWorkerMain\n\n+TablesyncWorkerMain(Datum main_arg)\n+{\n+ int worker_slot = DatumGetInt32(main_arg);\n+ char originname[NAMEDATALEN];\n+ XLogRecPtr origin_startpos = InvalidXLogRecPtr;\n+ char *myslotname = NULL;\n+ WalRcvStreamOptions options;\n+ List *rstates;\n+ SubscriptionRelState *rstate;\n+ ListCell *lc;\n\n- /* Setup replication origin tracking. 
*/\n- StartTransactionCommand();\n- ReplicationOriginNameForLogicalRep(MySubscription->oid, InvalidOid,\n- originname, sizeof(originname));\n- originid = replorigin_by_name(originname, true);\n- if (!OidIsValid(originid))\n- originid = replorigin_create(originname);\n- replorigin_session_setup(originid, 0);\n- replorigin_session_origin = originid;\n- origin_startpos = replorigin_session_get_progress(false);\n-\n- /* Is the use of a password mandatory? */\n- must_use_password = MySubscription->passwordrequired &&\n- !superuser_arg(MySubscription->owner);\n-\n- /* Note that the superuser_arg call can access the DB */\n- CommitTransactionCommand();\n+ elog(LOG, \"logical replication table synchronization worker has started\");\n\nWould it be better if that elog was using the common function get_worker_name()?\n\n~~~\n\n20.\n+ if (MyLogicalRepWorker->ready_to_reuse)\n+ {\n+ /* This transaction will be committed by clean_sync_worker. */\n+ StartTransactionCommand();\n\nThe indentation is broken.\n\n~~~\n\n21.\n+ * Check if any table whose relation state is still INIT. If a table\n+ * in INIT state is found, the worker will not be finished, it will be\n+ * reused instead.\n */\n\nFirst sentence is not meaningful. 
Should it say: \"Check if there is\nany table whose relation state is still INIT.\" ??\n\n~~~\n\n22.\n+ /*\n+ * Pick the table for the next run if it is not already picked up\n+ * by another worker.\n+ *\n+ * Take exclusive lock to prevent any other sync worker from picking\n+ * the same table.\n+ */\n+ LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);\n+ if (rstate->state != SUBREL_STATE_SYNCDONE &&\n+ !logicalrep_worker_find(MySubscription->oid, rstate->relid, false))\n+ {\n+ /* Update worker state for the next table */\n+ MyLogicalRepWorker->relid = rstate->relid;\n+ MyLogicalRepWorker->relstate = rstate->state;\n+ MyLogicalRepWorker->relstate_lsn = rstate->lsn;\n+ LWLockRelease(LogicalRepWorkerLock);\n+ break;\n+ }\n+ LWLockRelease(LogicalRepWorkerLock);\n }\n+\n+ /*\n+ * If a relation with INIT state is assigned, clean up the worker for\n+ * the next iteration.\n+ *\n+ * If there is no more work left for this worker, break the loop to\n+ * exit.\n+ */\n+ if ( MyLogicalRepWorker->relstate == SUBREL_STATE_INIT)\n+ clean_sync_worker();\n else\n- {\n- walrcv_startstreaming(LogRepWorkerWalRcvConn, &options);\n- }\n+ break;\n\nI was unsure about this logic, but shouldn't the\nMyLogicalRepWorker->relstate be assigned a default value prior to all\nthese loops, so that there can be no chance for it to be\nSUBREL_STATE_INIT by accident.\n\n~\n\n23.\n+ /* If not exited yet, then the worker will sync another table. */\n+ StartTransactionCommand();\n+ ereport(LOG,\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" has moved to sync table \\\"%s\\\" with relid %u.\",\n+ MySubscription->name, get_rel_name(MyLogicalRepWorker->relid),\nMyLogicalRepWorker->relid)));\n+ CommitTransactionCommand();\n\n23a\nThis code seems strangely structured. 
Why is the \"not exited yet\" part\nnot within the preceding \"if\" block where the clean_sync_worker was\ndone?\n\n~~~\n\n23b.\nWont it be better for that errmsg to use the common function\nget_worker_name() instead of having the hardcoded string?\n\n\n======\nsrc/include/replication/worker_internal.h\n\n24.\n+ /*\n+ * Used to indicate whether sync worker is ready for being reused\n+ * to sync another relation.\n+ */\n+ bool ready_to_reuse;\n+\n\nIIUC this field has no meaning except for a tablesync worker, but the\nfieldname give no indication of that at all.\n\nTo make this more obvious it might be better to put this with the\nother tablesync fields:\n\n/* Used for initial table synchronization. */\nOid relid;\nchar relstate;\nXLogRecPtr relstate_lsn;\nslock_t relmutex;\nAnd maybe rename it according to that convention relXXX -- e.g.\n'relworker_available' or something\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 14 Jun 2023 15:45:11 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi hackers,\n\nYou can find the updated patchset attached.\nI worked to address the reviews and made some additional changes.\n\nLet me first explain the new patchset.\n0001: Refactors the logical replication code, mostly worker.c and\ntablesync.c. Although this patch makes it easier to reuse workers, I\nbelieve that it's useful even by itself without other patches. 
It does\nnot improve performance or anything but aims to increase readability\nand such.\n0002: This is only to reuse worker processes, everything else stays\nthe same (replication slots/origins etc.).\n0003: Adds a new command for streaming replication protocol to create\na snapshot by an existing replication slot.\n0004: Reuses replication slots/origins together with workers.\n\nEven only 0001 and 0002 are enough to improve table sync performance\nat the rates previously shared on this thread. This also means that\ncurrently 0004 (reusing replication slots/origins) does not improve as\nmuch as I would expect, even though it does not harm either.\nI just wanted to share what I did so far, while I'm continuing to\ninvestigate it more to see what I'm missing in patch 0004.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Fri, 23 Jun 2023 16:32:47 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nThanks for your reviews.\n\nHayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 13 Haz 2023 Sal,\n13:06 tarihinde şunu yazdı:\n> 01. general\n>\n> Why do tablesync workers have to disconnect from publisher for every iterations?\n> I think connection initiation overhead cannot be negligible in the postgres's basic\n> architecture. I have not checked yet, but could we add a new replication message\n> like STOP_STREAMING or CLEANUP? Or, engineerings for it is quite larger than the benefit?\n\nThis actually makes sense. I quickly try to do that without adding any\nnew replication message. As you would expect, it did not work.\nI don't really know what's needed to make a connection to last for\nmore than one iteration. Need to look into this. Happy to hear any\nsuggestions and thoughts.\n\n> The sync worker sends a signal to its leader per the iteration, but it may be too\n> often. 
Maybe it is added for changing the rstate to READY, however, it is OK to\n> change it when the next change have come because should_apply_changes_for_rel()\n> returns true even if rel->state == SUBREL_STATE_SYNCDONE. I think the notification\n> should be done only at the end of sync workers. How do you think?\n\nI tried to move the logicalrep_worker_wakeup call from\nclean_sync_worker (end of an iteration) to finish_sync_worker (end of\nsync worker). I made table sync much slower for some reason, then I\nreverted that change. Maybe I should look a bit more into the reason\nwhy that happened some time.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\n\n", "msg_date": "Fri, 23 Jun 2023 16:39:57 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Peter,\n\nThanks for your reviews. I tried to apply most of them. I just have\nsome comments below for some of them.\n\nPeter Smith <smithpb2250@gmail.com>, 14 Haz 2023 Çar, 08:45 tarihinde\nşunu yazdı:\n>\n> 9. process_syncing_tables_for_sync\n>\n> @@ -378,7 +387,13 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)\n> */\n> replorigin_drop_by_name(originname, true, false);\n>\n> - finish_sync_worker();\n> + /*\n> + * Sync worker is cleaned at this point. It's ready to sync next table,\n> + * if needed.\n> + */\n> + SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n> + MyLogicalRepWorker->ready_to_reuse = true;\n> + SpinLockRelease(&MyLogicalRepWorker->relmutex);\n>\n> 9a.\n> I did not quite follow the logic. It says \"Sync worker is cleaned at\n> this point\", but who is doing that? -- more details are needed. But,\n> why not just call clean_sync_worker() right here like it use to call\n> finish_sync_worker?\n\nI agree that these explanations at places where the worker decides to\nnot continue with the current table were confusing. 
Even the name of\nready_to_reuse was misleading. I renamed it and tried to improve\ncomments in such places.\nCan you please check if those make more sense now?\n\n\n> ======\n> src/backend/replication/logical/worker.c\n>\n> 10. General -- run_tablesync_worker, TablesyncWorkerMain\n>\n> IMO these functions would more appropriately reside in\n> tablesync.c instead of the (common) worker.c. Was there some reason\n> why they cannot be put there?\n\nI'm not really against moving those functions to tablesync.c. But\nwhat's not clear to me is worker.c. Is it the place to put common\nfunctions for all log. rep. workers? Then, what about apply worker?\nShould we consider a separate file for apply worker too?\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\n\n", "msg_date": "Fri, 23 Jun 2023 16:50:24 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jun 23, 2023 at 11:50 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> > src/backend/replication/logical/worker.c\n> >\n> > 10. General -- run_tablesync_worker, TablesyncWorkerMain\n> >\n> > IMO these functions would more appropriately reside in\n> > tablesync.c instead of the (common) worker.c. Was there some reason\n> > why they cannot be put there?\n>\n> I'm not really against moving those functions to tablesync.c. But\n> what's not clear to me is worker.c. Is it the place to put common\n> functions for all log. rep. workers? Then, what about apply worker?\n> Should we consider a separate file for apply worker too?\n\nIIUC\n- tablesync.c = for tablesync only\n- applyparallelworker = for parallel apply worker only\n- worker.c = for both normal apply worker, plus \"common\" worker code\n\nRegarding making another file (e.g. applyworker.c). 
It sounds\nsensible, but I guess you would need to first demonstrate the end\nresult will be much cleaner to get support for such a big refactor.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 26 Jun 2023 12:21:14 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Dear Melih,\r\n\r\nThank you for updating the patch! I have not reviewed it yet, but I wanted\r\nto reply to your comments.\r\n\r\n> This actually makes sense. I quickly try to do that without adding any\r\n> new replication message. As you would expect, it did not work.\r\n> I don't really know what's needed to make a connection to last for\r\n> more than one iteration. Need to look into this. Happy to hear any\r\n> suggestions and thoughts.\r\n\r\nI have analyzed how we handle this. Please see attached the patch (0003) which\r\nallows reusing connection. The patchset passed tests on my CI.\r\nTo make cfbot happy I reassigned the patch number.\r\n\r\nIn this patch, the tablesync worker does not call clean_sync_worker() at the end\r\nof iterations, and the establishment of the connection is done only once.\r\nThe creation of memory context is also suppressed.\r\n\r\nRegarding the walsender, streamingDone{Sending|Receiving} is now initialized\r\nbefore executing StartLogicalReplication(). These flags have been used to decide\r\nwhen the process exits copy mode. The default value is false, and they are set\r\nto true when the copy mode is finished.\r\nI think there was no use-case that the same walsender executes START_REPLICATION\r\ntwice, so there was no code for restoring the flags. 
Please tell me if there are any other\r\nreasons.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Tue, 27 Jun 2023 07:42:49 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jun 23, 2023 at 7:03 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> You can find the updated patchset attached.\n> I worked to address the reviews and made some additional changes.\n>\n> Let me first explain the new patchset.\n> 0001: Refactors the logical replication code, mostly worker.c and\n> tablesync.c. Although this patch makes it easier to reuse workers, I\n> believe that it's useful even by itself without other patches. It does\n> not improve performance or anything but aims to increase readability\n> and such.\n> 0002: This is only to reuse worker processes, everything else stays\n> the same (replication slots/origins etc.).\n> 0003: Adds a new command for streaming replication protocol to create\n> a snapshot by an existing replication slot.\n> 0004: Reuses replication slots/origins together with workers.\n>\n> Even only 0001 and 0002 are enough to improve table sync performance\n> at the rates previously shared on this thread. This also means that\n> currently 0004 (reusing replication slots/origins) does not improve as\n> much as I would expect, even though it does not harm either.\n> I just wanted to share what I did so far, while I'm continuing to\n> investigate it more to see what I'm missing in patch 0004.\n>\n\nI think the reason why you don't see the benefit of the 0004 patches\nis that it still pays the cost of disconnect/connect and we haven't\nsaved much on network transfer costs because of the new snapshot you\nare creating in patch 0003. Is it possible to avoid disconnect/connect\neach time the patch needs to reuse the same tablesync worker? 
Once we\ndo that and save the cost of drop_slot and associated network round\ntrip, you may see the benefit of 0003 and 0004 patches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Jun 2023 16:20:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Dear Melih,\r\n\r\nThanks for updating the patch. The following are my comments.\r\nNote that some lines exceed 80 characters and some other lines seem too short.\r\nAnd comments about coding conventions were skipped.\r\n\r\n0001\r\n\r\n01. logicalrep_worker_launch()\r\n\r\n```\r\n if (is_parallel_apply_worker)\r\n+ {\r\n snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"ParallelApplyWorkerMain\");\r\n- else\r\n- snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"ApplyWorkerMain\");\r\n-\r\n- if (OidIsValid(relid))\r\n snprintf(bgw.bgw_name, BGW_MAXLEN,\r\n- \"logical replication worker for subscription %u sync %u\", subid, relid);\r\n- else if (is_parallel_apply_worker)\r\n+ \"logical replication parallel apply worker for subscription %u\", subid);\r\n snprintf(bgw.bgw_name, BGW_MAXLEN,\r\n \"logical replication parallel apply worker for subscription %u\", subid);\r\n```\r\n\r\nThe latter snprintf(bgw.bgw_name...) should be snprintf(bgw.bgw_type, BGW_MAXLEN, \"logical replication worker\").\r\n\r\n02. ApplyWorkerMain\r\n\r\n```\r\n /*\r\n * Setup callback for syscache so that we know when something changes in\r\n- * the subscription relation state.\r\n+ * the subscription relation state. Do this outside the loop to avoid\r\n+ * exceeding MAX_SYSCACHE_CALLBACKS\r\n */\r\n```\r\n\r\nI'm not sure this change is really needed. CacheRegisterSyscacheCallback() must\r\nbe outside the loop to avoid duplicate registration, and it seems trivial.\r\n\r\n0002\r\n\r\n03. 
TablesyncWorkerMain()\r\n\r\nRegarding the inner loop, the exclusive lock is acquired even if the rstate is\r\nSUBREL_STATE_SYNCDONE. Moreover, palloc() and memcpy() for rstate seem not\r\nneeded. How about the following?\r\n\r\n```\r\n for (;;)\r\n {\r\n List *rstates;\r\n- SubscriptionRelState *rstate;\r\n ListCell *lc;\r\n...\r\n- rstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));\r\n \r\n foreach(lc, rstates)\r\n {\r\n- memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));\r\n+ SubscriptionRelState *rstate =\r\n+ (SubscriptionRelState *) lfirst(lc);\r\n+\r\n+ if (rstate->state == SUBREL_STATE_SYNCDONE)\r\n+ continue;\r\n\r\n /*\r\n- * Pick the table for the next run if it is not already picked up\r\n- * by another worker.\r\n- *\r\n- * Take exclusive lock to prevent any other sync worker from picking\r\n- * the same table.\r\n- */\r\n+ * Take exclusive lock to prevent any other sync worker from\r\n+ * picking the same table.\r\n+ */\r\n LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);\r\n- if (rstate->state != SUBREL_STATE_SYNCDONE &&\r\n- !logicalrep_worker_find(MySubscription->oid, rstate->relid, false))\r\n+\r\n+ /*\r\n+ * Pick the table for the next run if it is not already picked up\r\n+ * by another worker.\r\n+ */\r\n+ if (!logicalrep_worker_find(MySubscription->oid,\r\n+ rstate->relid, false))\r\n```\r\n\r\n04. TablesyncWorkerMain\r\n\r\nI think rstates should be pfree'd at the end of the outer loop, but it's OK\r\nif other parts do not.\r\n\r\n05. response to the post\r\n\r\n>\r\nI tried to move the logicalrep_worker_wakeup call from\r\nclean_sync_worker (end of an iteration) to finish_sync_worker (end of\r\nsync worker). I made table sync much slower for some reason, then I\r\nreverted that change. 
Maybe I should look a bit more into the reason\r\nwhy that happened some time.\r\n>\r\n\r\nI want to see the testing method to reproduce the same issue. Could you please\r\nshare it with -hackers?\r\n\r\n0003, 0004\r\n\r\nI have not checked these yet, but I can say the same as above:\r\nI want to see the testing method to reproduce the same issue.\r\nCould you please share it with -hackers?\r\nMy previous post (an approach for reusing the connection) may help the performance.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 27 Jun 2023 15:00:39 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Jun 27, 2023 at 1:12 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > This actually makes sense. I quickly try to do that without adding any\n> > new replication message. As you would expect, it did not work.\n> > I don't really know what's needed to make a connection to last for\n> > more than one iteration. Need to look into this. Happy to hear any\n> > suggestions and thoughts.\n>\n\nIt is not clear to me what exactly you tried here which didn't work.\nCan you please explain a bit more?\n\n> I have analyzed how we handle this. Please see attached the patch (0003) which\n> allows reusing connection.\n>\n\nWhy did you change the application name during the connection?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Jun 2023 10:27:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Dear Amit,\r\n\r\n> > > This actually makes sense. I quickly try to do that without adding any\r\n> > > new replication message. 
As you would expect, it did not work.\r\n> > > I don't really know what's needed to make a connection to last for\r\n> > > more than one iteration. Need to look into this. Happy to hear any\r\n> > > suggestions and thoughts.\r\n> >\r\n> \r\n> It is not clear to me what exactly you tried here which didn't work.\r\n> Can you please explain a bit more?\r\n\r\nJust to confirm, this is not my part. Melih can answer this...\r\n\r\n> > I have analyzed how we handle this. Please see attached the patch (0003) which\r\n> > allows reusing connection.\r\n> >\r\n> \r\n> Why did you change the application name during the connection?\r\n\r\nIt was because the lifetime of tablesync worker is longer than slots's one and\r\ntablesync worker creates temporary replication slots many times, per the target\r\nrelation. The name of each slots has relid, so I thought that it was not suitable.\r\nBut in the later patch the tablesync worker tries to reuse the slot during the\r\nsynchronization, so in this case the application_name should be same as slotname.\r\n\r\nI added comment in 0003, and new file 0006 file to use slot name as application_name\r\nagain. Note again that the separation was just for specifying changes, Melih can\r\ninclude them to one part of files if needed.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Wed, 28 Jun 2023 06:31:54 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jun 28, 2023 at 12:02 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > > I have analyzed how we handle this. 
Please see attached the patch (0003) which\n> > > allows reusing connection.\n> > >\n> >\n> > Why did you change the application name during the connection?\n>\n> It was because the lifetime of tablesync worker is longer than slots's one and\n> tablesync worker creates temporary replication slots many times, per the target\n> relation. The name of each slots has relid, so I thought that it was not suitable.\n>\n\nOkay, but let's try to give a unique application name to each\ntablesync worker for the purpose of pg_stat_activity and synchronous\nreplication (as mentioned in existing comments as well). One idea is\nto generate a name like pg_<sub_id>_sync_<worker_slot> but feel free\nto suggest if you have any better ideas.\n\n> But in the later patch the tablesync worker tries to reuse the slot during the\n> synchronization, so in this case the application_name should be same as slotname.\n>\n\nFair enough. I am slightly afraid that if we can't show the benefits\nwith later patches then we may need to drop them but at this stage I\nfeel we need to investigate why those are not helping?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 3 Jul 2023 09:42:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Mon, Jul 3, 2023 at 9:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 28, 2023 at 12:02 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n>\n> > But in the later patch the tablesync worker tries to reuse the slot during the\n> > synchronization, so in this case the application_name should be same as slotname.\n> >\n>\n> Fair enough. 
I am slightly afraid that if we can't show the benefits\nwith later patches then we may need to drop them but at this stage I\nfeel we need to investigate why those are not helping?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 3 Jul 2023 09:42:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Mon, Jul 3, 2023 at 9:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 28, 2023 at 12:02 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n>\n> > But in the later patch the tablesync worker tries to reuse the slot during the\n> > synchronization, so in this case the application_name should be same as slotname.\n> >\n>\n> Fair enough. I am slightly afraid that if we can't show the benefits\n> with later patches then we may need to drop them but at this stage I\n> feel we need to investigate why those are not helping?\n>\n\nOn thinking about this, I think the primary benefit we were expecting\nwas from saving network round trips for slot drop/create, but now that we\nanyway need an extra round trip to establish a snapshot, such a\nbenefit was not visible. This is just a theory so we should validate\nit. Another idea, as discussed before [1], could be to try copying\nmultiple tables in a single transaction. Now, keeping a transaction\nopen for a longer time could have side-effects on the publisher node.\nSo, we probably need to ensure that we don't perform multiple large\nsyncs and even for smaller tables (and later sequences) perform it\nonly for some threshold number of tables which we can figure out by\nsome tests. Also, the other safety-check could be that anytime we need\nto perform streaming (sync with apply worker), we won't copy more\ntables in the same transaction.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAGPVpCRWEVhXa7ovrhuSQofx4to7o22oU9iKtrOgAOtz_%3DY6vg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 3 Jul 2023 11:28:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, 28 Jun 2023 at 12:02, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> > > > This actually makes sense. I quickly try to do that without adding any\n> > > > new replication message. As you would expect, it did not work.\n> > > > I don't really know what's needed to make a connection to last for\n> > > > more than one iteration. Need to look into this. 
Happy to hear any\n> > > > suggestions and thoughts.\n> > >\n> >\n> > It is not clear to me what exactly you tried here which didn't work.\n> > Can you please explain a bit more?\n>\n> Just to confirm, this is not my part. Melih can answer this...\n>\n> > > I have analyzed how we handle this. Please see attached the patch (0003) which\n> > > allows reusing connection.\n> > >\n> >\n> > Why did you change the application name during the connection?\n>\n> It was because the lifetime of tablesync worker is longer than slots's one and\n> tablesync worker creates temporary replication slots many times, per the target\n> relation. The name of each slots has relid, so I thought that it was not suitable.\n> But in the later patch the tablesync worker tries to reuse the slot during the\n> synchronization, so in this case the application_name should be same as slotname.\n>\n> I added comment in 0003, and new file 0006 file to use slot name as application_name\n> again. Note again that the separation was just for specifying changes, Melih can\n> include them to one part of files if needed.\n\nFew comments:\n1) Should these error messages say \"Could not create a snapshot by\nreplication slot\":\n+ if (!pubnames_str)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OUT_OF_MEMORY),\n /* likely guess */\n+ errmsg(\"could not start WAL streaming: %s\",\n+\npchomp(PQerrorMessage(conn->streamConn)))));\n+ pubnames_literal = PQescapeLiteral(conn->streamConn, pubnames_str,\n+\n strlen(pubnames_str));\n+ if (!pubnames_literal)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OUT_OF_MEMORY),\n /* likely guess */\n+ errmsg(\"could not start WAL streaming: %s\",\n+\npchomp(PQerrorMessage(conn->streamConn)))));\n+ appendStringInfo(&cmd, \", publication_names %s\", pubnames_literal);\n+ PQfreemem(pubnames_literal);\n+ pfree(pubnames_str);\n\n2) These checks are present in CreateReplicationSlot too, can we have\na common function to check these for both CreateReplicationSlot and\nCreateReplicationSnapshot:\n+ if 
(!IsTransactionBlock())\n+ ereport(ERROR,\n+ (errmsg(\"%s must be called inside a\ntransaction\",\n+\n\"CREATE_REPLICATION_SNAPSHOT ...\")));\n+\n+ if (XactIsoLevel != XACT_REPEATABLE_READ)\n+ ereport(ERROR,\n+ (errmsg(\"%s must be called in\nREPEATABLE READ isolation mode transaction\",\n+\n\"CREATE_REPLICATION_SNAPSHOT ...\")));\n+\n+ if (!XactReadOnly)\n+ ereport(ERROR,\n+ (errmsg(\"%s must be called in a read\nonly transaction\",\n+\n\"CREATE_REPLICATION_SNAPSHOT ...\")));\n+\n+ if (FirstSnapshotSet)\n+ ereport(ERROR,\n+ (errmsg(\"%s must be called before any query\",\n+\n\"CREATE_REPLICATION_SNAPSHOT ...\")));\n+\n+ if (IsSubTransaction())\n+ ereport(ERROR,\n+ (errmsg(\"%s must not be called in a\nsubtransaction\",\n+\n\"CREATE_REPLICATION_SNAPSHOT ...\")));\n\n3) Probably we can add the function header at this point in time:\n+/*\n+ * TODO\n+ */\n+static void\n+libpqrcv_slot_snapshot(WalReceiverConn *conn,\n+ char *slotname,\n+ const WalRcvStreamOptions *options,\n+ XLogRecPtr *lsn)\n\n4) Either the relation name or relid should be sufficient here, no need\nto print both:\n StartTransactionCommand();\n+ ereport(LOG,\n+ (errmsg(\"%s\nfor subscription \\\"%s\\\" has moved to sync table \\\"%s\\\" with relid\n%u.\",\n+\n get_worker_name(),\n+\n MySubscription->name,\n+\n get_rel_name(MyLogicalRepWorker->relid),\n+\n MyLogicalRepWorker->relid)));\n+ CommitTransactionCommand();\n\n5) Why is this check of logicalrep_worker_find required?\nWill it not be sufficient to pick the relations that are in\nSUBREL_STATE_INIT state?\n+ /*\n+ * Pick the table for the next run if\nit is not already picked up\n+ * by another worker.\n+ *\n+ * Take exclusive lock to prevent any\nother sync worker from picking\n+ * the same table.\n+ */\n+ LWLockAcquire(LogicalRepWorkerLock,\nLW_EXCLUSIVE);\n+ if (rstate->state != SUBREL_STATE_SYNCDONE &&\n+\n!logicalrep_worker_find(MySubscription->oid, rstate->relid, false))\n+ {\n+ /* Update worker state for the\nnext table */\n\n6) 
This comment is missed while refactoring:\n- /* Build logical replication streaming options. */\n- options.logical = true;\n- options.startpoint = origin_startpos;\n- options.slotname = myslotname;\n\n7) We could keep twophase and origin as the same order as it was\nearlier so that it is easy to review that the existing code is kept as\nis in this case:\n+ options->proto.logical.publication_names = MySubscription->publications;\n+ options->proto.logical.binary = MySubscription->binary;\n+ options->proto.logical.twophase = false;\n+ options->proto.logical.origin = pstrdup(MySubscription->origin);\n+\n+ /*\n+ * Assign the appropriate option value for streaming option according to\n+ * the 'streaming' mode and the publisher's ability to support\nthat mode.\n+ */\n+ if (server_version >= 160000 &&\n\n8) There are few indentation issues, we could run pgindent once:\n8.a)\n+ /* Sync worker has completed synchronization of the\ncurrent table. */\n+ MyLogicalRepWorker->is_sync_completed = true;\n+\n+ ereport(LOG,\n+ (errmsg(\"logical replication table synchronization\nworker for subscription \\\"%s\\\", relation \\\"%s\\\" with relid %u has\nfinished\",\n+ MySubscription->name,\n+ get_rel_name(MyLogicalRepWorker->relid),\n+ MyLogicalRepWorker->relid)));\n+ CommitTransactionCommand();\n\n8.b)\n+ ereport(DEBUG2,\n+ (errmsg(\"process_syncing_tables_for_sync:\nupdated originname: %s, slotname: %s, state: %c for relation \\\"%u\\\" in\nsubscription \\\"%u\\\".\",\n+ \"NULL\", \"NULL\",\nMyLogicalRepWorker->relstate,\n+ MyLogicalRepWorker->relid,\nMyLogicalRepWorker->subid)));\n+ CommitTransactionCommand();\n+ pgstat_report_stat(false);\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 3 Jul 2023 17:19:50 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Dear Amit,\r\n\r\n> > > > I have analyzed how we handle this. 
Please see attached the patch (0003)\r\n> which\r\n> > > > allows reusing connection.\r\n> > > >\r\n> > >\r\n> > > Why did you change the application name during the connection?\r\n> >\r\n> > It was because the lifetime of tablesync worker is longer than slots's one and\r\n> > tablesync worker creates temporary replication slots many times, per the target\r\n> > relation. The name of each slots has relid, so I thought that it was not suitable.\r\n> >\r\n> \r\n> Okay, but let's try to give a unique application name to each\r\n> tablesync worker for the purpose of pg_stat_activity and synchronous\r\n> replication (as mentioned in existing comments as well). One idea is\r\n> to generate a name like pg_<sub_id>_sync_<worker_slot> but feel free\r\n> to suggest if you have any better ideas.\r\n\r\nGood point. The slot id is passed as an argument of TablesyncWorkerMain(),\r\nso I passed it to LogicalRepSyncTableStart(). PSA new set.\r\n\r\n> > But in the later patch the tablesync worker tries to reuse the slot during the\r\n> > synchronization, so in this case the application_name should be same as\r\n> slotname.\r\n> >\r\n> \r\n> Fair enough. I am slightly afraid that if we can't show the benefits\r\n> with later patches then we may need to drop them but at this stage I\r\n> feel we need to investigate why those are not helping?\r\n\r\nAgreed. Now I'm planning to do performance testing independently. We can discuss\r\nbased on that or Melih's one.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Tue, 4 Jul 2023 05:42:48 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nHayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 27 Haz 2023 Sal,\n10:42 tarihinde şunu yazdı:\n>\n> Dear Melih,\n>\n> Thank you for updating the patch! 
I have not reviewed yet, but I wanted\n> to reply your comments.\n>\n> > This actually makes sense. I quickly try to do that without adding any\n> > new replication message. As you would expect, it did not work.\n> > I don't really know what's needed to make a connection to last for\n> > more than one iteration. Need to look into this. Happy to hear any\n> > suggestions and thoughts.\n>\n> I have analyzed how we handle this. Please see attached the patch (0003) which\n> allows reusing connection. The patchset passed tests on my CI.\n> To make cfbot happy I reassigned the patch number.\n>\n> In this patch, the tablesync worker does not call clean_sync_worker() at the end\n> of iterations, and the establishment of the connection is done only once.\n> The creation of memory context is also suppressed.\n>\n> Regarding the walsender, streamingDone{Sending|Receiving} is now initialized\n> before executing StartLogicalReplication(). These flags have been used to decide\n> when the process exits copy mode. The default value is false, and they are set\n> to true when the copy mode is finished.\n> I think there was no use-case that the same walsender executes START_REPLICATION\n> replication twice so there were no codes for restoring flags. Please tell me if any other\n> reasons.\n\nThanks for the 0003 patch. But it did not work for me. Can you create\na subscription successfully with patch 0003 applied?\nI get the following error: \" ERROR: table copy could not start\ntransaction on publisher: another command is already in progress\".\n\nI think streaming needs to be ended before moving to another table. So\nI changed the patch a little bit and also addressed the reviews from\nrecent emails. 
Please see the attached patch set.\n\nI'm still keeping the reuse connection patch separate for now to see\nwhat is needed clearly.\n\nThanks,\nMelih", "msg_date": "Tue, 4 Jul 2023 22:47:34 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 4 Tem 2023 Sal,\n08:42 tarihinde şunu yazdı:\n> > > But in the later patch the tablesync worker tries to reuse the slot during the\n> > > synchronization, so in this case the application_name should be same as\n> > slotname.\n> > >\n> >\n> > Fair enough. I am slightly afraid that if we can't show the benefits\n> > with later patches then we may need to drop them but at this stage I\n> > feel we need to investigate why those are not helping?\n>\n> Agreed. Now I'm planning to do performance testing independently. We can discuss\n> based on that or Melih's one.\n\nHere I attached what I use for performance testing of this patch.\n\nI only benchmarked the patch set with reusing connections very roughly\nso far. But seems like it improves quite significantly. For example,\nit took 611 ms to sync 100 empty tables, it was 1782 ms without\nreusing connections.\nFirst 3 patches from the set actually bring a good amount of\nimprovement, but not sure about the later patches yet.\n\nAmit Kapila <amit.kapila16@gmail.com>, 3 Tem 2023 Pzt, 08:59 tarihinde\nşunu yazdı:\n> On thinking about this, I think the primary benefit we were expecting\n> by saving network round trips for slot drop/create but now that we\n> anyway need an extra round trip to establish a snapshot, so such a\n> benefit was not visible. This is just a theory so we should validate\n> it. The another idea as discussed before [1] could be to try copying\n> multiple tables in a single transaction. 
Now, keeping a transaction\n> open for a longer time could have side-effects on the publisher node.\n> So, we probably need to ensure that we don't perform multiple large\n> syncs and even for smaller tables (and later sequences) perform it\n> only for some threshold number of tables which we can figure out by\n> some tests. Also, the other safety-check could be that anytime we need\n> to perform streaming (sync with apply worker), we won't copy more\n> tables in same transaction.\n>\n> Thoughts?\n\nYeah, maybe going to the publisher for creating a slot or only a\nsnapshot does not really make enough difference. I was hoping that\ncreating only snapshot by an existing replication slot would help the\nperformance. I guess I was either wrong or am missing something in the\nimplementation.\n\nThe tricky bit with keeping a long transaction to copy multiple tables\nis deciding how many tables one transaction can copy.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Tue, 4 Jul 2023 23:18:14 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jul 5, 2023 at 1:48 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 4 Tem 2023 Sal,\n> 08:42 tarihinde şunu yazdı:\n> > > > But in the later patch the tablesync worker tries to reuse the slot during the\n> > > > synchronization, so in this case the application_name should be same as\n> > > slotname.\n> > > >\n> > >\n> > > Fair enough. I am slightly afraid that if we can't show the benefits\n> > > with later patches then we may need to drop them but at this stage I\n> > > feel we need to investigate why those are not helping?\n> >\n> > Agreed. Now I'm planning to do performance testing independently. 
We can discuss\n> > based on that or Melih's one.\n>\n> Here I attached what I use for performance testing of this patch.\n>\n> I only benchmarked the patch set with reusing connections very roughly\n> so far. But seems like it improves quite significantly. For example,\n> it took 611 ms to sync 100 empty tables, it was 1782 ms without\n> reusing connections.\n> First 3 patches from the set actually bring a good amount of\n> improvement, but not sure about the later patches yet.\n>\n\nI suggest then we should focus first on those 3, get them committed\nand then look at the remaining.\n\n> Amit Kapila <amit.kapila16@gmail.com>, 3 Tem 2023 Pzt, 08:59 tarihinde\n> şunu yazdı:\n> > On thinking about this, I think the primary benefit we were expecting\n> > by saving network round trips for slot drop/create but now that we\n> > anyway need an extra round trip to establish a snapshot, so such a\n> > benefit was not visible. This is just a theory so we should validate\n> > it. The another idea as discussed before [1] could be to try copying\n> > multiple tables in a single transaction. Now, keeping a transaction\n> > open for a longer time could have side-effects on the publisher node.\n> > So, we probably need to ensure that we don't perform multiple large\n> > syncs and even for smaller tables (and later sequences) perform it\n> > only for some threshold number of tables which we can figure out by\n> > some tests. Also, the other safety-check could be that anytime we need\n> > to perform streaming (sync with apply worker), we won't copy more\n> > tables in same transaction.\n> >\n> > Thoughts?\n>\n> Yeah, maybe going to the publisher for creating a slot or only a\n> snapshot does not really make enough difference. I was hoping that\n> creating only snapshot by an existing replication slot would help the\n> performance. 
I guess I was either wrong or am missing something in the
> implementation.
>
> The tricky bit with keeping a long transaction to copy multiple tables
> is deciding how many tables one transaction can copy.
>

Yeah, I was thinking that we should not allow copying some threshold
data in one transaction. After every copy, we will check the size of
the table and add it to the previously copied table size in the same
transaction. Once the size crosses a certain threshold, we will end
the transaction. This may not be a very good scheme but I think if
this helps then it would be much simpler than the creating-only-snapshot
approach.

-- 
With Regards,
Amit Kapila.


", "msg_date": "Thu, 6 Jul 2023 09:26:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Dear Melih,

> Thanks for the 0003 patch. But it did not work for me. Can you create
> a subscription successfully with patch 0003 applied?
> I get the following error: \" ERROR: table copy could not start
> transaction on publisher: another command is already in progress\".

You got the ERROR when all the patches (0001-0005) were applied, right?
I have focused on 0001 and 0002 only, so I missed something.
If it was not correct, please attach the logfile and test script what you did.

As you might know, the error is output when the worker executes walrcv_endstreaming()
before doing walrcv_startstreaming().

> I think streaming needs to be ended before moving to another table. So
> I changed the patch a little bit

Your modification did not seem correct. I applied only the first three patches (0001-0003), and
executed the attached script. 
Then I got the following error on the subscriber (attached as N2.log):

> ERROR: could not send end-of-streaming message to primary: no COPY in progress

IIUC the tablesync worker has already stopped streaming without your modification.
Please see process_syncing_tables_for_sync():

```
	if (MyLogicalRepWorker->relstate == SUBREL_STATE_CATCHUP &&
		current_lsn >= MyLogicalRepWorker->relstate_lsn)
	{
		TimeLineID	tli;
		char		syncslotname[NAMEDATALEN] = {0};
		char		originname[NAMEDATALEN] = {0};

		MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCDONE;
...
		/*
		 * End streaming so that LogRepWorkerWalRcvConn can be used to drop
		 * the slot.
		 */
		walrcv_endstreaming(LogRepWorkerWalRcvConn, &tli);
```

This means that the following changes should not be in 0003; they should be in 0005.
PSA fixed patches.

```
+	/*
+	 * If it's already connected to the publisher, end streaming before using
+	 * the same connection for another iteration
+	 */
+	if (LogRepWorkerWalRcvConn != NULL)
+	{
+		TimeLineID tli;
+		walrcv_endstreaming(LogRepWorkerWalRcvConn, &tli);
+	}
```


Besides, cfbot could not apply your patch set [1]. According to the log, the
bot tried to apply 0004 and 0005 first and got an error. IIUC you should assign
the same version number within the same mail, like v16-0001, v16-0002, ....

[1]: http://cfbot.cputube.org/patch_43_3784.log

Best Regards,
Hayato Kuroda
FUJITSU LIMITED", "msg_date": "Thu, 6 Jul 2023 09:47:40 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi. 
Here are some review comments for the patch v16-0001\n\n======\nCommit message.\n\n1.\nAlso; most of the code shared by both worker types are already combined\nin LogicalRepApplyLoop(). There is no need to combine the rest in\nApplyWorkerMain() anymore.\n\n~\n\n/are already/is already/\n\n/Also;/Also,/\n\n~~~\n\n2.\nThis commit introduces TablesyncWorkerMain() as a new entry point for\ntablesync workers and separates both type of workers from each other.\nThis aims to increase code readability and help to maintain logical\nreplication workers separately.\n\n2a.\n/This commit/This patch/\n\n~\n\n2b.\n\"and separates both type of workers from each other\"\n\nMaybe that part can all be removed. The following sentence says the\nsame again anyhow.\n\n======\nsrc/backend/replication/logical/worker.c\n\n3.\n static void stream_write_change(char action, StringInfo s);\n static void stream_open_and_write_change(TransactionId xid, char\naction, StringInfo s);\n static void stream_close_file(void);\n+static void set_stream_options(WalRcvStreamOptions *options,\n+ char *slotname,\n+ XLogRecPtr *origin_startpos);\n\n~\n\nMaybe a blank line was needed here because this static should not be\ngrouped with the other functions that are grouped for \"Serialize and\ndeserialize changes for a toplevel transaction.\" comment.\n\n~~~\n\n4. set_stream_options\n\n+ /* set_stream_options\n+ * Set logical replication streaming options.\n+ *\n+ * This function sets streaming options including replication slot name and\n+ * origin start position. Workers need these options for logical replication.\n+ */\n+static void\n+set_stream_options(WalRcvStreamOptions *options,\n\nThe indentation is not right for this function comment.\n\n~~~\n\n5. 
set_stream_options\n\n+ /*\n+ * Even when the two_phase mode is requested by the user, it remains as\n+ * the tri-state PENDING until all tablesyncs have reached READY state.\n+ * Only then, can it become ENABLED.\n+ *\n+ * Note: If the subscription has no tables then leave the state as\n+ * PENDING, which allows ALTER SUBSCRIPTION ... REFRESH PUBLICATION to\n+ * work.\n+ */\n+ if (MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING &&\n+ AllTablesyncsReady())\n+ options->proto.logical.twophase = true;\n+}\n\nThis part of the refactoring seems questionable...\n\nIIUC this new function was extracted from code in originally in\nfunction ApplyWorkerMain()\n\nBut in that original code, this fragment above was guarded by the condition\nif (!am_tablesync_worker())\n\nBut now where is that condition? e.g. What is stopping tablesync\nworking from getting into this code it previously would not have\nexecuted?\n\n~~~\n\n6.\n AbortOutOfAnyTransaction();\n- pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n+ pgstat_report_subscription_error(MySubscription->oid,\n+ !am_tablesync_worker());\n\nDoes this change have anything to do with this patch? Is it a quirk of\nrunning pg_indent?\n\n~~~\n7. run_tablesync_worker\n\nSince the stated intent of the patch is the separation of apply and\ntablesync workers then shouldn't this function belong in the\ntablesync.c file?\n\n~~~\n8. run_tablesync_worker\n\n+ * Runs the tablesync worker.\n+ * It starts syncing tables. After a successful sync, sets streaming options\n+ * and starts streaming to catchup.\n+ */\n+static void\n+run_tablesync_worker(WalRcvStreamOptions *options,\n\nNicer to have a blank line after the first sentence of that function comment?\n\n~~~\n9. 
run_apply_worker

+/*
+ * Runs the leader apply worker.
+ * It sets up replication origin, streaming options and then starts streaming.
+ */
+static void
+run_apply_worker(WalRcvStreamOptions *options,

Nicer to have a blank line after the first sentence of that function comment?

~~~
10. InitializeLogRepWorker

+/*
+ * Common initialization for logical replication workers; leader apply worker,
+ * parallel apply worker and tablesync worker.
 *
 * Initialize the database connection, in-memory subscription and necessary
 * config options.
 */
 void
-InitializeApplyWorker(void)
+InitializeLogRepWorker(void)

typo:

/workers;/workers:/

~~~
11. TablesyncWorkerMain

Since the stated intent of the patch is the separation of apply and
tablesync workers then shouldn't this function belong in the
tablesync.c file?

======
src/include/replication/worker_internal.h

12.
 #define isParallelApplyWorker(worker) ((worker)->leader_pid != InvalidPid)

+extern void finish_sync_worker(void);

~

I think the macro isParallelApplyWorker is associated with the am_XXX
inline functions that follow it, so it doesn’t seem the best place to
jam this extern in the middle of that.

------
Kind Regards,
Peter Smith.
Fujitsu Australia


", "msg_date": "Fri, 7 Jul 2023 19:37:54 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Dear hackers,

Hi, I did a performance testing for v16 patch set.
Results show that patches significantly improves the performance in most cases.

# Method

Following tests were done 10 times per condition, and compared by median.
do_one_test.sh was used for the testing.

1.	Create tables on publisher
2.	Insert initial data on publisher
3.	Create tables on subscriber
4.	Create a replication slot (mysub_slot) on 
publisher
5.	Create a publication on publisher
6.	Create tables on subscriber
--- timer on ---
7.	Create subscription with pre-existing replication slot (mysub_slot)
8.	Wait until all srsubstate in pg_subscription_rel becomes 'r'
--- timer off ---

# Tested sources

I used three types of sources

* HEAD (f863d82)
* HEAD + 0001 + 0002
* HEAD + 0001 + 0002 + 0003

# Tested conditions

Following parameters were changed during the measurement.

### table size

* empty
* around 10kB

### number of tables

* 10
* 100
* 1000
* 2000

### max_sync_workers_per_subscription

* 2
* 4
* 8
* 16

## Results

Please see the attached image file. Each cell shows the improvement percentage of
measurement compared with HEAD, HEAD + 0001 + 0002, and HEAD + 0001 + 0002 + 0003.

According to the measurement, we can say following things:

* In any cases the performance was improved from the HEAD.
* The improvement became more significantly if number of synced tables were increased.
* 0003 basically improved performance from first two patches
* Increasing workers could sometimes lead to lesser performance due to contention.
 This occurred when the number of tables was small. Moreover, this was not caused only by the patch set - it happened even if we used HEAD.
 Detailed analysis will be done later.

For more detail, please see the excel file. 
It contains all the results of measurement.

## Detailed configuration

* Powerful machine was used:
 - Number of CPU: 120
 - Memory: 755 GB

* Both publisher and subscriber were on the same machine.
* Following GUC settings were used for both pub/sub:

```
wal_level = logical
shared_buffers = 40GB
max_worker_processes = 32
max_parallel_maintenance_workers = 24
max_parallel_workers = 32
synchronous_commit = off
checkpoint_timeout = 1d
max_wal_size = 24GB
min_wal_size = 15GB
autovacuum = off
max_wal_senders = 200
max_replication_slots = 200
```

Best Regards,
Hayato Kuroda
FUJITSU LIMITED", "msg_date": "Mon, 10 Jul 2023 02:37:30 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi, here are some review comments for patch v16-0002.

======
Commit message

1.
This commit allows reusing tablesync workers for syncing more than one
table sequentially during their lifetime, instead of exiting after
only syncing one table.

Before this commit, tablesync workers were capable of syncing only one
table. For each table, a new sync worker was launched and that worker would
exit when done processing the table.

Now, tablesync workers are not limited to processing only one
table. When done, they can move to processing another table in
the same subscription.

~

IMO that first paragraph can be removed because AFAIK the other
paragraphs are saying exactly the same thing but worded differently.

======
src/backend/replication/logical/tablesync.c

2. General -- for clean_sync_worker and finish_sync_worker

TBH, I found the separation of clean_sync_worker() and
finish_sync_worker() to be confusing. 
Can't it be rearranged to keep\nthe same function but just pass a boolean to tell it to exit or not\nexit?\n\ne.g.\n\nfinish_sync_worker(bool reuse_worker) { ... }\n\n~~~\n\n3. clean_sync_worker\n\n /*\n- * Commit any outstanding transaction. This is the usual case, unless\n- * there was nothing to do for the table.\n+ * Commit any outstanding transaction. This is the usual case, unless there\n+ * was nothing to do for the table.\n */\n\nThe word wrap seems OK, except the change seemed unrelated to this patch (??)\n\n~~~\n\n4.\n+ /*\n+ * Disconnect from publisher. Otherwise reused sync workers causes\n+ * exceeding max_wal_senders\n+ */\n\nMissing period, and not an English sentence.\n\nSUGGESTION (??)\nDisconnect from the publisher otherwise reusing the sync worker can\nerror due to exceeding max_wal_senders.\n\n~~~\n\n5. finish_sync_worker\n\n+/*\n+ * Exit routine for synchronization worker.\n+ */\n+void\n+pg_attribute_noreturn()\n+finish_sync_worker(void)\n+{\n+ clean_sync_worker();\n+\n /* And flush all writes. */\n XLogFlush(GetXLogWriteRecPtr());\n\n StartTransactionCommand();\n ereport(LOG,\n- (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", table \\\"%s\\\" has finished\",\n- MySubscription->name,\n- get_rel_name(MyLogicalRepWorker->relid))));\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" has finished\",\n+ MySubscription->name)));\n CommitTransactionCommand();\n\nIn the original code, the XLogFlush was in a slightly different order\nthan in this refactored code. E.g. it came before signalling the apply\nworker. Is it OK to be changed?\n\nKeeping one function (suggested in #2) can maybe remove this potential issue.\n\n======\nsrc/backend/replication/logical/worker.c\n\n6. 
LogicalRepApplyLoop\n\n+ /*\n+ * apply_dispatch() may have gone into apply_handle_commit()\n+ * which can call process_syncing_tables_for_sync.\n+ *\n+ * process_syncing_tables_for_sync decides whether the sync of\n+ * the current table is completed. If it is completed,\n+ * streaming must be already ended. So, we can break the loop.\n+ */\n+ if (MyLogicalRepWorker->is_sync_completed)\n+ {\n+ endofstream = true;\n+ break;\n+ }\n+\n\nand\n\n+ /*\n+ * If is_sync_completed is true, this means that the tablesync\n+ * worker is done with synchronization. Streaming has already been\n+ * ended by process_syncing_tables_for_sync. We should move to the\n+ * next table if needed, or exit.\n+ */\n+ if (MyLogicalRepWorker->is_sync_completed)\n+ endofstream = true;\n\n~\n\nInstead of those code fragments above assigning 'endofstream' as a\nside-effect, would it be the same (but tidier) to just modify the\nother \"breaking\" condition below:\n\nBEFORE:\n/* Check if we need to exit the streaming loop. */\nif (endofstream)\nbreak;\n\nAFTER:\n/* Check if we need to exit the streaming loop. */\nif (endofstream || MyLogicalRepWorker->is_sync_completed)\nbreak;\n\n~~~\n\n7. LogicalRepApplyLoop\n\n+ /*\n+ * Tablesync workers should end streaming before exiting the main loop to\n+ * drop replication slot. Only end streaming here for apply workers.\n+ */\n+ if (!am_tablesync_worker())\n+ walrcv_endstreaming(LogRepWorkerWalRcvConn, &tli);\n\nThis comment does not seem very clear. Maybe it can be reworded:\n\nSUGGESTION\nEnd streaming here only for apply workers. Ending streaming for\ntablesync workers is deferred until ... because ...\n\n~~~\n\n8. 
TablesyncWorkerMain\n\n+ StartTransactionCommand();\n+ ereport(LOG,\n+ (errmsg(\"%s for subscription \\\"%s\\\" has moved to sync table \\\"%s\\\"\nwith relid %u.\",\n+ get_worker_name(),\n+ MySubscription->name,\n+ get_rel_name(MyLogicalRepWorker->relid),\n+ MyLogicalRepWorker->relid)));\n+ CommitTransactionCommand();\n\nThe \"has moved to...\" terminology is unusual. If you say something\n\"will be reused to...\" then it matches better the commit message etc.\n\n~~~\n\n9.\n\n+ if (!is_table_found)\n+ break;\n\nInstead of an infinite loop that is exited by this 'break' it might be\nbetter to rearrange the logic slightly so the 'for' loop can exit\nnormally:\n\nBEFORE:\nfor (;;)\n\nAFTER\nfor (; !done;)\n\n======\nsrc/include/replication/worker_internal.h\n\n10.\n XLogRecPtr relstate_lsn;\n slock_t relmutex;\n\n+ /*\n+ * Indicates whether tablesync worker has completed sycning its assigned\n+ * table. If true, no need to continue with that table.\n+ */\n+ bool is_sync_completed;\n+\n\n10a.\nTypo /sycning/syncing/\n\n~\n\n10b.\nAll the other tablesync-related fields of this struct are named as\nrelXXX, so I wonder if is better for this to follow the same pattern.\ne.g. 
'relsync_completed'\n\n~\n\n10c.\n\"If true, no need to continue with that table.\".\n\nI am not sure if this sentence is adding anything useful.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 10 Jul 2023 17:08:31 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nAmit Kapila <amit.kapila16@gmail.com>, 6 Tem 2023 Per, 06:56 tarihinde\nşunu yazdı:\n>\n> On Wed, Jul 5, 2023 at 1:48 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 4 Tem 2023 Sal,\n> > 08:42 tarihinde şunu yazdı:\n> > > > > But in the later patch the tablesync worker tries to reuse the slot during the\n> > > > > synchronization, so in this case the application_name should be same as\n> > > > slotname.\n> > > > >\n> > > >\n> > > > Fair enough. I am slightly afraid that if we can't show the benefits\n> > > > with later patches then we may need to drop them but at this stage I\n> > > > feel we need to investigate why those are not helping?\n> > >\n> > > Agreed. Now I'm planning to do performance testing independently. We can discuss\n> > > based on that or Melih's one.\n> >\n> > Here I attached what I use for performance testing of this patch.\n> >\n> > I only benchmarked the patch set with reusing connections very roughly\n> > so far. But seems like it improves quite significantly. For example,\n> > it took 611 ms to sync 100 empty tables, it was 1782 ms without\n> > reusing connections.\n> > First 3 patches from the set actually bring a good amount of\n> > improvement, but not sure about the later patches yet.\n> >\n>\n> I suggest then we should focus first on those 3, get them committed\n> and then look at the remaining.\n>\n\nThat sounds good. 
I'll do my best to address any review/concern from\nreviewers now for the first 3 patches and hopefully those can get\ncommitted first. I'll continue working on the remaining patches later.\n\n-- \nMelih Mutlu\nMicrosoft\n\n\n", "msg_date": "Mon, 10 Jul 2023 17:22:58 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nHayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 6 Tem 2023 Per,\n12:47 tarihinde şunu yazdı:\n>\n> Dear Melih,\n>\n> > Thanks for the 0003 patch. But it did not work for me. Can you create\n> > a subscription successfully with patch 0003 applied?\n> > I get the following error: \" ERROR: table copy could not start\n> > transaction on publisher: another command is already in progress\".\n>\n> You got the ERROR when all the patches (0001-0005) were applied, right?\n> I have focused on 0001 and 0002 only, so I missed something.\n> If it was not correct, please attach the logfile and test script what you did.\n\nYes, I did get an error with all patches applied. But with only 0001\nand 0002, your version seems like working and mine does not.\nWhat do you think about combining 0002 and 0003? Or should those stay separate?\n\n> Hi, I did a performance testing for v16 patch set.\n> Results show that patches significantly improves the performance in most cases.\n>\n> # Method\n>\n> Following tests were done 10 times per condition, and compared by median.\n> do_one_test.sh was used for the testing.\n>\n> 1. Create tables on publisher\n> 2. Insert initial data on publisher\n> 3. Create tables on subscriber\n> 4. Create a replication slot (mysub_slot) on publisher\n> 5. Create a publication on publisher\n> 6. Create tables on subscriber\n> --- timer on ---\n> 7. Create subscription with pre-existing replication slot (mysub_slot)\n> 8. 
Wait until all srsubstate in pg_subscription_rel becomes 'r'\n> --- timer off ---\n>\n\nThanks for taking the time to do testing and sharing the results. This\nis also how I've been doing the testing since, but the process was\nhalf scripted, half manual work.\n\n> According to the measurement, we can say following things:\n>\n> * In any cases the performance was improved from the HEAD.\n> * The improvement became more significantly if number of synced tables were increased.\n\nYes, I believe it becomes more significant when workers spend less\ntime with actually copying data but more with other stuff like\nlaunching workers, opening connections etc.\n\n> * 0003 basically improved performance from first two patches\n\nAgree, 0003 is definitely a good addition which was missing earlier.\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\n\n", "msg_date": "Mon, 10 Jul 2023 17:31:24 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Here are some review comments for patch v16-00003\n\n======\n1. Commit Message.\n\nThe patch description is missing.\n\n======\n2. General.\n\n+LogicalRepSyncTableStart(XLogRecPtr *origin_startpos, int worker_slot)\n\nand\n\n+start_table_sync(XLogRecPtr *origin_startpos,\n+ char **myslotname,\n+ int worker_slot)\n\nand\n\n@@ -4548,12 +4552,13 @@ run_tablesync_worker(WalRcvStreamOptions *options,\n char *slotname,\n char *originname,\n int originname_size,\n- XLogRecPtr *origin_startpos)\n+ XLogRecPtr *origin_startpos,\n+ int worker_slot)\n\n\nIt seems the worker_slot is being passed all over the place as an\nadditional function argument so that it can be used to construct an\napplication_name. 
Is it possible/better to introduce a new\n'MyLogicalRepWorker' field for the 'worker_slot' so it does not have\nto be passed like this?\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n3.\n+ /*\n+ * Disconnect from publisher. Otherwise reused sync workers causes\n+ * exceeding max_wal_senders.\n+ */\n+ if (LogRepWorkerWalRcvConn != NULL)\n+ {\n+ walrcv_disconnect(LogRepWorkerWalRcvConn);\n+ LogRepWorkerWalRcvConn = NULL;\n+ }\n+\n\nWhy is this comment mentioning anything about \"reused workers\" at all?\nThe worker process exits in this function, right?\n\n~~~\n\n4. LogicalRepSyncTableStart\n\n /*\n- * Here we use the slot name instead of the subscription name as the\n- * application_name, so that it is different from the leader apply worker,\n- * so that synchronous replication can distinguish them.\n+ * Connect to publisher if not yet. The application_name must be also\n+ * different from the leader apply worker because synchronous replication\n+ * must distinguish them.\n */\n\nI felt all the details in the 2nd part of this comment belong inside\nthe condition, not outside.\n\nSUGGESTION\n/* Connect to the publisher if haven't done so already. */\n\n~~~\n\n5.\n+ if (LogRepWorkerWalRcvConn == NULL)\n+ {\n+ char application_name[NAMEDATALEN];\n+\n+ /*\n+ * FIXME: set appropriate application_name. Previously, the slot name\n+ * was used becasue the lifetime of the tablesync worker was same as\n+ * that, but now the tablesync worker handles many slots during the\n+ * synchronization so that it is not suitable. 
So what should be?\n+ * Note that if the tablesync worker starts to reuse the replication\n+ * slot during synchronization, we should use the slot name as\n+ * application_name again.\n+ */\n+ snprintf(application_name, NAMEDATALEN, \"pg_%u_sync_%i\",\n+ MySubscription->oid, worker_slot);\n+ LogRepWorkerWalRcvConn =\n+ walrcv_connect(MySubscription->conninfo, true,\n+ must_use_password,\n+ application_name, &err);\n+ }\n\n5a.\n/becasue/because/\n\n~\n\n5b.\nI am not sure about what name this should ideally use, but anyway for\nuniqueness doesn't it still need to include the GetSystemIdentifier()\nsame as function ReplicationSlotNameForTablesync() was doing?\n\nMaybe this can use the same function ReplicationSlotNameForTablesync()\ncan be used but just pass the worker_slot instead of the relid?\n\n======\nsrc/backend/replication/logical/worker.c\n\n6. LogicalRepApplyLoop\n\n /*\n * Init the ApplyMessageContext which we clean up after each replication\n- * protocol message.\n+ * protocol message, if needed.\n */\n- ApplyMessageContext = AllocSetContextCreate(ApplyContext,\n- \"ApplyMessageContext\",\n- ALLOCSET_DEFAULT_SIZES);\n+ if (!ApplyMessageContext)\n+ ApplyMessageContext = AllocSetContextCreate(ApplyContext,\n+ \"ApplyMessageContext\",\n+\n\nMaybe slightly reword the comment.\n\nBEFORE:\nInit the ApplyMessageContext which we clean up after each replication\nprotocol message, if needed.\n\nAFTER:\nInit the ApplyMessageContext if needed. This context is cleaned up\nafter each replication protocol message.\n\n======\nsrc/backend/replication/walsender.c\n\n7.\n+ /*\n+ * Initialize the flag again because this streaming may be\n+ * second time.\n+ */\n+ streamingDoneSending = streamingDoneReceiving = false;\n\nIsn't this only possible to be 2nd time because the \"reuse tablesync\nworker\" might re-issue a START_REPLICATION again to the same\nWALSender? 
So, should this flag reset ONLY be done for the logical\nreplication ('else' part), otherwise it should be asserted false?\n\ne.g. Would it be better to be like this?\n\nif (cmd->kind == REPLICATION_KIND_PHYSICAL)\n{\nAssert(!streamingDoneSending && !streamingDoneReceiving)\nStartReplication(cmd);\n}\nelse\n{\n/* Reset flags because reusing tablesync workers can mean this is the\nsecond time here. */\nstreamingDoneSending = streamingDoneReceiving = false;\nStartLogicalReplication(cmd);\n}\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 11 Jul 2023 12:36:09 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Jul 11, 2023 at 12:31 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 6 Tem 2023 Per,\n> 12:47 tarihinde şunu yazdı:\n> >\n> > Dear Melih,\n> >\n> > > Thanks for the 0003 patch. But it did not work for me. Can you create\n> > > a subscription successfully with patch 0003 applied?\n> > > I get the following error: \" ERROR: table copy could not start\n> > > transaction on publisher: another command is already in progress\".\n> >\n> > You got the ERROR when all the patches (0001-0005) were applied, right?\n> > I have focused on 0001 and 0002 only, so I missed something.\n> > If it was not correct, please attach the logfile and test script what you did.\n>\n> Yes, I did get an error with all patches applied. But with only 0001\n> and 0002, your version seems like working and mine does not.\n> What do you think about combining 0002 and 0003? Or should those stay separate?\n>\n\nEven if patches 0003 and 0002 are to be combined, I think that should\nnot happen until after the \"reuse\" design is confirmed which way is\nbest.\n\ne.g. 
IMO it might be easier to compare the different PoC designs for\npatch 0002 if there is no extra logic involved.\n\nPoC design#1 -- each tablesync decides for itself what to do next\nafter it finishes\nPoC design#2 -- reuse tablesync using a \"pool\" of available workers\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 11 Jul 2023 12:59:24 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Mon, Jul 10, 2023 at 8:01 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 6 Tem 2023 Per,\n> 12:47 tarihinde şunu yazdı:\n> >\n> > Dear Melih,\n> >\n> > > Thanks for the 0003 patch. But it did not work for me. Can you create\n> > > a subscription successfully with patch 0003 applied?\n> > > I get the following error: \" ERROR: table copy could not start\n> > > transaction on publisher: another command is already in progress\".\n> >\n> > You got the ERROR when all the patches (0001-0005) were applied, right?\n> > I have focused on 0001 and 0002 only, so I missed something.\n> > If it was not correct, please attach the logfile and test script what you did.\n>\n> Yes, I did get an error with all patches applied. But with only 0001\n> and 0002, your version seems like working and mine does not.\n> What do you think about combining 0002 and 0003? Or should those stay separate?\n>\n\nI am fine either way but I think one minor advantage of keeping 0003\nseparate is that we can focus on some of the problems specific to that\npatch. For example, the following comment in the 0003 patch: \"FIXME:\nset appropriate application_name...\". I have given a suggestion to\naddress it in [1] and Kuroda-San seems to have addressed the same but\nI am not sure if all of us agree with that or if there is any better\nway to address it. 
What do you think?\n\n>\n> > * 0003 basically improved performance from first two patches\n>\n> Agree, 0003 is definitely a good addition which was missing earlier.\n>\n\n+1.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JOZHmy2o2F2wTCPKsjpwDiKZPOeTa_jt%3Dwm2JLbf-jsg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 Jul 2023 09:14:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Dear Melih,\r\n\r\n> > > Thanks for the 0003 patch. But it did not work for me. Can you create\r\n> > > a subscription successfully with patch 0003 applied?\r\n> > > I get the following error: \" ERROR: table copy could not start\r\n> > > transaction on publisher: another command is already in progress\".\r\n> >\r\n> > You got the ERROR when all the patches (0001-0005) were applied, right?\r\n> > I have focused on 0001 and 0002 only, so I missed something.\r\n> > If it was not correct, please attach the logfile and test script what you did.\r\n> \r\n> Yes, I did get an error with all patches applied. But with only 0001\r\n> and 0002, your version seems like working and mine does not.\r\n\r\nHmm, really? IIUC I did not modify 0001 and 0002 patches, I just re-assigned the\r\nversion number. I compared between yours and mine, but no meaningful differences\r\nwere found. 
E.g., following command compared v4-0002 and v16-0002:\r\n\r\n```\r\ndiff --git a/../reuse_workers/v4-0002-Reuse-Tablesync-Workers.patch b/../reuse_workers/hayato/v16-0002-Reuse-Tablesync-Workers.patch\r\nindex 5350216e98..7785a573e4 100644\r\n--- a/../reuse_workers/v4-0002-Reuse-Tablesync-Workers.patch\r\n+++ b/../reuse_workers/hayato/v16-0002-Reuse-Tablesync-Workers.patch\r\n@@ -1,7 +1,7 @@\r\n-From d482022b40e0a5ce1b74fd0e320cb5b45da2f671 Mon Sep 17 00:00:00 2001\r\n+From db3e8e2d7aadea79126c5816bce8b06dc82f33c2 Mon Sep 17 00:00:00 2001\r\n From: Melih Mutlu <m.melihmutlu@gmail.com>\r\n Date: Tue, 4 Jul 2023 22:04:46 +0300\r\n-Subject: [PATCH 2/5] Reuse Tablesync Workers\r\n+Subject: [PATCH v16 2/5] Reuse Tablesync Workers\r\n \r\n This commit allows reusing tablesync workers for syncing more than one\r\n table sequentially during their lifetime, instead of exiting after\r\n@@ -324,5 +324,5 @@ index 7aba034774..1e9f8e6e72 100644\r\n static inline bool\r\n am_tablesync_worker(void)\r\n -- \r\n-2.25.1\r\n+2.27.0\r\n```\r\n\r\nFor confirmation, please attach the logfile and test script what you did\r\nif you could reproduce?\r\n\r\n> What do you think about combining 0002 and 0003? Or should those stay\r\n> separate?\r\n\r\nI have no strong opinion, but it may be useful to keep them pluggable.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 13 Jul 2023 04:09:12 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! I'm not sure what should be, but I modified only my part - 0003.\r\nPSA new patchset. Other patches were not changed.\r\n(I attached till 0005 just in case, but I did not consider about 0004 and 0005)\r\n\r\n> ======\r\n> 1. Commit Message.\r\n> \r\n> The patch description is missing.\r\n\r\nBriefly added.\r\n\r\n> 2. 
General.\r\n> \r\n> +LogicalRepSyncTableStart(XLogRecPtr *origin_startpos, int worker_slot)\r\n> \r\n> and\r\n> \r\n> +start_table_sync(XLogRecPtr *origin_startpos,\r\n> + char **myslotname,\r\n> + int worker_slot)\r\n> \r\n> and\r\n> \r\n> @@ -4548,12 +4552,13 @@ run_tablesync_worker(WalRcvStreamOptions\r\n> *options,\r\n> char *slotname,\r\n> char *originname,\r\n> int originname_size,\r\n> - XLogRecPtr *origin_startpos)\r\n> + XLogRecPtr *origin_startpos,\r\n> + int worker_slot)\r\n> \r\n> \r\n> It seems the worker_slot is being passed all over the place as an\r\n> additional function argument so that it can be used to construct an\r\n> application_name. Is it possible/better to introduce a new\r\n> 'MyLogicalRepWorker' field for the 'worker_slot' so it does not have\r\n> to be passed like this?\r\n\r\nI'm not sure it should be, but I did. How do you think?\r\n\r\n> src/backend/replication/logical/tablesync.c\r\n> \r\n> 3.\r\n> + /*\r\n> + * Disconnect from publisher. Otherwise reused sync workers causes\r\n> + * exceeding max_wal_senders.\r\n> + */\r\n> + if (LogRepWorkerWalRcvConn != NULL)\r\n> + {\r\n> + walrcv_disconnect(LogRepWorkerWalRcvConn);\r\n> + LogRepWorkerWalRcvConn = NULL;\r\n> + }\r\n> +\r\n> \r\n> Why is this comment mentioning anything about \"reused workers\" at all?\r\n> The worker process exits in this function, right?\r\n\r\nI considered that code again, and I found this part is not needed anymore.\r\n\r\nInitially it was added in 0002, this is because workers established new connections\r\nwithout exiting and walsenders on publisher might be remained. So This was correct\r\nfor 0002 patch.\r\nBut now, in 0003 patch, workers reuse connections, which means that no need to call\r\nwalrcv_disconnect() explicitly. It is done when processes are exit.\r\n\r\n> 4. 
LogicalRepSyncTableStart\r\n> \r\n> /*\r\n> - * Here we use the slot name instead of the subscription name as the\r\n> - * application_name, so that it is different from the leader apply worker,\r\n> - * so that synchronous replication can distinguish them.\r\n> + * Connect to publisher if not yet. The application_name must be also\r\n> + * different from the leader apply worker because synchronous replication\r\n> + * must distinguish them.\r\n> */\r\n> \r\n> I felt all the details in the 2nd part of this comment belong inside\r\n> the condition, not outside.\r\n> \r\n> SUGGESTION\r\n> /* Connect to the publisher if haven't done so already. */\r\n\r\nChanged.\r\n\r\n> 5.\r\n> + if (LogRepWorkerWalRcvConn == NULL)\r\n> + {\r\n> + char application_name[NAMEDATALEN];\r\n> +\r\n> + /*\r\n> + * FIXME: set appropriate application_name. Previously, the slot name\r\n> + * was used becasue the lifetime of the tablesync worker was same as\r\n> + * that, but now the tablesync worker handles many slots during the\r\n> + * synchronization so that it is not suitable. So what should be?\r\n> + * Note that if the tablesync worker starts to reuse the replication\r\n> + * slot during synchronization, we should use the slot name as\r\n> + * application_name again.\r\n> + */\r\n> + snprintf(application_name, NAMEDATALEN, \"pg_%u_sync_%i\",\r\n> + MySubscription->oid, worker_slot);\r\n> + LogRepWorkerWalRcvConn =\r\n> + walrcv_connect(MySubscription->conninfo, true,\r\n> + must_use_password,\r\n> + application_name, &err);\r\n> + }\r\n> \r\n> 5a.\r\n> /becasue/because/\r\n\r\nModified. 
Also, comments were moved atop ApplicationNameForTablesync.\r\nI was not sure when it is removed, but I kept it.\r\n\r\n> \r\n> 5b.\r\n> I am not sure about what name this should ideally use, but anyway for\r\n> uniqueness doesn't it still need to include the GetSystemIdentifier()\r\n> same as function ReplicationSlotNameForTablesync() was doing?\r\n> \r\n> Maybe this can use the same function ReplicationSlotNameForTablesync()\r\n> can be used but just pass the worker_slot instead of the relid?\r\n\r\nGood point. ApplicationNameForTablesync() was defined and used.\r\n\r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 6. LogicalRepApplyLoop\r\n> \r\n> /*\r\n> * Init the ApplyMessageContext which we clean up after each replication\r\n> - * protocol message.\r\n> + * protocol message, if needed.\r\n> */\r\n> - ApplyMessageContext = AllocSetContextCreate(ApplyContext,\r\n> - \"ApplyMessageContext\",\r\n> - ALLOCSET_DEFAULT_SIZES);\r\n> + if (!ApplyMessageContext)\r\n> + ApplyMessageContext = AllocSetContextCreate(ApplyContext,\r\n> + \"ApplyMessageContext\",\r\n> +\r\n> \r\n> Maybe slightly reword the comment.\r\n> \r\n> BEFORE:\r\n> Init the ApplyMessageContext which we clean up after each replication\r\n> protocol message, if needed.\r\n> \r\n> AFTER:\r\n> Init the ApplyMessageContext if needed. This context is cleaned up\r\n> after each replication protocol message.\r\n\r\nChanged.\r\n\r\n> src/backend/replication/walsender.c\r\n> \r\n> 7.\r\n> + /*\r\n> + * Initialize the flag again because this streaming may be\r\n> + * second time.\r\n> + */\r\n> + streamingDoneSending = streamingDoneReceiving = false;\r\n> \r\n> Isn't this only possible to be 2nd time because the \"reuse tablesync\r\n> worker\" might re-issue a START_REPLICATION again to the same\r\n> WALSender? So, should this flag reset ONLY be done for the logical\r\n> replication ('else' part), otherwise it should be asserted false?\r\n> \r\n> e.g. 
Would it be better to be like this?\r\n> \r\n> if (cmd->kind == REPLICATION_KIND_PHYSICAL)\r\n> {\r\n> Assert(!streamingDoneSending && !streamingDoneReceiving)\r\n> StartReplication(cmd);\r\n> }\r\n> else\r\n> {\r\n> /* Reset flags because reusing tablesync workers can mean this is the\r\n> second time here. */\r\n> streamingDoneSending = streamingDoneReceiving = false;\r\n> StartLogicalReplication(cmd);\r\n> }\r\n>\r\n\r\nIt's OK to modify the comment. But after considering more, I started to think that\r\nany specification for physical replication should not be changed.\r\nSo I accepted comments only for the logical rep.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Thu, 13 Jul 2023 04:12:49 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Peter,\n\nPeter Smith <smithpb2250@gmail.com>, 11 Tem 2023 Sal, 05:59 tarihinde şunu\nyazdı:\n> Even if patches 0003 and 0002 are to be combined, I think that should\n> not happen until after the \"reuse\" design is confirmed which way is\n> best.\n>\n> e.g. IMO it might be easier to compare the different PoC designs for\n> patch 0002 if there is no extra logic involved.\n>\n> PoC design#1 -- each tablesync decides for itself what to do next\n> after it finishes\n> PoC design#2 -- reuse tablesync using a \"pool\" of available workers\n\nRight. I made a patch 0003 to change 0002 so that tables will be assigned\nto sync workers by apply worker.\nIt's a rough POC and ignores some edge cases. But this is what I think how\napply worker would take the responsibility of table assignments. 
Hope the\nimplementation makes sense and I'm not missing anything that may cause\ndegraded performance.\n\nPoC design#1 --> apply only patch 0001 and 0002\nPoC design#2 --> apply all patches, 0001, 0002 and 0003\n\nHere are some quick numbers with 100 empty tables.\n\n+--------------+----------------+----------------+----------------+\n| | 2 sync workers | 4 sync workers | 8 sync workers |\n+--------------+----------------+----------------+----------------+\n| POC design#1 | 1909.873 ms | 986.261 ms | 552.404 ms |\n+--------------+----------------+----------------+----------------+\n| POC design#2 | 4962.208 ms | 1240.503 ms | 1165.405 ms |\n+--------------+----------------+----------------+----------------+\n| master | 2666.008 ms | 1462.012 ms | 986.848 ms |\n+--------------+----------------+----------------+----------------+\n\nSeems like design#1 is better than both design#2 and master overall. It's\nsurprising to see that even master beats design#2 in some cases though. Not\nsure if that is expected or there are some places to improve design#2 even\nmore.\n\nWhat do you think?\n\nPS: I only attached the related patches and not the whole patch set. 
0001\nand 0002 may contain some of your earlier reviews, but I'll send a proper\nupdated set soon.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Thu, 13 Jul 2023 23:27:41 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jul 14, 2023 at 1:58 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Here are some quick numbers with 100 empty tables.\n>\n> +--------------+----------------+----------------+----------------+\n> | | 2 sync workers | 4 sync workers | 8 sync workers |\n> +--------------+----------------+----------------+----------------+\n> | POC design#1 | 1909.873 ms | 986.261 ms | 552.404 ms |\n> +--------------+----------------+----------------+----------------+\n> | POC design#2 | 4962.208 ms | 1240.503 ms | 1165.405 ms |\n> +--------------+----------------+----------------+----------------+\n> | master | 2666.008 ms | 1462.012 ms | 986.848 ms |\n> +--------------+----------------+----------------+----------------+\n>\n> Seems like design#1 is better than both design#2 and master overall. It's surprising to see that even master beats design#2 in some cases though. Not sure if that is expected or there are some places to improve design#2 even more.\n>\n\nYeah, it is quite surprising that Design#2 is worse than master. I\nsuspect there is something wrong going on with your Design#2 patch.\nOne area to check is whether apply worker is able to quickly assign\nthe new relations to tablesync workers. Note that currently after the\nfirst time assigning the tables to workers, the apply worker may wait\nbefore processing the next set of tables in the main loop of\nLogicalRepApplyLoop(). 
The other minor point about design#2\nimplementation is that you may want to first assign the allocated\ntablesync workers before trying to launch a new worker.\n\n>\n> PS: I only attached the related patches and not the whole patch set. 0001 and 0002 may contain some of your earlier reviews, but I'll send a proper updated set soon.\n>\n\nYeah, that would be helpful.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 14 Jul 2023 13:41:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Kuroda-san.\n\nHere are some review comments for the v17-0003 patch. They are all minor.\n\n======\nCommit message\n\n1.\nPreviously tablesync workers establish new connections when it changes\nthe syncing\ntable, but this might have additional overhead. This patch allows to\nreuse connections\ninstead.\n\n~\n\n/This patch allows to reuse connections instead./This patch allows the\nexisting connection to be reused./\n\n~~~\n\n2.\nAs for the publisher node, this patch allows to reuse logical\nwalsender processes\nafter the streaming is done once.\n\n~\n\nIs this paragraph even needed? Since the connection is reused then it\nalready implies the other end (the walsender) is being reused, right?\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n3.\n+ * FIXME: set appropriate application_name. Previously, the slot name was used\n+ * because the lifetime of the tablesync worker was same as that, but now the\n+ * tablesync worker handles many slots during the synchronization so that it is\n+ * not suitable. So what should be? 
Note that if the tablesync worker starts to\n+ * reuse the replication slot during synchronization, we should use the slot\n+ * name as application_name again.\n+ */\n+static void\n+ApplicationNameForTablesync(Oid suboid, int worker_slot,\n+ char *application_name, Size szapp)\n\n3a.\nI felt that most of this FIXME comment belongs with the calling code,\nnot here.\n\n3b.\nAlso, maybe it needs some rewording -- I didn't understand exactly\nwhat it is trying to say.\n\n\n~~~\n\n4.\n- /*\n- * Here we use the slot name instead of the subscription name as the\n- * application_name, so that it is different from the leader apply worker,\n- * so that synchronous replication can distinguish them.\n- */\n- LogRepWorkerWalRcvConn =\n- walrcv_connect(MySubscription->conninfo, true,\n- must_use_password,\n- slotname, &err);\n+ /* Connect to the publisher if haven't done so already. */\n+ if (LogRepWorkerWalRcvConn == NULL)\n+ {\n+ char application_name[NAMEDATALEN];\n+\n+ /*\n+ * The application_name must be also different from the leader apply\n+ * worker because synchronous replication must distinguish them.\n+ */\n+ ApplicationNameForTablesync(MySubscription->oid,\n+ MyLogicalRepWorker->worker_slot,\n+ application_name,\n+ NAMEDATALEN);\n+ LogRepWorkerWalRcvConn =\n+ walrcv_connect(MySubscription->conninfo, true,\n+ must_use_password,\n+ application_name, &err);\n+ }\n+\n\nShould the comment mention the \"subscription name\" as it did before?\n\nSUGGESTION\nThe application_name must differ from the subscription name (used by\nthe leader apply worker) because synchronous replication has to be\nable to distinguish this worker from the leader apply worker.\n\n======\nsrc/backend/replication/logical/worker.c\n\n5.\n-start_table_sync(XLogRecPtr *origin_startpos, char **myslotname)\n+start_table_sync(XLogRecPtr *origin_startpos,\n+ char **myslotname)\n\nThis is a wrapping change only. 
It looks like an unnecessary hangover\nfrom a previous version of 0003.\n\n======\nsrc/backend/replication/walsender.c\n\n6. exec_replication_command\n\n+\n if (cmd->kind == REPLICATION_KIND_PHYSICAL)\n StartReplication(cmd);\n~\n\nThe extra blank line does not belong in this patch.\n\n======\nsrc/include/replication/worker_internal.h\n\n+ /* Indicates the slot number which corresponds to this LogicalRepWorker. */\n+ int worker_slot;\n+\n\n6a\nI think this field is very fundamental, so IMO it should be defined at\nthe top of the struct, maybe nearby the other 'in_use' and\n'generation' fields.\n\n~\n\n6b.\nAlso, since this is already a \"worker\" struct so there is no need to\nhave \"worker\" in the field name again -- just \"slot_number\" or\n\"slotnum\" might be a better name.\n\nAnd then the comment can also be simplified.\n\nSUGGESTION\n/* Slot number of this worker. */\nint slotnum;\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 14 Jul 2023 18:23:50 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nAmit Kapila <amit.kapila16@gmail.com>, 14 Tem 2023 Cum, 11:11 tarihinde\nşunu yazdı:\n\n> Yeah, it is quite surprising that Design#2 is worse than master. I\n> suspect there is something wrong going on with your Design#2 patch.\n> One area to check is whether apply worker is able to quickly assign\n> the new relations to tablesync workers. Note that currently after the\n> first time assigning the tables to workers, the apply worker may wait\n> before processing the next set of tables in the main loop of\n> LogicalRepApplyLoop(). The other minor point about design#2\n> implementation is that you may want to first assign the allocated\n> tablesync workers before trying to launch a new worker.\n>\n\nIt's not actually worse than master all the time. 
It seems like it's just\nunreliable.\nHere are some consecutive runs for both designs and master.\n\ndesign#1 = 1621,527 ms, 1788,533 ms, 1645,618 ms, 1702,068 ms, 1745,753 ms\ndesign#2 = 2089,077 ms, 1864,571 ms, 4574,799 ms, 5422,217 ms, 1905,944 ms\nmaster = 2815,138 ms, 2481,954 ms , 2594,413 ms, 2620,690 ms, 2489,323 ms\n\nAnd apply worker was not busy with applying anything during these\nexperiments since there were not any writes to the publisher. I'm not sure\nhow that would also affect the performance if there were any writes.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft
", "msg_date": "Fri, 14 Jul 2023 12:36:49 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jul 14, 2023 at 3:07 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 14 Tem 2023 Cum, 11:11 tarihinde şunu yazdı:\n>>\n>> Yeah, it is quite surprising that Design#2 is worse than master. I\n>> suspect there is something wrong going on with your Design#2 patch.\n>> One area to check is whether apply worker is able to quickly assign\n>> the new relations to tablesync workers. Note that currently after the\n>> first time assigning the tables to workers, the apply worker may wait\n>> before processing the next set of tables in the main loop of\n>> LogicalRepApplyLoop(). The other minor point about design#2\n>> implementation is that you may want to first assign the allocated\n>> tablesync workers before trying to launch a new worker.\n>\n>\n> It's not actually worse than master all the time. It seems like it's just unreliable.\n> Here are some consecutive runs for both designs and master.\n>\n> design#1 = 1621,527 ms, 1788,533 ms, 1645,618 ms, 1702,068 ms, 1745,753 ms\n> design#2 = 2089,077 ms, 1864,571 ms, 4574,799 ms, 5422,217 ms, 1905,944 ms\n> master = 2815,138 ms, 2481,954 ms , 2594,413 ms, 2620,690 ms, 2489,323 ms\n>\n> And apply worker was not busy with applying anything during these experiments since there were not any writes to the publisher. I'm not sure how that would also affect the performance if there were any writes.\n>\n\nYeah, this is a valid point. I think this is in favor of the Design#1\napproach we are discussing here. One thing I was thinking whether we\ncan do anything to alleviate the contention at the higher worker\ncount. 
One possibility is to have some kind of available worker list\nwhich can be used to pick up the next worker instead of checking all\nthe workers while assigning the next table. We can probably explore it\nseparately once the first three patches are ready because anyway, this\nwill be an optimization atop the Design#1 approach.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 15 Jul 2023 16:48:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nPFA updated patches. Rebased 0003 with minor changes. Addressed Peter's\nreviews for 0001 and 0002 with some small comments below.\n\nPeter Smith <smithpb2250@gmail.com>, 10 Tem 2023 Pzt, 10:09 tarihinde şunu\nyazdı:\n\n> 6. LogicalRepApplyLoop\n>\n> + /*\n> + * apply_dispatch() may have gone into apply_handle_commit()\n> + * which can call process_syncing_tables_for_sync.\n> + *\n> + * process_syncing_tables_for_sync decides whether the sync of\n> + * the current table is completed. If it is completed,\n> + * streaming must be already ended. So, we can break the loop.\n> + */\n> + if (MyLogicalRepWorker->is_sync_completed)\n> + {\n> + endofstream = true;\n> + break;\n> + }\n> +\n>\n> and\n>\n> + /*\n> + * If is_sync_completed is true, this means that the tablesync\n> + * worker is done with synchronization. Streaming has already been\n> + * ended by process_syncing_tables_for_sync. We should move to the\n> + * next table if needed, or exit.\n> + */\n> + if (MyLogicalRepWorker->is_sync_completed)\n> + endofstream = true;\n>\n> ~\n>\n> Instead of those code fragments above assigning 'endofstream' as a\n> side-effect, would it be the same (but tidier) to just modify the\n> other \"breaking\" condition below:\n>\n> BEFORE:\n> /* Check if we need to exit the streaming loop. 
*/\n> if (endofstream)\n> break;\n>\n> AFTER:\n> /* Check if we need to exit the streaming loop. */\n> if (endofstream || MyLogicalRepWorker->is_sync_completed)\n> break;\n>\n\nFirst place you mentioned also breaks the infinite loop. Such an if\nstatement is needed there with or without endofstream assignment.\n\nI think if there is a flag to break a loop, using that flag to indicate\nthat we should exit the loop seems more appropriate to me. I see that it\nwould be a bit tidier without endofstream = true lines, but I feel like it\nwould also be less readable.\n\nI don't have a strong opinion though. I'm just keeping them as they are for\nnow, but I can change them if you disagree.\n\n\n>\n> 10b.\n> All the other tablesync-related fields of this struct are named as\n> relXXX, so I wonder if is better for this to follow the same pattern.\n> e.g. 'relsync_completed'\n>\n\nAren't those start with rel because they're related to the relation that\nthe tablesync worker is syncing? is_sync_completed is not a relation\nspecific field. I'm okay with changing the name but feel like\nrelsync_completed would be misleading.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Mon, 17 Jul 2023 18:54:30 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Jul 18, 2023 at 1:54 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> PFA updated patches. Rebased 0003 with minor changes. Addressed Peter's reviews for 0001 and 0002 with some small comments below.\n>\n\nThanks, I will take another look at these soon. FYI, the 0001 patch\ndoes not apply cleanly. 
It needs to be rebased again because\nget_worker_name() function was recently removed from HEAD.\n\nreplication/logical/worker.o: In function `InitializeLogRepWorker':\n/home/postgres/oss_postgres_misc/src/backend/replication/logical/worker.c:4605:\nundefined reference to `get_worker_name'\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 18 Jul 2023 11:25:40 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Jul 18, 2023 at 11:25 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Jul 18, 2023 at 1:54 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > PFA updated patches. Rebased 0003 with minor changes. Addressed Peter's reviews for 0001 and 0002 with some small comments below.\n> >\n>\n> Thanks, I will take another look at these soon. FYI, the 0001 patch\n> does not apply cleanly. 
It needs to be rebased again because\n> get_worker_name() function was recently removed from HEAD.\n>\n\nSorry, to be more correct -- it applied OK, but failed to build.\n\n> replication/logical/worker.o: In function `InitializeLogRepWorker':\n> /home/postgres/oss_postgres_misc/src/backend/replication/logical/worker.c:4605:\n> undefined reference to `get_worker_name'\n>\n> ------\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n\n\n", "msg_date": "Tue, 18 Jul 2023 11:33:27 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Peter,\n\nPeter Smith <smithpb2250@gmail.com>, 18 Tem 2023 Sal, 04:33 tarihinde şunu\nyazdı:\n\n> On Tue, Jul 18, 2023 at 11:25 AM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> >\n> > On Tue, Jul 18, 2023 at 1:54 AM Melih Mutlu <m.melihmutlu@gmail.com>\n> wrote:\n> > >\n> > > Hi,\n> > >\n> > > PFA updated patches. Rebased 0003 with minor changes. Addressed\n> Peter's reviews for 0001 and 0002 with some small comments below.\n> > >\n> >\n> > Thanks, I will take another look at these soon. FYI, the 0001 patch\n> > does not apply cleanly. 
It needs to be rebased again because\n> > get_worker_name() function was recently removed from HEAD.\n> >\n>\n> Sorry, to be more correct -- it applied OK, but failed to build.\n>\n\nAttached the fixed patchset.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Tue, 18 Jul 2023 12:03:38 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Jul 18, 2023 at 2:33 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Attached the fixed patchset.\n>\n\nFew comments on 0001\n====================\n1.\n+ logicalrep_worker_attach(worker_slot);\n+\n+ /* Setup signal handling */\n+ pqsignal(SIGHUP, SignalHandlerForConfigReload);\n+ pqsignal(SIGTERM, die);\n+ BackgroundWorkerUnblockSignals();\n+\n+ /*\n+ * We don't currently need any ResourceOwner in a walreceiver process, but\n+ * if we did, we could call CreateAuxProcessResourceOwner here.\n+ */\n+\n+ /* Initialise stats to a sanish value */\n+ MyLogicalRepWorker->last_send_time = MyLogicalRepWorker->last_recv_time =\n+ MyLogicalRepWorker->reply_time = GetCurrentTimestamp();\n+\n+ /* Load the libpq-specific functions */\n+ load_file(\"libpqwalreceiver\", false);\n+\n+ InitializeLogRepWorker();\n+\n+ /* Connect to the origin and start the replication. */\n+ elog(DEBUG1, \"connecting to publisher using connection string \\\"%s\\\"\",\n+ MySubscription->conninfo);\n+\n+ /*\n+ * Setup callback for syscache so that we know when something changes in\n+ * the subscription relation state.\n+ */\n+ CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,\n+ invalidate_syncing_table_states,\n+ (Datum) 0);\n\nIt seems this part of the code is the same for ApplyWorkerMain() and\nTablesyncWorkerMain(). So, won't it be better to move it into a common\nfunction?\n\n2. Can LogicalRepSyncTableStart() be static function?\n\n3. 
I think you don't need to send 0004, 0005 each time till we are\nable to finish patches till 0003.\n\n4. In 0001's commit message, you can say that it will help the\nupcoming reuse tablesync worker patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Jul 2023 18:17:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, 11 Jul 2023 at 08:30, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Jul 11, 2023 at 12:31 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 6 Tem 2023 Per,\n> > 12:47 tarihinde şunu yazdı:\n> > >\n> > > Dear Melih,\n> > >\n> > > > Thanks for the 0003 patch. But it did not work for me. Can you create\n> > > > a subscription successfully with patch 0003 applied?\n> > > > I get the following error: \" ERROR: table copy could not start\n> > > > transaction on publisher: another command is already in progress\".\n> > >\n> > > You got the ERROR when all the patches (0001-0005) were applied, right?\n> > > I have focused on 0001 and 0002 only, so I missed something.\n> > > If it was not correct, please attach the logfile and test script what you did.\n> >\n> > Yes, I did get an error with all patches applied. But with only 0001\n> > and 0002, your version seems like working and mine does not.\n> > What do you think about combining 0002 and 0003? Or should those stay separate?\n> >\n>\n> Even if patches 0003 and 0002 are to be combined, I think that should\n> not happen until after the \"reuse\" design is confirmed which way is\n> best.\n>\n> e.g. 
IMO it might be easier to compare the different PoC designs for\n> patch 0002 if there is no extra logic involved.\n>\n> PoC design#1 -- each tablesync decides for itself what to do next\n> after it finishes\n> PoC design#2 -- reuse tablesync using a \"pool\" of available workers\n\nI did a POC for design#2 for implementing a worker pool to synchronize\nthe tables for a subscriber. The core design is the same as what Melih\nhad implemented at [1]. I had already started the implementation of\nPOC based on one of the earlier e-mail [2] Peter had shared.\nThe POC has been implemented like:\na) Apply worker will check the tablesync pool and see if any tablesync\nworker is free:\n i) If there are no free workers in the pool, start a table sync\nworker and add it to the table sync pool.\n ii) If there are free workers in the pool, re-use the tablesync\nworker for synchronizing another table.\nb) Apply worker will check if the tables are synchronized, if all the\ntables are synchronized apply worker will release all the workers from\nthe tablesync pool\nc) Apply worker and tablesync worker has shared memory to share the\nfollowing relation data and execution state between the apply worker\nand the tablesync worker\nd) The apply worker and tablesync worker's pid are also stored in the\nshared memory so that we need not take a lock on LogicalRepWorkerLock\nand loop on max_logical_replication_workers every time. 
We use the pid\nstored in shared memory to wake up the apply worker and tablesync\nworker whenever needed.\n\nWhile I was implementing the POC I found one issue in the POC\npatch (there is no problem with the HEAD code; the issue was only with the\nPOC):\n1) Apply worker was waiting for the table to be set to SYNCDONE.\n2) Meanwhile the tablesync worker sets the table to SYNCDONE and sets\nthe apply worker's latch.\n3) Apply worker will reset the latch set by tablesync and go to the main\nloop and wait on the main loop latch (since the tablesync worker's latch was\nalready reset, the apply worker will wait for 1 second).\nTo fix this I had to set the apply worker's latch once every 1ms in this case\nalone, which is not a good solution as it will consume a lot of CPU\ncycles. A better fix for this would be to introduce a new subscription\nrelation state.\n\nAttached patch has the changes for the same. 0001, 0002 and 0003 are\nthe patches shared by Melih and Kuroda-san earlier. The 0004 patch has the\nchanges for the POC of the tablesync worker pool implementation.\nPOC design 1: Tablesync worker identifies the tables that should be\nsynced and reuses the connection.\nPOC design 2: Tablesync worker pool with apply worker scheduling the\nwork to tablesync workers in the tablesync pool and reusing the\nconnection.\n\nPerformance results for 10 empty tables:\n+--------------+----------------+----------------+----------------+-----------------+\n|              | 2 sync workers | 4 sync workers | 8 sync workers | 16 sync workers |\n+--------------+----------------+----------------+----------------+-----------------+\n| HEAD         | 128.4685 ms    | 121.271 ms     | 136.5455 ms    | N/A             |\n+--------------+----------------+----------------+----------------+-----------------+\n| POC design#1 | 70.7095 ms     | 80.9805 ms     | 102.773 ms     | N/A             |\n+--------------+----------------+----------------+----------------+-----------------+\n| POC design#2 | 70.858 ms      | 83.0845 ms     | 112.505 ms     | N/A             |\n+--------------+----------------+----------------+----------------+-----------------+\n
\nPerformance results for 100 empty tables:\n+--------------+----------------+----------------+----------------+-----------------+\n|              | 2 sync workers | 4 sync workers | 8 sync workers | 16 sync workers |\n+--------------+----------------+----------------+----------------+-----------------+\n| HEAD         | 1039.89 ms     | 860.88 ms      | 1112.312 ms    | 1122.52 ms      |\n+--------------+----------------+----------------+----------------+-----------------+\n| POC design#1 | 310.920 ms     | 293.14 ms      | 385.698 ms     | 456.64 ms       |\n+--------------+----------------+----------------+----------------+-----------------+\n| POC design#2 | 318.464 ms     | 313.98 ms      | 352.316 ms     | 441.53 ms       |\n+--------------+----------------+----------------+----------------+-----------------+\n
\nPerformance results for 1000 empty tables:\n+--------------+----------------+----------------+----------------+-----------------+\n|              | 2 sync workers | 4 sync workers | 8 sync workers | 16 sync workers |\n+--------------+----------------+----------------+----------------+-----------------+\n| HEAD         | 16327.96 ms    | 10253.65 ms    | 9741.986 ms    | 10278.98 ms     |\n+--------------+----------------+----------------+----------------+-----------------+\n| POC design#1 | 3598.21 ms     | 3099.54 ms     | 2944.386 ms    | 2588.20 ms      |\n+--------------+----------------+----------------+----------------+-----------------+\n| POC design#2 | 4131.72 ms     | 2840.36 ms     | 3001.159 ms    | 5461.82 ms      |\n+--------------+----------------+----------------+----------------+-----------------+\n
\nPerformance results for 2000 empty tables:\n+--------------+----------------+----------------+----------------+-----------------+\n|              | 2 sync workers | 4 sync workers | 8 sync workers | 16 sync workers |\n+--------------+----------------+----------------+----------------+-----------------+\n| HEAD         | 47210.92 ms    | 25239.90 ms    | 19171.48 ms    | 19556.46 ms     |\n+--------------+----------------+----------------+----------------+-----------------+\n| POC design#1 | 10598.32 ms    | 6995.61 ms     | 6507.53 ms     | 5295.72 ms      |\n+--------------+----------------+----------------+----------------+-----------------+\n| POC design#2 | 11121.00 ms    | 6659.74 ms     | 6253.66 ms     | 15433.81 ms     |\n+--------------+----------------+----------------+----------------+-----------------+\n
\nThe detailed results of the performance runs are attached in\nPerftest_Results.xlsx.\nTesting with a) tables having data and b) the apply worker applying\nchanges while table sync is in progress has not been done yet. One of us\nwill run those tests and try to share the results for these too.\nIt is noticed that the performance of both POC design #1 and POC design #2 is\ngood, but POC design #2's performance degrades when there are a greater\nnumber of workers and more tables. In POC design #2, when there are a\ngreater number of workers and tables, the apply worker becomes a\nbottleneck as it must allocate work for all the workers.\nBased on the test results, POC design #1 is better.\n\nThanks to Kuroda-san for helping me in running the performance tests.\n\n[1] - https://www.postgresql.org/message-id/CAGPVpCSk4v-V1WbFDy8a5dL7Es5z8da6hoQbuVyrqP5s3Yh6Cg%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAHut%2BPs8gWP9tCPK9gdMnxyshRKgVP3pJnAnaJto_T07uR9xUA%40mail.gmail.com\n\nRegards,\nVignesh", "msg_date": "Tue, 18 Jul 2023 19:41:52 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Some review comments for v19-0001\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n1. 
run_tablesync_worker\n+run_tablesync_worker(WalRcvStreamOptions *options,\n+ char *slotname,\n+ char *originname,\n+ int originname_size,\n+ XLogRecPtr *origin_startpos)\n+{\n+ /* Start table synchronization. */\n+ start_table_sync(origin_startpos, &slotname);\n\nThere was no such comment (\"/* Start table synchronization. */\") in\nthe original HEAD code, so I didn't see that it adds much value by\nadding it in the refactored code.\n\n~~~\n\n2. LogicalRepSyncTableStart\n\n/*\n* Finally, wait until the leader apply worker tells us to catch up and\n* then return to let LogicalRepApplyLoop do it.\n*/\nwait_for_worker_state_change(SUBREL_STATE_CATCHUP);\n\n~\n\nShould LogicalRepApplyLoop still be mentioned here, since that is\nstatic in worker.c? Maybe it is better to refer instead to the common\n'start_apply' wrapper? (see also #5a below)\n\n======\nsrc/backend/replication/logical/worker.c\n\n3. set_stream_options\n\n+/*\n+ * Sets streaming options including replication slot name and origin start\n+ * position. Workers need these options for logical replication.\n+ */\n+void\n+set_stream_options(WalRcvStreamOptions *options,\n\nI'm not sure if the last sentence of the comment is adding anything useful.\n\n~~~\n\n4. start_apply\n/*\n * Run the apply loop with error handling. Disable the subscription,\n * if necessary.\n *\n * Note that we don't handle FATAL errors which are probably because\n * of system resource error and are not repeatable.\n */\nvoid\nstart_apply(XLogRecPtr origin_startpos)\n\n~\n\n4a.\nSomehow I found the function names to be confusing. Intuitively (IMO)\n'start_apply' is for apply worker and 'start_tablesync' is for\ntablesync worker. But actually, the start_apply() function is the\n*common* function for both kinds of worker. Might be easier to\nunderstand if start_apply function name can be changed to indicate it\nis really common -- e.g. 
common_apply_loop(), or similar.\n\n~\n\n4b.\nIf adverse to changing the function name, it might be helpful anyway\nif the function comment can emphasize this function is shared by\ndifferent worker types. e.g. \"Common function to run the apply\nloop...\"\n\n~~~\n\n5. run_apply_worker\n\n+ ReplicationOriginNameForLogicalRep(MySubscription->oid, InvalidOid,\n+ originname, originname_size);\n+\n+ /* Setup replication origin tracking. */\n+ StartTransactionCommand();\n\nEven if you wish ReplicationOriginNameForLogicalRep() to be outside of\nthe transaction I thought it should still come *after* the comment,\nsame as it does in the HEAD code.\n\n~~~\n\n6. ApplyWorkerMain\n\n- /* Run the main loop. */\n- start_apply(origin_startpos);\n+ /* This is leader apply worker */\n+ run_apply_worker(&options, myslotname, originname,\nsizeof(originname), &origin_startpos);\n\n proc_exit(0);\n }\n\n~\n\n6a.\nThe comment \"/* This is leader apply worker */\" is redundant now. This\nfunction is the entry point for leader apply workers so it can't be\nanything else.\n\n~\n\n6b.\n\nCaller parameter wrapping differs from the similar code in\nTablesyncWorkerMain. Shouldn't they be similar?\n\ne.g.\n+ run_apply_worker(&options, myslotname, originname,\nsizeof(originname), &origin_startpos);\n\nversus\n+ run_tablesync_worker(&options,\n+ myslotname,\n+ originname,\n+ sizeof(originname),\n+ &origin_startpos);\n\n======\nsrc/include/replication/worker_internal.h\n\n7.\n+\n+extern void set_stream_options(WalRcvStreamOptions *options,\n+ char *slotname,\n+ XLogRecPtr *origin_startpos);\n+extern void start_apply(XLogRecPtr origin_startpos);\n+extern void DisableSubscriptionAndExit(void);\n+\n\nMaybe all the externs belong together? 
It doesn't seem right for just\nthese 3 externs to be separated from all the others, with those static\ninline functions in-between.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 19 Jul 2023 13:07:58 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jul 19, 2023 at 8:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Some review comments for v19-0001\n>\n...\n> ======\n> src/backend/replication/logical/worker.c\n>\n> 3. set_stream_options\n>\n> +/*\n> + * Sets streaming options including replication slot name and origin start\n> + * position. Workers need these options for logical replication.\n> + */\n> +void\n> +set_stream_options(WalRcvStreamOptions *options,\n>\n> I'm not sure if the last sentence of the comment is adding anything useful.\n>\n\nPersonally, I find it useful as at a high-level it tells the purpose\nof setting these options.\n\n> ~~~\n>\n> 4. start_apply\n> /*\n> * Run the apply loop with error handling. Disable the subscription,\n> * if necessary.\n> *\n> * Note that we don't handle FATAL errors which are probably because\n> * of system resource error and are not repeatable.\n> */\n> void\n> start_apply(XLogRecPtr origin_startpos)\n>\n> ~\n>\n> 4a.\n> Somehow I found the function names to be confusing. Intuitively (IMO)\n> 'start_apply' is for apply worker and 'start_tablesync' is for\n> tablesync worker. But actually, the start_apply() function is the\n> *common* function for both kinds of worker. Might be easier to\n> understand if start_apply function name can be changed to indicate it\n> is really common -- e.g. common_apply_loop(), or similar.\n>\n> ~\n>\n> 4b.\n> If adverse to changing the function name, it might be helpful anyway\n> if the function comment can emphasize this function is shared by\n> different worker types. e.g. 
\"Common function to run the apply\n> loop...\"\n>\n\nI would prefer to change the comments as suggested by you in 4b\nbecause both the workers (apply and tablesync) need to perform apply,\nso it seems logical for both of them to invoke start_apply.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 Jul 2023 09:24:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Jul 18, 2023 at 1:54 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> PFA updated patches. Rebased 0003 with minor changes. Addressed Peter's reviews for 0001 and 0002 with some small comments below.\n>\n> Peter Smith <smithpb2250@gmail.com>, 10 Tem 2023 Pzt, 10:09 tarihinde şunu yazdı:\n>>\n>> 6. LogicalRepApplyLoop\n>>\n>> + /*\n>> + * apply_dispatch() may have gone into apply_handle_commit()\n>> + * which can call process_syncing_tables_for_sync.\n>> + *\n>> + * process_syncing_tables_for_sync decides whether the sync of\n>> + * the current table is completed. If it is completed,\n>> + * streaming must be already ended. So, we can break the loop.\n>> + */\n>> + if (MyLogicalRepWorker->is_sync_completed)\n>> + {\n>> + endofstream = true;\n>> + break;\n>> + }\n>> +\n>>\n>> and\n>>\n>> + /*\n>> + * If is_sync_completed is true, this means that the tablesync\n>> + * worker is done with synchronization. Streaming has already been\n>> + * ended by process_syncing_tables_for_sync. We should move to the\n>> + * next table if needed, or exit.\n>> + */\n>> + if (MyLogicalRepWorker->is_sync_completed)\n>> + endofstream = true;\n>>\n>> ~\n>>\n>> Instead of those code fragments above assigning 'endofstream' as a\n>> side-effect, would it be the same (but tidier) to just modify the\n>> other \"breaking\" condition below:\n>>\n>> BEFORE:\n>> /* Check if we need to exit the streaming loop. 
*/\n>> if (endofstream)\n>> break;\n>>\n>> AFTER:\n>> /* Check if we need to exit the streaming loop. */\n>> if (endofstream || MyLogicalRepWorker->is_sync_completed)\n>> break;\n>\n>\n> First place you mentioned also breaks the infinite loop. Such an if statement is needed there with or without endofstream assignment.\n>\n> I think if there is a flag to break a loop, using that flag to indicate that we should exit the loop seems more appropriate to me. I see that it would be a bit tidier without endofstream = true lines, but I feel like it would also be less readable.\n>\n> I don't have a strong opinion though. I'm just keeping them as they are for now, but I can change them if you disagree.\n>\n\nI felt it was slightly sneaky to re-use the existing variable as a\nconvenient way to do what you want. But, I don’t feel strongly enough\non this point to debate it -- maybe see later if others have an\nopinion about this.\n\n>>\n>>\n>> 10b.\n>> All the other tablesync-related fields of this struct are named as\n>> relXXX, so I wonder if is better for this to follow the same pattern.\n>> e.g. 'relsync_completed'\n>\n>\n> Aren't those start with rel because they're related to the relation that the tablesync worker is syncing? is_sync_completed is not a relation specific field. I'm okay with changing the name but feel like relsync_completed would be misleading.\n\nMy reading of the code is slightly different: Only these fields have\nthe prefix ‘rel’ and they are all grouped under the comment “/* Used\nfor initial table synchronization. */” because AFAIK only these fields\nare TWS specific (not used for other kinds of workers).\n\nSince this new flag field is also TWS-specific, therefore IMO it\nshould follow the same consistent name pattern. 
But, if you are\nunconvinced, maybe see later if others have an opinion about it.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 20 Jul 2023 12:32:06 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Some review comments for patch v20-0002\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n1. finish_sync_worker\n/*\n * Exit routine for synchronization worker.\n *\n * If reuse_worker is false, the worker will not be reused and exit.\n */\n\n~\n\nIMO the \"will not be reused\" part doesn't need saying -- it is\nself-evident from the fact \"reuse_worker is false\".\n\nSUGGESTION\nIf reuse_worker is false, at the conclusion of this function the\nworker process will exit.\n\n~~~\n\n2. finish_sync_worker\n\n- StartTransactionCommand();\n- ereport(LOG,\n- (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", table \\\"%s\\\" has finished\",\n- MySubscription->name,\n- get_rel_name(MyLogicalRepWorker->relid))));\n- CommitTransactionCommand();\n-\n /* Find the leader apply worker and signal it. */\n logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);\n\n- /* Stop gracefully */\n- proc_exit(0);\n+ if (!reuse_worker)\n+ {\n+ StartTransactionCommand();\n+ ereport(LOG,\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" has finished\",\n+ MySubscription->name)));\n+ CommitTransactionCommand();\n+\n+ /* Stop gracefully */\n+ proc_exit(0);\n+ }\n\nIn the HEAD code the log message came *before* it signalled to the\napply leader. Won't it be better to keep the logic in that same order?\n\n~~~\n\n3. process_syncing_tables_for_sync\n\n- finish_sync_worker();\n+ /* Sync worker has completed synchronization of the current table. 
*/\n+ MyLogicalRepWorker->is_sync_completed = true;\n+\n+ ereport(LOG,\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", relation \\\"%s\\\" with relid %u has finished\",\n+ MySubscription->name,\n+ get_rel_name(MyLogicalRepWorker->relid),\n+ MyLogicalRepWorker->relid)));\n+ CommitTransactionCommand();\n\nIIUC it is only the \" table synchronization\" part that is finished\nhere; not the whole \"table synchronization worker\" (compared to\nfinish_sync_worker function), so maybe the word \"worker\" should not\nbe in this message.\n\n~~~\n\n4. TablesyncWorkerMain\n\n+ if (MyLogicalRepWorker->is_sync_completed)\n+ {\n+ /* tablesync is done unless a table that needs syncning is found */\n+ done = true;\n\nSUGGESTION (Typo \"syncning\" and minor rewording.)\nThis tablesync worker is 'done' unless another table that needs\nsyncing is found.\n\n~\n\n5.\n+ /* Found a table for next iteration */\n+ finish_sync_worker(true);\n+\n+ StartTransactionCommand();\n+ ereport(LOG,\n+ (errmsg(\"logical replication worker for subscription \\\"%s\\\" will be\nreused to sync table \\\"%s\\\" with relid %u.\",\n+ MySubscription->name,\n+ get_rel_name(MyLogicalRepWorker->relid),\n+ MyLogicalRepWorker->relid)));\n+ CommitTransactionCommand();\n+\n+ done = false;\n+ break;\n+ }\n+ LWLockRelease(LogicalRepWorkerLock);\n\n5a.\nIMO it seems better to put this ereport *inside* the\nfinish_sync_worker() function alongside the similar log for when the\nworker is not reused.\n\n~\n\n5b.\nIsn't there a missing call to that LWLockRelease, if the 'break' happens?\n\n======\nsrc/backend/replication/logical/worker.c\n\n6. LogicalRepApplyLoop\n\nRefer to [1] for my reply to a previous review comment\n\n~~~\n\n7. 
InitializeLogRepWorker\n\n if (am_tablesync_worker())\n ereport(LOG,\n- (errmsg(\"logical replication worker for subscription \\\"%s\\\", table\n\\\"%s\\\" has started\",\n+ (errmsg(\"logical replication worker for subscription \\\"%s\\\", table\n\\\"%s\\\" with relid %u has started\",\n MySubscription->name,\n- get_rel_name(MyLogicalRepWorker->relid))));\n+ get_rel_name(MyLogicalRepWorker->relid),\n+ MyLogicalRepWorker->relid)));\n\nBut this is certainly a tablesync worker so the message here should\nsay \"logical replication table synchronization worker\" like the HEAD\ncode used to do.\n\nIt seems this mistake was introduced in patch v20-0001.\n\n======\nsrc/include/replication/worker_internal.h\n\n8.\nRefer to [1] for my reply to a previous review comment\n\n------\n[1] Replies to previous 0002 comments --\nhttps://www.postgresql.org/message-id/CAHut%2BPtiAtGJC52SGNdobOah5ctYDDhWWKd%3DuP%3DrkRgXzg5rdg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 20 Jul 2023 12:41:17 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Jul 20, 2023 at 8:02 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Jul 18, 2023 at 1:54 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > PFA updated patches. Rebased 0003 with minor changes. Addressed Peter's reviews for 0001 and 0002 with some small comments below.\n> >\n> > Peter Smith <smithpb2250@gmail.com>, 10 Tem 2023 Pzt, 10:09 tarihinde şunu yazdı:\n> >>\n> >> 6. LogicalRepApplyLoop\n> >>\n> >> + /*\n> >> + * apply_dispatch() may have gone into apply_handle_commit()\n> >> + * which can call process_syncing_tables_for_sync.\n> >> + *\n> >> + * process_syncing_tables_for_sync decides whether the sync of\n> >> + * the current table is completed. If it is completed,\n> >> + * streaming must be already ended. 
So, we can break the loop.\n> >> + */\n> >> + if (MyLogicalRepWorker->is_sync_completed)\n> >> + {\n> >> + endofstream = true;\n> >> + break;\n> >> + }\n> >> +\n> >>\n> >> and\n> >>\n> >> + /*\n> >> + * If is_sync_completed is true, this means that the tablesync\n> >> + * worker is done with synchronization. Streaming has already been\n> >> + * ended by process_syncing_tables_for_sync. We should move to the\n> >> + * next table if needed, or exit.\n> >> + */\n> >> + if (MyLogicalRepWorker->is_sync_completed)\n> >> + endofstream = true;\n> >>\n> >> ~\n> >>\n> >> Instead of those code fragments above assigning 'endofstream' as a\n> >> side-effect, would it be the same (but tidier) to just modify the\n> >> other \"breaking\" condition below:\n> >>\n> >> BEFORE:\n> >> /* Check if we need to exit the streaming loop. */\n> >> if (endofstream)\n> >> break;\n> >>\n> >> AFTER:\n> >> /* Check if we need to exit the streaming loop. */\n> >> if (endofstream || MyLogicalRepWorker->is_sync_completed)\n> >> break;\n> >\n> >\n> > First place you mentioned also breaks the infinite loop. Such an if statement is needed there with or without endofstream assignment.\n> >\n> > I think if there is a flag to break a loop, using that flag to indicate that we should exit the loop seems more appropriate to me. I see that it would be a bit tidier without endofstream = true lines, but I feel like it would also be less readable.\n> >\n> > I don't have a strong opinion though. I'm just keeping them as they are for now, but I can change them if you disagree.\n> >\n>\n> I felt it was slightly sneaky to re-use the existing variable as a\n> convenient way to do what you want. 
But, I don’t feel strongly enough\n> on this point to debate it -- maybe see later if others have an\n> opinion about this.\n>\n\nI feel it is okay to use the existing variable 'endofstream' here but\nshall we have an assertion that it is a tablesync worker?\n\n> >>\n> >>\n> >> 10b.\n> >> All the other tablesync-related fields of this struct are named as\n> >> relXXX, so I wonder if is better for this to follow the same pattern.\n> >> e.g. 'relsync_completed'\n> >\n> >\n> > Aren't those start with rel because they're related to the relation that the tablesync worker is syncing? is_sync_completed is not a relation specific field. I'm okay with changing the name but feel like relsync_completed would be misleading.\n>\n> My reading of the code is slightly different: Only these fields have\n> the prefix ‘rel’ and they are all grouped under the comment “/* Used\n> for initial table synchronization. */” because AFAIK only these fields\n> are TWS specific (not used for other kinds of workers).\n>\n> Since this new flag field is also TWS-specific, therefore IMO it\n> should follow the same consistent name pattern. But, if you are\n> unconvinced, maybe see later if others have an opinion about it.\n>\n\n+1 to use the prefix 'rel' here as the sync is specific to the\nrelation. Even during apply phase, we will apply the relation-specific\nchanges. See should_apply_changes_for_rel().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Jul 2023 09:29:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi, I had a look at the latest 00003 patch (v20-0003).\n\nAlthough this patch was recently modified, the updates are mostly only\nto make it compatible with the updated v20-0002 patch. 
Specifically,\nthe v20-0003 updates did not yet address my review comments from\nv17-0003 [1].\n\nAnyway, this post is just a reminder so the earlier review doesn't get\nforgotten.\n\n------\n[1] v17-0003 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPuMAiO_X_Kw6ud-jr5WOm%2Brpkdu7CppDU6mu%3DgY7UVMzQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 20 Jul 2023 14:10:24 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Peter,\n\nPeter Smith <smithpb2250@gmail.com>, 20 Tem 2023 Per, 07:10 tarihinde şunu\nyazdı:\n\n> Hi, I had a look at the latest 00003 patch (v20-0003).\n>\n> Although this patch was recently modified, the updates are mostly only\n> to make it compatible with the updated v20-0002 patch. Specifically,\n> the v20-0003 updates did not yet address my review comments from\n> v17-0003 [1].\n>\n\nYes, I only addressed your reviews for 0001 and 0002, and rebased 0003 in\nlatest patches as stated here [1].\n\nI'll update the patch soon according to recent reviews, including yours for\n0003.\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAGPVpCTvALKEXe0%3DN-%2BiMmVxVQ-%2BP8KZ_1qQ1KsSSZ-V9wJ5hw%40mail.gmail.com\n\nThanks for the reminder.\n-- \nMelih Mutlu\nMicrosoft\n\nHi Peter,Peter Smith <smithpb2250@gmail.com>, 20 Tem 2023 Per, 07:10 tarihinde şunu yazdı:Hi, I had a look at the latest 00003 patch (v20-0003).\n\nAlthough this patch was recently modified, the updates are mostly only\nto make it compatible with the updated v20-0002 patch. 
Specifically,\nthe v20-0003 updates did not yet address my review comments from\nv17-0003 [1].Yes, I only addressed your reviews for 0001 and 0002, and rebased 0003 in latest patches as stated here [1].I'll update the patch soon according to recent reviews, including yours for 0003.[1]  https://www.postgresql.org/message-id/CAGPVpCTvALKEXe0%3DN-%2BiMmVxVQ-%2BP8KZ_1qQ1KsSSZ-V9wJ5hw%40mail.gmail.comThanks for the reminder.-- Melih MutluMicrosoft", "msg_date": "Thu, 20 Jul 2023 11:38:02 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nPeter Smith <smithpb2250@gmail.com>, 20 Tem 2023 Per, 05:41 tarihinde şunu\nyazdı:\n\n> 7. InitializeLogRepWorker\n>\n> if (am_tablesync_worker())\n> ereport(LOG,\n> - (errmsg(\"logical replication worker for subscription \\\"%s\\\", table\n> \\\"%s\\\" has started\",\n> + (errmsg(\"logical replication worker for subscription \\\"%s\\\", table\n> \\\"%s\\\" with relid %u has started\",\n> MySubscription->name,\n> - get_rel_name(MyLogicalRepWorker->relid))));\n> + get_rel_name(MyLogicalRepWorker->relid),\n> + MyLogicalRepWorker->relid)));\n>\n> But this is certainly a tablesync worker so the message here should\n> say \"logical replication table synchronization worker\" like the HEAD\n> code used to do.\n>\n> It seems this mistake was introduced in patch v20-0001.\n>\n\nI'm a bit confused here. Isn't it decided to use \"logical replication\nworker\" regardless of the worker's type [1]. That's why I made this change.\nIf that's not the case here, I'll put it back.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAHut%2BPt1xwATviPGjjtJy5L631SGf3qjV9XUCmxLu16cHamfgg%40mail.gmail.com\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nHi,Peter Smith <smithpb2250@gmail.com>, 20 Tem 2023 Per, 05:41 tarihinde şunu yazdı:\n7. 
InitializeLogRepWorker\n\n  if (am_tablesync_worker())\n  ereport(LOG,\n- (errmsg(\"logical replication worker for subscription \\\"%s\\\", table\n\\\"%s\\\" has started\",\n+ (errmsg(\"logical replication worker for subscription \\\"%s\\\", table\n\\\"%s\\\" with relid %u has started\",\n  MySubscription->name,\n- get_rel_name(MyLogicalRepWorker->relid))));\n+ get_rel_name(MyLogicalRepWorker->relid),\n+ MyLogicalRepWorker->relid)));\n\nBut this is certainly a tablesync worker so the message here should\nsay \"logical replication table synchronization worker\" like the HEAD\ncode used to do.\n\nIt seems this mistake was introduced in patch v20-0001.\n\n\nI'm a bit confused here. Isn't it decided to use \"logical replication\nworker\" regardless of the worker's type [1]. That's why I made this change.\nIf that's not the case here, I'll put it back.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAHut%2BPt1xwATviPGjjtJy5L631SGf3qjV9XUCmxLu16cHamfgg%40mail.gmail.com\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft
", "msg_date": "Thu, 20 Jul 2023 14:42:29 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Jul 20, 2023 at 5:12 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com>, 20 Tem 2023 Per, 05:41 tarihinde şunu yazdı:\n>>\n>> 7. InitializeLogRepWorker\n>>\n>> if (am_tablesync_worker())\n>> ereport(LOG,\n>> - (errmsg(\"logical replication worker for subscription \\\"%s\\\", table\n>> \\\"%s\\\" has started\",\n>> + (errmsg(\"logical replication worker for subscription \\\"%s\\\", table\n>> \\\"%s\\\" with relid %u has started\",\n>> MySubscription->name,\n>> - get_rel_name(MyLogicalRepWorker->relid))));\n>> + get_rel_name(MyLogicalRepWorker->relid),\n>> + MyLogicalRepWorker->relid)));\n>>\n>> But this is certainly a tablesync worker so the message here should\n>> say \"logical replication table synchronization worker\" like the HEAD\n>> code used to do.\n>>\n>> It seems this mistake was introduced in patch v20-0001.\n>\n>\n> I'm a bit confused here. Isn't it decided to use \"logical replication worker\" regardless of the worker's type [1]. That's why I made this change. If that's not the case here, I'll put it back.\n>\n\nI feel where the worker type is clear, it is better to use it unless\nthe same can lead to translation issues.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Jul 2023 18:07:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nAttached the updated patches with recent reviews addressed.\n\nSee below for my comments:\n\nPeter Smith <smithpb2250@gmail.com>, 19 Tem 2023 Çar, 06:08 tarihinde şunu\nyazdı:\n\n> Some review comments for v19-0001\n>\n> 2. LogicalRepSyncTableStart\n>\n> /*\n> * Finally, wait until the leader apply worker tells us to catch up and\n> * then return to let LogicalRepApplyLoop do it.\n> */\n> wait_for_worker_state_change(SUBREL_STATE_CATCHUP);\n>\n> ~\n>\n> Should LogicalRepApplyLoop still be mentioned here, since that is\n> static in worker.c? Maybe it is better to refer instead to the common\n> 'start_apply' wrapper? 
(see also #5a below)\n\n\nIsn't LogicalRepApplyLoop static on HEAD and also mentioned in the exact\ncomment in tablesync.c while the common \"start_apply\" function also exists?\nI'm not sure how such a change would be related to this patch.\n\n---\n\n5.\n> + /* Found a table for next iteration */\n> + finish_sync_worker(true);\n> +\n> + StartTransactionCommand();\n> + ereport(LOG,\n> + (errmsg(\"logical replication worker for subscription \\\"%s\\\" will be\n> reused to sync table \\\"%s\\\" with relid %u.\",\n> + MySubscription->name,\n> + get_rel_name(MyLogicalRepWorker->relid),\n> + MyLogicalRepWorker->relid)));\n> + CommitTransactionCommand();\n> +\n> + done = false;\n> + break;\n> + }\n> + LWLockRelease(LogicalRepWorkerLock);\n\n\n> 5b.\n> Isn't there a missing call to that LWLockRelease, if the 'break' happens?\n\n\nLock is already released before break, if that's the lock you meant:\n\n> /* Update worker state for the next table */\n> MyLogicalRepWorker->relid = rstate->relid;\n> MyLogicalRepWorker->relstate = rstate->state;\n> MyLogicalRepWorker->relstate_lsn = rstate->lsn;\n> LWLockRelease(LogicalRepWorkerLock);\n\n\n> /* Found a table for next iteration */\n> finish_sync_worker(true);\n> done = false;\n> break;\n\n\n---\n\n2.\n> As for the publisher node, this patch allows to reuse logical\n> walsender processes\n> after the streaming is done once.\n\n\n> ~\n\n\n> Is this paragraph even needed? 
Since the connection is reused then it\n> already implies the other end (the Walsender) is being reused, right?\n\n\nI actually see no harm in explaining this explicitly.\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Thu, 20 Jul 2023 16:40:47 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Jul 20, 2023 at 11:41 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Attached the updated patches with recent reviews addressed.\n>\n> See below for my comments:\n>\n> Peter Smith <smithpb2250@gmail.com>, 19 Tem 2023 Çar, 06:08 tarihinde şunu yazdı:\n>>\n>> Some review comments for v19-0001\n>>\n>> 2. LogicalRepSyncTableStart\n>>\n>> /*\n>> * Finally, wait until the leader apply worker tells us to catch up and\n>> * then return to let LogicalRepApplyLoop do it.\n>> */\n>> wait_for_worker_state_change(SUBREL_STATE_CATCHUP);\n>>\n>> ~\n>>\n>> Should LogicalRepApplyLoop still be mentioned here, since that is\n>> static in worker.c? Maybe it is better to refer instead to the common\n>> 'start_apply' wrapper? (see also #5a below)\n>\n>\n> Isn't LogicalRepApplyLoop static on HEAD and also mentioned in the exact comment in tablesync.c while the common \"start_apply\" function also exists? I'm not sure how such a change would be related to this patch.\n>\n\nFair enough. I thought it was questionable for one module to refer to\nanother module's static functions, but you are correct - it is not\nreally related to your patch. 
Sorry for the noise.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 21 Jul 2023 11:48:48 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Some review comments for v21-0001\n\n======\nsrc/backend/replication/logical/worker.c\n\n1. InitializeLogRepWorker\n\n if (am_tablesync_worker())\n ereport(LOG,\n- (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", table \\\"%s\\\" has started\",\n+ (errmsg(\"logical replication worker for subscription \\\"%s\\\", table\n\\\"%s\\\" has started\",\n MySubscription->name,\n get_rel_name(MyLogicalRepWorker->relid))));\n\nI think this should not be changed. IIUC that decision for using the\ngeneric worker name for translations was only when the errmsg was in\nshared code where the worker type was not clear from existing\nconditions. See also previous review comments [1].\n\n~~~\n\n2. StartLogRepWorker\n\n/* Common function to start the leader apply or tablesync worker. */\nvoid\nStartLogRepWorker(int worker_slot)\n{\n/* Attach to slot */\nlogicalrep_worker_attach(worker_slot);\n\n/* Setup signal handling */\npqsignal(SIGHUP, SignalHandlerForConfigReload);\npqsignal(SIGTERM, die);\nBackgroundWorkerUnblockSignals();\n\n/*\n* We don't currently need any ResourceOwner in a walreceiver process, but\n* if we did, we could call CreateAuxProcessResourceOwner here.\n*/\n\n/* Initialise stats to a sanish value */\nMyLogicalRepWorker->last_send_time = MyLogicalRepWorker->last_recv_time =\nMyLogicalRepWorker->reply_time = GetCurrentTimestamp();\n\n/* Load the libpq-specific functions */\nload_file(\"libpqwalreceiver\", false);\n\nInitializeLogRepWorker();\n\n/* Connect to the origin and start the replication. 
*/\nelog(DEBUG1, \"connecting to publisher using connection string \\\"%s\\\"\",\nMySubscription->conninfo);\n\n/*\n* Setup callback for syscache so that we know when something changes in\n* the subscription relation state.\n*/\nCacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,\n invalidate_syncing_table_states,\n (Datum) 0);\n}\n\n~\n\n2a.\nThe function name seems a bit misleading because it is not really\n\"starting\" anything here - it is just more \"initialization\" code,\nright? Nor is it common to all kinds of LogRepWorker. Maybe the\nfunction could be named something else like 'InitApplyOrSyncWorker()'.\n-- see also #2c\n\n~\n\n2b.\nShould this have Assert to ensure this is only called from leader\napply or tablesync? -- see also #2c\n\n~\n\n2c.\nIMO maybe the best/tidiest way to do this is not to introduce a new\nfunction at all. Instead, just put all this \"common init\" code into\nthe existing \"common init\" function ('InitializeLogRepWorker') and\nexecute it only if (am_tablesync_worker() || am_leader_apply_worker())\n{ }.\n\n======\nsrc/include/replication/worker_internal.h\n\n3.\n extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,\n XLogRecPtr remote_lsn);\n+extern void set_stream_options(WalRcvStreamOptions *options,\n+ char *slotname,\n+ XLogRecPtr *origin_startpos);\n+\n+extern void start_apply(XLogRecPtr origin_startpos);\n+extern void DisableSubscriptionAndExit(void);\n+extern void StartLogRepWorker(int worker_slot);\n\nThis placement (esp. 
with the missing whitespace) seems to be grouping\nthe set_stream_options with the other 'pa' externs, which are all\nunder the comment \"/* Parallel apply worker setup and interactions\n*/\".\n\nPutting all these up near the other \"extern void\nInitializeLogRepWorker(void)\" might be less ambiguous.\n\n------\n[1] worker name in errmsg -\nhttps://www.postgresql.org/message-id/CAA4eK1%2B%2BwkxxMjsPh-z2aKa9ZjNhKsjv0Tnw%2BTVX-hCBkDHusw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 21 Jul 2023 12:00:01 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Some review comments for v21-0002.\n\nOn Thu, Jul 20, 2023 at 11:41 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Attached the updated patches with recent reviews addressed.\n>\n> See below for my comments:\n>\n> Peter Smith <smithpb2250@gmail.com>, 19 Tem 2023 Çar, 06:08 tarihinde şunu yazdı:\n>>\n>> 5.\n>> + /* Found a table for next iteration */\n>> + finish_sync_worker(true);\n>> +\n>> + StartTransactionCommand();\n>> + ereport(LOG,\n>> + (errmsg(\"logical replication worker for subscription \\\"%s\\\" will be\n>> reused to sync table \\\"%s\\\" with relid %u.\",\n>> + MySubscription->name,\n>> + get_rel_name(MyLogicalRepWorker->relid),\n>> + MyLogicalRepWorker->relid)));\n>> + CommitTransactionCommand();\n>> +\n>> + done = false;\n>> + break;\n>> + }\n>> + LWLockRelease(LogicalRepWorkerLock);\n>>\n>>\n>> 5b.\n>> Isn't there a missing call to that LWLockRelease, if the 'break' happens?\n>\n>\n> Lock is already released before break, if that's the lock you meant:\n>\n>> /* Update worker state for the next table */\n>> MyLogicalRepWorker->relid = rstate->relid;\n>> MyLogicalRepWorker->relstate = rstate->state;\n>> MyLogicalRepWorker->relstate_lsn = rstate->lsn;\n>> 
LWLockRelease(LogicalRepWorkerLock);\n>>\n>>\n>> /* Found a table for next iteration */\n>> finish_sync_worker(true);\n>> done = false;\n>> break;\n>\n>\n\nSorry, I misread the code. You are right.\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n1.\n+ if (!reuse_worker)\n+ {\n+ ereport(LOG,\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" has finished\",\n+ MySubscription->name)));\n+ }\n+ else\n+ {\n+ ereport(LOG,\n+ (errmsg(\"logical replication worker for subscription \\\"%s\\\" will be\nreused to sync table \\\"%s\\\" with relid %u.\",\n+ MySubscription->name,\n+ get_rel_name(MyLogicalRepWorker->relid),\n+ MyLogicalRepWorker->relid)));\n+ }\n\n1a.\nWe know this must be a tablesync worker, so I think that second errmsg\nshould also be saying \"logical replication table synchronization\nworker\".\n\n~\n\n1b.\nSince this is if/else anyway, is it simpler to be positive and say \"if\n(reuse_worker)\" instead of the negative \"if (!reuse_worker)\"\n\n~~~\n\n2. run_tablesync_worker\n {\n+ MyLogicalRepWorker->relsync_completed = false;\n+\n+ /* Start table synchronization. */\n start_table_sync(origin_startpos, &slotname);\n\nThis still contains the added comment that I'd previously posted I\nthought wasn't adding anything useful. Also, I didn't think this comment\nexists in the HEAD code.\n\n======\nsrc/backend/replication/logical/worker.c\n\n3. LogicalRepApplyLoop\n\n+ /*\n+ * apply_dispatch() may have gone into apply_handle_commit()\n+ * which can call process_syncing_tables_for_sync.\n+ *\n+ * process_syncing_tables_for_sync decides whether the sync of\n+ * the current table is completed. If it is completed,\n+ * streaming must be already ended. 
So, we can break the loop.\n+ */\n+ if (am_tablesync_worker() &&\n+ MyLogicalRepWorker->relsync_completed)\n+ {\n+ endofstream = true;\n+ break;\n+ }\n+\n\nMaybe just personal taste, but IMO it is better to rearrange like\nbelow because then there is no reason to read the long comment except\nfor tablesync workers.\n\nif (am_tablesync_worker())\n{\n /*\n * apply_dispatch() may have gone into apply_handle_commit()\n * which can call process_syncing_tables_for_sync.\n *\n * process_syncing_tables_for_sync decides whether the sync of\n * the current table is completed. If it is completed,\n * streaming must be already ended. So, we can break the loop.\n */\n if (MyLogicalRepWorker->relsync_completed)\n {\n endofstream = true;\n break;\n }\n}\n\n~~~\n\n4. LogicalRepApplyLoop\n\n+\n+ /*\n+ * If relsync_completed is true, this means that the tablesync\n+ * worker is done with synchronization. Streaming has already been\n+ * ended by process_syncing_tables_for_sync. We should move to the\n+ * next table if needed, or exit.\n+ */\n+ if (am_tablesync_worker() &&\n+ MyLogicalRepWorker->relsync_completed)\n+ endofstream = true;\n\nDitto the same comment about rearranging the condition, as #3 above.\n\n======\nsrc/include/replication/worker_internal.h\n\n5.\n+ /*\n+ * Indicates whether tablesync worker has completed syncing its assigned\n+ * table.\n+ */\n+ bool relsync_completed;\n+\n\nIsn't it better to arrange this to be adjacent to other relXXX fields,\nso they all clearly belong to that \"Used for initial table\nsynchronization.\" group?\n\nFor example, something like:\n\n/* Used for initial table synchronization. */\nOid relid;\nchar relstate;\nXLogRecPtr relstate_lsn;\nslock_t relmutex;\nbool relsync_completed; /* has tablesync finished syncing\nthe assigned table? 
*/\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 21 Jul 2023 14:09:15 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jul 21, 2023 at 7:30 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> ~~~\n>\n> 2. StartLogRepWorker\n>\n> /* Common function to start the leader apply or tablesync worker. */\n> void\n> StartLogRepWorker(int worker_slot)\n> {\n> /* Attach to slot */\n> logicalrep_worker_attach(worker_slot);\n>\n> /* Setup signal handling */\n> pqsignal(SIGHUP, SignalHandlerForConfigReload);\n> pqsignal(SIGTERM, die);\n> BackgroundWorkerUnblockSignals();\n>\n> /*\n> * We don't currently need any ResourceOwner in a walreceiver process, but\n> * if we did, we could call CreateAuxProcessResourceOwner here.\n> */\n>\n> /* Initialise stats to a sanish value */\n> MyLogicalRepWorker->last_send_time = MyLogicalRepWorker->last_recv_time =\n> MyLogicalRepWorker->reply_time = GetCurrentTimestamp();\n>\n> /* Load the libpq-specific functions */\n> load_file(\"libpqwalreceiver\", false);\n>\n> InitializeLogRepWorker();\n>\n> /* Connect to the origin and start the replication. */\n> elog(DEBUG1, \"connecting to publisher using connection string \\\"%s\\\"\",\n> MySubscription->conninfo);\n>\n> /*\n> * Setup callback for syscache so that we know when something changes in\n> * the subscription relation state.\n> */\n> CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,\n> invalidate_syncing_table_states,\n> (Datum) 0);\n> }\n>\n> ~\n>\n> 2a.\n> The function name seems a bit misleading because it is not really\n> \"starting\" anything here - it is just more \"initialization\" code,\n> right? Nor is it common to all kinds of LogRepWorker. Maybe the\n> function could be named something else like 'InitApplyOrSyncWorker()'.\n> -- see also #2c\n>\n\nHow about SetupLogRepWorker? 
The other thing I noticed is that we\ndon't seem to be consistent in naming functions in these files. For\nexample, shall we make all exposed functions follow camel case (like\nInitializeLogRepWorker) and static functions follow _ style (like\nrun_apply_worker) or the other possibility is to use _ style for all\nfunctions except may be the entry functions like ApplyWorkerMain()? I\ndon't know if there is already a pattern but if not then let's form it\nnow, so that code looks consistent.\n\n> ~\n>\n> 2b.\n> Should this have Assert to ensure this is only called from leader\n> apply or tablesync? -- see also #2c\n>\n> ~\n>\n> 2c.\n> IMO maybe the best/tidiest way to do this is not to introduce a new\n> function at all. Instead, just put all this \"common init\" code into\n> the existing \"common init\" function ('InitializeLogRepWorker') and\n> execute it only if (am_tablesync_worker() || am_leader_apply_worker())\n> { }.\n>\n\nI don't like 2c much because it will make InitializeLogRepWorker()\nhave two kinds of initializations.\n\n> ======\n> src/include/replication/worker_internal.h\n>\n> 3.\n> extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,\n> XLogRecPtr remote_lsn);\n> +extern void set_stream_options(WalRcvStreamOptions *options,\n> + char *slotname,\n> + XLogRecPtr *origin_startpos);\n> +\n> +extern void start_apply(XLogRecPtr origin_startpos);\n> +extern void DisableSubscriptionAndExit(void);\n> +extern void StartLogRepWorker(int worker_slot);\n>\n> This placement (esp. with the missing whitespace) seems to be grouping\n> the set_stream_options with the other 'pa' externs, which are all\n> under the comment \"/* Parallel apply worker setup and interactions\n> */\".\n>\n> Putting all these up near the other \"extern void\n> InitializeLogRepWorker(void)\" might be less ambiguous.\n>\n\n+1. 
Also, note that they should be in the same order as they are in .c files.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 21 Jul 2023 11:09:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jul 21, 2023 at 3:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 21, 2023 at 7:30 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > ~~~\n> >\n> > 2. StartLogRepWorker\n> >\n> > /* Common function to start the leader apply or tablesync worker. */\n> > void\n> > StartLogRepWorker(int worker_slot)\n> > {\n> > /* Attach to slot */\n> > logicalrep_worker_attach(worker_slot);\n> >\n> > /* Setup signal handling */\n> > pqsignal(SIGHUP, SignalHandlerForConfigReload);\n> > pqsignal(SIGTERM, die);\n> > BackgroundWorkerUnblockSignals();\n> >\n> > /*\n> > * We don't currently need any ResourceOwner in a walreceiver process, but\n> > * if we did, we could call CreateAuxProcessResourceOwner here.\n> > */\n> >\n> > /* Initialise stats to a sanish value */\n> > MyLogicalRepWorker->last_send_time = MyLogicalRepWorker->last_recv_time =\n> > MyLogicalRepWorker->reply_time = GetCurrentTimestamp();\n> >\n> > /* Load the libpq-specific functions */\n> > load_file(\"libpqwalreceiver\", false);\n> >\n> > InitializeLogRepWorker();\n> >\n> > /* Connect to the origin and start the replication. 
*/\n> > elog(DEBUG1, \"connecting to publisher using connection string \\\"%s\\\"\",\n> > MySubscription->conninfo);\n> >\n> > /*\n> > * Setup callback for syscache so that we know when something changes in\n> > * the subscription relation state.\n> > */\n> > CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,\n> > invalidate_syncing_table_states,\n> > (Datum) 0);\n> > }\n> >\n> > ~\n> >\n> > 2a.\n> > The function name seems a bit misleading because it is not really\n> > \"starting\" anything here - it is just more \"initialization\" code,\n> > right? Nor is it common to all kinds of LogRepWorker. Maybe the\n> > function could be named something else like 'InitApplyOrSyncWorker()'.\n> > -- see also #2c\n> >\n>\n> How about SetupLogRepWorker?\n\nThe name is better than StartXXX, but still, SetupXXX seems a synonym\nof InitXXX. That is why I thought it is a bit awkward having 2\nfunctions with effectively the same name and the same\ninitialization/setup purpose (the only difference is one function\nexcludes parallel workers, and the other function is common to all\nworkers).\n\n> The other thing I noticed is that we\n> don't seem to be consistent in naming functions in these files. For\n> example, shall we make all exposed functions follow camel case (like\n> InitializeLogRepWorker) and static functions follow _ style (like\n> run_apply_worker) or the other possibility is to use _ style for all\n> functions except may be the entry functions like ApplyWorkerMain()? 
I\n> don't know if there is already a pattern but if not then let's form it\n> now, so that code looks consistent.\n>\n\n+1 for using some consistent rule, but I think this may result in\n*many* changes, so it would be safer to itemize all the changes first,\njust to make sure everybody is OK with it first before updating\neverything.\n\n------\nKind Regards,\nPeter Smith\n\n\n", "msg_date": "Fri, 21 Jul 2023 16:34:40 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jul 21, 2023 at 12:05 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Jul 21, 2023 at 3:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 21, 2023 at 7:30 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > ~~~\n> > >\n> > > 2. StartLogRepWorker\n> > >\n> > > /* Common function to start the leader apply or tablesync worker. */\n> > > void\n> > > StartLogRepWorker(int worker_slot)\n> > > {\n> > > /* Attach to slot */\n> > > logicalrep_worker_attach(worker_slot);\n> > >\n> > > /* Setup signal handling */\n> > > pqsignal(SIGHUP, SignalHandlerForConfigReload);\n> > > pqsignal(SIGTERM, die);\n> > > BackgroundWorkerUnblockSignals();\n> > >\n> > > /*\n> > > * We don't currently need any ResourceOwner in a walreceiver process, but\n> > > * if we did, we could call CreateAuxProcessResourceOwner here.\n> > > */\n> > >\n> > > /* Initialise stats to a sanish value */\n> > > MyLogicalRepWorker->last_send_time = MyLogicalRepWorker->last_recv_time =\n> > > MyLogicalRepWorker->reply_time = GetCurrentTimestamp();\n> > >\n> > > /* Load the libpq-specific functions */\n> > > load_file(\"libpqwalreceiver\", false);\n> > >\n> > > InitializeLogRepWorker();\n> > >\n> > > /* Connect to the origin and start the replication. 
*/\n> > > elog(DEBUG1, \"connecting to publisher using connection string \\\"%s\\\"\",\n> > > MySubscription->conninfo);\n> > >\n> > > /*\n> > > * Setup callback for syscache so that we know when something changes in\n> > > * the subscription relation state.\n> > > */\n> > > CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,\n> > > invalidate_syncing_table_states,\n> > > (Datum) 0);\n> > > }\n> > >\n> > > ~\n> > >\n> > > 2a.\n> > > The function name seems a bit misleading because it is not really\n> > > \"starting\" anything here - it is just more \"initialization\" code,\n> > > right? Nor is it common to all kinds of LogRepWorker. Maybe the\n> > > function could be named something else like 'InitApplyOrSyncWorker()'.\n> > > -- see also #2c\n> > >\n> >\n> > How about SetupLogRepWorker?\n>\n> The name is better than StartXXX, but still, SetupXXX seems a synonym\n> of InitXXX. That is why I thought it is a bit awkward having 2\n> functions with effectively the same name and the same\n> initialization/setup purpose (the only difference is one function\n> excludes parallel workers, and the other function is common to all\n> workers).\n>\n\nI can't think of a better way. We can probably name it as\nSetupApplyOrSyncWorker or something like that if you find that better.\n\n> > The other thing I noticed is that we\n> > don't seem to be consistent in naming functions in these files. For\n> > example, shall we make all exposed functions follow camel case (like\n> > InitializeLogRepWorker) and static functions follow _ style (like\n> > run_apply_worker) or the other possibility is to use _ style for all\n> > functions except may be the entry functions like ApplyWorkerMain()? 
I\n> > don't know if there is already a pattern but if not then let's form it\n> > now, so that code looks consistent.\n> >\n>\n> +1 for using some consistent rule, but I think this may result in\n> *many* changes, so it would be safer to itemize all the changes first,\n> just to make sure everybody is OK with it first before updating\n> everything.\n>\n\nFair enough. We can do that as a first patch and then work on the\nrefactoring patch to avoid introducing more inconsistencies or we can\ndo the refactoring patch first but keep all the new function names to\nfollow _ style.\n\nApart from this, few more comments on 0001:\n1.\n+run_apply_worker(WalRcvStreamOptions *options,\n+ char *slotname,\n+ char *originname,\n+ int originname_size,\n+ XLogRecPtr *origin_startpos)\n\nThe caller neither uses nor passes the value of origin_startpos. So,\nisn't it better to make origin_startpos local to run_apply_worker()?\nIt seems the same is true for some of the other parameters slotname,\noriginname, originname_size. Is there a reason to keep these as\narguments in this function?\n\n2.\n+static void\n+run_tablesync_worker(WalRcvStreamOptions *options,\n+ char *slotname,\n+ char *originname,\n+ int originname_size,\n+ XLogRecPtr *origin_startpos)\n\nThe comments in the previous point seem to apply to this as well.\n\n3.\n+ set_stream_options(options, slotname, origin_startpos);\n+\n+ walrcv_startstreaming(LogRepWorkerWalRcvConn, options);\n+\n+ if (MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING &&\n+ AllTablesyncsReady())\n\nThis last check is done in set_stream_options() and here as well. I\ndon't see any reason to give different answers at both places but\nbefore the patch, we were not relying on any such assumption that this\ncheck will always give the same answer considering the answer could be\ndifferent due to AllTablesyncsReady(). 
Can we move this check outside\nset_stream_options()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 21 Jul 2023 12:54:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com>, 21 Tem 2023 Cum, 08:39 tarihinde\nşunu yazdı:\n\n> On Fri, Jul 21, 2023 at 7:30 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> How about SetupLogRepWorker? The other thing I noticed is that we\n> don't seem to be consistent in naming functions in these files. For\n> example, shall we make all exposed functions follow camel case (like\n> InitializeLogRepWorker) and static functions follow _ style (like\n> run_apply_worker) or the other possibility is to use _ style for all\n> functions except may be the entry functions like ApplyWorkerMain()? I\n> don't know if there is already a pattern but if not then let's form it\n> now, so that code looks consistent.\n>\n\nI agree that these files have inconsistencies in naming things.\nMost of the time I can't really figure out which naming convention I should\nuse. I try to name things by looking at other functions with similar\nresponsibilities.\n\n\n> 3.\n> > extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,\n> > XLogRecPtr remote_lsn);\n> > +extern void set_stream_options(WalRcvStreamOptions *options,\n> > + char *slotname,\n> > + XLogRecPtr *origin_startpos);\n> > +\n> > +extern void start_apply(XLogRecPtr origin_startpos);\n> > +extern void DisableSubscriptionAndExit(void);\n> > +extern void StartLogRepWorker(int worker_slot);\n> >\n> > This placement (esp. 
with the missing whitespace) seems to be grouping\n> > the set_stream_options with the other 'pa' externs, which are all\n> > under the comment \"/* Parallel apply worker setup and interactions\n> > */\".\n> >\n> > Putting all these up near the other \"extern void\n> > InitializeLogRepWorker(void)\" might be less ambiguous.\n> >\n>\n> +1. Also, note that they should be in the same order as they are in .c\n> files.\n>\n\nI did not realize the order is the same with .c files. Good to know. I'll\nfix it along with other comments.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft
", "msg_date": "Fri, 21 Jul 2023 12:47:46 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jul 21, 2023 at 5:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 21, 2023 at 12:05 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Jul 21, 2023 at 3:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n\n> > > The other thing I noticed is that we\n> > > don't seem to be consistent in naming functions in these files. For\n> > > example, shall we make all exposed functions follow camel case (like\n> > > InitializeLogRepWorker) and static functions follow _ style (like\n> > > run_apply_worker) or the other possibility is to use _ style for all\n> > > functions except may be the entry functions like ApplyWorkerMain()? I\n> > > don't know if there is already a pattern but if not then let's form it\n> > > now, so that code looks consistent.\n> > >\n> >\n> > +1 for using some consistent rule, but I think this may result in\n> > *many* changes, so it would be safer to itemize all the changes first,\n> > just to make sure everybody is OK with it first before updating\n> > everything.\n> >\n>\n> Fair enough. 
We can do that as a first patch and then work on the\n> refactoring patch to avoid introducing more inconsistencies or we can\n> do the refactoring patch first but keep all the new function names to\n> follow _ style.\n>\n\nFixing the naming inconsistency will be more far-reaching than just a\nfew functions affected by these \"reuse\" patches. There are plenty of\nexisting functions already inconsistently named in the HEAD code. So\nperhaps this topic should be moved to a separate thread?\n\nFor example, here are some existing/proposed names:\n\n===\n\nworker.c (HEAD)\n\nstatic functions\n DisableSubscriptionAndExit -> disable_subscription_and_exit\n FindReplTupleInLocalRel -> find_repl_tuple_in_local_rel\n TwoPhaseTransactionGid -> two_phase_transaction_gid\n TargetPrivilegesCheck -> target_privileges_check\n UpdateWorkerStats -> update_worker_stats\n LogicalRepApplyLoop -> logical_rep_apply_loop\n\nnon-static functions\n stream_stop_internal -> StreamStopInternal\n apply_spooled_messages -> ApplySpooledMessages\n apply_dispatch -> ApplyDispatch\n store_flush_position -> StoreFlushPosition\n set_apply_error_context_origin -> SetApplyErrorContextOrigin\n\n===\n\ntablesync.c (HEAD)\n\nstatic functions\n FetchTableStates -> fetch_table_states\n\nnon-static functions\n invalidate_syncing_table_states -> InvalidateSyncingTableStates\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 21 Jul 2023 19:48:07 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com>, 21 Tem 2023 Cum, 12:48 tarihinde şunu\nyazdı:\n\n> On Fri, Jul 21, 2023 at 5:24 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Fri, Jul 21, 2023 at 12:05 PM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> > >\n> > > On Fri, Jul 21, 2023 at 3:39 PM Amit Kapila <amit.kapila16@gmail.com>\n> 
wrote:\n> > > >\n>\n> > > > The other thing I noticed is that we\n> > > > don't seem to be consistent in naming functions in these files. For\n> > > > example, shall we make all exposed functions follow camel case (like\n> > > > InitializeLogRepWorker) and static functions follow _ style (like\n> > > > run_apply_worker) or the other possibility is to use _ style for all\n> > > > functions except may be the entry functions like ApplyWorkerMain()? I\n> > > > don't know if there is already a pattern but if not then let's form\n> it\n> > > > now, so that code looks consistent.\n> > > >\n> > >\n> > > +1 for using some consistent rule, but I think this may result in\n> > > *many* changes, so it would be safer to itemize all the changes first,\n> > > just to make sure everybody is OK with it first before updating\n> > > everything.\n> > >\n> >\n> > Fair enough. We can do that as a first patch and then work on the\n> > refactoring patch to avoid introducing more inconsistencies or we can\n> > do the refactoring patch first but keep all the new function names to\n> > follow _ style.\n> >\n>\n> Fixing the naming inconsistency will be more far-reaching than just a\n> few functions affected by these \"reuse\" patches. There are plenty of\n> existing functions already inconsistently named in the HEAD code. So\n> perhaps this topic should be moved to a separate thread?\n>\n\n+1 for moving it to a separate thread. This is not something particularly\nintroduced by this patch.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nPeter Smith <smithpb2250@gmail.com>, 21 Tem 2023 Cum, 12:48 tarihinde şunu yazdı:On Fri, Jul 21, 2023 at 5:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 21, 2023 at 12:05 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Jul 21, 2023 at 3:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n\n> > > The other thing I noticed is that we\n> > > don't seem to be consistent in naming functions in these files. 
For\n> > > example, shall we make all exposed functions follow camel case (like\n> > > InitializeLogRepWorker) and static functions follow _ style (like\n> > > run_apply_worker) or the other possibility is to use _ style for all\n> > > functions except may be the entry functions like ApplyWorkerMain()? I\n> > > don't know if there is already a pattern but if not then let's form it\n> > > now, so that code looks consistent.\n> > >\n> >\n> > +1 for using some consistent rule, but I think this may result in\n> > *many* changes, so it would be safer to itemize all the changes first,\n> > just to make sure everybody is OK with it first before updating\n> > everything.\n> >\n>\n> Fair enough. We can do that as a first patch and then work on the\n> refactoring patch to avoid introducing more inconsistencies or we can\n> do the refactoring patch first but keep all the new function names to\n> follow _ style.\n>\n\nFixing the naming inconsistency will be more far-reaching than just a\nfew functions affected by these \"reuse\" patches. There are plenty of\nexisting functions already inconsistently named in the HEAD code. So\nperhaps this topic should be moved to a separate thread?+1 for moving it to a separate thread. This is not something particularly introduced by this patch.Thanks,-- Melih MutluMicrosoft", "msg_date": "Fri, 21 Jul 2023 12:51:56 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nMelih Mutlu <m.melihmutlu@gmail.com>, 21 Tem 2023 Cum, 12:47 tarihinde şunu\nyazdı:\n\n> I did not realize the order is the same with .c files. Good to know. 
I'll\n> fix it along with other comments.\n>\n\nAddressed the recent reviews and attached the updated patches.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Tue, 25 Jul 2023 17:57:52 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Here are some comments for patch v22-0001.\n\n======\n1. General -- naming conventions\n\nThere is quite a lot of inconsistency with variable/parameter naming\nstyles in this patch. I understand in most cases the names are copied\nunchanged from the original functions. Still, since this is a big\nrefactor anyway, it can also be a good opportunity to clean up those\ninconsistencies instead of just propagating them to different places.\nIIUC, the usual reluctance to rename things because it would cause\nbackpatch difficulties doesn't apply here (since everything is being\nrefactored anyway).\n\nE.g. Consider using use snake_case names more consistently in the\nfollowing places:\n\n~\n\n1a. start_table_sync\n\n+static void\n+start_table_sync(XLogRecPtr *origin_startpos, char **myslotname)\n+{\n+ char *syncslotname = NULL;\n\norigin_startpos -> (no change)\nmyslotname -> my_slot_name (But, is there a better name for this than\ncalling it \"my\" slot name)\nsyncslotname -> sync_slot_name\n\n~\n\n1b. run_tablesync_worker\n\n+static void\n+run_tablesync_worker()\n+{\n+ char originname[NAMEDATALEN];\n+ XLogRecPtr origin_startpos = InvalidXLogRecPtr;\n+ char *slotname = NULL;\n+ WalRcvStreamOptions options;\n\noriginname -> origin_name\norigin_startpos -> (no change)\nslotname -> slot_name\n\n~\n\n1c. set_stream_options\n\n+void\n+set_stream_options(WalRcvStreamOptions *options,\n+ char *slotname,\n+ XLogRecPtr *origin_startpos)\n+{\n+ int server_version;\n\noptions -> (no change)\nslotname -> slot_name\norigin_startpos -> (no change)\nserver_version -> (no change)\n\n~\n\n1d. 
run_apply_worker\n\n static void\n-start_apply(XLogRecPtr origin_startpos)\n+run_apply_worker()\n {\n- PG_TRY();\n+ char originname[NAMEDATALEN];\n+ XLogRecPtr origin_startpos = InvalidXLogRecPtr;\n+ char *slotname = NULL;\n+ WalRcvStreamOptions options;\n+ RepOriginId originid;\n+ TimeLineID startpointTLI;\n+ char *err;\n+ bool must_use_password;\n\noriginname -> origin_name\norigin_startpos => (no change)\nslotname -> slot_name\noriginid -> origin_id\n\n======\nsrc/backend/replication/logical/worker.c\n\n2. SetupApplyOrSyncWorker\n\n-ApplyWorkerMain(Datum main_arg)\n+SetupApplyOrSyncWorker(int worker_slot)\n {\n- int worker_slot = DatumGetInt32(main_arg);\n- char originname[NAMEDATALEN];\n- XLogRecPtr origin_startpos = InvalidXLogRecPtr;\n- char *myslotname = NULL;\n- WalRcvStreamOptions options;\n- int server_version;\n-\n- InitializingApplyWorker = true;\n-\n /* Attach to slot */\n logicalrep_worker_attach(worker_slot);\n\n+ Assert(am_tablesync_worker() || am_leader_apply_worker());\n+\n\nWhy is the Assert not the very first statement of this function?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 26 Jul 2023 14:40:25 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jul 26, 2023 at 10:10 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some comments for patch v22-0001.\n>\n> ======\n> 1. General -- naming conventions\n>\n> There is quite a lot of inconsistency with variable/parameter naming\n> styles in this patch. I understand in most cases the names are copied\n> unchanged from the original functions. 
Still, since this is a big\n> refactor anyway, it can also be a good opportunity to clean up those\n> inconsistencies instead of just propagating them to different places.\n>\n\nI am not against improving consistency in the naming of existing\nvariables but I feel it would be better to do as a separate patch\nalong with improving the consistency function names. For new\nfunctions/variables, it would be good to follow a consistent style.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Jul 2023 11:12:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Here are some review comments for v22-0002\n\n======\n1. General - errmsg\n\nAFAIK, the errmsg part does not need to be enclosed by extra parentheses.\n\ne.g.\nBEFORE\nereport(LOG,\n(errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" has finished\",\nMySubscription->name)));\nAFTER\nereport(LOG,\nerrmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" has finished\",\nMySubscription->name));\n\n~\n\nThe patch has multiple cases similar to that example.\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n2.\n+ if (reuse_worker)\n+ {\n+ ereport(LOG,\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" will be reused to sync table \\\"%s\\\" with relid\n%u.\",\n+ MySubscription->name,\n+ get_rel_name(MyLogicalRepWorker->relid),\n+ MyLogicalRepWorker->relid)));\n+ }\n+ else\n+ {\n+ ereport(LOG,\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" has finished\",\n+ MySubscription->name)));\n+ }\n\nThese brackets { } are not really necessary.\n\n~~~\n\n3. 
TablesyncWorkerMain\n+ for (;!done;)\n+ {\n+ List *rstates;\n+ ListCell *lc;\n+\n+ run_tablesync_worker();\n+\n+ if (IsTransactionState())\n+ CommitTransactionCommand();\n+\n+ if (MyLogicalRepWorker->relsync_completed)\n+ {\n+ /*\n+ * This tablesync worker is 'done' unless another table that needs\n+ * syncing is found.\n+ */\n+ done = true;\n\nThose variables 'rstates' and 'lc' do not need to be declared at this\nscope -- they can be declared further down, closer to where they are\nneeded.\n\n=====\nsrc/backend/replication/logical/worker.c\n\n4. LogicalRepApplyLoop\n+\n+ if (am_tablesync_worker())\n+ /*\n+ * If relsync_completed is true, this means that the tablesync\n+ * worker is done with synchronization. Streaming has already been\n+ * ended by process_syncing_tables_for_sync. We should move to the\n+ * next table if needed, or exit.\n+ */\n+ if (MyLogicalRepWorker->relsync_completed)\n+ endofstream = true;\n\nHere I think it is better to use bracketing { } for the outer \"if\",\ninstead of only relying on the indentation for readability. YMMV.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 27 Jul 2023 11:13:39 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Here are some review comments for v22-0003\n\n======\n\n1. ApplicationNameForTablesync\n+/*\n+ * Determine the application_name for tablesync workers.\n+ *\n+ * Previously, the replication slot name was used as application_name. Since\n+ * it's possible to reuse tablesync workers now, a tablesync worker can handle\n+ * several different replication slots during its lifetime. Therefore, we\n+ * cannot use the slot name as application_name anymore. 
Instead, the slot\n+ * number of the tablesync worker is used as a part of the application_name.\n+ *\n+ * FIXME: if the tablesync worker starts to reuse the replication slot during\n+ * synchronization, we should again use the replication slot name as\n+ * application_name.\n+ */\n+static void\n+ApplicationNameForTablesync(Oid suboid, int worker_slot,\n+ char *application_name, Size szapp)\n+{\n+ snprintf(application_name, szapp, \"pg_%u_sync_%i_\" UINT64_FORMAT, suboid,\n+ worker_slot, GetSystemIdentifier());\n+}\n\n1a.\nThe intent of the \"FIXME\" comment was not clear. Is this some existing\nproblem that needs addressing, or is this really more like just an\n\"XXX\" warning/note for the future, in case the tablesync logic\nchanges?\n\n~\n\n1b.\nSince this is a new function, should it be named according to the\nconvention for static functions?\n\ne.g.\nApplicationNameForTablesync -> app_name_for_tablesync\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 27 Jul 2023 11:16:26 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Jul 27, 2023 at 6:46 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for v22-0003\n>\n> ======\n>\n> 1. ApplicationNameForTablesync\n> +/*\n> + * Determine the application_name for tablesync workers.\n> + *\n> + * Previously, the replication slot name was used as application_name. Since\n> + * it's possible to reuse tablesync workers now, a tablesync worker can handle\n> + * several different replication slots during its lifetime. Therefore, we\n> + * cannot use the slot name as application_name anymore. 
Instead, the slot\n> + * number of the tablesync worker is used as a part of the application_name.\n> + *\n> + * FIXME: if the tablesync worker starts to reuse the replication slot during\n> + * synchronization, we should again use the replication slot name as\n> + * application_name.\n> + */\n> +static void\n> +ApplicationNameForTablesync(Oid suboid, int worker_slot,\n> + char *application_name, Size szapp)\n> +{\n> + snprintf(application_name, szapp, \"pg_%u_sync_%i_\" UINT64_FORMAT, suboid,\n> + worker_slot, GetSystemIdentifier());\n> +}\n>\n> 1a.\n> The intent of the \"FIXME\" comment was not clear. Is this some existing\n> problem that needs addressing, or is this really more like just an\n> \"XXX\" warning/note for the future, in case the tablesync logic\n> changes?\n>\n\nThis seems to be a Note for the future, so better to use XXX notation here.\n\n> ~\n>\n> 1b.\n> Since this is a new function, should it be named according to the\n> convention for static functions?\n>\n> e.g.\n> ApplicationNameForTablesync -> app_name_for_tablesync\n>\n\nI think for now let's follow the style for similar functions like\nReplicationOriginNameForLogicalRep() and\nReplicationSlotNameForTablesync().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 Jul 2023 10:32:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Peter,\n\nPeter Smith <smithpb2250@gmail.com>, 26 Tem 2023 Çar, 07:40 tarihinde şunu\nyazdı:\n\n> Here are some comments for patch v22-0001.\n>\n> ======\n> 1. General -- naming conventions\n>\n> There is quite a lot of inconsistency with variable/parameter naming\n> styles in this patch. I understand in most cases the names are copied\n> unchanged from the original functions. 
Still, since this is a big\n> refactor anyway, it can also be a good opportunity to clean up those\n> inconsistencies instead of just propagating them to different places.\n> IIUC, the usual reluctance to rename things because it would cause\n> backpatch difficulties doesn't apply here (since everything is being\n> refactored anyway).\n>\n> E.g. Consider using use snake_case names more consistently in the\n> following places:\n>\n\nI can simply change the places you mentioned, that seems okay to me.\nThe reason why I did not change the namings in existing variables/functions\nis because I did (and still do) not get what's the naming conventions in\nthose files. Is snake_case the convention for variables in those files (or\nin general)?\n\n2. SetupApplyOrSyncWorker\n>\n> -ApplyWorkerMain(Datum main_arg)\n> +SetupApplyOrSyncWorker(int worker_slot)\n> {\n> - int worker_slot = DatumGetInt32(main_arg);\n> - char originname[NAMEDATALEN];\n> - XLogRecPtr origin_startpos = InvalidXLogRecPtr;\n> - char *myslotname = NULL;\n> - WalRcvStreamOptions options;\n> - int server_version;\n> -\n> - InitializingApplyWorker = true;\n> -\n> /* Attach to slot */\n> logicalrep_worker_attach(worker_slot);\n>\n> + Assert(am_tablesync_worker() || am_leader_apply_worker());\n> +\n>\n> Why is the Assert not the very first statement of this function?\n>\n\nI would also prefer to assert in the very beginning but am_tablesync_worker\nand am_leader_apply_worker require MyLogicalRepWorker to be not NULL.\nAnd MyLogicalRepWorker is assigned in logicalrep_worker_attach. I can\nchange this if you think there is a better way to check the worker type.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nHi Peter,Peter Smith <smithpb2250@gmail.com>, 26 Tem 2023 Çar, 07:40 tarihinde şunu yazdı:Here are some comments for patch v22-0001.\n\n======\n1. General -- naming conventions\n\nThere is quite a lot of inconsistency with variable/parameter naming\nstyles in this patch. 
I understand in most cases the names are copied\nunchanged from the original functions. Still, since this is a big\nrefactor anyway, it can also be a good opportunity to clean up those\ninconsistencies instead of just propagating them to different places.\nIIUC, the usual reluctance to rename things because it would cause\nbackpatch difficulties doesn't apply here (since everything is being\nrefactored anyway).\n\nE.g. Consider using use snake_case names more consistently in the\nfollowing places: I can simply change the places you mentioned, that seems okay to me.The reason why I did not change the namings in existing variables/functions is because I did (and still do) not get what's the naming conventions in those files. Is snake_case the convention for variables in those files (or in general)? \n2. SetupApplyOrSyncWorker\n\n-ApplyWorkerMain(Datum main_arg)\n+SetupApplyOrSyncWorker(int worker_slot)\n {\n- int worker_slot = DatumGetInt32(main_arg);\n- char originname[NAMEDATALEN];\n- XLogRecPtr origin_startpos = InvalidXLogRecPtr;\n- char    *myslotname = NULL;\n- WalRcvStreamOptions options;\n- int server_version;\n-\n- InitializingApplyWorker = true;\n-\n  /* Attach to slot */\n  logicalrep_worker_attach(worker_slot);\n\n+ Assert(am_tablesync_worker() || am_leader_apply_worker());\n+\n\nWhy is the Assert not the very first statement of this function?I would also prefer to assert in the very beginning but am_tablesync_worker and am_leader_apply_worker require MyLogicalRepWorker to be not NULL. And MyLogicalRepWorker is assigned in logicalrep_worker_attach. 
I can change this if you think there is a better way to check the worker type.Thanks,-- Melih MutluMicrosoft", "msg_date": "Thu, 27 Jul 2023 16:29:51 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Jul 27, 2023 at 11:30 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Peter,\n>\n> Peter Smith <smithpb2250@gmail.com>, 26 Tem 2023 Çar, 07:40 tarihinde şunu yazdı:\n>>\n>> Here are some comments for patch v22-0001.\n>>\n>> ======\n>> 1. General -- naming conventions\n>>\n>> There is quite a lot of inconsistency with variable/parameter naming\n>> styles in this patch. I understand in most cases the names are copied\n>> unchanged from the original functions. Still, since this is a big\n>> refactor anyway, it can also be a good opportunity to clean up those\n>> inconsistencies instead of just propagating them to different places.\n>> IIUC, the usual reluctance to rename things because it would cause\n>> backpatch difficulties doesn't apply here (since everything is being\n>> refactored anyway).\n>>\n>> E.g. Consider using use snake_case names more consistently in the\n>> following places:\n>\n>\n> I can simply change the places you mentioned, that seems okay to me.\n> The reason why I did not change the namings in existing variables/functions is because I did (and still do) not get what's the naming conventions in those files. Is snake_case the convention for variables in those files (or in general)?\n>\n\nTBH, I also don't know if there is a specific Postgres coding\nguideline to use snake_case or not (and Chat-GPT did not know either\nwhen I asked about it). 
I only assumed snake_case in my previous\nreview comment because the mentioned vars were already all lowercase.\nAnyway, the point was that whatever style is chosen, it ought to be\nused *consistently* because having a random mixture of styles in the\nsame function (e.g. worker_slot, originname, origin_startpos,\nmyslotname, options, server_version) seems messy. Meanwhile, I think\nAmit suggested [1] that for now, we only need to worry about the name\nconsistency in new code.\n\n\n>> 2. SetupApplyOrSyncWorker\n>>\n>> -ApplyWorkerMain(Datum main_arg)\n>> +SetupApplyOrSyncWorker(int worker_slot)\n>> {\n>> - int worker_slot = DatumGetInt32(main_arg);\n>> - char originname[NAMEDATALEN];\n>> - XLogRecPtr origin_startpos = InvalidXLogRecPtr;\n>> - char *myslotname = NULL;\n>> - WalRcvStreamOptions options;\n>> - int server_version;\n>> -\n>> - InitializingApplyWorker = true;\n>> -\n>> /* Attach to slot */\n>> logicalrep_worker_attach(worker_slot);\n>>\n>> + Assert(am_tablesync_worker() || am_leader_apply_worker());\n>> +\n>>\n>> Why is the Assert not the very first statement of this function?\n>\n>\n> I would also prefer to assert in the very beginning but am_tablesync_worker and am_leader_apply_worker require MyLogicalRepWorker to be not NULL. And MyLogicalRepWorker is assigned in logicalrep_worker_attach. I can change this if you think there is a better way to check the worker type.\n>\n\nI see. 
In that case your Assert LGTM.\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1%2Bh9hWDAKupsoiw556xqh7uvj_F1pjFJc4jQhL89HdGww%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 28 Jul 2023 09:57:06 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Melih,\n\nBACKGROUND\n----------\n\nWe wanted to compare performance for the 2 different reuse-worker\ndesigns, when the apply worker is already busy handling other\nreplications, and then simultaneously the test table tablesyncs are\noccurring.\n\nTo test this scenario, some test scripts were written (described\nbelow). For comparisons, the scripts were then run using a build of\nHEAD; design #1 (v21); design #2 (0718).\n\nHOW THE TEST WORKS\n------------------\n\nOverview:\n1. The apply worker is made to subscribe to a 'busy_tbl'.\n2. After the SUBSCRIPTION is created, the publisher-side then loops\n(forever) doing INSERTS into that busy_tbl.\n3. While the apply worker is now busy, the subscriber does an ALTER\nSUBSCRIPTION REFRESH PUBLICATION to subscribe to all the other test\ntables.\n4. We time how long it takes for all tablsyncs to complete\n5. 
Repeat above for different numbers of empty tables (10, 100, 1000,\n2000) and different numbers of sync workers (2, 4, 8, 16)\n\nScripts\n-------\n\n(PSA 4 scripts to implement this logic)\n\ntestrun script\n- this does common setup (do_one_test_setup) and then the pub/sub\nscripts (do_one_test_PUB and do_one_test_SUB -- see below) are run in\nparallel\n- repeat 10 times\n\ndo_one_test_setup script\n- init and start instances\n- ipc setup tables and procedures\n\ndo_one_test_PUB script\n- ipc setup pub/sub\n- table setup\n- publishes the \"busy_tbl\", but then waits for the subscriber to\nsubscribe to only this one\n- alters the publication to include all other tables (so subscriber\nwill see these only after the ALTER SUBSCRIPTION PUBLICATION REFRESH)\n- enter a busy INSERT loop until it informed by the subscriber that\nthe test is finished\n\ndo_one_test_SUB script\n- ipc setup pub/sub\n- table setup\n- subscribes only to \"busy_tbl\", then informs the publisher when that\nis done (this will cause the publisher to commence the stay_busy loop)\n- after it knows the publishing busy loop has started it does\n- ALTER SUBSCRIPTION REFRESH PUBLICATION\n- wait until all the tablesyncs are ready <=== This is the part that\nis timed for the test RESULT\n\nPROBLEM\n-------\n\nLooking at the output files (e.g. *.dat_PUB and *.dat_SUB) they seem\nto confirm the tests are working how we wanted.\n\nUnfortunately, there is some slot problem for the patched builds (both\ndesigns #1 and #2). e.g. Search \"ERROR\" in the *.log files and see\nmany slot-related errors.\n\nPlease note - running these same scripts with HEAD build gave no such\nerrors. 
So it appears to be a patch problem.\n\n------\nKind Regards\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 28 Jul 2023 17:22:11 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Jul 28, 2023 at 5:22 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Melih,\n>\n> BACKGROUND\n> ----------\n>\n> We wanted to compare performance for the 2 different reuse-worker\n> designs, when the apply worker is already busy handling other\n> replications, and then simultaneously the test table tablesyncs are\n> occurring.\n>\n> To test this scenario, some test scripts were written (described\n> below). For comparisons, the scripts were then run using a build of\n> HEAD; design #1 (v21); design #2 (0718).\n>\n> HOW THE TEST WORKS\n> ------------------\n>\n> Overview:\n> 1. The apply worker is made to subscribe to a 'busy_tbl'.\n> 2. After the SUBSCRIPTION is created, the publisher-side then loops\n> (forever) doing INSERTS into that busy_tbl.\n> 3. While the apply worker is now busy, the subscriber does an ALTER\n> SUBSCRIPTION REFRESH PUBLICATION to subscribe to all the other test\n> tables.\n> 4. We time how long it takes for all tablsyncs to complete\n> 5. 
Repeat above for different numbers of empty tables (10, 100, 1000,\n> 2000) and different numbers of sync workers (2, 4, 8, 16)\n>\n> Scripts\n> -------\n>\n> (PSA 4 scripts to implement this logic)\n>\n> testrun script\n> - this does common setup (do_one_test_setup) and then the pub/sub\n> scripts (do_one_test_PUB and do_one_test_SUB -- see below) are run in\n> parallel\n> - repeat 10 times\n>\n> do_one_test_setup script\n> - init and start instances\n> - ipc setup tables and procedures\n>\n> do_one_test_PUB script\n> - ipc setup pub/sub\n> - table setup\n> - publishes the \"busy_tbl\", but then waits for the subscriber to\n> subscribe to only this one\n> - alters the publication to include all other tables (so subscriber\n> will see these only after the ALTER SUBSCRIPTION PUBLICATION REFRESH)\n> - enter a busy INSERT loop until it informed by the subscriber that\n> the test is finished\n>\n> do_one_test_SUB script\n> - ipc setup pub/sub\n> - table setup\n> - subscribes only to \"busy_tbl\", then informs the publisher when that\n> is done (this will cause the publisher to commence the stay_busy loop)\n> - after it knows the publishing busy loop has started it does\n> - ALTER SUBSCRIPTION REFRESH PUBLICATION\n> - wait until all the tablesyncs are ready <=== This is the part that\n> is timed for the test RESULT\n>\n> PROBLEM\n> -------\n>\n> Looking at the output files (e.g. *.dat_PUB and *.dat_SUB) they seem\n> to confirm the tests are working how we wanted.\n>\n> Unfortunately, there is some slot problem for the patched builds (both\n> designs #1 and #2). e.g. Search \"ERROR\" in the *.log files and see\n> many slot-related errors.\n>\n> Please note - running these same scripts with HEAD build gave no such\n> errors. So it appears to be a patch problem.\n>\n\nHi\n\nFYI, here is some more information about ERRORs seen.\n\nThe patches were re-tested -- applied in stages (and also against the\ndifferent scripts) to identify where the problem was introduced. 
Below\nare the observations:\n\n~~~\n\nUsing original test scripts\n\n1. Using only patch v21-0001\n- no errors\n\n2. Using only patch v21-0001+0002\n- no errors\n\n3. Using patch v21-0001+0002+0003\n- no errors\n\n~~~\n\nUsing the \"busy loop\" test scripts for long transactions\n\n1. Using only patch v21-0001\n- no errors\n\n2. Using only patch v21-0001+0002\n- gives errors for \"no copy in progress issue\"\ne.g. ERROR: could not send data to WAL stream: no COPY in progress\n\n3. Using patch v21-0001+0002+0003\n- gives the same \"no copy in progress issue\" errors as above\ne.g. ERROR: could not send data to WAL stream: no COPY in progress\n- and also gives slot consistency point errors\ne.g. ERROR: could not create replication slot\n\"pg_16700_sync_16514_7261998170966054867\": ERROR: could not find\nlogical decoding starting point\ne.g. LOG: could not drop replication slot\n\"pg_16700_sync_16454_7261998170966054867\" on publisher: ERROR:\nreplication slot \"pg_16700_sync_16454_7261998170966054867\" does not\nexist\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 1 Aug 2023 14:14:02 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, 1 Aug 2023 at 09:44, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Jul 28, 2023 at 5:22 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi Melih,\n> >\n> > BACKGROUND\n> > ----------\n> >\n> > We wanted to compare performance for the 2 different reuse-worker\n> > designs, when the apply worker is already busy handling other\n> > replications, and then simultaneously the test table tablesyncs are\n> > occurring.\n> >\n> > To test this scenario, some test scripts were written (described\n> > below). 
For comparisons, the scripts were then run using a build of\n> > HEAD; design #1 (v21); design #2 (0718).\n> >\n> > HOW THE TEST WORKS\n> > ------------------\n> >\n> > Overview:\n> > 1. The apply worker is made to subscribe to a 'busy_tbl'.\n> > 2. After the SUBSCRIPTION is created, the publisher-side then loops\n> > (forever) doing INSERTS into that busy_tbl.\n> > 3. While the apply worker is now busy, the subscriber does an ALTER\n> > SUBSCRIPTION REFRESH PUBLICATION to subscribe to all the other test\n> > tables.\n> > 4. We time how long it takes for all tablsyncs to complete\n> > 5. Repeat above for different numbers of empty tables (10, 100, 1000,\n> > 2000) and different numbers of sync workers (2, 4, 8, 16)\n> >\n> > Scripts\n> > -------\n> >\n> > (PSA 4 scripts to implement this logic)\n> >\n> > testrun script\n> > - this does common setup (do_one_test_setup) and then the pub/sub\n> > scripts (do_one_test_PUB and do_one_test_SUB -- see below) are run in\n> > parallel\n> > - repeat 10 times\n> >\n> > do_one_test_setup script\n> > - init and start instances\n> > - ipc setup tables and procedures\n> >\n> > do_one_test_PUB script\n> > - ipc setup pub/sub\n> > - table setup\n> > - publishes the \"busy_tbl\", but then waits for the subscriber to\n> > subscribe to only this one\n> > - alters the publication to include all other tables (so subscriber\n> > will see these only after the ALTER SUBSCRIPTION PUBLICATION REFRESH)\n> > - enter a busy INSERT loop until it informed by the subscriber that\n> > the test is finished\n> >\n> > do_one_test_SUB script\n> > - ipc setup pub/sub\n> > - table setup\n> > - subscribes only to \"busy_tbl\", then informs the publisher when that\n> > is done (this will cause the publisher to commence the stay_busy loop)\n> > - after it knows the publishing busy loop has started it does\n> > - ALTER SUBSCRIPTION REFRESH PUBLICATION\n> > - wait until all the tablesyncs are ready <=== This is the part that\n> > is timed for the test 
RESULT\n> >\n> > PROBLEM\n> > -------\n> >\n> > Looking at the output files (e.g. *.dat_PUB and *.dat_SUB) they seem\n> > to confirm the tests are working how we wanted.\n> >\n> > Unfortunately, there is some slot problem for the patched builds (both\n> > designs #1 and #2). e.g. Search \"ERROR\" in the *.log files and see\n> > many slot-related errors.\n> >\n> > Please note - running these same scripts with HEAD build gave no such\n> > errors. So it appears to be a patch problem.\n> >\n>\n> Hi\n>\n> FYI, here is some more information about ERRORs seen.\n>\n> The patches were re-tested -- applied in stages (and also against the\n> different scripts) to identify where the problem was introduced. Below\n> are the observations:\n>\n> ~~~\n>\n> Using original test scripts\n>\n> 1. Using only patch v21-0001\n> - no errors\n>\n> 2. Using only patch v21-0001+0002\n> - no errors\n>\n> 3. Using patch v21-0001+0002+0003\n> - no errors\n>\n> ~~~\n>\n> Using the \"busy loop\" test scripts for long transactions\n>\n> 1. Using only patch v21-0001\n> - no errors\n>\n> 2. Using only patch v21-0001+0002\n> - gives errors for \"no copy in progress issue\"\n> e.g. ERROR: could not send data to WAL stream: no COPY in progress\n>\n> 3. Using patch v21-0001+0002+0003\n> - gives the same \"no copy in progress issue\" errors as above\n> e.g. ERROR: could not send data to WAL stream: no COPY in progress\n> - and also gives slot consistency point errors\n> e.g. ERROR: could not create replication slot\n> \"pg_16700_sync_16514_7261998170966054867\": ERROR: could not find\n> logical decoding starting point\n> e.g. LOG: could not drop replication slot\n> \"pg_16700_sync_16454_7261998170966054867\" on publisher: ERROR:\n> replication slot \"pg_16700_sync_16454_7261998170966054867\" does not\n> exist\n\nI agree that \"no copy in progress issue\" issue has nothing to do with\n0001 patch. 
This issue is present with the 0002 patch.\nIn the case when the tablesync worker has to apply the transactions\nafter the table is synced, the tablesync worker sends the feedback of\nwritepos, applypos and flushpos which results in \"No copy in progress\"\nerror as the stream has ended already. Fixed it by exiting the\nstreaming loop if the tablesync worker is done with the\nsynchronization. The attached 0004 patch has the changes for the same.\nThe rest of v22 patches are the same patch that were posted by Melih\nin the earlier mail.\n\nRegards,\nVignesh", "msg_date": "Tue, 1 Aug 2023 12:02:29 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Tue, Aug 1, 2023 at 9:44 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> FYI, here is some more information about ERRORs seen.\n>\n> The patches were re-tested -- applied in stages (and also against the\n> different scripts) to identify where the problem was introduced. Below\n> are the observations:\n>\n> ~~~\n>\n> Using original test scripts\n>\n> 1. Using only patch v21-0001\n> - no errors\n>\n> 2. Using only patch v21-0001+0002\n> - no errors\n>\n> 3. Using patch v21-0001+0002+0003\n> - no errors\n>\n> ~~~\n>\n> Using the \"busy loop\" test scripts for long transactions\n>\n> 1. Using only patch v21-0001\n> - no errors\n>\n> 2. Using only patch v21-0001+0002\n> - gives errors for \"no copy in progress issue\"\n> e.g. ERROR: could not send data to WAL stream: no COPY in progress\n>\n> 3. Using patch v21-0001+0002+0003\n> - gives the same \"no copy in progress issue\" errors as above\n> e.g. ERROR: could not send data to WAL stream: no COPY in progress\n> - and also gives slot consistency point errors\n> e.g. ERROR: could not create replication slot\n> \"pg_16700_sync_16514_7261998170966054867\": ERROR: could not find\n> logical decoding starting point\n> e.g. 
LOG: could not drop replication slot\n> \"pg_16700_sync_16454_7261998170966054867\" on publisher: ERROR:\n> replication slot \"pg_16700_sync_16454_7261998170966054867\" does not\n> exist\n>\n\nI think we are getting the error (ERROR: could not find logical\ndecoding starting point) because we wouldn't have waited for WAL to\nbecome available before reading it. It could happen due to the\nfollowing code:\nWalSndWaitForWal()\n{\n...\nif (streamingDoneReceiving && streamingDoneSending &&\n!pq_is_send_pending())\nbreak;\n..\n}\n\nNow, it seems that in the 0003 patch, instead of resetting the flags\nstreamingDoneSending and streamingDoneReceiving before starting\nreplication, we should reset them before creating logical slots because we\nneed to read the WAL during that time as well to find the consistent\npoint.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Aug 2023 14:31:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nAmit Kapila <amit.kapila16@gmail.com>, 2 Ağu 2023 Çar, 12:01 tarihinde şunu\nyazdı:\n\n> I think we are getting the error (ERROR: could not find logical\n> decoding starting point) because we wouldn't have waited for WAL to\n> become available before reading it. It could happen due to the\n> following code:\n> WalSndWaitForWal()\n> {\n> ...\n> if (streamingDoneReceiving && streamingDoneSending &&\n> !pq_is_send_pending())\n> break;\n> ..\n> }\n>\n> Now, it seems that in the 0003 patch, instead of resetting the flags\n> streamingDoneSending and streamingDoneReceiving before starting\n> replication, we should reset them before creating logical slots because we\n> need to read the WAL during that time as well to find the consistent\n> point.\n>\n\nThanks for the suggestion, Amit. I've been looking into this recently and\ncouldn't figure out the cause until now.\nI quickly made the fix in 0003. 
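To see why the stale flags cause this, here is a toy model (plain Python, not PostgreSQL code) of the early-exit behaviour in the WalSndWaitForWal() snippet quoted above: if the "streaming done" flags left over from a previous START_REPLICATION are not reset, the wait loop bails out before WAL becomes available, so slot creation never finds a consistent decoding start point.

```python
# Toy model (NOT PostgreSQL code) of the early exit in WalSndWaitForWal().

class ToyWalSender:
    def __init__(self):
        self.streaming_done_sending = False
        self.streaming_done_receiving = False

    def wait_for_wal(self, wal_arrives_at_try):
        tries = 0
        while True:
            # the break condition quoted above
            if self.streaming_done_sending and self.streaming_done_receiving:
                return ("gave up", tries)
            if tries >= wal_arrives_at_try:
                return ("found start point", tries)
            tries += 1

sender = ToyWalSender()
sender.streaming_done_sending = True      # a previous START_REPLICATION ended
sender.streaming_done_receiving = True
print(sender.wait_for_wal(3))             # ('gave up', 0) -- the reported bug

sender.streaming_done_sending = False     # the fix: reset the flags before
sender.streaming_done_receiving = False   # the slot-creation command reads WAL
print(sender.wait_for_wal(3))             # ('found start point', 3)
```

The real fix touches walsender state rather than a class like this, of course; the toy only illustrates the mechanism.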
Seems like it resolved the \"could not find\nlogical decoding starting point\" errors.\n\nvignesh C <vignesh21@gmail.com>, 1 Ağu 2023 Sal, 09:32 tarihinde şunu yazdı:\n\n> I agree that \"no copy in progress issue\" issue has nothing to do with\n> 0001 patch. This issue is present with the 0002 patch.\n> In the case when the tablesync worker has to apply the transactions\n> after the table is synced, the tablesync worker sends the feedback of\n> writepos, applypos and flushpos which results in \"No copy in progress\"\n> error as the stream has ended already. Fixed it by exiting the\n> streaming loop if the tablesync worker is done with the\n> synchronization. The attached 0004 patch has the changes for the same.\n> The rest of v22 patches are the same patch that were posted by Melih\n> in the earlier mail.\n\n\nThanks for the fix. I placed it into 0002 with a slight change as follows:\n\n> - send_feedback(last_received, false, false);\n> + if (!MyLogicalRepWorker->relsync_completed)\n> + send_feedback(last_received, false, false);\n\n\nIMHO relsync_completed simply means the same as streaming_done, which is\nwhy I wanted to check that flag instead of an additional goto statement.\nDoes it make sense to you as well?\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 2 Aug 2023 12:42:07 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\n>\nPFA an updated version with some of the earlier reviews addressed.\nForgot to include them in the previous email.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 2 Aug 2023 13:39:07 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Aug 2, 2023 at 4:09 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> PFA an 
updated version with some of the earlier reviews addressed.\n> Forgot to include them in the previous email.\n>\n\nIt is always better to explicitly tell which reviews are addressed but\nanyway, I have done some minor cleanup in the 0001 patch including\nremoving includes which didn't seem necessary, modified a few\ncomments, and ran pgindent. I also thought of modifying some variable\nnames based on suggestions by Peter Smith in an email [1] but didn't\nfind many of them any better than the current ones so modified just a\nfew of those. If you guys are okay with this then let's commit it and\nthen we can focus more on the remaining patches.\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPs3Du9JFmhecWY8%2BVFD11VLOkSmB36t_xWHHQJNMpdA-A%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 2 Aug 2023 18:49:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Aug 2, 2023 at 11:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 2, 2023 at 4:09 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > PFA an updated version with some of the earlier reviews addressed.\n> > Forgot to include them in the previous email.\n> >\n>\n> It is always better to explicitly tell which reviews are addressed but\n> anyway, I have done some minor cleanup in the 0001 patch including\n> removing includes which didn't seem necessary, modified a few\n> comments, and ran pgindent. I also thought of modifying some variable\n> names based on suggestions by Peter Smith in an email [1] but didn't\n> find many of them any better than the current ones so modified just a\n> few of those. 
If you guys are okay with this then let's commit it and\n> then we can focus more on the remaining patches.\n>\n\nI checked the latest patch v25-0001.\n\nLGTM.\n\n~~\n\nBTW, I have re-tested many cases of HEAD versus HEAD+v25-0001 (using\ncurrent test scripts previously mentioned in this thread). Because\nv25-0001 is only a refactoring patch we expect that the results should\nbe the same as for HEAD, and that is what I observed.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 3 Aug 2023 14:05:11 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Aug 3, 2023 at 9:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Aug 2, 2023 at 11:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Aug 2, 2023 at 4:09 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> > >\n> > > PFA an updated version with some of the earlier reviews addressed.\n> > > Forgot to include them in the previous email.\n> > >\n> >\n> > It is always better to explicitly tell which reviews are addressed but\n> > anyway, I have done some minor cleanup in the 0001 patch including\n> > removing includes which didn't seem necessary, modified a few\n> > comments, and ran pgindent. I also thought of modifying some variable\n> > names based on suggestions by Peter Smith in an email [1] but didn't\n> > find many of them any better than the current ones so modified just a\n> > few of those. If you guys are okay with this then let's commit it and\n> > then we can focus more on the remaining patches.\n> >\n>\n> I checked the latest patch v25-0001.\n>\n> LGTM.\n>\n\nThanks, I have pushed 0001. 
Let's focus on the remaining patches.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 3 Aug 2023 11:51:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Melih,\n\nNow that v25-0001 has been pushed, can you please rebase the remaining patches?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 3 Aug 2023 18:19:18 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Just to clarify my previous post, I meant we will need new v26* patches\n\nv24-0001 -> not needed because v25-0001 pushed\nv24-0002 -> v26-0001\nv24-0003 -> v26-0002\n\nOn Thu, Aug 3, 2023 at 6:19 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Melih,\n>\n> Now that v25-0001 has been pushed, can you please rebase the remaining patches?\n>\n> ------\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n\n\n", "msg_date": "Thu, 3 Aug 2023 19:06:16 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nAmit Kapila <amit.kapila16@gmail.com>, 3 Ağu 2023 Per, 09:22 tarihinde şunu\nyazdı:\n\n> On Thu, Aug 3, 2023 at 9:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > I checked the latest patch v25-0001.\n> >\n> > LGTM.\n> >\n>\n> Thanks, I have pushed 0001. Let's focus on the remaining patches.\n>\n\nThanks!\n\nPeter Smith <smithpb2250@gmail.com>, 3 Ağu 2023 Per, 12:06 tarihinde şunu\nyazdı:\n\n> Just to clarify my previous post, I meant we will need new v26* patches\n>\n\nRight. 
I attached the v26 as you asked.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Thu, 3 Aug 2023 14:29:59 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "FWIW, I confirmed that my review comments for v22* have all been\naddressed in the latest v26* patches.\n\nThanks!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 4 Aug 2023 12:56:00 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Melih.\n\nNow that the design#1 ERRORs have been fixed, we returned to doing\nperformance measuring of the design#1 patch versus HEAD.\n\nUnfortunately, we observed that under some particular conditions\n(large transactions of 1000 inserts/tx for a busy apply worker, 100\nempty tables to be synced) the performance was worse with the design#1\npatch applied.\n\n~~\n\nRESULTS\n\nBelow are some recent measurements (for 100 empty tables to be synced\nwhen apply worker is already busy). 
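As an aside on reading these result tables: the %improvement rows appear to be computed as (HEAD - patched) / HEAD, rounded to the nearest percent, with positive values meaning the patched build was faster. That formula is an inference from the numbers, not something stated in the thread, but it can be checked against the first table:

```python
# Inferred formula for the %improvement rows in the results below
# (an assumption verified against the reported numbers).
def improvement(head_ms, patched_ms):
    return round(100 * (head_ms - patched_ms) / head_ms)

# First table ("commit every 10 inserts"), columns 2w/4w/8w/16w:
head = [3945, 1138, 1166, 1205]
patched = [3559, 886, 355, 490]
print([improvement(h, p) for h, p in zip(head, patched)])
# [10, 22, 70, 59] -- matches the reported 10% 22% 70% 59%
```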
We vary the size of the published\ntransaction for the \"busy\" table, and you can see that for certain\nlarge transaction sizes (1000 and 2000 inserts/tx) the design#1\nperformance was worse than HEAD:\n\n~\n\nThe publisher \"busy\" table does commit every 10 inserts:\n2w 4w 8w 16w\nHEAD 3945 1138 1166 1205\nHEAD+v24-0002 3559 886 355 490\n%improvement 10% 22% 70% 59%\n\n~\n\nThe publisher \"busy\" table does commit every 100 inserts:\n2w 4w 8w 16w\nHEAD 2363 1357 1354 1355\nHEAD+v24-0002 2077 1358 762 756\n%improvement 12% 0% 44% 44%\n\n~\n\nPublisher \"busy\" table does commit every 1000 inserts:\n2w 4w 8w 16w\nHEAD 11898 5855 1868 1631\nHEAD+v24-0002 21905 8254 3531 1626\n%improvement -84% -41% -89% 0%\n\n^ Note - design#1 was slower than HEAD here\n\n~\n\nPublisher \"busy\" table does commit every 2000 inserts:\n2w 4w 8w 16w\nHEAD 21740 7109 3454 1703\nHEAD+v24-0002 21585 10877 4779 2293\n%improvement 1% -53% -38% -35%\n\n^ Note - design#1 was slower than HEAD here\n\n~\n\nThe publisher \"busy\" table does commit every 5000 inserts:\n2w 4w 8w 16w\nHEAD 36094 18105 8595 3567\nHEAD+v24-0002 36305 18199 8151 3710\n%improvement -1% -1% 5% -4%\n\n~\n\nThe publisher \"busy\" table does commit every 10000 inserts:\n2w 4w 8w 16w\nHEAD 38077 18406 9426 5559\nHEAD+v24-0002 36763 18027 8896 4166\n%improvement 3% 2% 6% 25%\n\n------\n\nTEST SCRIPTS\n\nThe \"busy apply\" test scripts are basically the same as already posted\n[1], but I have reattached the latest ones again anyway.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPuNVNK2%2BA%2BR6eV8rKPNBHemCFE4NDtEYfpXbYr6SsvvBg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 7 Aug 2023 16:25:28 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thursday, August 3, 2023 7:30 PM Melih Mutlu <m.melihmutlu@gmail.com> 
wrote:\r\n\r\n> Right. I attached the v26 as you asked. \r\n\r\nThanks for posting the patches.\r\n \r\nWhile reviewing the patch, I noticed one rare case where it's possible that there\r\nare two table sync workers for the same table at the same time.\r\n\r\nThe patch relies on LogicalRepWorkerLock to prevent concurrent access, but the\r\napply worker will start a new worker after releasing the lock. So, at the point[1]\r\nwhere the lock is released and the new table sync worker has not been started,\r\nit seems possible that another old table sync worker will be reused for the\r\nsame table.\r\n\r\n\t\t\t\t/* Now safe to release the LWLock */\r\n\t\t\t\tLWLockRelease(LogicalRepWorkerLock);\r\n*[1]\r\n\t\t\t\t/*\r\n\t\t\t\t * If there are free sync worker slot(s), start a new sync\r\n\t\t\t\t * worker for the table.\r\n\t\t\t\t */\r\n\t\t\t\tif (nsyncworkers < max_sync_workers_per_subscription)\r\n\t\t\t\t...\r\n\t\t\t\t\t\tlogicalrep_worker_launch(MyLogicalRepWorker->dbid,\r\n\r\nI can reproduce it by using gdb.\r\n\r\nSteps:\r\n1. set max_sync_workers_per_subscription to 1 and set up pub/sub which publishes\r\n two tables (table A and B).\r\n2. when the table sync worker for table A has started, use gdb to block it\r\n before being reused for another table.\r\n3. set max_sync_workers_per_subscription to 2 and use gdb to block the apply\r\n worker at the point after releasing the LogicalRepWorkerLock and before\r\n starting another table sync worker for table B.\r\n4. release the blocked table sync worker, then we can see the table sync worker\r\n is also reused for table B.\r\n5. release the apply worker, then we can see the apply worker will start\r\n another table sync worker for the same table (B).\r\n\r\nI think it would be better to prevent this case from happening as this case\r\nwill give some unexpected ERROR or LOG. 
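The check-then-launch window described above can be sketched outside PostgreSQL (toy Python, not the actual worker code): the apply worker decides under the lock, releases it, and only then launches, so a reused sync worker can claim the same table in between. Holding the lock across both the decision and the launch closes the window, which is one of the remedies discussed later in the thread.

```python
# Toy sketch (NOT the actual worker code) of the reported race window.
import threading

lock = threading.Lock()        # stands in for LogicalRepWorkerLock
owners = {"B": []}             # table -> workers currently syncing it

# Step 1: apply worker decides, under the lock, that "B" needs a worker.
with lock:
    needs_worker = not owners["B"]
# Step 2: lock released; before the launch happens, an idle sync worker
# being reused picks up table "B".
with lock:
    if not owners["B"]:
        owners["B"].append("reused worker")
# Step 3: apply worker resumes and launches, based on its stale decision.
if needs_worker:
    with lock:
        owners["B"].append("newly launched worker")
print(owners["B"])             # ['reused worker', 'newly launched worker']

# Holding the lock across BOTH the decision and the launch closes the
# window: the second claimant then sees the table is already owned.
owners["B"].clear()
with lock:
    if not owners["B"]:
        owners["B"].append("worker launched under the lock")
with lock:
    if not owners["B"]:
        owners["B"].append("this never happens")
print(owners["B"])             # ['worker launched under the lock']
```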
Note that I haven't checked if it would\r\ncause worse problems like duplicate copy or others.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Wed, 9 Aug 2023 02:58:03 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Melih,\n\nHere is a patch to help in getting the execution at various phases\nlike: a) replication slot creation time, b) Wal reading c) Number of\nWAL records read d) subscription relation state change etc\nCouple of observation while we tested with this patch:\n1) We noticed that the patch takes more time for finding the decoding\nstart point.\n2) Another observation was that the number of XLOG records read for\nidentify the consistent point was significantly high with the v26_0001\npatch.\n\nHEAD\npostgres=# select avg(counttime)/1000 \"avgtime(ms)\",\nmedian(counttime)/1000 \"median(ms)\", min(counttime)/1000\n\"mintime(ms)\", max(counttime)/1000 \"maxtime(ms)\", logtype from test\ngroup by logtype;\n avgtime(ms) | median(ms) | mintime(ms) |\nmaxtime(ms) | logtype\n------------------------+------------------------+-------------+-------------+--------------------------\n 0.00579245283018867920 | 0.00200000000000000000 | 0 |\n 1 | SNAPSHOT_BUILD\n 1.2246811320754717 | 0.98550000000000000000 | 0 |\n 37 | LOGICAL_SLOT_CREATION\n 171.0863283018867920 | 183.9120000000000000 | 0 |\n 408 | FIND_DECODING_STARTPOINT\n 2.0699433962264151 | 1.4380000000000000 | 1 |\n 49 | INIT_DECODING_CONTEXT\n(4 rows)\n\nHEAD + v26-0001 patch\npostgres=# select avg(counttime)/1000 \"avgtime(ms)\",\nmedian(counttime)/1000 \"median(ms)\", min(counttime)/1000\n\"mintime(ms)\", max(counttime)/1000 \"maxtime(ms)\", logtype from test\ngroup by logtype;\n avgtime(ms) | median(ms) | mintime(ms) |\nmaxtime(ms) | 
logtype\n------------------------+------------------------+-------------+-------------+--------------------------\n 0.00588113207547169810 | 0.00500000000000000000 | 0 |\n 0 | SNAPSHOT_BUILD\n 1.1270962264150943 | 1.1000000000000000 | 0 |\n 2 | LOGICAL_SLOT_CREATION\n 301.1745528301886790 | 410.4870000000000000 | 0 |\n 427 | FIND_DECODING_STARTPOINT\n 1.4814660377358491 | 1.4530000000000000 | 1 |\n 9 | INIT_DECODING_CONTEXT\n(4 rows)\n\nIn the above FIND_DECODING_STARTPOINT is very much higher with V26-0001 patch.\n\nHEAD\nFIND_DECODING_XLOG_RECORD_COUNT\n- average = 2762\n- median = 3362\n\nHEAD + reuse worker patch(v26_0001 patch)\nWhere FIND_DECODING_XLOG_RECORD_COUNT\n- average = 4105\n- median = 5345\n\nSimilarly Number of xlog records read is higher with v26_0001 patch.\n\nSteps to calculate the timing:\n-- first collect the necessary LOG from subscriber's log.\ncat *.log | grep -E\n'(LOGICAL_SLOT_CREATION|INIT_DECODING_CONTEXT|FIND_DECODING_STARTPOINT|SNAPSHOT_BUILD|FIND_DECODING_XLOG_RECORD_COUNT|LOGICAL_XLOG_READ|LOGICAL_DECODE_PROCESS_RECORD|LOGICAL_WAIT_TRANSACTION)'\n> grep.dat\n\ncreate table testv26(logtime varchar, pid varchar, level varchar,\nspace varchar, logtype varchar, counttime int);\n-- then copy these datas into db table to count the avg number.\nCOPY testv26 FROM '/home/logs/grep.dat' DELIMITER ' ';\n\n-- Finally, use the SQL to analyze the data:\nselect avg(counttime)/1000 \"avgtime(ms)\", logtype from testv26 group by logtype;\n\n--- To get the number of xlog records read:\nselect avg(counttime) from testv26 where logtype\n='FIND_DECODING_XLOG_RECORD_COUNT' and counttime != 1;\n\nThanks to Peter and Hou-san who helped in finding these out. 
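For readers without a database handy, the same avg/median aggregation those grep + COPY + SQL steps perform can be sketched in a few lines. The log-line format below is simplified and hypothetical (the real instrumented log lines come from the attached patch); timings are in microseconds, divided by 1000 as in the queries above.

```python
# Self-contained sketch of the aggregation done above with grep + COPY + SQL.
# The log format here is a simplified assumption, not the patch's exact output.
from statistics import mean, median

log_lines = [
    "2023-08-09 10:00:01 LOG: FIND_DECODING_STARTPOINT 171086",
    "2023-08-09 10:00:02 LOG: FIND_DECODING_STARTPOINT 183912",
    "2023-08-09 10:00:02 LOG: SNAPSHOT_BUILD 5",
]

by_type = {}
for line in log_lines:
    *_, logtype, usec = line.split()          # last two fields: type, microseconds
    by_type.setdefault(logtype, []).append(int(usec))

for logtype, us in sorted(by_type.items()):
    print(logtype,
          round(mean(us) / 1000, 3),          # avgtime(ms)
          round(median(us) / 1000, 3))        # median(ms)
```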
We are\nanalysing this in parallel; @Melih Mutlu, we are posting this information so\nthat it might help you too in analysing this issue.\n\nRegards,\nVignesh", "msg_date": "Wed, 9 Aug 2023 09:51:03 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Aug 9, 2023 at 8:28 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, August 3, 2023 7:30 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> > Right. I attached the v26 as you asked.\n>\n> Thanks for posting the patches.\n>\n> While reviewing the patch, I noticed one rare case where it's possible that there\n> are two table sync workers for the same table at the same time.\n>\n> The patch relies on LogicalRepWorkerLock to prevent concurrent access, but the\n> apply worker will start a new worker after releasing the lock. So, at the point[1]\n> where the lock is released and the new table sync worker has not been started,\n> it seems possible that another old table sync worker will be reused for the\n> same table.\n>\n> /* Now safe to release the LWLock */\n> LWLockRelease(LogicalRepWorkerLock);\n> *[1]\n> /*\n> * If there are free sync worker slot(s), start a new sync\n> * worker for the table.\n> */\n> if (nsyncworkers < max_sync_workers_per_subscription)\n> ...\n> logicalrep_worker_launch(MyLogicalRepWorker->dbid,\n>\n\nYeah, this is a problem. I think one idea to solve this is by\nextending the lock duration till we launch the tablesync worker but we\nshould also consider changing this locking scheme such that there is a\nbetter way to indicate that for a particular rel, tablesync is in\nprogress. Currently, the code in TablesyncWorkerMain() also acquires\nthe lock in exclusive mode even though the tablesync for a rel is in\nprogress which I guess could easily hurt us for larger values of\nmax_logical_replication_workers. 
So, that could be another motivation\nto think for a different locking scheme.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 10 Aug 2023 10:15:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Melih,\n\nFYI -- The same testing was repeated but this time PG was configured\nto say synchronous_commit=on. Other factors and scripts were the same\nas before --- busy apply, 5 runs, 4 workers, 1000 inserts/tx, 100\nempty tables, etc.\n\nThere are still more xlog records seen for the v26 patch, but now the\nv26 performance was better than HEAD.\n\nRESULTS (synchronous_commit=on)\n---------------------------------------------------\n\nXlog Counts\n\nHEAD\npostgres=# select avg(counttime) \"avg\", median(counttime) \"median\",\nmin(counttime) \"min\", max(counttime) \"max\", logtype from test_head\ngroup by logtype;\n avg | median | min | max |\n logtype\n-----------------------+-----------------------+-----+------+-----------\n-----------------------+-----------------------+-----+------+-----------\n-----------------------+-----------------------+-----+------+-----------\n1253.7509433962264151 | 1393.0000000000000000 | 1 | 2012 |\nFIND_DECODING_XLOG_RECORD_COUNT\n(1 row)\n\n\nHEAD+v26-0001\npostgres=# select avg(counttime) \"avg\", median(counttime) \"median\",\nmin(counttime) \"min\", max(counttime) \"max\", logtype from test_v26\ngroup by logtype;\n avg | median | min | max |\n logtype\n-----------------------+-----------------------+-----+------+-----------\n-----------------------+-----------------------+-----+------+-----------\n-----------------------+-----------------------+-----+------+-----------\n1278.4075471698113208 | 1423.5000000000000000 | 1 | 2015 |\nFIND_DECODING_XLOG_RECORD_COUNT\n(1 row)\n\n~~~~~~\n\nPerformance\n\nHEAD\n[peter@localhost res_0809_vignesh_timing_sync_head]$ cat *.dat_SUB 
|\ngrep RESULT | grep -v duration | awk '{print $3}'\n4014.266\n3892.089\n4195.318\n3571.862\n4312.183\n\n\nHEAD+v26-0001\n[peter@localhost res_0809_vignesh_timing_sync_v260001]$ cat *.dat_SUB\n| grep RESULT | grep -v duration | awk '{print $3}'\n3326.627\n3213.028\n3433.611\n3299.803\n3258.821\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 10 Aug 2023 15:35:18 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Peter and Vignesh,\n\nPeter Smith <smithpb2250@gmail.com>, 7 Ağu 2023 Pzt, 09:25 tarihinde şunu\nyazdı:\n\n> Hi Melih.\n>\n> Now that the design#1 ERRORs have been fixed, we returned to doing\n> performance measuring of the design#1 patch versus HEAD.\n\n\nThanks a lot for taking the time to benchmark the patch. It's really\nhelpful.\n\nPublisher \"busy\" table does commit every 1000 inserts:\n> 2w 4w 8w 16w\n> HEAD 11898 5855 1868 1631\n> HEAD+v24-0002 21905 8254 3531 1626\n> %improvement -84% -41% -89% 0%\n\n\n> ^ Note - design#1 was slower than HEAD here\n\n\n> ~\n\n\n> Publisher \"busy\" table does commit every 2000 inserts:\n> 2w 4w 8w 16w\n> HEAD 21740 7109 3454 1703\n> HEAD+v24-0002 21585 10877 4779 2293\n> %improvement 1% -53% -38% -35%\n\n\nI assume you meant HEAD+v26-0002 and not v24. I wanted to quickly reproduce\nthese two cases where the patch was significantly worse. 
Interestingly my\nresults are a bit different than yours.\n\nPublisher \"busy\" table does commit every 1000 inserts:\n2w 4w 8w 16w\nHEAD 22405 10335 5008 3304\nHEAD+v26 19954 8037 4068 2761\n%improvement 1% 2% 2% 1%\n\nPublisher \"busy\" table does commit every 2000 inserts:\n2w 4w 8w 16w\nHEAD 33122 14220 7251 4279\nHEAD+v26 34248 16213 7356 3914\n%improvement 0% -1% 0% 1%\n\nIf I'm not doing something wrong in testing (or maybe the patch doesn't\nperform reliably yet for some reason), I don't see a drastic change in\nperformance. But I guess the patch is supposed to perform better than HEAD\nin both these cases anyway, right? I would expect the performance of the\npatch to converge to HEAD's performance with large tables. But I'm not sure\nwhat to expect when apply worker is busy with large transactions.\n\nHowever, I need to investigate a bit more what Vignesh shared earlier [1].\nIt makes sense that those issues can cause this problem here.\n\nIt just takes a bit of time for me to figure out these things, but I'm\nworking on it.\n\n[1]\nhttps://www.postgresql.org/message-id/CALDaNm1TA068E2niJFUR9ig%2BYz3-ank%3Dj5%3Dj-2UocbzaDnQPrA%40mail.gmail.com\n\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Thu, 10 Aug 2023 17:54:02 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Aug 11, 2023 at 12:54 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Peter and Vignesh,\n>\n> Peter Smith <smithpb2250@gmail.com>, 7 Ağu 2023 Pzt, 09:25 tarihinde şunu yazdı:\n>>\n>> Hi Melih.\n>>\n>> Now that the design#1 ERRORs have been fixed, we returned to doing\n>> performance measuring of the design#1 patch versus HEAD.\n>\n>\n> Thanks a lot for taking the time to benchmark the patch. 
It's really helpful.\n>\n>> Publisher \"busy\" table does commit every 1000 inserts:\n>> 2w 4w 8w 16w\n>> HEAD 11898 5855 1868 1631\n>> HEAD+v24-0002 21905 8254 3531 1626\n>> %improvement -84% -41% -89% 0%\n>>\n>>\n>> ^ Note - design#1 was slower than HEAD here\n>>\n>>\n>> ~\n>>\n>>\n>> Publisher \"busy\" table does commit every 2000 inserts:\n>> 2w 4w 8w 16w\n>> HEAD 21740 7109 3454 1703\n>> HEAD+v24-0002 21585 10877 4779 2293\n>> %improvement 1% -53% -38% -35%\n>\n>\n> I assume you meant HEAD+v26-0002 and not v24. I wanted to quickly reproduce these two cases where the patch was significantly worse. Interestingly my results are a bit different than yours.\n>\n\nNo, I meant what I wrote there. When I ran the tests the HEAD included\nthe v25-0001 refactoring patch, but v26 did not yet exist.\n\nFor now, we are only performance testing the first\n\"Reuse-Tablesyc-Workers\" patch, but not yet including the second patch\n(\"Reuse connection when...\").\n\nNote that those \"Reuse-Tablesyc-Workers\" patches v24-0002 and v26-0001\nare equivalent because there are only cosmetic log message differences\nbetween them.\nSo, my testing was with HEAD+v24-0002 (but not including v24-0003).\nYour same testing should be with HEAD+v26-0001 (but not including v26-0002).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 11 Aug 2023 08:25:57 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, 9 Aug 2023 at 09:51, vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi Melih,\n>\n> Here is a patch to help in getting the execution at various phases\n> like: a) replication slot creation time, b) Wal reading c) Number of\n> WAL records read d) subscription relation state change etc\n> Couple of observation while we tested with this patch:\n> 1) We noticed that the patch takes more time for finding the 
decoding\n> start point.\n> 2) Another observation was that the number of XLOG records read for\n> identify the consistent point was significantly high with the v26_0001\n> patch.\n>\n> HEAD\n> postgres=# select avg(counttime)/1000 \"avgtime(ms)\",\n> median(counttime)/1000 \"median(ms)\", min(counttime)/1000\n> \"mintime(ms)\", max(counttime)/1000 \"maxtime(ms)\", logtype from test\n> group by logtype;\n> avgtime(ms) | median(ms) | mintime(ms) |\n> maxtime(ms) | logtype\n> ------------------------+------------------------+-------------+-------------+--------------------------\n> 0.00579245283018867920 | 0.00200000000000000000 | 0 |\n> 1 | SNAPSHOT_BUILD\n> 1.2246811320754717 | 0.98550000000000000000 | 0 |\n> 37 | LOGICAL_SLOT_CREATION\n> 171.0863283018867920 | 183.9120000000000000 | 0 |\n> 408 | FIND_DECODING_STARTPOINT\n> 2.0699433962264151 | 1.4380000000000000 | 1 |\n> 49 | INIT_DECODING_CONTEXT\n> (4 rows)\n>\n> HEAD + v26-0001 patch\n> postgres=# select avg(counttime)/1000 \"avgtime(ms)\",\n> median(counttime)/1000 \"median(ms)\", min(counttime)/1000\n> \"mintime(ms)\", max(counttime)/1000 \"maxtime(ms)\", logtype from test\n> group by logtype;\n> avgtime(ms) | median(ms) | mintime(ms) |\n> maxtime(ms) | logtype\n> ------------------------+------------------------+-------------+-------------+--------------------------\n> 0.00588113207547169810 | 0.00500000000000000000 | 0 |\n> 0 | SNAPSHOT_BUILD\n> 1.1270962264150943 | 1.1000000000000000 | 0 |\n> 2 | LOGICAL_SLOT_CREATION\n> 301.1745528301886790 | 410.4870000000000000 | 0 |\n> 427 | FIND_DECODING_STARTPOINT\n> 1.4814660377358491 | 1.4530000000000000 | 1 |\n> 9 | INIT_DECODING_CONTEXT\n> (4 rows)\n>\n> In the above FIND_DECODING_STARTPOINT is very much higher with V26-0001 patch.\n>\n> HEAD\n> FIND_DECODING_XLOG_RECORD_COUNT\n> - average = 2762\n> - median = 3362\n>\n> HEAD + reuse worker patch(v26_0001 patch)\n> Where FIND_DECODING_XLOG_RECORD_COUNT\n> - average = 4105\n> - median = 5345\n>\n> Similarly Number 
of xlog records read is higher with v26_0001 patch.\n>\n> Steps to calculate the timing:\n> -- first collect the necessary LOG from subscriber's log.\n> cat *.log | grep -E\n> '(LOGICAL_SLOT_CREATION|INIT_DECODING_CONTEXT|FIND_DECODING_STARTPOINT|SNAPSHOT_BUILD|FIND_DECODING_XLOG_RECORD_COUNT|LOGICAL_XLOG_READ|LOGICAL_DECODE_PROCESS_RECORD|LOGICAL_WAIT_TRANSACTION)'\n> > grep.dat\n>\n> create table testv26(logtime varchar, pid varchar, level varchar,\n> space varchar, logtype varchar, counttime int);\n> -- then copy these datas into db table to count the avg number.\n> COPY testv26 FROM '/home/logs/grep.dat' DELIMITER ' ';\n>\n> -- Finally, use the SQL to analyze the data:\n> select avg(counttime)/1000 \"avgtime(ms)\", logtype from testv26 group by logtype;\n>\n> --- To get the number of xlog records read:\n> select avg(counttime) from testv26 where logtype\n> ='FIND_DECODING_XLOG_RECORD_COUNT' and counttime != 1;\n>\n> Thanks to Peter and Hou-san who helped in finding these out. We are\n> parallely analysing this, @Melih Mutlu posting this information so\n> that it might help you too in analysing this issue.\n\nI analysed further on why it needs to read a larger number of XLOG\nrecords in some cases while creating the replication slot, here are my\nthoughts:\nNote: Tablesync worker needs to connect to the publisher and create\nconsistent point for the slots by reading the XLOG records. 
This\nrequires that all the open transactions and the transactions that are\ncreated while creating the consistent point should be committed.\nI feel the creation of slots is better in a few cases in HEAD because:\nPublisher                  | Subscriber\n------------------------------------------------------------\nBegin txn1 transaction     |\nInsert 1..1000 records     |\nCommit                     |\nBegin txn2 transaction     |\nInsert 1..1000 records     | Apply worker applies transaction txn1\n                           | Start tablesync table t2\n                           | create consistent point in\n                           | publisher before transaction txn3 is\n                           | started\ncommit                     | We just need to wait till\n                           | transaction txn2 is finished.\nBegin txn3 transaction     |\nInsert 1..1000 records     |\ncommit                     |\n\nIn V26, this is happening in some cases:\nPublisher                  | Subscriber\n------------------------------------------------------------\nBegin txn1 transaction     |\nInsert 1..1000 records     |\nCommit                     |\nBegin txn2 transaction     |\nInsert 1..1000 records     | Apply worker applies transaction txn1\n                           | Start tablesync table t2\ncommit                     | Create consistent point\nBegin txn3 transaction     | (since transaction txn2 is committed\n                           | and txn3 is started, we will\n                           | need to wait\n                           | for transaction txn3 to be\n                           | committed)\nInsert 1..1000 records     |\ncommit                     |\n\nThis is because in HEAD the tablesync worker will be started after one\ncommit, so we are able to create the consistent point before a new\ntransaction is started in some cases.\nCreate slot will be fastest if the tablesync worker is able to connect\nto the publisher and create a consistent point before the new\ntransaction is started. The probability of this is better in HEAD for\nthis scenario as the new tablesync worker is started after a commit and\nthe tablesync worker in HEAD has a better time window (because the\ncurrent transaction has just started) before another new transaction\nis started. 
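The timing window described above can be illustrated with a toy simulation (all interval lengths below are invented assumptions for illustration, not measurements of the actual workload):

```python
import random

random.seed(7)

# Assumed model: transactions of length TXN run back to back, separated by
# an idle GAP after each commit. A slot finds its consistent point "for
# free" only if snapshot building starts early enough to finish inside the
# gap, i.e. before the next transaction begins.
TXN = 1.0     # assumed transaction duration (arbitrary units)
GAP = 0.1     # assumed idle time between a commit and the next BEGIN
SETUP = 0.05  # assumed time to connect and begin building the snapshot
CYCLE = GAP + TXN

def immediate_consistent_point(phase):
    """True if slot creation starting `phase` units after a commit reaches
    a consistent point before the next transaction begins."""
    return phase + SETUP <= GAP

# HEAD: a fresh tablesync worker is launched right after the apply worker
# processes a commit, so its starting phase clusters inside the idle gap.
head_hits = sum(immediate_consistent_point(random.uniform(0.0, GAP))
                for _ in range(10_000))

# Reused worker: it picks up its next table at an arbitrary point of the
# publisher's commit cycle.
reused_hits = sum(immediate_consistent_point(random.uniform(0.0, CYCLE))
                  for _ in range(10_000))

# The commit-synchronized start wins far more of these races.
assert head_hits > reused_hits
```

Under these made-up parameters the commit-aligned start reaches an immediate consistent point far more often, matching the intuition that the difference is a timing artifact rather than extra work done by the patch.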
This probability is slightly lower with the V26 version.\nI felt this issue is purely a timing issue in a few cases because of\nthe timing of the new transactions being created while creating the\nslot.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 11 Aug 2023 16:26:26 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, 11 Aug 2023 at 16:26, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 9 Aug 2023 at 09:51, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi Melih,\n> >\n> > Here is a patch to help in getting the execution at various phases\n> > like: a) replication slot creation time, b) Wal reading c) Number of\n> > WAL records read d) subscription relation state change etc\n> > Couple of observation while we tested with this patch:\n> > 1) We noticed that the patch takes more time for finding the decoding\n> > start point.\n> > 2) Another observation was that the number of XLOG records read for\n> > identify the consistent point was significantly high with the v26_0001\n> > patch.\n> >\n> > HEAD\n> > postgres=# select avg(counttime)/1000 \"avgtime(ms)\",\n> > median(counttime)/1000 \"median(ms)\", min(counttime)/1000\n> > \"mintime(ms)\", max(counttime)/1000 \"maxtime(ms)\", logtype from test\n> > group by logtype;\n> > avgtime(ms) | median(ms) | mintime(ms) |\n> > maxtime(ms) | logtype\n> > ------------------------+------------------------+-------------+-------------+--------------------------\n> > 0.00579245283018867920 | 0.00200000000000000000 | 0 |\n> > 1 | SNAPSHOT_BUILD\n> > 1.2246811320754717 | 0.98550000000000000000 | 0 |\n> > 37 | LOGICAL_SLOT_CREATION\n> > 171.0863283018867920 | 183.9120000000000000 | 0 |\n> > 408 | FIND_DECODING_STARTPOINT\n> > 2.0699433962264151 | 1.4380000000000000 | 1 |\n> > 49 | INIT_DECODING_CONTEXT\n> > (4 rows)\n> >\n> > HEAD + v26-0001 patch\n> > postgres=# select 
avg(counttime)/1000 \"avgtime(ms)\",\n> > median(counttime)/1000 \"median(ms)\", min(counttime)/1000\n> > \"mintime(ms)\", max(counttime)/1000 \"maxtime(ms)\", logtype from test\n> > group by logtype;\n> > avgtime(ms) | median(ms) | mintime(ms) |\n> > maxtime(ms) | logtype\n> > ------------------------+------------------------+-------------+-------------+--------------------------\n> > 0.00588113207547169810 | 0.00500000000000000000 | 0 |\n> > 0 | SNAPSHOT_BUILD\n> > 1.1270962264150943 | 1.1000000000000000 | 0 |\n> > 2 | LOGICAL_SLOT_CREATION\n> > 301.1745528301886790 | 410.4870000000000000 | 0 |\n> > 427 | FIND_DECODING_STARTPOINT\n> > 1.4814660377358491 | 1.4530000000000000 | 1 |\n> > 9 | INIT_DECODING_CONTEXT\n> > (4 rows)\n> >\n> > In the above FIND_DECODING_STARTPOINT is very much higher with V26-0001 patch.\n> >\n> > HEAD\n> > FIND_DECODING_XLOG_RECORD_COUNT\n> > - average = 2762\n> > - median = 3362\n> >\n> > HEAD + reuse worker patch(v26_0001 patch)\n> > Where FIND_DECODING_XLOG_RECORD_COUNT\n> > - average = 4105\n> > - median = 5345\n> >\n> > Similarly Number of xlog records read is higher with v26_0001 patch.\n> >\n> > Steps to calculate the timing:\n> > -- first collect the necessary LOG from subscriber's log.\n> > cat *.log | grep -E\n> > '(LOGICAL_SLOT_CREATION|INIT_DECODING_CONTEXT|FIND_DECODING_STARTPOINT|SNAPSHOT_BUILD|FIND_DECODING_XLOG_RECORD_COUNT|LOGICAL_XLOG_READ|LOGICAL_DECODE_PROCESS_RECORD|LOGICAL_WAIT_TRANSACTION)'\n> > > grep.dat\n> >\n> > create table testv26(logtime varchar, pid varchar, level varchar,\n> > space varchar, logtype varchar, counttime int);\n> > -- then copy these datas into db table to count the avg number.\n> > COPY testv26 FROM '/home/logs/grep.dat' DELIMITER ' ';\n> >\n> > -- Finally, use the SQL to analyze the data:\n> > select avg(counttime)/1000 \"avgtime(ms)\", logtype from testv26 group by logtype;\n> >\n> > --- To get the number of xlog records read:\n> > select avg(counttime) from testv26 where logtype\n> > 
='FIND_DECODING_XLOG_RECORD_COUNT' and counttime != 1;\n> >\n> > Thanks to Peter and Hou-san who helped in finding these out. We are\n> > parallely analysing this, @Melih Mutlu posting this information so\n> > that it might help you too in analysing this issue.\n>\n> I analysed further on why it needs to read a larger number of XLOG\n> records in some cases while creating the replication slot, here are my\n> thoughts:\n> Note: Tablesync worker needs to connect to the publisher and create\n> consistent point for the slots by reading the XLOG records. This\n> requires that all the open transactions and the transactions that are\n> created while creating consistent point should be committed.\n> I feel the creation of slots is better in few cases in Head because:\n> Publisher | Subscriber\n> ------------------------------------------------------------\n> Begin txn1 transaction |\n> Insert 1..1000 records |\n> Commit |\n> Begin txn2 transaction |\n> Insert 1..1000 records | Apply worker applies transaction txn1\n> | Start tablesync table t2\n> | create consistent point in\n> | publisher before transaction txn3 is\n> | started\n> commit | We just need to wait till\n> | transaction txn2 is finished.\n> Begin txn3 transaction |\n> Insert 1..1000 records |\n> commit |\n>\n> In V26, this is happening in some cases:\n> Publisher | Subscriber\n> ------------------------------------------------------------\n> Begin txn1 transaction |\n> Insert 1..1000 records |\n> Commit |\n> Begin txn2 transaction |\n> Insert 1..1000 records | Apply worker applies transaction txn1\n> | Start tablesync table t2\n> commit | Create consistent point\n> Begin txn3 transaction | (since transaction txn2 is committed\n> | and txn3 is started, we will\n> | need to wait\n> | for transaction txn3 to be\n> | committed)\n> Insert 1..1000 records |\n> commit |\n>\n> This is because In HEAD the tablesync worker will be started after one\n> commit, so we are able to create the consistent point before a new\n> 
transaction is started in some cases.\n> Create slot will be fastest if the tablesync worker is able to connect\n> to the publisher and create a consistent point before the new\n> transaction is started. The probability of this is better in HEAD for\n> this scenario as the new tablesync worker is started after commit and\n> the tablesync worker in HEAD has a better time window(because the\n> current transaction has just started) before another new transaction\n> is started. This probability is slightly lower with the V26 version.\n> I felt this issue is purely a timing issue in a few cases because of\n> the timing of the new transactions being created while creating the\n> slot.\n\nI used the following steps to analyse this issue:\nLogs can be captured by applying the patches at [1].\n\n-- first collect the necessary information about from publisher's log\nfrom the execution of HEAD:\ncat *.log | grep FIND_DECODING_XLOG_RECORD_COUNT > grep_head.dat\n\n-- first collect the necessary information about from publisher's log\nfrom the execution of v26:\ncat *.log | grep FIND_DECODING_XLOG_RECORD_COUNT > grep_v26.dat\n\n-- then copy these datas into HEAD's db table to count the avg number.\nCOPY test_head FROM '/home/logs/grep_head.dat' DELIMITER ' ';\n\n-- then copy these datas into the v26 db table to count the avg number.\nCOPY test_v26 FROM '/home/logs/grep_v26.dat' DELIMITER ' ';\n\nFind the average of XLOG records read in HEAD:\npostgres=# select avg(counttime) from test_head where logtype\n='FIND_DECODING_XLOG_RECORD_COUNT' and counttime != 1;\n avg\n-----------------------\n 1394.1100000000000000\n(1 row)\n\nFind the average of XLOG records read in V26:\npostgres=# select avg(counttime) from test_v26 where logtype\n='FIND_DECODING_XLOG_RECORD_COUNT' and counttime != 1;\n avg\n-----------------------\n 1900.4100000000000000\n(1 row)\n\nWhen analysing why create replication slot needs to read more records\nin a few cases, I found a very interesting observation. 
I found that\nwith HEAD about 29% (29 out of 100 tables) of tables could find the\nconsistent point by reading the WAL records up to the next subsequent\nCOMMIT, whereas with V26 patch only 5% of tables could find the\nconsistent point by reading the WAL records up to next subsequent\ncommit. In these cases V26 patch had to read another transaction of\napproximately > 1000 WAL records to reach the consistent point which\nresults in an increase of average for more records to be read with V26\nversion. For these I got the start lsn and consistent lsn from the log\nfiles by matching the corresponding FIND_DECODING_XLOG_RECORD_COUNT, I\ndid a waldump of the WAL file and searched the records between start\nlsn and consistent LSN in the WAL dump and confirmed that only one\nCOMMIT record had to be read to reach the consistent point. Details of\nthis information from the log of HEAD and V26 is attached.\n\nThe number of tables required to read less than 1 commit can be found\nby the following:\n-- I checked for 1000 WAL records because we are having 1000 inserts\nin each transaction.\nselect count(counttime) from test_head where logtype\n='FIND_DECODING_XLOG_RECORD_COUNT' and counttime < 1000;\n count\n-------\n 29\n(1 row)\n\nselect count(counttime) from test_v26 where logtype\n='FIND_DECODING_XLOG_RECORD_COUNT' and counttime < 1000;\n count\n-------\n 5\n(1 row)\n\nApart from these there were other instances where the V26 had to read\nmore COMMIT record in few cases.\nThe above is happening because as mentioned in [2]. i.e. in HEAD the\ntablesync worker will be started after one commit, so we are able to\ncreate the consistent point before a new transaction is started in\nsome cases. Create slot will be fastest if the tablesync worker is\nable to connect to the publisher and create a consistent point before\nthe new transaction is started. 
The probability of this is better in\nHEAD for this scenario as the new tablesync worker is started after\ncommit and the tablesync worker in HEAD has a better time\nwindow(because the current transaction has just started) before\nanother new transaction is started. This probability is slightly\nlower with the V26 version. I felt this issue is purely a timing issue\nin a few cases because of the timing of the new transactions being\ncreated while creating the slot.\nSince this is purely a timing issue as explained above in a few cases\nbecause of the timing of the new transactions being created while\ncreating the slot, I felt we can ignore this.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm1TA068E2niJFUR9ig%2BYz3-ank%3Dj5%3Dj-2UocbzaDnQPrA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CALDaNm2k2z3Hpa3Omb_tpxWkyHnUvsWjJMbqDs-2uD2eLzemJQ%40mail.gmail.com\n\nRegards,\nVignesh", "msg_date": "Fri, 11 Aug 2023 17:54:03 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Peter,\n\nPeter Smith <smithpb2250@gmail.com>, 11 Ağu 2023 Cum, 01:26 tarihinde şunu\nyazdı:\n\n> No, I meant what I wrote there. When I ran the tests the HEAD included\n> the v25-0001 refactoring patch, but v26 did not yet exist.\n>\n> For now, we are only performance testing the first\n> \"Reuse-Tablesyc-Workers\" patch, but not yet including the second patch\n> (\"Reuse connection when...\").\n>\n> Note that those \"Reuse-Tablesyc-Workers\" patches v24-0002 and v26-0001\n> are equivalent because there are only cosmetic log message differences\n> between them.\n>\n\nOk, that's fair.\n\n\n\n> So, my testing was with HEAD+v24-0002 (but not including v24-0003).\n> Your same testing should be with HEAD+v26-0001 (but not including\n> v26-0002).\n>\n\nThat's actually what I did. 
I should have been more clear about what I\nincluded in my previous email.With v26-0002, results are noticeably better\nanyway.\nI just rerun the test again against HEAD, HEAD+v26-0001 and additionally\nHEAD+v26-0001+v26-0002 this time, for better comparison.\n\nHere are my results with the same scripts you shared earlier (I obviously\nonly changed the number of inserts before each commit. ).\nNote that this is when synchronous_commit = off.\n\n100 inserts/tx\n+-------------+-------+------+------+------+\n| | 2w | 4w | 8w | 16w |\n+-------------+-------+------+------+------+\n| v26-0002 | 10421 | 6472 | 6656 | 6566 |\n+-------------+-------+------+------+------+\n| improvement | 31% | 12% | 0% | 5% |\n+-------------+-------+------+------+------+\n| v26-0001 | 14585 | 7386 | 7129 | 7274 |\n+-------------+-------+------+------+------+\n| improvement | 9% | 5% | 12% | 7% |\n+-------------+-------+------+------+------+\n| HEAD | 16130 | 7785 | 8147 | 7827 |\n+-------------+-------+------+------+------+\n\n1000 inserts/tx\n+-------------+-------+------+------+------+\n| | 2w | 4w | 8w | 16w |\n+-------------+-------+------+------+------+\n| v26-0002 | 13796 | 6848 | 5942 | 6315 |\n+-------------+-------+------+------+------+\n| improvement | 9% | 7% | 10% | 8% |\n+-------------+-------+------+------+------+\n| v26-0001 | 14685 | 7325 | 6675 | 6719 |\n+-------------+-------+------+------+------+\n| improvement | 3% | 0% | 0% | 2% |\n+-------------+-------+------+------+------+\n| HEAD | 15118 | 7354 | 6644 | 6890 |\n+-------------+-------+------+------+------+\n\n2000 inserts/tx\n+-------------+-------+-------+------+------+\n| | 2w | 4w | 8w | 16w |\n+-------------+-------+-------+------+------+\n| v26-0002 | 22442 | 9944 | 6034 | 5829 |\n+-------------+-------+-------+------+------+\n| improvement | 5% | 2% | 4% | 10% |\n+-------------+-------+-------+------+------+\n| v26-0001 | 23632 | 10164 | 6311 | 6480 |\n+-------------+-------+-------+------+------+\n| 
improvement | 0%    | 0%    | 0%   | 0%   |\n+-------------+-------+-------+------+------+\n| HEAD        | 23667 | 10157 | 6285 | 6470 |\n+-------------+-------+-------+------+------+\n\n5000 inserts/tx\n+-------------+-------+-------+-------+------+\n|             | 2w    | 4w    | 8w    | 16w  |\n+-------------+-------+-------+-------+------+\n| v26-0002    | 41443 | 21385 | 10832 | 6146 |\n+-------------+-------+-------+-------+------+\n| improvement | 0%    | 0%    | 1%    | 16%  |\n+-------------+-------+-------+-------+------+\n| v26-0001    | 41293 | 21226 | 10814 | 6158 |\n+-------------+-------+-------+-------+------+\n| improvement | 0%    | 1%    | 1%    | 15%  |\n+-------------+-------+-------+-------+------+\n| HEAD        | 41503 | 21466 | 10943 | 7292 |\n+-------------+-------+-------+-------+------+\n\n\nAgain, I couldn't reproduce the cases where you saw significantly degraded\nperformance. I wonder if I'm missing something. Did you do anything not\nincluded in the test scripts you shared? Do you think v26-0001 will\nperform 84% worse than HEAD, if you try again? I just want to be sure that\nit was not a random thing.\nInterestingly, I also don't see an improvement in above results as big as\nin your results when inserts/tx ratio is smaller. Even though it certainly\nis improved in such cases.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Fri, 11 Aug 2023 16:45:39 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Aug 11, 2023 at 7:15 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com>, 11 Ağu 2023 Cum, 01:26 tarihinde şunu yazdı:\n>>\n>> No, I meant what I wrote there. 
When I ran the tests the HEAD included\n>> the v25-0001 refactoring patch, but v26 did not yet exist.\n>>\n>> For now, we are only performance testing the first\n>> \"Reuse-Tablesyc-Workers\" patch, but not yet including the second patch\n>> (\"Reuse connection when...\").\n>>\n>> Note that those \"Reuse-Tablesyc-Workers\" patches v24-0002 and v26-0001\n>> are equivalent because there are only cosmetic log message differences\n>> between them.\n>\n>\n> Ok, that's fair.\n>\n>\n>>\n>> So, my testing was with HEAD+v24-0002 (but not including v24-0003).\n>> Your same testing should be with HEAD+v26-0001 (but not including v26-0002).\n>\n>\n> That's actually what I did. I should have been more clear about what I included in my previous email.With v26-0002, results are noticeably better anyway.\n> I just rerun the test again against HEAD, HEAD+v26-0001 and additionally HEAD+v26-0001+v26-0002 this time, for better comparison.\n>\n> Here are my results with the same scripts you shared earlier (I obviously only changed the number of inserts before each commit. 
).\n> Note that this is when synchronous_commit = off.\n>\n> 100 inserts/tx\n> +-------------+-------+------+------+------+\n> | | 2w | 4w | 8w | 16w |\n> +-------------+-------+------+------+------+\n> | v26-0002 | 10421 | 6472 | 6656 | 6566 |\n> +-------------+-------+------+------+------+\n> | improvement | 31% | 12% | 0% | 5% |\n> +-------------+-------+------+------+------+\n> | v26-0001 | 14585 | 7386 | 7129 | 7274 |\n> +-------------+-------+------+------+------+\n> | improvement | 9% | 5% | 12% | 7% |\n> +-------------+-------+------+------+------+\n> | HEAD | 16130 | 7785 | 8147 | 7827 |\n> +-------------+-------+------+------+------+\n>\n> 1000 inserts/tx\n> +-------------+-------+------+------+------+\n> | | 2w | 4w | 8w | 16w |\n> +-------------+-------+------+------+------+\n> | v26-0002 | 13796 | 6848 | 5942 | 6315 |\n> +-------------+-------+------+------+------+\n> | improvement | 9% | 7% | 10% | 8% |\n> +-------------+-------+------+------+------+\n> | v26-0001 | 14685 | 7325 | 6675 | 6719 |\n> +-------------+-------+------+------+------+\n> | improvement | 3% | 0% | 0% | 2% |\n> +-------------+-------+------+------+------+\n> | HEAD | 15118 | 7354 | 6644 | 6890 |\n> +-------------+-------+------+------+------+\n>\n> 2000 inserts/tx\n> +-------------+-------+-------+------+------+\n> | | 2w | 4w | 8w | 16w |\n> +-------------+-------+-------+------+------+\n> | v26-0002 | 22442 | 9944 | 6034 | 5829 |\n> +-------------+-------+-------+------+------+\n> | improvement | 5% | 2% | 4% | 10% |\n> +-------------+-------+-------+------+------+\n> | v26-0001 | 23632 | 10164 | 6311 | 6480 |\n> +-------------+-------+-------+------+------+\n> | improvement | 0% | 0% | 0% | 0% |\n> +-------------+-------+-------+------+------+\n> | HEAD | 23667 | 10157 | 6285 | 6470 |\n> +-------------+-------+-------+------+------+\n>\n> 5000 inserts/tx\n> +-------------+-------+-------+-------+------+\n> | | 2w | 4w | 8w | 16w |\n> 
+-------------+-------+-------+-------+------+\n> | v26-0002 | 41443 | 21385 | 10832 | 6146 |\n> +-------------+-------+-------+-------+------+\n> | improvement | 0% | 0% | 1% | 16% |\n> +-------------+-------+-------+-------+------+\n> | v26-0001 | 41293 | 21226 | 10814 | 6158 |\n> +-------------+-------+-------+-------+------+\n> | improvement | 0% | 1% | 1% | 15% |\n> +-------------+-------+-------+-------+------+\n> | HEAD | 41503 | 21466 | 10943 | 7292 |\n> +-------------+-------+-------+-------+------+\n>\n>\n> Again, I couldn't reproduce the cases where you saw significantly degraded performance.\n>\n\nI am not surprised to see that you don't see regression because as per\nVignesh's analysis, this is purely a timing issue where sometimes\nafter the patch the slot creation can take more time because there is\na constant inflow of transactions on the publisher. I think we are\nseeing it because this workload is predominantly just creating and\ndestroying slots. We can probably improve it later as discussed\nearlier by using a single for multiple copies (especially for small\ntables) or something like that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 12 Aug 2023 19:21:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, Aug 10, 2023 at 10:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 9, 2023 at 8:28 AM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Thursday, August 3, 2023 7:30 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > > Right. 
I attached the v26 as you asked.\n> >\n> > Thanks for posting the patches.\n> >\n> > While reviewing the patch, I noticed one rare case that it's possible that there\n> > are two table sync worker for the same table in the same time.\n> >\n> > The patch relies on LogicalRepWorkerLock to prevent concurrent access, but the\n> > apply worker will start a new worker after releasing the lock. So, at the point[1]\n> > where the lock is released and the new table sync worker has not been started,\n> > it seems possible that another old table sync worker will be reused for the\n> > same table.\n> >\n> > /* Now safe to release the LWLock */\n> > LWLockRelease(LogicalRepWorkerLock);\n> > *[1]\n> > /*\n> > * If there are free sync worker slot(s), start a new sync\n> > * worker for the table.\n> > */\n> > if (nsyncworkers < max_sync_workers_per_subscription)\n> > ...\n> > logicalrep_worker_launch(MyLogicalRepWorker->dbid,\n> >\n>\n> Yeah, this is a problem. I think one idea to solve this is by\n> extending the lock duration till we launch the tablesync worker but we\n> should also consider changing this locking scheme such that there is a\n> better way to indicate that for a particular rel, tablesync is in\n> progress. Currently, the code in TablesyncWorkerMain() also acquires\n> the lock in exclusive mode even though the tablesync for a rel is in\n> progress which I guess could easily hurt us for larger values of\n> max_logical_replication_workers. So, that could be another motivation\n> to think for a different locking scheme.\n>\n\nYet another problem is that currently apply worker maintains a hash\ntable for 'last_start_times' to avoid restarting the tablesync worker\nimmediately upon error. The same functionality is missing while\nreusing the table sync worker. 
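For illustration only, the kind of throttling being referenced — remember when a sync of each relation last started, and refuse to restart it until wal_retrieve_retry_interval has passed — could be sketched like this (Python with assumed names; the real implementation is C state inside the apply worker):

```python
WAL_RETRIEVE_RETRY_INTERVAL = 5.0  # seconds, mirroring the GUC's default

# relid -> time the last sync attempt for that relation started. Today
# this hash lives in the single apply worker; reused tablesync workers
# would need an equivalent shared structure, as discussed here.
last_start_times = {}

def may_start_sync(relid, now):
    """Allow a (re)start only if the previous attempt for this relation
    is older than the retry interval."""
    last = last_start_times.get(relid)
    if last is not None and now - last < WAL_RETRIEVE_RETRY_INTERVAL:
        return False  # the last attempt failed too recently: throttle
    last_start_times[relid] = now
    return True

assert may_start_sync(16384, now=100.0)      # first attempt proceeds
assert not may_start_sync(16384, now=102.0)  # crashed quickly: held back
assert may_start_sync(16384, now=106.0)      # interval elapsed: retry
```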
One possibility is to use a shared hash\ntable to remember start times but I think it may depend on what we\ndecide to solve the previous problem reported by Hou-San.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 14 Aug 2023 15:37:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Thu, 10 Aug 2023 at 10:16, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 9, 2023 at 8:28 AM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Thursday, August 3, 2023 7:30 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > > Right. I attached the v26 as you asked.\n> >\n> > Thanks for posting the patches.\n> >\n> > While reviewing the patch, I noticed one rare case that it's possible that there\n> > are two table sync worker for the same table in the same time.\n> >\n> > The patch relies on LogicalRepWorkerLock to prevent concurrent access, but the\n> > apply worker will start a new worker after releasing the lock. So, at the point[1]\n> > where the lock is released and the new table sync worker has not been started,\n> > it seems possible that another old table sync worker will be reused for the\n> > same table.\n> >\n> > /* Now safe to release the LWLock */\n> > LWLockRelease(LogicalRepWorkerLock);\n> > *[1]\n> > /*\n> > * If there are free sync worker slot(s), start a new sync\n> > * worker for the table.\n> > */\n> > if (nsyncworkers < max_sync_workers_per_subscription)\n> > ...\n> > logicalrep_worker_launch(MyLogicalRepWorker->dbid,\n> >\n>\n> Yeah, this is a problem. I think one idea to solve this is by\n> extending the lock duration till we launch the tablesync worker but we\n> should also consider changing this locking scheme such that there is a\n> better way to indicate that for a particular rel, tablesync is in\n> progress. 
Currently, the code in TablesyncWorkerMain() also acquires\n> the lock in exclusive mode even though the tablesync for a rel is in\n> progress which I guess could easily hurt us for larger values of\n> max_logical_replication_workers. So, that could be another motivation\n> to think of a different locking scheme.\n\nThere are a couple of ways in which this issue can be solved:\nApproach #1) check that the reuse worker has not picked up this table\nfor table sync from logicalrep_worker_launch while holding a lock on\nLogicalRepWorkerLock, if the reuse worker has already picked it up for\nprocessing, simply ignore it and return, nothing has to be done by the\nlauncher in this case.\nApproach #2) a) Applyworker to create a shared memory of all the\nrelations that need to be synced, b) tablesync worker to take a lock\non this shared memory and pick the next table to be\nprocessed(tablesync worker need not get the subscription relations\nagain and again) c) tablesync worker to update the status in shared\nmemory for the relation(since the lock is held there will be no\nconcurrency issues), also mark the start time in the shared memory,\nthis will help in not restarting the failed table before\nwal_retrieve_retry_interval has expired d) tablesync worker to sync\nthe table e) subscription relation will be marked as ready and the\ntablesync worker to remove the entry from shared memory f) Applyworker\nwill periodically synchronize the shared memory relations to keep it\nin sync with the fetched subscription relation tables g) when apply\nworker exits, the shared memory will be cleared.\n\nApproach #2) will also help in solving the other issue reported by Amit at [1].\nI felt we can use Approach #2 to solve the problem as it solves both\nthe reported issues and also there is an added advantage where the\nre-use table sync worker need not scan the pg_subscription_rel to get\nthe non-ready table for every run, instead we can use the list\nprepared by apply 
worker.\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KyHfVOVeio28p8CHDnuyKuej78cj_7U9mHAB4ictVQwQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 14 Aug 2023 17:29:30 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Here is another review comment about patch v26-0001.\n\nThe tablesync worker processes include the 'relid' as part of their\nname. See launcher.c:\n\nsnprintf(bgw.bgw_name, BGW_MAXLEN,\n \"logical replication tablesync worker for subscription %u sync %u\",\n subid,\n relid);\n\n~~\n\nAnd if that worker is \"reused\" by v26-0001 to process another relation\nthere is a LOG\n\nif (reuse_worker)\n ereport(LOG,\n errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\" will be reused to sync table \\\"%s\\\" with relid\n%u.\",\n MySubscription->name,\n get_rel_name(MyLogicalRepWorker->relid),\n MyLogicalRepWorker->relid));\n\n\nAFAICT, when being \"reused\" the original process name remains\nunchanged, and so I think it will continue to appear to any user\nlooking at it that the tablesync process is just taking a very long\ntime handling the original 'relid'.\n\nWon't the stale process name cause confusion to the users?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 16 Aug 2023 09:18:02 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Fri, Aug 11, 2023 at 11:45 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Again, I couldn't reproduce the cases where you saw significantly degraded performance. I wonder if I'm missing something. Did you do anything not included in the test scripts you shared? 
Do you think v26-0001 will perform 84% worse than HEAD, if you try again? I just want to be sure that it was not a random thing.\n> Interestingly, I also don't see an improvement in the above results as big as in your results when inserts/tx ratio is smaller. Even though it certainly is improved in such cases.\n>\n\nTEST ENVIRONMENTS\n\nI am running the tests on a high-spec machine:\n\n-- NOTE: Nobody else is using this machine during our testing, so\nthere are no unexpected influences messing up the results.\n\n\nLinux\n\nArchitecture: x86_64\nCPU(s): 120\nThread(s) per core: 2\nCore(s) per socket: 15\n\n total used free shared buff/cache available\nMem: 755G 5.7G 737G 49M 12G 748G\nSwap: 4.0G 0B 4.0G\n\n~~~\n\nThe results I am seeing are not random. HEAD+v26-0001 is consistently\nworse than HEAD but only for some settings. With these settings, I see\nbad results (i.e. worse than HEAD) consistently every time using the\ndedicated test machine.\n\nHou-san also reproduced bad results using a different high-spec machine.\n\nVignesh also reproduced bad results using just his laptop but in his\ncase, it did *not* occur every time. As discussed elsewhere the\nproblem is timing-related, so sometimes you may be lucky and sometimes\nnot.\n\n~\n\nI expect you are running everything correctly, but if you are using\njust a laptop (like Vignesh) then like him you might need to try\nmultiple times before you can hit the problem happening in your\nenvironment.\n\nAnyway, in case there is some other reason you are not seeing the bad\nresults I have re-attached scripts and re-described every step below.\n\n======\n\nBUILDING\n\n-- NOTE: I have a very minimal configuration without any\noptimization/debug flags etc. 
See config.log\n\n$ ./configure --prefix=/home/peter/pg_oss\n\n-- NOTE: Of course, make sure to be running using the correct Postgres:\n\necho 'set environment variables for OSS work'\nexport PATH=/home/peter/pg_oss/bin:$PATH\n\n-- NOTE: Be sure to do git stash or whatever so you don't accidentally\nbuild a patched version thinking it is the HEAD version\n-- NOTE: Be sure to do a full clean build and apply (or don't apply\nv26-0001) according to the test you wish to run.\n\nSTEPS\n1. sudo make clean\n2. make\n3. sudo make install\n\n======\n\nSCRIPTS & STEPS\n\nSCRIPTS\ntestrun.sh\ndo_one_test_setup.sh\ndo_one_test_PUB.sh\ndo_one_test_SUB.sh\n\n---\n\nSTEPS\n\nStep-1. Edit the testrun.sh\n\ntables=( 100 )\nworkers=( 2 4 8 16 )\nsize=\"0\"\nprefix=\"0816headbusy\" <-- edit to differentiate each test run\n\n~\n\nStep-2. Edit the do_one_test_PUB.sh\nIF commit_counter = 1000 THEN <-- edit this if needed. I wanted 1000\ninserts/tx so nothing to do\n\n~\n\nStep-3: Check nothing else is running. If yes, then clean it up\n[peter@localhost testing_busy]$ ps -eaf | grep postgres\npeter 111924 100103 0 19:31 pts/0 00:00:00 grep --color=auto postgres\n\n~\n\nStep-4: Run the tests\n[peter@localhost testing_busy]$ ./testrun.sh\nnum_tables=100, size=0, num_workers=2, run #1 <-- check the echo\nmatched the config you set in the Step-1\nwaiting for server to shut down.... done\nserver stopped\nwaiting for server to shut down.... done\nserver stopped\nnum_tables=100, size=0, num_workers=2, run #2\nwaiting for server to shut down.... done\nserver stopped\nwaiting for server to shut down.... 
done\nserver stopped\nnum_tables=100, size=0, num_workers=2, run #3\n...\n\n~\n\nStep-5: Sanity check\nWhen the test completes the current folder will be full of .log and .dat* files.\nCheck for sanity that no errors happened\n\n[peter@localhost testing_busy]$ cat *.log | grep ERROR\n[peter@localhost testing_busy]$\n\n~\n\nStep-6: Collect the results\nThe results are output (by the do_one_test_SUB.sh) into the *.dat_SUB files\nUse grep to extract them\n\n[peter@localhost testing_busy]$ cat 0816headbusy_100t_0_2w_*.dat_SUB |\ngrep RESULT | grep -v duration | awk '{print $3}'\n11742.019\n12157.355\n11773.807\n11582.981\n12220.962\n12546.325\n12210.713\n12614.892\n12015.489\n13527.05\n\nRepeat grep for other files:\n$ cat 0816headbusy_100t_0_4w_*.dat_SUB | grep RESULT | grep -v\nduration | awk '{print $3}'\n$ cat 0816headbusy_100t_0_8w_*.dat_SUB | grep RESULT | grep -v\nduration | awk '{print $3}'\n$ cat 0816headbusy_100t_0_16w_*.dat_SUB | grep RESULT | grep -v\nduration | awk '{print $3}'\n\n~\n\nStep-7: Summarise the results\nNow I just cut/paste the results from Step-6 into a spreadsheet and\nreport the median of the runs.\n\nFor example, for the above HEAD run, it was:\n 2w 4w 8w 16w\n1 11742 5996 1919 1582\n2 12157 5960 1871 1469\n3 11774 5926 2101 1571\n4 11583 6155 1883 1671\n5 12221 6310 1895 1707\n6 12546 6166 1900 1470\n7 12211 6114 2477 1587\n8 12615 6173 2610 1715\n9 12015 5869 2110 1673\n10 13527 5913 2144 1227\nMedian 12184 6055 2010 1584\n\n~\n\nStep-8: REPEAT\n-- repeat all above for different size transactions (editing do_one_test_PUB.sh)\n-- repeat all above after rebuilding again with HEAD+v26-0001\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 16 Aug 2023 13:53:47 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi Melih,\n\nLast week we revisited your implementation of 
design#2. Vignesh rebased it,\nand then made a few other changes.\n\nPSA v28*\n\nThe patch changes include:\n* changed the logic slightly by setting recv_immediately(new variable), if\nthis variable is set the main apply worker loop will not wait in this case.\n* setting the relation state to ready immediately if there are no more\nincremental changes to be synced.\n* receive the incremental changes if applicable and set the relation state\nto ready without waiting.\n* reuse the worker if the worker is free before trying to start a new table\nsync worker\n* restarting the tablesync worker only after wal_retrieve_retry_interval\n\n~\n\nFWIW, we just wanted to share with you the performance measurements seen\nusing this design#2 patch set:\n\n======\n\nRESULTS (not busy tests)\n\n------\n10 empty tables\n 2w 4w 8w 16w\nHEAD: 125 119 140 133\nHEAD+v28*: 92 93 123 134\n%improvement: 27% 22% 12% -1%\n------\n100 empty tables\n 2w 4w 8w 16w\nHEAD: 1037 843 1109 1155\nHEAD+v28*: 591 625 2616 2569\n%improvement: 43% 26% -136% -122%\n------\n1000 empty tables\n 2w 4w 8w 16w\nHEAD: 15874 10047 9919 10338\nHEAD+v28*: 33673 12199 9094 9896\n%improvement: -112% -21% 8% 4%\n------\n2000 empty tables\n 2w 4w 8w 16w\nHEAD: 45266 24216 19395 19820\nHEAD+v28*: 88043 21550 21668 22607\n%improvement: -95% 11% -12% -14%\n\n~~~\n\nNote - the results were varying quite a lot in comparison to the HEAD\ne.g. HEAD results are very consistent, but the v28* results observed are not\nHEAD 1000 (2w): 15861, 15777, 16007, 15950, 15886, 15740, 15846, 15740,\n15908, 15940\nv28* 1000 (2w): 34214, 13679, 8792, 33289, 31976, 56071, 57042, 56163,\n34058, 11969\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nHi Melih,Last week we revisited your implementation of design#2. 
Vignesh rebased it, and then made a few other changes.PSA v28*The patch changes include:* changed the logic slightly by setting recv_immediately(new variable), if this variable is set the main apply worker loop will not wait in this case.* setting the relation state to ready immediately if there are no more incremental changes to be synced.* receive the incremental changes if applicable and set the relation state to ready without waiting.* reuse the worker if the worker is free before trying to start a new table sync worker* restarting the tablesync worker only after wal_retrieve_retry_interval~FWIW, we just wanted to share with you the performance measurements seen using this design#2 patch set:======RESULTS (not busy tests)------10 empty tables                2w      4w      8w      16wHEAD:           125     119     140     133HEAD+v28*:      92      93      123     134%improvement:   27%     22%     12%     -1%------100 empty tables                2w      4w      8w      16wHEAD:           1037    843     1109    1155HEAD+v28*:      591     625     2616    2569%improvement:   43%     26%     -136%   -122%------1000 empty tables                2w      4w      8w      16wHEAD:           15874   10047   9919    10338HEAD+v28*:      33673   12199   9094    9896%improvement:   -112%   -21%    8%      4%------2000 empty tables                2w      4w      8w      16wHEAD:           45266   24216   19395   19820HEAD+v28*:      88043   21550   21668   22607%improvement:  -95%     11%     -12%    -14%~~~Note - the results were varying quite a lot in comparison to the HEAD e.g. 
HEAD results are very consistent, but the v28* results observed are notHEAD 1000 (2w): 15861, 15777, 16007, 15950, 15886, 15740, 15846, 15740, 15908, 15940v28* 1000 (2w):  34214, 13679, 8792, 33289, 31976, 56071, 57042, 56163, 34058, 11969------Kind Regards,Peter Smith.Fujitsu Australia", "msg_date": "Mon, 21 Aug 2023 17:56:27 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Oops - now with attachments\n\nOn Mon, Aug 21, 2023 at 5:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> Hi Melih,\n>\n> Last week we revisited your implementation of design#2. Vignesh rebased\n> it, and then made a few other changes.\n>\n> PSA v28*\n>\n> The patch changes include:\n> * changed the logic slightly by setting recv_immediately(new variable), if\n> this variable is set the main apply worker loop will not wait in this case.\n> * setting the relation state to ready immediately if there are no more\n> incremental changes to be synced.\n> * receive the incremental changes if applicable and set the relation state\n> to ready without waiting.\n> * reuse the worker if the worker is free before trying to start a new\n> table sync worker\n> * restarting the tablesync worker only after wal_retrieve_retry_interval\n>\n> ~\n>\n> FWIW, we just wanted to share with you the performance measurements seen\n> using this design#2 patch set:\n>\n> ======\n>\n> RESULTS (not busy tests)\n>\n> ------\n> 10 empty tables\n> 2w 4w 8w 16w\n> HEAD: 125 119 140 133\n> HEAD+v28*: 92 93 123 134\n> %improvement: 27% 22% 12% -1%\n> ------\n> 100 empty tables\n> 2w 4w 8w 16w\n> HEAD: 1037 843 1109 1155\n> HEAD+v28*: 591 625 2616 2569\n> %improvement: 43% 26% -136% -122%\n> ------\n> 1000 empty tables\n> 2w 4w 8w 16w\n> HEAD: 15874 10047 9919 10338\n> HEAD+v28*: 33673 12199 9094 9896\n> %improvement: -112% -21% 8% 4%\n> ------\n> 2000 empty tables\n> 2w 4w 8w 
16w\n> HEAD: 45266 24216 19395 19820\n> HEAD+v28*: 88043 21550 21668 22607\n> %improvement: -95% 11% -12% -14%\n>\n> ~~~\n>\n> Note - the results were varying quite a lot in comparison to the HEAD\n> e.g. HEAD results are very consistent, but the v28* results observed are\n> not\n> HEAD 1000 (2w): 15861, 15777, 16007, 15950, 15886, 15740, 15846, 15740,\n> 15908, 15940\n> v28* 1000 (2w): 34214, 13679, 8792, 33289, 31976, 56071, 57042, 56163,\n> 34058, 11969\n>\n> ------\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>", "msg_date": "Mon, 21 Aug 2023 17:58:25 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "Hi,\n\nThis patch is not applying on the HEAD. Please rebase and share the\nupdated patch.\n\nThanks and Regards\nShlok Kyal\n\nOn Wed, 10 Jan 2024 at 14:55, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Oops - now with attachments\n>\n> On Mon, Aug 21, 2023 at 5:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>> Hi Melih,\n>>\n>> Last week we revisited your implementation of design#2. 
Vignesh rebased it, and then made a few other changes.\n>>\n>> PSA v28*\n>>\n>> The patch changes include:\n>> * changed the logic slightly by setting recv_immediately(new variable), if this variable is set the main apply worker loop will not wait in this case.\n>> * setting the relation state to ready immediately if there are no more incremental changes to be synced.\n>> * receive the incremental changes if applicable and set the relation state to ready without waiting.\n>> * reuse the worker if the worker is free before trying to start a new table sync worker\n>> * restarting the tablesync worker only after wal_retrieve_retry_interval\n>>\n>> ~\n>>\n>> FWIW, we just wanted to share with you the performance measurements seen using this design#2 patch set:\n>>\n>> ======\n>>\n>> RESULTS (not busy tests)\n>>\n>> ------\n>> 10 empty tables\n>> 2w 4w 8w 16w\n>> HEAD: 125 119 140 133\n>> HEAD+v28*: 92 93 123 134\n>> %improvement: 27% 22% 12% -1%\n>> ------\n>> 100 empty tables\n>> 2w 4w 8w 16w\n>> HEAD: 1037 843 1109 1155\n>> HEAD+v28*: 591 625 2616 2569\n>> %improvement: 43% 26% -136% -122%\n>> ------\n>> 1000 empty tables\n>> 2w 4w 8w 16w\n>> HEAD: 15874 10047 9919 10338\n>> HEAD+v28*: 33673 12199 9094 9896\n>> %improvement: -112% -21% 8% 4%\n>> ------\n>> 2000 empty tables\n>> 2w 4w 8w 16w\n>> HEAD: 45266 24216 19395 19820\n>> HEAD+v28*: 88043 21550 21668 22607\n>> %improvement: -95% 11% -12% -14%\n>>\n>> ~~~\n>>\n>> Note - the results were varying quite a lot in comparison to the HEAD\n>> e.g. 
HEAD results are very consistent, but the v28* results observed are not\n>> HEAD 1000 (2w): 15861, 15777, 16007, 15950, 15886, 15740, 15846, 15740, 15908, 15940\n>> v28* 1000 (2w): 34214, 13679, 8792, 33289, 31976, 56071, 57042, 56163, 34058, 11969\n>>\n>> ------\n>> Kind Regards,\n>> Peter Smith.\n>> Fujitsu Australia\n\n\n", "msg_date": "Wed, 10 Jan 2024 14:59:22 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, Jan 10, 2024 at 2:59 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n>\n> This patch is not applying on the HEAD. Please rebase and share the\n> updated patch.\n>\n\nIIRC, there were some regressions observed with this patch. So, one\nneeds to analyze those as well. I think we should mark it as \"Returned\nwith feedback\".\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:04:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" }, { "msg_contents": "On Wed, 10 Jan 2024 at 15:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 10, 2024 at 2:59 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n> >\n> > This patch is not applying on the HEAD. Please rebase and share the\n> > updated patch.\n> >\n>\n> IIRC, there were some regressions observed with this patch. So, one\n> needs to analyze those as well. 
I think we should mark it as \"Returned\n> with feedback\".\n\nThanks, I have updated the status to \"Returned with feedback\".\nFeel free to post an updated version with the fix for the regression\nand start a new entry for the same.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 10 Jan 2024 18:40:12 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reuse Workers and Replication Slots during Logical\n Replication" } ]
[ { "msg_contents": "Hi,\n\nThere's a number of symbols that are exported by libpq that are also in\nbinaries (mostly via pgport). That strikes me as not great, because the\nbehaviour in those cases isn't particularly well defined / OS dependent /\nlinker option dependent.\n\nHow about renaming those functions, turning the functions exported by libpq\ninto wrappers?\n\nThis is at least:\n\npqsignal\npg_char_to_encoding\npg_valid_server_encoding_id\npg_valid_server_encoding\npg_encoding_to_char\npg_utf_mblen\n\nI'm not quite sure why we export some of these, but we likely don't want to\nchange that, given the API/ABI break that'd cause.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Jul 2022 13:47:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "symbol \"conflicts\" between libpq and binaries" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17540\nLogged by: William Duclot\nEmail address: william.duclot@gmail.com\nPostgreSQL version: 14.4\nOperating system: GNU/Linux (Red Hat 8.5.0)\nDescription: \n\nMy application uses prepared statements. This section of the documentation\nis going to be very relevant to the rest of this report:\nhttps://www.postgresql.org/docs/current/sql-prepare.html#SQL-PREPARE-NOTES.\r\n\r\nThis is a minimal reproduction of the problem I observe, which I will\nexplain below:\nhttps://dbfiddle.uk/?rdbms=postgres_14&fiddle=6b01d161da27379844e7602a16543626\r\n\r\nScenario:\r\n- I create a fairly simple table (id + timestamp). Timestamp is indexed.\r\n- I create a simple-ish prepared statement for `SELECT MIN(id), MAX(id) from\nrelation_tuple_transaction WHERE timestamp >= $1;`\r\n- I execute the prepared statement multiple times (> 5 times)\r\n\r\nFrom the 6th time onwards, the query plan used by Postgres changes, which\nisn't fully unexpected as the documentation linked above does make it clear\nthat Postgres might decide to change the query plan for a generic query plan\nafter the 5th execution. And indeed, the estimated \"cost\" of the generic\nplan is lower than the custom plan's: therefore the query planner behaves\ncorrectly according to the documentation.\r\n\r\nNow, the problem: the execution of the generic plan is multiple orders of\nmagnitude slower than the custom query plan (\"actual time\" for the generic\nplan is over 6500x slower), yet Postgres decides to stick with the generic\nplan. Very unexpected for me: I was very happy with the first 5 plans, yet\nPostgres decides to change the plan for another that's enormously slower and\nstick with it.\r\nGiving a different parameter passed to the prepared statement (eg `now() -\ninterval '5 days'`) does give a \"slow\" custom plan (similar to the generic\nplan). 
This means that the query planner does not realise that the actual\nparameter value matters a lot, and that the parameters used _in practice_\nresult in a faster plan than the generic plan (100% of the first 5\nexecutions), and that therefore it shouldn't stick to the generic plan.\r\n\r\nIt is particularly insidious as actually I wasn't even aware I was using\nprepared statements. Like most applications I use a database driver (pgx, in\nGo) which I learnt uses `PQexecPrepared` under the hood, which creates a\nsort of \"unnamed prepared statement\" behaving the same as this minimal\nreproduction without me ever being aware that prepared statements are\ninvolved anywhere between my code and the database. This makes debugging\nvery complex as there's no reason to suspect anything\nprepared-statement-related and a manual EXPLAIN ANALYZE outside of a\nprepared statement won't show the problem.\r\n\r\nNote: setting `plan_cache_mode = force_custom_plan` database-wide solved the\nimmediate problem but is a workaround. 
It was a very welcome workaround,\nthough.", "msg_date": "Tue, 05 Jul 2022 22:13:15 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #17540: Prepared statement: PG switches to a generic query plan\n which is consistently much slower" }, { "msg_contents": "On Wed, Jul 6, 2022 at 2:41 PM PG Bug reporting form <noreply@postgresql.org>\nwrote:\n\n> The following bug has been logged on the website:\n>\n> Bug reference: 17540\n> Logged by: William Duclot\n> Email address: william.duclot@gmail.com\n> PostgreSQL version: 14.4\n> Operating system: GNU/Linux (Red Hat 8.5.0)\n> Description:\n>\n\n\n> This means that the query planner does not realise that the actual\n> parameter value matters a lot, and that the parameters used _in practice_\n> result in a faster plan than the generic plan (100% of the first 5\n> executions), and that therefore it shouldn't stick to the generic plan.\n>\n\nI mean, it is the planner and so, no, it doesn't understand that the\nexecutor encountered an issue.\n\n\n> It is particularly insidious as actually I wasn't even aware I was using\n> prepared statements. Like most applications I use a database driver (pgx,\n> in\n> Go) which I learnt uses `PQexecPrepared` under the hood, which creates a\n> sort of \"unnamed prepared statement\" behaving the same as this minimal\n> reproduction without me ever being aware that prepared statements are\n> involved anywhere between my code and the database.\n\n\nYep, and the core project pretty much says that if you don't like this you\nneed to complain to the driver writer and ask them to provide you an\ninterface to the unnamed parse-bind-execute API which lets you perform\nparameterization without memory, just safety.\n\nPostgreSQL has built the needed tools to make this less problematic, and\nhas made solid attempts to improve matters in the current state of things.\nThere doesn't seem to be a bug here. 
There is potentially room for\nimprovement but no one presently is working on things in this area.\n\nDavid J.", "msg_date": "Wed, 6 Jul 2022 15:07:46 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "(On Thu, 7 Jul 2022 at 09:41, PG Bug reporting form\n<noreply@postgresql.org> wrote:\n> Scenario:\n> - I create a fairly simple table (id + timestamp). Timestamp is indexed.\n> - I create a simple-ish prepared statement for `SELECT MIN(id), MAX(id) from\n> relation_tuple_transaction WHERE timestamp >= $1;`\n> - I execute the prepared statement multiple times (> 5 times)\n>\n> From the 6th time onwards, the query plan used by Postgres changes, which\n> isn't fully unexpected as the documentation linked above does make it clear\n> that Postgres might decide to change the query plan for a generic query plan\n> after the 5th execution. And indeed, the estimated \"cost\" of the generic\n> plan is lower than the custom plan's: therefore the query planner behaves\n> correctly according to the documentation.\n\nIt's a pretty narrow fix for a fairly generic problem, but I think the\nplanner wouldn't have picked the pk_rttx index if build_minmax_path()\nhadn't added the \"id IS NOT NULL\" qual.\n\nI know that Andy Fan has been proposing a patch to add a Bitmapset\nfield to RelOptInfo to record the non-NULLable columns. That's a\nfairly lightweight patch, so it might be worth adding that just so\nbuild_minmax_path() can skip adding the NULL test if the column is a\nNOT NULL column.\n\nI see that preprocess_minmax_aggregates() won't touch anything that's\nnot a query to a single relation, so the Var can't be NULLable from\nbeing on the outside of an outer join. So it looks like to plumb in\nAndy's patch, build_minmax_path() would need to be modified to check\nif mminfo->target is a plain Var and then test if that Var is NOT\nNULLable then skip adding the NullTest.\n\nAll seems fairly trivial. 
It's just a fairly narrow fix to side-step a\nmore generic costing problem we have for Params. I just don't have\nany bright ideas on how to fix the more generic problem right now.\n\nI've been looking for a good excuse to commit Andy's NOT NULL patch so\nthat he has some more foundations for the other work he's doing. This\nmight be that excuse.\n\nDoes anyone think differently?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKU4AWoZrFaWAkTn9tE2_dd4RYnUiQUiX8xc=ryUywhBWQv89w@mail.gmail.com\n\n\n", "msg_date": "Thu, 7 Jul 2022 12:23:01 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Thu, 7 Jul 2022 at 12:23, David Rowley <dgrowleyml@gmail.com> wrote:\n> [1] https://www.postgresql.org/message-id/CAKU4AWoZrFaWAkTn9tE2_dd4RYnUiQUiX8xc=ryUywhBWQv89w@mail.gmail.com\n\nCorrection: [1]\nhttps://www.postgresql.org/message-id/CAKU4AWpUA8dyVSU1nfCJz71mu6VEjbGS1uy8azrt5CdyoZqGQA%40mail.gmail.com\n\nDavid\n\n\n", "msg_date": "Thu, 7 Jul 2022 12:46:17 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "Hi,\n\n\nOn 2022-07-06 15:07:46 -0700, David G. Johnston wrote:\n> On Wed, Jul 6, 2022 at 2:41 PM PG Bug reporting form <noreply@postgresql.org>\n> wrote:\n> > It is particularly insidious as actually I wasn't even aware I was using\n> > prepared statements. 
Like most applications I use a database driver (pgx,\n> > in\n> > Go) which I learnt uses `PQexecPrepared` under the hood, which creates a\n> > sort of \"unnamed prepared statement\" behaving the same as this minimal\n> > reproduction without me ever being aware that prepared statements are\n> > involved anywhere between my code and the database.\n>\n>\n> Yep, and the core project pretty much says that if you don't like this you\n> need to complain to the driver writer and ask them to provide you an\n> interface to the unnamed parse-bind-execute API which lets you perform\n> parameterization without memory, just safety.\n>\n> PostgreSQL has built the needed tools to make this less problematic, and\n> has made solid attempts to improve matters in the current state of things.\n> There doesn't seem to be a bug here. There is potentially room for\n> improvement but no one presently is working on things in this area.\n\nI think the cost for the slow plan being so much cheaper can almost be\nqualified as bug.\n\nThe slow plan seems pretty nonsensical to me. 
ISTM that something in the\ncosting there is at least almost broken.\n\n\nResult (cost=1.06..1.07 rows=1 width=16) (actual time=148.732..148.734 rows=1 loops=1)\n Buffers: shared hit=4935\n InitPlan 1 (returns $0)\n -> Limit (cost=0.42..0.53 rows=1 width=8) (actual time=73.859..73.860 rows=0 loops=1)\n Buffers: shared hit=2113\n -> Index Scan using pk_rttx on relation_tuple_transaction (cost=0.42..9445.44 rows=86400 width=8) (actual time=73.857..73.858 rows=0 loops=1)\n Index Cond: (id IS NOT NULL)\n Filter: (\"timestamp\" >= $1)\n Rows Removed by Filter: 259201\n Buffers: shared hit=2113\n InitPlan 2 (returns $1)\n -> Limit (cost=0.42..0.53 rows=1 width=8) (actual time=74.869..74.870 rows=0 loops=1)\n Buffers: shared hit=2822\n -> Index Scan Backward using pk_rttx on relation_tuple_transaction relation_tuple_transaction_1 (cost=0.42..9445.44 rows=86400 width=8) (actual time=74.868..74.868 rows=0 loops=1)\n Index Cond: (id IS NOT NULL)\n Filter: (\"timestamp\" >= $1)\n Rows Removed by Filter: 259201\n Buffers: shared hit=2822\nPlanning Time: 0.224 ms\nExecution Time: 148.781 ms\n\nThe planner assumes the table has 259201 rows. 
Somehow we end up\nassuming that a estimate-less filter reduces the number of rows to 86400\nboth on a backward and a forward scan.\n\nAnd for some reason we don't take the filter clause into account *at\nall* for the cost of returning the first row.\n\nSET enable_seqscan = false;\nEXPLAIN SELECT * FROM relation_tuple_transaction WHERE id IS NOT NULL LIMIT 1;\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├─────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Limit (cost=0.42..0.45 rows=1 width=16) │\n│ -> Index Scan using pk_rttx on relation_tuple_transaction (cost=0.42..8797.44 rows=259201 width=16) │\n│ Index Cond: (id IS NOT NULL) │\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n(3 rows)\n\nIt's also pointless that we use \"Index Cond: (id IS NOT NULL)\" for a\nprimary key index, but that's a minor thing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 6 Jul 2022 17:46:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Thu, 7 Jul 2022 at 12:46, Andres Freund <andres@anarazel.de> wrote:\n> I think the cost for the slow plan being so much cheaper can almost be\n> qualified as bug.\n>\n> The slow plan seems pretty nonsensical to me. ISTM that something in the\n> costing there is at least almost broken.\n\nI forgot to mention what the \"generic problem\" is when I posted my\nreply. I should have mentioned that this is how we cost LIMIT. 
We\nassume that we'll find the LIMIT 1 row after incurring the scan cost\nmultiplied by (1 / 259201).\n\nFor the plan with WHERE timestamp >= $1, the seqscan plan looks pretty\ncheap for fetching DEFAULT_INEQ_SEL of the 259201 rows considering the\nLIMIT multiplies the cost of the scan by (1 / 86400).\n\nDavid\n\n\n", "msg_date": "Thu, 7 Jul 2022 13:54:46 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've been looking for a good excuse to commit Andy's NOT NULL patch so\n> that he has some more foundations for the other work he's doing. This\n> might be that excuse.\n\n> Does anyone think differently?\n\nWhile I don't have any problem with tracking column NOT NULL flags\nin RelOptInfo once the planner has a use for that info, I'm not sure\nthat we have a solid use-case for it quite yet. In particular, the\nfact that the table column is marked NOT NULL doesn't mean that any\nparticular occurrence of that column's Var can be freely assumed to be\nnon-null. 
The patch I'm working on to label Vars that have possibly\nbeen nulled by outer joins [1] seems like essential infrastructure for\ndoing anything very useful with the info.\n\nMaybe that objection doesn't apply to build_minmax_path's usage in\nparticular, but that's an awfully narrow use-case.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/830269.1656693747@sss.pgh.pa.us\n\n\n", "msg_date": "Wed, 06 Jul 2022 23:06:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think the cost for the slow plan being so much cheaper can almost be\n> qualified as bug.\n> The slow plan seems pretty nonsensical to me. ISTM that something in the\n> costing there is at least almost broken.\n\nI think this is probably an instance of the known problem that a generic\nplan is made without knowledge of the actual parameter values, and that\ncan lead us to make statistical assumptions that are not valid for the\nactual values, but nonetheless make one plan look cheaper than another\neven though the opposite is true given the actual values. In essence,\ncomparing the cost estimate for the generic plan to the cost estimate\nfor a custom plan is not really logically valid, because those estimates\nare founded on different statistics. 
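To put toy numbers on that, here is a rough sketch in plain Python (not planner code; the 0.42 and 9445.44 figures come from the generic plan upthread, and limit_one_cost() is an invented stand-in for the fraction logic in cost_limit()):

```python
# Sketch: the same LIMIT 1 arithmetic, fed two different selectivity
# guesses, produces cost estimates that are not comparable.

DEFAULT_INEQ_SEL = 1.0 / 3.0   # selectivity guess for "col >= $1" with $1 unknown

def limit_one_cost(startup, total, est_rows):
    # Assume the first qualifying row turns up after scanning
    # 1/est_rows of the input, so only that fraction of the cost is paid.
    return startup + (total - startup) / est_rows

table_rows = 259201
startup, total = 0.42, 9445.44  # index scan cost from the plan upthread

# Generic plan: the filter is assumed to pass a third of the table.
generic = limit_one_cost(startup, total, table_rows * DEFAULT_INEQ_SEL)
print(round(generic, 2))        # 0.53, matching the Limit node upthread

# Reality for this parameter: (almost) no rows qualify, so the scan
# runs to the end before the LIMIT can stop.
worst = limit_one_cost(startup, total, 1)
print(round(worst, 2))          # 9445.44
```

(Of course the real cost_limit() works on Paths, not bare floats; the point is only that 0.53 and 9445.44 fall out of the same formula fed selectivities of different provenance, so comparing a generic estimate against a custom one compares incompatible guesses.)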
I don't know how to fix that :-(.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Jul 2022 23:13:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Thu, 7 Jul 2022 at 15:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> While I don't have any problem with tracking column NOT NULL flags\n> in RelOptInfo once the planner has a use for that info, I'm not sure\n> that we have a solid use-case for it quite yet. In particular, the\n> fact that the table column is marked NOT NULL doesn't mean that any\n> particular occurrence of that column's Var can be freely assumed to be\n> non-null. The patch I'm working on to label Vars that have possibly\n> been nulled by outer joins [1] seems like essential infrastructure for\n> doing anything very useful with the info.\n\nI was aware that you'd done that work. I'm interested in it, but just\nnot found the time to look yet.\n\n> Maybe that objection doesn't apply to build_minmax_path's usage in\n> particular, but that's an awfully narrow use-case.\n\nI thought I'd quickly put the idea together and fairly quickly noticed\nthat we do preprocess_minmax_aggregates() in grouping_planner(), which\nis long before we load the RelOptInfo data in\nadd_base_rels_to_query(), which is called in query_planner(). I\nconsidered if we could move the preprocess_minmax_aggregates(), but\nthat does not seem right, although, surprisingly, no tests seem to\nfail from doing so. I'd have expected at least some EXPLAIN outputs to\nhave changed from the no-longer-present IS NOT NULL quals.\n\nI imagine a much less narrow case would be to check for redundant\nRestrictInfos in distribute_restrictinfo_to_rels(). That would also\ncatch cases such as WHERE non_nullable_col IS NULL, provided that qual\nmade it down to baserestrictinfo. 
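In throwaway Python rather than C, the shape of the check I have in mind is something like this (NullTest/classify here are invented illustration names, not actual planner symbols):

```python
# Sketch of the redundancy test: a NullTest in baserestrictinfo on a
# column declared NOT NULL is provably always-true or always-false.

from dataclasses import dataclass

@dataclass(frozen=True)
class NullTest:
    column: str
    is_not_null: bool   # True => IS NOT NULL, False => IS NULL

def classify(qual, not_null_cols):
    """Return 'always_true', 'always_false' or 'keep' for one base qual."""
    if isinstance(qual, NullTest) and qual.column in not_null_cols:
        return "always_true" if qual.is_not_null else "always_false"
    return "keep"

not_null_cols = {"id"}
print(classify(NullTest("id", True), not_null_cols))    # always_true: qual can be dropped
print(classify(NullTest("id", False), not_null_cols))   # always_false: no rows can match
print(classify(NullTest("b", True), not_null_cols))     # keep: column is nullable
```

The always-true case could simply be skipped when distributing quals; the always-false case is harder, since it would want to become a constant-false clause rather than just disappearing.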
When I realised that, I thought I\nmight be starting to overlap with your work in the link below.\n\n> [1] https://www.postgresql.org/message-id/flat/830269.1656693747@sss.pgh.pa.us\n\nThe 2 attached patches do fix the bad reported plan, it's just that\nit's a very roundabout way of fixing it.\n\nAnyway, I've no current plans to take the attached any further. I\nthink it'll be better to pursue your NULLable-Var stuff and see if we\ncan do something more generic like remove provably redundant NullTests\nfrom baserestrictinfo.\n\nDavid", "msg_date": "Thu, 7 Jul 2022 15:31:30 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Anyway, I've no current plans to take the attached any further. I\n> think it'll be better to pursue your NULLable-Var stuff and see if we\n> can do something more generic like remove provably redundant NullTests\n> from baserestrictinfo.\n\nYeah, I suspect that the way forward is to allow\npreprocess_minmax_aggregates to do what it does now, and then\nremove the IS NOT NULL clause again later when we have the\ninfo available to let us do that in a generic way.\n\nIn any case, as you said, it's just a band-aid that happens to\nhelp in this exact scenario. 
It's not doing much for the bad\ncost estimation that's the root of the problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Jul 2022 23:50:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "Hi,\n\nOn 2022-07-06 23:13:18 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think the cost for the slow plan being so much cheaper can almost be\n> > qualified as bug.\n> > The slow plan seems pretty nonsensical to me. ISTM that something in the\n> > costing there is at least almost broken.\n>\n> I think this is probably an instance of the known problem that a generic\n> plan is made without knowledge of the actual parameter values, and that\n> can lead us to make statistical assumptions that are not valid for the\n> actual values, but nonetheless make one plan look cheaper than another\n> even though the opposite is true given the actual values. In essence,\n> comparing the cost estimate for the generic plan to the cost estimate\n> for a custom plan is not really logically valid, because those estimates\n> are founded on different statistics. I don't know how to fix that :-(.\n\nI think there's something more fundamentally wrong - somehow we end up\nassuming > 50% selectivity on both the min and the max initplan, for the same\ncondition! And afaics (although it's a bit hard to see with the precision\nexplain prints floating point values as) we don't charge cpu_operator_cost /\ncpu_tuple_cost. And this is on a table where we can know, despite not knowing the\nparameter value, that the column being compared has a correlation of 1.\n\nIn this case the whole generic plan part seems like a red herring. 
The generic\nplan is *awful* and would still be awful if the value were known, but\nsomewhere around the middle of the value range.\n\n\nHere's the op's tables + query, but without the prepared statement part:\n\nCREATE TABLE relation_tuple_transaction (\n id BIGSERIAL NOT NULL UNIQUE,\n timestamp TIMESTAMP WITHOUT TIME ZONE NOT NULL UNIQUE,\n CONSTRAINT pk_rttx PRIMARY KEY (id)\n);\nCREATE INDEX ix_relation_tuple_transaction_by_timestamp on relation_tuple_transaction(timestamp);\nINSERT INTO relation_tuple_transaction(timestamp) SELECT * FROM generate_series\n ( now() - interval '3 days'\n , now()\n , '1 second'::interval) dd\n ;\nvacuum freeze analyze;\nEXPLAIN ANALYZE SELECT MIN(id), MAX(id) from relation_tuple_transaction WHERE timestamp >= (now() - interval '1.5 days');\n\npostgres[631148][1]=# EXPLAIN ANALYZE SELECT MIN(id), MAX(id) from relation_tuple_transaction WHERE timestamp >= (now() - interval '1.5 days');;\n\nResult (cost=1.01..1.02 rows=1 width=16) (actual time=113.379..113.381 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.42..0.50 rows=1 width=8) (actual time=113.347..113.348 rows=1 loops=1)\n -> Index Scan using pk_rttx on relation_tuple_transaction (cost=0.42..10741.45 rows=127009 width=8) (actual time=113.345..113.345 rows=1 loops=1)\n Index Cond: (id IS NOT NULL)\n Filter: (\"timestamp\" >= (now() - '1 day 12:00:00'::interval))\n Rows Removed by Filter: 129746\n InitPlan 2 (returns $1)\n -> Limit (cost=0.42..0.50 rows=1 width=8) (actual time=0.024..0.024 rows=1 loops=1)\n -> Index Scan Backward using pk_rttx on relation_tuple_transaction relation_tuple_transaction_1 (cost=0.42..10741.45 rows=127009 width=8) (actual time=0.023..0.023 rows=1 loops=1)\n Index Cond: (id IS NOT NULL)\n Filter: (\"timestamp\" >= (now() - '1 day 12:00:00'::interval))\nPlanning Time: 0.370 ms\nExecution Time: 113.441 ms\n(14 rows)\n\nWe're pretty much by definition scanning half the table via the index scans,\nand end up with a cost of 1.02 (yes, aware 
that the paths are costed\nseparately).\n\n\nFWIW, manually writing the min/max as ORDER BY timestamp ASC/DESC LIMIT 1\nqueries yields a *vastly* better plan:\n\nEXPLAIN ANALYZE SELECT (SELECT id FROM relation_tuple_transaction WHERE timestamp >= (now() - interval '1.5 days') ORDER BY timestamp ASC LIMIT 1), (SELECT id FROM relation_tuple_transaction WHERE timestamp >= (now() - interval '1.5 days') ORDER BY timestamp DESC LIMIT 1);\n\nResult (cost=0.92..0.93 rows=1 width=16) (actual time=0.110..0.111 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.42..0.46 rows=1 width=16) (actual time=0.079..0.079 rows=1 loops=1)\n -> Index Scan using ix_relation_tuple_transaction_by_timestamp on relation_tuple_transaction (cost=0.42..4405.46 rows=129602 width=16) (actual time=0.077..0.078 rows=1 loops=1)\n Index Cond: (\"timestamp\" >= (now() - '1 day 12:00:00'::interval))\n InitPlan 2 (returns $1)\n -> Limit (cost=0.42..0.46 rows=1 width=16) (actual time=0.028..0.028 rows=1 loops=1)\n -> Index Scan Backward using ix_relation_tuple_transaction_by_timestamp on relation_tuple_transaction relation_tuple_transaction_1 (cost=0.42..4405.46 rows=129602 width=16) (actual time=0.027..0.027 rows=1 loops=1)\n Index Cond: (\"timestamp\" >= (now() - '1 day 12:00:00'::interval))\nPlanning Time: 0.270 ms\nExecution Time: 0.159 ms (11 rows)\n\nAnd it stays sane even if you add a (redundantly evaluated) AND id IS NOT NULL.\n\n\nEXPLAIN SELECT id FROM relation_tuple_transaction WHERE timestamp >= (now() - interval '1.5 days') AND id IS NOT NULL ORDER BY timestamp ASC LIMIT 1;\nQUERY PLAN\nLimit (cost=0.42..0.46 rows=1 width=16)\n -> Index Scan using ix_relation_tuple_transaction_by_timestamp on relation_tuple_transaction (cost=0.42..4405.46 rows=129602 width=16)\n Index Cond: (\"timestamp\" >= (now() - '1 day 12:00:00'::interval))\n Filter: (id IS NOT NULL)\n(4 rows)\n\n\nEXPLAIN SELECT min(id) FROM relation_tuple_transaction WHERE timestamp >= (now() - interval '1.5 days');\nQUERY 
PLAN\nResult (cost=0.50..0.51 rows=1 width=8)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.42..0.50 rows=1 width=8)\n -> Index Scan using pk_rttx on relation_tuple_transaction (cost=0.42..10741.45 rows=129602 width=8)\n Index Cond: (id IS NOT NULL)\n Filter: (\"timestamp\" >= (now() - '1 day 12:00:00'::interval))\n(6 rows)\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 6 Jul 2022 21:36:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-07-06 23:13:18 -0400, Tom Lane wrote:\n>> comparing the cost estimate for the generic plan to the cost estimate\n>> for a custom plan is not really logically valid, because those estimates\n>> are founded on different statistics. I don't know how to fix that :-(.\n\n> I think there's something more fundamentally wrong - somehow we end up with\n> assuming > 50% selectivity on both the min and the max initplan, for the same\n> condition!\n\nWell, sure, because it *is* the same condition. AFAICS this is operating\nas designed. Do I wish it were better? Sure, but there is no simple fix\nhere.\n\nThe reasoning that's being applied in the generic plan is\n\n(1) default selectivity estimate for a scalar inequality is\n#define DEFAULT_INEQ_SEL 0.3333333333333333\n\n(2) therefore, the filter condition on the indexscan will select a random\none-third of the table;\n\n(3) therefore, the LIMIT will be able to stop after about three rows,\nwhichever direction we scan in.\n\nThe information that is lacking is that the \"id\" and \"timestamp\"\ncolumns are heavily correlated, so that we may have to scan far more\nthan three rows in \"id\" order before finding a row satisfying the\ninequality on \"timestamp\". 
This is a problem we've understood for\na long time --- I recall talking about it at PGCon a decade ago.\n\nThe extended stats machinery provides a framework wherein we could\ncalculate and save the ordering correlation between the two columns,\nbut I don't believe it actually calculates that number yet --- I think\nthe functional-dependency stuff is close but not the right thing.\nEven if we had the stats, it's not very clear where to fit this\ntype of consideration into the planner's estimates.\n\n> In this case the whole generic plan part seems like a red herring. The generic\n> plan is *awful* and would still be awful if the value were known, but\n> somewhere around the middle of the value range.\n\nIf the value were somewhere around the middle (which is more or less\nwhat we're assuming for the generic plan), then an indexscan on the\ntimestamp column isn't going to be that great either; you'd still\nbe scanning half the table.\n\n> FWIW, manually writing the min/max as ORDER BY timestamp ASC/DESC LIMIT 1\n> queries yields a *vastly* better plan:\n\nThose queries give the wrong answers. We're looking for the min or max\nid, not the id associated with the min or max timestamp. (They're\naccidentally the same with this toy dataset.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Jul 2022 14:02:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Thu, 7 Jul 2022 at 15:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Anyway, I've no current plans to take the attached any further. 
I\n> > think it'll be better to pursue your NULLable-Var stuff and see if we\n> > can do something more generic like remove provably redundant NullTests\n> > from baserestrictinfo.\n>\n> Yeah, I suspect that the way forward is to allow\n> preprocess_minmax_aggregates to do what it does now, and then\n> remove the IS NOT NULL clause again later when we have the\n> info available to let us do that in a generic way.\n\nI started looking at a more generic way to fix this. In the attached\nI'm catching quals being added to baserestrictinfo in\ndistribute_restrictinfo_to_rels() and checking for IS NOT NULL quals\non columns defined with NOT NULL.\n\nI did this by adding a new function add_baserestrictinfo_to_rel()\nwhich can be the place where we add any future logic to ignore other\nalways-true quals. Perhaps in the future, we can add some logic there\nto look for quals on partitions which are always true based on the\npartition constraint.\n\nI also took the opportunity here to slightly modernised the Bitmapset\ncode in this area. We previously called bms_membership() and then\nbms_singleton_member(), which is not quite optimal. We invented\nbms_get_singleton_member() as a more efficient way of getting that.\nThe empty set case can just be handled more easily now since you\nchanged empty sets to always be NULL. If it's not an empty set and not\na singleton, then it must contain multiple members.\n\nI'm quite keen to see some forward progress on improving things for\nthis bug report. It would be good to take some more measures to stop\nthe planner being tricked into making silly mistakes. 
This is one\nexample of somewhere we could do better.\n\nDavid", "msg_date": "Thu, 6 Jul 2023 11:55:36 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Thu, Jul 6, 2023 at 7:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 7 Jul 2022 at 15:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> > > Anyway, I've no current plans to take the attached any further. I\n> > > think it'll be better to pursue your NULLable-Var stuff and see if we\n> > > can do something more generic like remove provably redundant NullTests\n> > > from baserestrictinfo.\n> >\n> > Yeah, I suspect that the way forward is to allow\n> > preprocess_minmax_aggregates to do what it does now, and then\n> > remove the IS NOT NULL clause again later when we have the\n> > info available to let us do that in a generic way.\n>\n> I started looking at a more generic way to fix this. In the attached\n> I'm catching quals being added to baserestrictinfo in\n> distribute_restrictinfo_to_rels() and checking for IS NOT NULL quals\n> on columns defined with NOT NULL.\n>\n> I did this by adding a new function add_baserestrictinfo_to_rel()\n> which can be the place where we add any future logic to ignore other\n> always-true quals. Perhaps in the future, we can add some logic there\n> to look for quals on partitions which are always true based on the\n> partition constraint.\n\n\nI think this is a good start. Maybe we can extend it with little effort\nto cover OR clauses. 
For an OR clause, we can test its sub-clauses and\nif one of them is IS NOT NULL qual on a NOT NULL column then we can know\nthat the OR clause is always true.\n\nMaybe we can also test if the qual is always true according to the\napplicable constraint expressions of the given relation, something that\nis like the opposite of relation_excluded_by_constraints(). Of course\nthat would require much more efforts.\n\nAnother thing I'm wondering is that since we already have the\nouter-join-aware-Var infrastructure, maybe we can also test whether a IS\nNOT NULL qual in join clauses is always true. I imagine we need to test\nwhether the Var in the IS NOT NULL qual has an empty varnullingrels\nbesides that the Var is a NOT NULL column.\n\nBTW, with this patch the variable ‘rel’ in function\ndistribute_restrictinfo_to_rels is unused.\n\nThanks\nRichard", "msg_date": "Thu, 6 Jul 2023 17:26:55 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Thu, Jul 6, 2023 at 5:26 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Thu, Jul 6, 2023 at 7:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n>> I started looking at a more generic way to fix this. In the attached\n>> I'm catching quals being added to baserestrictinfo in\n>> distribute_restrictinfo_to_rels() and checking for IS NOT NULL quals\n>> on columns defined with NOT NULL.\n>>\n>> I did this by adding a new function add_baserestrictinfo_to_rel()\n>> which can be the place where we add any future logic to ignore other\n>> always-true quals. 
Perhaps in the future, we can add some logic there\n>> to look for quals on partitions which are always true based on the\n>> partition constraint.\n>\n>\n> I think this is a good start. Maybe we can extend it with little effort\n> to cover OR clauses. For an OR clause, we can test its sub-clauses and\n> if one of them is IS NOT NULL qual on a NOT NULL column then we can know\n> that the OR clause is always true.\n>\n> Maybe we can also test if the qual is always true according to the\n> applicable constraint expressions of the given relation, something that\n> is like the opposite of relation_excluded_by_constraints(). Of course\n> that would require much more efforts.\n>\n> Another thing I'm wondering is that since we already have the\n> outer-join-aware-Var infrastructure, maybe we can also test whether a IS\n> NOT NULL qual in join clauses is always true. I imagine we need to test\n> whether the Var in the IS NOT NULL qual has an empty varnullingrels\n> besides that the Var is a NOT NULL column.\n>\n> BTW, with this patch the variable ‘rel’ in function\n> distribute_restrictinfo_to_rels is unused.\n>\n\nAttached is what I have in mind. The patch extends the logic from two\npoints.\n\n* it also checks OR clauses to see if it is always true.\n\n* it also checks for join clauses by additionally testing if the nulling\nbitmap is empty.\n\nI did not try the logic about testing a qual against the relation's\nconstraints though.\n\nThanks\nRichard", "msg_date": "Fri, 7 Jul 2023 15:02:59 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Fri, 7 Jul 2023 at 19:03, Richard Guo <guofenglinux@gmail.com> wrote:\n> Attached is what I have in mind. 
The patch extends the logic from two\n> points.\n>\n> * it also checks OR clauses to see if it is always true.\n>\n> * it also checks for join clauses by additionally testing if the nulling\n> bitmap is empty.\n\nDo you mind writing some regression tests for this?\n\nI don't really see an existing test file that would suit, maybe it's\nworth adding something like predicate.sql\n\nDavid\n\n\n", "msg_date": "Mon, 10 Jul 2023 14:14:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Mon, Jul 10, 2023 at 10:14 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 7 Jul 2023 at 19:03, Richard Guo <guofenglinux@gmail.com> wrote:\n> > Attached is what I have in mind. The patch extends the logic from two\n> > points.\n> >\n> > * it also checks OR clauses to see if it is always true.\n> >\n> > * it also checks for join clauses by additionally testing if the nulling\n> > bitmap is empty.\n>\n> Do you mind writing some regression tests for this?\n>\n> I don't really see an existing test file that would suit, maybe it's\n> worth adding something like predicate.sql\n\n\nHere is v3 patch with regression tests. I add the new test into the\ngroup where stats test is in, but I'm not sure if this is the right\nplace.\n\nThanks\nRichard", "msg_date": "Mon, 10 Jul 2023 14:39:27 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Mon, Jul 10, 2023 at 2:39 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> Here is v3 patch with regression tests. I add the new test into the\n> group where stats test is in, but I'm not sure if this is the right\n> place.\n>\n\ncfbot says there is a test failure in postgres_fdw. 
So I've updated to v4 to\nfix that.\n\nThanks\nRichard", "msg_date": "Wed, 26 Jul 2023 11:17:44 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Mon, 10 Jul 2023 at 18:39, Richard Guo <guofenglinux@gmail.com> wrote:\n> Here is v3 patch with regression tests. I add the new test into the\n> group where stats test is in, but I'm not sure if this is the right\n> place.\n\nThanks for taking an interest in this.\n\nI spent more time looking at the idea and I wondered why we should\njust have it skip distributing IS NOT NULL quals to the relations.\nShould we also allow IS NULL quals on non-nullable Vars to be\ndetected as false?\n\nI did some work on your v3 patch to see if that could be made to work.\nI ended up just trying to make a new RestrictInfo with a \"false\"\nclause, but quickly realised that it's not safe to go making new\nRestrictInfos during deconstruct_distribute_oj_quals(). A comment\nthere mentions:\n\n/*\n* Each time we produce RestrictInfo(s) from these quals, reset the\n* last_rinfo_serial counter, so that the RestrictInfos for the \"same\"\n* qual condition get identical serial numbers. (This relies on the\n* fact that we're not changing the qual list in any way that'd affect\n* the number of RestrictInfos built from it.) This'll allow us to\n* detect duplicative qual usage later.\n*/\n\nI ended up moving the function that looks for the NullTest quals in\nthe joinlist out so it's done after the quals have been distributed to\nthe relations. I'm not really that happy with this as if we ever\nfound some way to optimise quals that could be made part of an\nEquivalenceClass then those quals would already have been\nprocessed to become EquivalenceClasses. 
I just don't see how to do it\nearlier as deconstruct_distribute_oj_quals() calls\nremove_nulling_relids() which changes the Var's varnullingrels causing\nthem to be empty during the processing of the NullTest qual.\n\nIt's also not so great that the RestrictInfo gets duplicated in:\n\nCREATE TABLE t1 (a INT NOT NULL, b INT);\nCREATE TABLE t2 (c INT NOT NULL, d INT);\nCREATE TABLE t3 (e INT NOT NULL, f INT);\n\npostgres=# EXPLAIN (costs off) SELECT * FROM t1 JOIN t2 ON t1.a = 1\nLEFT JOIN t3 ON t2.c IS NULL AND t2.d = 1;\n QUERY PLAN\n-------------------------------------------------------\n Nested Loop\n -> Nested Loop Left Join\n Join Filter: (false AND false AND (t2.d = 1))\n -> Seq Scan on t2\n -> Result\n One-Time Filter: false\n -> Materialize\n -> Seq Scan on t1\n Filter: (a = 1)\n(9 rows)\n\nAdjusting the code to build a new false clause and setting that in the\nexisting RestrictInfo rather than building a new RestrictInfo seems to\nfix that. I wondered if the duplication was a result of the\nrinfo_serial number changing.\n\nChecking back to the original MinMaxAgg I'm not sure if this is all\ngetting more complex than it's worth or not.\n\nI've attached what I've ended up with so far.\n\nDavid\n\n\nDavid", "msg_date": "Wed, 27 Sep 2023 02:42:32 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Tue, Sep 26, 2023 at 9:42 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I ended up moving the function that looks for the NullTest quals in\n> the joinlist out so it's done after the quals have been distributed to\n> the relations.\n\n\nIt seems that the RestrictInfos for the \"same\" qual condition still may\nget different serial numbers even if transform_join_clauses() is called\nafter we've distributed all the quals. 
For example,\n\nselect * from t1\n left join t2 on t1.a = t2.c\n left join t3 on t2.c = t3.e and t2.c is null;\n\nThere are two versions for qual 't2.c is null': with and without being\nmarked nullable by t1/t2 join. Let's write them as 'c* is null' and 'c\nis null'. They are supposed to have identical serial number. But after\nwe've transformed 'c is null' to 'false', they do not have identical\nserial number any more. This may cause problems where the logic of\nserial numbers is relied on?\n\n\n> I'm not really that happy with this as if we ever\n> found some way to optimise quals that could be made part of an\n> EquivalenceClass then those quals would have already have been\n> processed to become EquivalenceClasses. I just don't see how to do it\n> earlier as deconstruct_distribute_oj_quals() calls\n> remove_nulling_relids() which changes the Var's varnullingrels causing\n> them to be empty during the processing of the NullTest qual.\n\n\nHmm, I don't think it's a problem that deconstruct_distribute_oj_quals\nchanges the nullingrels. It will compute the correct nullingrels at\nlast for different clones of the same qual condition. We can just check\nthe nullingrels whatever it computes.\n\n\n> It's also not so great that the RestrictInfo gets duplicated in:\n>\n> Adjusting the code to build a new false clause and setting that in the\n> existing RestrictInfo rather than building a new RestrictInfo seems to\n> fix that. I wondered if the duplication was a result of the\n> rinfo_serial number changing.\n\n\nThe RestrictInfo nodes in different joinlists are multiply-linked rather\nthan copied, so when building restrictlist for a joinrel we use pointer\nequality to remove any duplication. 
In your patch new RestrictInfo\nnodes are created in transform_join_clauses(), so pointer equality no\nlonger works and we see duplication in the plan.\n\n\n> Checking back to the original MinMaxAgg I'm not sure if this is all\n> getting more complex than it's worth or not.\n\n\nIt seems that optimizing IS NULL quals is more complex than optimizing\nIS NOT NULL quals.  I also wonder if it's worth the trouble to optimize\nIS NULL quals.\n\nBTW, there is an Assert failure running regression tests with your\npatch.  I haven't looked into it.\n\nThanks\nRichard", "msg_date": "Thu, 28 Sep 2023 11:22:38 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Thu, 28 Sept 2023 at 16:22, Richard Guo <guofenglinux@gmail.com> wrote:\n> It seems that optimizing IS NULL quals is more complex than optimizing\n> IS NOT NULL quals. I also wonder if it's worth the trouble to optimize\n> IS NULL quals.\n\nI'm happy to reduce the scope of this patch. 
As for what to cut, I\nthink if we're doing a subset then we should try to do that subset in\na way that best leaves things open for phase 2 at some later date.\n\nIn my view, it would be less surprising that this works for base quals\nand not join quals than if it worked with \"Var IS NOT NULL\" but not\n\"Var IS NULL\". I'm unsure if my view is clouded by the fact that I\ndon't have a clear picture in my head on how this should work for join\nquals, however.\n\nWould it be surprising if this didn't work for join quals? My\nthoughts are probably not any more so than the fact that extended\nstatistics only work for base quals and not join quals, but I'm sure\nother people will have different views on that. I don't feel like we\nshould end up with exactly nothing committed from this patch solely\ndue to scope creep.\n\nDavid\n\n\n", "msg_date": "Thu, 28 Sep 2023 16:51:38 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Thu, Sep 28, 2023 at 11:51 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 28 Sept 2023 at 16:22, Richard Guo <guofenglinux@gmail.com> wrote:\n> > It seems that optimizing IS NULL quals is more complex than optimizing\n> > IS NOT NULL quals. I also wonder if it's worth the trouble to optimize\n> > IS NULL quals.\n>\n> I'm happy to reduce the scope of this patch. As for what to cut, I\n> think if we're doing a subset then we should try to do that subset in\n> a way that best leaves things open for phase 2 at some later date.\n\n\nI had a go at supporting IS NULL quals and ended up with the attached.\nThe patch generates a new constant-FALSE RestrictInfo that is marked\nwith the same required_relids etc as the original one if it is an IS\nNULL qual that can be reduced to FALSE. 
Note that the original\nrinfo_serial is also copied to the new RestrictInfo.\n\nOne thing that is not great is that we may have 'FALSE and otherquals'\nin the final plan, as shown by the plan below which is from the new\nadded test case.\n\n+explain (costs off)\n+select * from pred_tab t1 left join pred_tab t2 on true left join pred_tab\nt3 on t2.a is null and t2.b = 1;\n+ QUERY PLAN\n+---------------------------------------------------\n+ Nested Loop Left Join\n+ -> Seq Scan on pred_tab t1\n+ -> Materialize\n+ -> Nested Loop Left Join\n+ Join Filter: (false AND (t2.b = 1))\n+ -> Seq Scan on pred_tab t2\n+ -> Result\n+ One-Time Filter: false\n+(8 rows)\n\nMaybe we can artificially reduce it to 'FALSE', but I'm not sure if it's\nworth the trouble.\n\nThanks\nRichard", "msg_date": "Sun, 8 Oct 2023 16:26:43 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On 8/10/2023 15:26, Richard Guo wrote:\nHi,\n> On Thu, Sep 28, 2023 at 11:51 AM David Rowley <dgrowleyml@gmail.com \n> <mailto:dgrowleyml@gmail.com>> wrote:\n> \n> On Thu, 28 Sept 2023 at 16:22, Richard Guo <guofenglinux@gmail.com\n> <mailto:guofenglinux@gmail.com>> wrote:\n> > It seems that optimizing IS NULL quals is more complex than\n> optimizing\n> > IS NOT NULL quals.  I also wonder if it's worth the trouble to\n> optimize\n> > IS NULL quals.\n> \n> I'm happy to reduce the scope of this patch. As for what to cut, I\n> think if we're doing a subset then we should try to do that subset in\n> a way that best leaves things open for phase 2 at some later date.\n> \n> \n> I had a go at supporting IS NULL quals and ended up with the attached.\n> The patch generates a new constant-FALSE RestrictInfo that is marked\n> with the same required_relids etc as the original one if it is an IS\n> NULL qual that can be reduced to FALSE.  
Note that the original\n> rinfo_serial is also copied to the new RestrictInfo.\n> \n> One thing that is not great is that we may have 'FALSE and otherquals'\n> in the final plan, as shown by the plan below which is from the new\n> added test case.\n\nSetting aside the thread's subject, I am interested in this feature \nbecause of its connection with the SJE feature and the same issue raised \n[1] during the discussion.\nIn the attachment - rebased version of your patch (because of the \n5d8aa8bced).\nAlthough the patch is already in a good state, some improvements can be \nmade. Look:\nexplain (costs off)\nSELECT oid,relname FROM pg_class\nWHERE oid < 5 OR (oid = 1 AND oid IS NULL);\n\n Bitmap Heap Scan on pg_class\n Recheck Cond: ((oid < '5'::oid) OR ((oid = '1'::oid) AND (oid IS NULL)))\n -> BitmapOr\n -> Bitmap Index Scan on pg_class_oid_index\n Index Cond: (oid < '5'::oid)\n -> Bitmap Index Scan on pg_class_oid_index\n Index Cond: ((oid = '1'::oid) AND (oid IS NULL))\n\nIf we go deeply through the filter, I guess we could replace such buried \nclauses.\n\n[1] Removing unneeded self joins\nhttps://www.postgresql.org/message-id/CAPpHfdt-0kVV7O%3D%3DaJEbjY2iGYBu%2BXBzTHEbPv_6sVNeC7fffQ%40mail.gmail.com\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional", "msg_date": "Tue, 24 Oct 2023 11:25:12 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Tue, Oct 24, 2023 at 12:25 PM Andrei Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> Setting aside the thread's subject, I am interested in this feature\n> because of its connection with the SJE feature and the same issue raised\n> [1] during the discussion.\n\n\nThanks for taking an interest in this.\n\nI rebased this patch over the SJE commit, and found that it can help\ndiscard redundant IS_NOT_NULL quals added by SJE logic if 
we've\nsuccessfully removed some self-joins on primary keys, as shown by the\nregression test plan changes, which IMO makes this patch look more\nuseful in practice.\n\n\n> Although the patch is already in a good state, some improvements can be\n> made. Look:\n> explain (costs off)\n> SELECT oid,relname FROM pg_class\n> WHERE oid < 5 OR (oid = 1 AND oid IS NULL);\n>\n> Bitmap Heap Scan on pg_class\n> Recheck Cond: ((oid < '5'::oid) OR ((oid = '1'::oid) AND (oid IS\n> NULL)))\n> -> BitmapOr\n> -> Bitmap Index Scan on pg_class_oid_index\n> Index Cond: (oid < '5'::oid)\n> -> Bitmap Index Scan on pg_class_oid_index\n> Index Cond: ((oid = '1'::oid) AND (oid IS NULL))\n>\n> If we go deeply through the filter, I guess we could replace such buried\n> clauses.\n\n\nYeah, we can do that by exploring harder on OR clauses. But for now I\nthink it's more important for this patch to introduce the\n'reduce-quals-to-constant' mechanism. As a start I think it'd be better\nto keep the logic simple for review. 
In the future maybe we can extend\nit to consider more than just NullTest quals, for example we could also\nconsider applicable constraint expressions of the given relation.\n\nThanks\nRichard", "msg_date": "Wed, 1 Nov 2023 10:20:49 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Wed, 1 Nov 2023 at 15:21, Richard Guo <guofenglinux@gmail.com> wrote:\n> I rebased this patch over the SJE commit\n\nI rebased your v7 patch on top of 930d2b442 and updated the expected\nresults of some new regression tests which now have their NullTest\nclauses removed.\n\nI also renamed add_baserestrictinfo_to_rel() to\nadd_base_clause_to_rel() so that it's more aligned to\nadd_join_clause_to_rels().\n\nOn looking deeper, I see you're overwriting the rinfo_serial of the\nconst-false RestrictInfo with the one from the original RestrictInfo.\nIf that's the correct thing to do then the following comment would\nneed to be updated to mention this exception of why the rinfo_serial\nisn't unique.\n\n/*----------\n* Serial number of this RestrictInfo. This is unique within the current\n* PlannerInfo context, with a few critical exceptions:\n* 1. When we generate multiple clones of the same qual condition to\n* cope with outer join identity 3, all the clones get the same serial\n* number. This reflects that we only want to apply one of them in any\n* given plan.\n* 2. If we manufacture a commuted version of a qual to use as an index\n* condition, it copies the original's rinfo_serial, since it is in\n* practice the same condition.\n* 3. RestrictInfos made for a child relation copy their parent's\n* rinfo_serial. Likewise, when an EquivalenceClass makes a derived\n* equality clause for a child relation, it copies the rinfo_serial of\n* the matching equality clause for the parent. 
This allows detection\n* of redundant pushed-down equality clauses.\n*----------\n*/\n\nLooking at the tests, I see:\n\nselect * from pred_tab t1 left join pred_tab t2 on true left join\npred_tab t3 on t2.a is null;\n\nI'm wondering if you can come up with a better test for this? I don't\nquite see any reason why varnullingrels can't be empty for t2.a in the\njoin qual as the \"ON true\" join condition between t1 and t2 means that\nthere shouldn't ever be any NULL t2.a rows. My thoughts are that if\nwe improve how varnullingrels are set in the future then this test\nwill be broken.\n\nAlso, I also like to write exactly what each test is testing so that\nit's easier in the future to maintain the expected results. It's\noften tricky when making planner changes to know if some planner\nchanges makes a test completely useless or if the expected results\njust need to be updated. If someone changes varnullingrels to be\nempty for this case, then if they accept the actual results as\nexpected results then the test becomes useless. 
I tend to do this\nwith comments in the .sql file along the lines of \"-- Ensure ...\"\n\nI also would rather see the SQLs in the test wrap their lines before\neach join and the keywords to be upper case.\n\nDavid", "msg_date": "Wed, 29 Nov 2023 13:48:10 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "On Wed, Nov 29, 2023 at 8:48 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I rebased your v7 patch on top of 930d2b442 and updated the expected\n> results of some new regression tests which now have their NullTest\n> clauses removed.\n\n\nThanks for your rebase.\n\n\n> On looking deeper, I see you're overwriting the rinfo_serial of the\n> const-false RestrictInfo with the one from the original RestrictInfo.\n> If that's the correct thing to do then the following comment would\n> need to be updated to mention this exception of why the rinfo_serial\n> isn't unique.\n\n\nRight, that's what we need to do.\n\n\n> Looking at the tests, I see:\n>\n> select * from pred_tab t1 left join pred_tab t2 on true left join\n> pred_tab t3 on t2.a is null;\n>\n> I'm wondering if you can come up with a better test for this? I don't\n> quite see any reason why varnullingrels can't be empty for t2.a in the\n> join qual as the \"ON true\" join condition between t1 and t2 means that\n> there shouldn't ever be any NULL t2.a rows. My thoughts are that if\n> we improve how varnullingrels are set in the future then this test\n> will be broken.\n>\n> Also, I also like to write exactly what each test is testing so that\n> it's easier in the future to maintain the expected results. It's\n> often tricky when making planner changes to know if some planner\n> changes makes a test completely useless or if the expected results\n> just need to be updated. 
If someone changes varnullingrels to be\n> empty for this case, then if they accept the actual results as\n> expected results then the test becomes useless. I tend to do this\n> with comments in the .sql file along the lines of \"-- Ensure ...\"\n>\n> I also would rather see the SQLs in the test wrap their lines before\n> each join and the keywords to be upper case.\n\n\nThanks for the suggestions on the tests. I had a go at improving the\ntest queries and their comments.\n\nBTW, I changed the subject of this patch to 'Reduce NullTest quals to\nconstant TRUE or FALSE', which seems more accurate to me, because this\npatch also reduces IS NULL clauses to constant-FALSE when applicable, in\naddition to ignoring redundant NOT NULL clauses.\n\nThanks\nRichard", "msg_date": "Fri, 1 Dec 2023 18:07:03 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17540: Prepared statement: PG switches to a generic query\n plan which is consistently much slower" }, { "msg_contents": "(Moving discussion from -bugs [1] to -hackers for more visibility.)\n\nBackground:\nThis started out as a performance fix for bug #17540 but has now\nextended beyond that as fixing that only requires we don't add\nredundant IS NOT NULL quals to Min/Max aggregate rewrites. 
The\nattached gets rid of all IS NOT NULL quals on columns that are\nprovably not null and replaces any IS NULL quals on NOT NULL columns\nwith a const-false gating qual which could result in not having to\nscan the relation at all.\n\nexplain (costs off) select * from pg_class where oid is null;\n QUERY PLAN\n--------------------------\n Result\n One-Time Filter: false\n\nThe need for this is slightly higher than it once was as the self-join\nremoval code must add IS NOT NULL quals when removing self-joins when\nthe join condition is strict.\n\nexplain select c1.* from pg_class c1 inner join pg_class c2 on c1.oid=c2.oid;\n QUERY PLAN\n----------------------------------------------------------------\n Seq Scan on pg_class c2 (cost=0.00..18.15 rows=415 width=273)\n\nmaster would contain an oid IS NOT NULL filter condition.\n\nOn Fri, 1 Dec 2023 at 23:07, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Wed, Nov 29, 2023 at 8:48 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>> On looking deeper, I see you're overwriting the rinfo_serial of the\n>> const-false RestrictInfo with the one from the original RestrictInfo.\n>> If that's the correct thing to do then the following comment would\n>> need to be updated to mention this exception of why the rinfo_serial\n>> isn't unique.\n>\n>\n> Right, that's what we need to do.\n>\n>>\n>> Looking at the tests, I see:\n>>\n>> select * from pred_tab t1 left join pred_tab t2 on true left join\n>> pred_tab t3 on t2.a is null;\n>>\n>> I'm wondering if you can come up with a better test for this? I don't\n>> quite see any reason why varnullingrels can't be empty for t2.a in the\n>> join qual as the \"ON true\" join condition between t1 and t2 means that\n>> there shouldn't ever be any NULL t2.a rows. 
My thoughts are that if\n>> we improve how varnullingrels are set in the future then this test\n>> will be broken.\n>>\n>> Also, I also like to write exactly what each test is testing so that\n>> it's easier in the future to maintain the expected results. It's\n>> often tricky when making planner changes to know if some planner\n>> changes makes a test completely useless or if the expected results\n>> just need to be updated. If someone changes varnullingrels to be\n>> empty for this case, then if they accept the actual results as\n>> expected results then the test becomes useless. I tend to do this\n>> with comments in the .sql file along the lines of \"-- Ensure ...\"\n>>\n>> I also would rather see the SQLs in the test wrap their lines before\n>> each join and the keywords to be upper case.\n>\n>\n> Thanks for the suggestions on the tests. I had a go at improving the\n> test queries and their comments.\n\nThanks. I made a pass over this patch which resulted in just adding\nand tweaking some comments.\n\nThe other thing that bothers me about this patch now is the lack of\nsimplification of OR clauses with a redundant condition. For example:\n\npostgres=# explain (costs off) select * from pg_class where oid is\nnull or relname = 'non-existent';\n QUERY PLAN\n---------------------------------------------------------------------\n Bitmap Heap Scan on pg_class\n Recheck Cond: ((oid IS NULL) OR (relname = 'non-existent'::name))\n -> BitmapOr\n -> Bitmap Index Scan on pg_class_oid_index\n Index Cond: (oid IS NULL)\n -> Bitmap Index Scan on pg_class_relname_nsp_index\n Index Cond: (relname = 'non-existent'::name)\n(7 rows)\n\noid is null is const-false and if we simplified that to remove the\nredundant OR branch and run it through the constant folding code, we'd\nend up with just the relname = 'non-existent' condition and a simpler\nplan as a result.\n\nI don't think that's a blocker. 
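For reference, here is a hand-written sketch (not actual EXPLAIN output) of the plan we would hope for once the const-false OR arm gets folded away:

```sql
-- Sketch only: if "oid IS NULL" is reduced to constant FALSE and the
-- redundant OR arm is then removed by constant folding, the same query
-- could be planned as a single index scan, roughly:
--
--   Index Scan using pg_class_relname_nsp_index on pg_class
--     Index Cond: (relname = 'non-existent'::name)
explain (costs off) select * from pg_class where oid is
null or relname = 'non-existent';
```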
I think the patch is ready to go even\nwithout doing anything to improve that.\n\nHappy to hear other people's thoughts on this patch. Otherwise, I\ncurrently don't think the missed optimisation is a reason to block\nwhat we've ended up with so far.\n\nDavid\n\n[1] https://postgr.es/m/flat/17540-7aa1855ad5ec18b4%40postgresql.org", "msg_date": "Fri, 8 Dec 2023 00:04:37 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Removing const-false IS NULL quals and redundant IS NOT NULL quals" }, { "msg_contents": "\nHi,\n\nDavid Rowley <dgrowleyml@gmail.com> writes:\n\n>\n> Happy to hear other people's thoughts on this patch. Otherwise, I\n> currently don't think the missed optimisation is a reason to block\n> what we've ended up with so far.\n>\n> David\n>\n> [1] https://postgr.es/m/flat/17540-7aa1855ad5ec18b4%40postgresql.org\n>\n> [2. application/x-patch; v10-0001-Reduce-NullTest-quals-to-constant-TRUE-or-FALSE.patch]...\n\nThanks for working on this, and I just got a complaint about this missed\noptimisation 7 hours ago.\n\nI also want to add notnullattnums for the UniqueKey stuff as well; by\ncomparing your implementation with mine, I found you didn't consider\nthe NOT NULL generated by filter. After apply your patch:\n\ncreate table a(a int);\nexplain (costs off) select * from a where a > 3 and a is null;\n             QUERY PLAN              \n-------------------------------------\n Seq Scan on a\n   Filter: ((a IS NULL) AND (a > 3))\n(2 rows)\n\nThis is actually needed by UniqueKey stuff, do you think it should be\nadded? 
To save some of your time, you can check what I did in UniqueKey\n\n[1]\nhttps://www.postgresql.org/message-id/attachment/151254/v1-0001-uniquekey-on-base-relation-and-used-it-for-mark-d.patch \n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Wed, 27 Dec 2023 19:20:38 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: Removing const-false IS NULL quals and redundant IS NOT NULL\n quals" }, { "msg_contents": "On Wed, Dec 27, 2023 at 7:38 PM Andy Fan <zhihuifan1213@163.com> wrote:\n\n> I also want to add notnullattnums for the UniqueKey stuff as well, by\n> comparing your implementation with mine, I found you didn't consider\n> the NOT NULL generated by filter. After apply your patch:\n>\n> create table a(a int);\n> explain (costs off) select * from a where a > 3 and a is null;\n> QUERY PLAN\n> -------------------------------------\n> Seq Scan on a\n> Filter: ((a IS NULL) AND (a > 3))\n> (2 rows)\n\n\nThe detection of self-inconsistent restrictions already exists in\nplanner.\n\n# set constraint_exclusion to on;\nSET\n# explain (costs off) select * from a where a > 3 and a is null;\n QUERY PLAN\n--------------------------\n Result\n   One-Time Filter: false\n(2 rows)\n\nThanks\nRichard", "msg_date": "Wed, 27 Dec 2023 19:58:26 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing const-false IS NULL quals and redundant IS NOT NULL\n quals" }, { "msg_contents": "\nRichard Guo <guofenglinux@gmail.com> writes:\n>\n> The detection of self-inconsistent restrictions already exists in\n> planner.\n>\n> # set constraint_exclusion to on;\n> SET\n> # explain (costs off) select * from a where a > 3 and a is null;\n> QUERY PLAN\n> --------------------------\n> Result\n> One-Time Filter: false\n> (2 rows)\n\nIt has a different scope and cost from what I suggested.  I'd suggest\ndetecting the notnull constraint on its own, at lower cost, so it can be\nreused elsewhere; constraint_exclusion covers more cases but is more\nexpensive and defaults to off.\n\n\nApart from the above topic, I'm wondering if we should think about a\ncase like this: \n\ncreate table t1(a int);\ncreate table t2(a int);\n\nexplain (costs off) select * from t1 join t2 using(a) where a is NULL;\n QUERY PLAN \n-----------------------------------\n Hash Join\n   Hash Cond: (t2.a = t1.a)\n   -> Seq Scan on t2\n   -> Hash\n     -> Seq Scan on t1\n       Filter: (a IS NULL)\n\nHere a is nullable at the base relation side, but we know that the query\nwould not return anything in the end. 
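For illustration, a hand-written sketch (not real planner output, and not something the current patch attempts) of the reduction we would ideally get here:

```sql
-- Sketch only: the strict join qual t2.a = t1.a can never match a NULL,
-- while the WHERE clause keeps only NULL values of a, so the query is
-- provably empty and could in principle be planned as:
--
--   Result
--     One-Time Filter: false
explain (costs off) select * from t1 join t2 using(a) where a is NULL;
```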
IIUC, there is no good place to\nhandle this in our current infrastructure, I still raise this up in case\nI missed anything.\n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Fri, 29 Dec 2023 09:25:09 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: Removing const-false IS NULL quals and redundant IS NOT NULL\n quals" }, { "msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please\nhave a look and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4459/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4459\n\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 22 Jan 2024 15:31:46 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing const-false IS NULL quals and redundant IS NOT NULL\n quals" }, { "msg_contents": "On Thu, 28 Dec 2023 at 00:38, Andy Fan <zhihuifan1213@163.com> wrote:\n> I also want to add notnullattnums for the UniqueKey stuff as well, by\n> comparing your implementation with mine, I found you didn't consider\n> the NOT NULL generated by filter. After apply your patch:\n>\n> create table a(a int);\n> explain (costs off) select * from a where a > 3 and a is null;\n> QUERY PLAN\n> -------------------------------------\n> Seq Scan on a\n> Filter: ((a IS NULL) AND (a > 3))\n> (2 rows)\n\n> [1]\n> https://www.postgresql.org/message-id/attachment/151254/v1-0001-uniquekey-on-base-relation-and-used-it-for-mark-d.patch\n\nI believe these are two different things and we should not mix the two up.\n\nLooking at your patch, I see you have:\n\n+ /* The not null attrs from catalogs or baserestrictinfo. 
*/\n+ Bitmapset *notnullattrs;\n\nWhereas, I have:\n\n/* zero-based set containing attnums of NOT NULL columns */\nBitmapset *notnullattnums;\n\nI'm a bit worried that your definition of notnullattrs could lead to\nconfusion about which optimisations will be possible.\n\nLet's say for example I want to write some code that optimises the\nexpression evaluation code to transform EEOP_FUNCEXPR_STRICT into\nEEOP_FUNCEXPR when all function arguments are Vars that have NOT NULL\nconstraints and are not nullable by any outer join. With my\ndefinition, it should be safe to do this, but with your definition, we\ncan't trust that we won't see any NULLs: if the strict function is\nevaluated before the strict base qual that filters the NULLs, then the\nstrict function could be called with NULL.\n\nPerhaps we'd want another Bitmapset that has members for strict OpExprs\nthat filter NULLs and we could document that it's only safe to assume\nthere are no NULLs beyond the scan level.... but I'd say that's\nanother patch and I don't want to feed you design ideas here and\nderail this patch.\n\nDavid\n\n\n", "msg_date": "Tue, 23 Jan 2024 00:01:09 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing const-false IS NULL quals and redundant IS NOT NULL\n quals" }, { "msg_contents": "On Mon, 22 Jan 2024 at 17:32, Peter Smith <smithpb2250@gmail.com> wrote:\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2].\n\nI've attached v11 which updates the expected results in some newly\nadded regression tests.\n\nNo other changes.\n\nDavid", "msg_date": "Tue, 23 Jan 2024 00:11:29 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing const-false IS NULL quals and redundant IS NOT NULL\n quals" }, { "msg_contents": "On Tue, 23 Jan 2024 at 00:11, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached v11 which 
updates the expected\n> results in some newly\n> added regression tests.\n\nI went over this again. I did a little more work adjusting comments\nand pushed it.\n\nThanks for all your assistance with this, Richard.\n\nDavid\n\n\n", "msg_date": "Tue, 23 Jan 2024 18:10:52 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing const-false IS NULL quals and redundant IS NOT NULL\n quals" }, { "msg_contents": "On Tue, Jan 23, 2024 at 1:11 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I went over this again. I did a little more work adjusting comments\n> and pushed it.\n>\n> Thanks for all your assistance with this, Richard.\n\n\nThanks for pushing!  This is really great.\n\nThanks\nRichard", "msg_date": "Tue, 23 Jan 2024 14:16:11 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing const-false IS NULL quals and redundant IS NOT NULL\n quals" } ]